
copyright

© Copyright 2010. Gecamin

All rights reserved. No part of this publication may be repro-


duced, stored or transmitted in any form or by any means,
electronic, mechanical, by photocopying, recording or other-
wise, without the prior written permission from Gecamin.

Cover and book design by Gecamin.


Printed and bound in Santiago, Chile.

disclaimer

Although all care is taken to ensure the integrity and quality


of this publication and the information herein, no respon-
sibility is assumed by the Publisher or the Authors for any
damage to property or persons as a result of operation or use
of this publication and/or the information contained herein.

i.s.b.n. 978-956-8504-28-1

Gecamin

197 Paseo Bulnes, 6th Floor


Santiago, Chile
Zip Code: 833 0336

Telephone: (56 2) 652 1500


Fax: (56 2) 652 1570
E-mail: info@gecamin.cl
Web: www.gecamin.cl

The first edition of 300 copies was printed in June 2010 by


Salesianos Impresores s.a., San Ignacio 1974, Santiago, Chile.
preface

Romke Kuyvenhoven
Executive Editor
MININ 2010
4th International Conference on Mining Innovation

MININ 2010, held on 23–25 June in Santiago, is the fourth of a series of international conferences on mining innovation initiated in 2004 by Gecamin, the Mining Engineering Department of the Universidad de Chile and the Mining Center of the Pontificia Universidad Católica de Chile.

The objectives of MININ conferences are twofold:
1. To exchange the knowledge and experience on mining
innovation as applied to or derived from Sampling, Ore
Deposit Evaluation, Geomechanics and Geotechnics, Mine
Planning, Mine Unit Operations, Maintenance Planning,
Information and Automation Technologies and Integrated
Mine Management.

2. To promote international collaboration and technical exchange among professionals dedicated to developing, operating and maintaining production systems for the mining industry.

A total of 53 technical papers, written by authors from 12 countries and published in these conference proceedings, discuss emerging concepts, models, developments, technologies and successful innovation practices in the mining industry.

organisers

The MININ 2010 Conference was organised by the Mining


Engineering Department of the Universidad de Chile, the Mining
Center of the Pontificia Universidad Católica de Chile and Gecamin.

Mining Engineering Department, Universidad de Chile

The Mining Engineering Department was founded on 7 December


1853. Since the graduation of the first four engineers in 1856, the
Department has trained over 1,300 professionals for one of the most important Chilean industries. The leadership of Universidad de Chile graduates in such projects as block caving, heap leaching and the El Teniente converter has brought the university worldwide recognition. The Department currently has 11 full-time faculty and conducts research in areas such as mining technology, mineral resources evaluation, hydro-electrometallurgy and environment, pyrometallurgy, mineral processing and mineral economics. The Department offers Master programmes in Mining Engineering and in Extractive Metallurgy, and will start a Ph.D. programme in Mining Engineering in 2010.

Mining Center, Pontif icia Universidad Católica de Chile

The Mining Engineering Programme was created in 1994 in


response to the growing demand for highly skilled engineers
capable of combining conceptual design and mine and/or
processing plant management. Originally acting as a coordinating body and sharing academic and research resources with the other Departments and Engineering Schools, the Center now offers fully fledged degree programmes in Mining Engineering, both Civil and Industrial. In addition, the Center offers a range of postgraduate
programmes including the Ph.D. programme in Mining
Engineering and in Mineral Engineering. The main focus of the
Center’s research is mineral economics, with additional strong
interests in mine management and control, mining methods
and equipment. The Center has six full-time faculty supported
by several part-time professors and a number of technical and
other staff.

Gecamin

Gecamin is a private, Chilean company created in 1998 that


annually organises international technical events with the aim of
informing and inspiring mining industry professionals, fostering
the exchange of information, and sharing best practices and new
technologies applied in mining. The goal of each conference is to
bring together engineers, scientists, researchers, managers and


operators to enable a focused discussion on the latest developments


and innovations with the ultimate purpose of establishing
interdisciplinary networks of research and knowledge exchange.
Through these conferences and training programmes Gecamin
seeks to help the industry to openly address its most pressing
concerns and find more sustainable solutions.
Gecamin organises seminars in partnership with institutions of
strong technical excellence in mining such as the Universidad de
Chile, the Pontificia Universidad Católica de Chile, the Universidad
de Concepción, Chile, the Universidad Técnica Federico Santa
María, Chile, The University of Western Australia and The
University of Queensland, Australia, among others.
In 12 years of operation, more than 12,000 professionals have
attended our events and have been trained in areas of paramount
importance to the mining industry. These areas include the
following: Geology, Mining Unit Operations, Mine Planning,
Mineral Processing, Hydrometallurgy, Paste and Thickened
Tailings, Mine and Plant Maintenance, Automation and Control,
Water and Energy Management in Mining, Mine Closure,
Environmental and Social Impacts Assessment.
Each event organised by Gecamin features a great diversity of
technical papers presenting case studies, applications as well as
theoretical research and scientific findings. Every conference is
documented by the proceedings containing carefully selected peer-
reviewed papers. Prominent industry experts and academics bring
their knowledge and experience to our events, ensuring high standards of the proceedings and the technical programmes.
The next few years are expected to bring a much more positive
economic outlook, allowing mining companies worldwide to
activate or reconsider alternatives for greenfield and brownfield
projects. Sustainability, water in mining and efficient energy
use will remain the key focus areas throughout the major part
of the industry. For this reason, Gecamin believes it is of great
importance to share experiences and discuss alternatives and
opportunities for improving operational processes and best
practices with colleagues from around the world.

Learn more about our events by visiting www.gecamin.cl


We are ISO 9001:2008 certified.
minin 2010 committees

Organising Committee

chairman
Diego Hernández, President and Chief Executive Officer, Codelco Chile

executive director
Carlos Barahona, General Manager, Gecamin, Chile

technical coordinator
Romke Kuyvenhoven, Gecamin, Chile

event coordinator
Isis Galeno, Gecamin, Chile

members
Fidel Báez, Corporate Manager, Underground Mine Project, Codelco Chile
Xavier Emery, Academic, Department of Mining Engineering, Universidad de Chile
Omar Hernández, Deputy Head of Mining, Environment and Infrastructure, InnovaChile - CORFO
Rodrigo Pascual, Academic, Mining Center, Pontificia Universidad Católica de Chile
Javier Ruiz del Solar, Director, Advanced Mining Technology Centre, Universidad de Chile

Advisory Committee
Carlos Ávila, Planning Vice President, Minera Escondida Ltda., Chile
Pedro Carrasco, Technical Director, Codelco, Chile
Aldo Casali, Director, Mining Engineering Department, Universidad de Chile
Julio Díaz, Mine Manager, Minera Esperanza, Chile
Roussos Dimitrakopoulos, Canada Research Chair in Sustainable Mineral Resource Development and Optimisation under Uncertainty, McGill University, Canada
Peter Knights, BMA Chair and Head of Division of Mining Engineering, The University of Queensland, Australia
Mauricio Larraín, Mining Resource and Development Manager, El Teniente Division, Codelco Chile
Ricardo Maturana, Mine Manager, Compañía Minera Maricunga, Chile
Sergio Peñailillo, Mine Manager, Barrick - Pascua Lama, Chile
Edson Ribeiro, Department Director for Strategic Planning and Project Evaluation, Vale S.A., Brazil
Hernán Sanhueza, HSE Manager, Compañía Minera Cerro Colorado Ltd., BHP Billiton, Chile
Malcolm Scoble, Chair in Mining and Sustainability, University of British Columbia, Canada
Malcolm Thurston, Senior Vice-President Mineral Resource Management, De Beers Canada Inc., Canada

Technical Committee
Sabry Sabour, Research Associate, COSMO Stochastic Mine Planning Laboratory, McGill University, Canada
Mohammad Waqar Ali Asad, Research Associate, COSMO Stochastic Mine Planning Laboratory, McGill University, Canada
Ricardo Arias, Engineering Manager, Asesorías Profesionales D.G.A.Min. Ltda., Chile
Victor Babarovich, International Events Technical Coordinator, Gecamin, Chile
Brian Baird, Manager Technical Services - Mine Planning, BHP Billiton, USA
Jörg Benndorf, Senior Engineer Special Projects, Central German Lignite Operations MIBRAG, Germany
Alexandre Boucher, Assistant Professor, Stanford University, USA
Raúl Castro, Academic, Mining Engineering Department, Universidad de Chile
Xavier Emery, Academic, Mining Engineering Department, Universidad de Chile
Claudio Lopes Pinto, Associate Professor, Departamento de Engenharia de Minas, UFMG, Brazil
Ian Lowndes, Associate Professor and Reader in Mine Environmental Engineering, University of Nottingham, United Kingdom
Antoni Magri, Consultant, Magri Consultores, Chile
Eduardo Magri, Adjunct Professor, Mining Engineering Department, Universidad de Chile
Hani Mitri, Professor of Mining Engineering, McGill University, Canada
Christian Moscoso, Director, Mineral Economics Program, Mining Engineering Department, Universidad de Chile
Hernani Mota de Lima, Professor, Universidade Federal de Ouro Preto, Brazil
Alejandro Moyano, Director, Underground Mining Program, Instituto de Innovación en Minería y Metalurgia S.A. (IM2), Codelco, Chile
Snehamoy Chatterjee, Research Associate, COSMO Stochastic Mine Planning Laboratory, McGill University, Canada
Carlos Espinoza, Mine Planning Superintendent, Spence - BHP Billiton, Chile
Mircea Georgescu, Vice-Rector in charge of Education and International Relations, University of Petrosani, Romania
Gustavo Guiñez, Senior Analyst of Business Processes, Information Systems Management, Anglo American, Chile
Ronald Guzmán, Adjunct Professor, Mining Center, Pontificia Universidad Católica de Chile
Warren Hitchcock, Senior Principal Geotech Engineer, BHP Billiton, USA
Vladislav Kecojevic, Associate Professor, Centennial Career Development Professorship in Mining Engineering, The Pennsylvania State University, USA
John Kemeny, Professor, Department of Mining and Geological Engineering, University of Arizona, USA
Brett King, Managing Director, Strategy Optimization Systems, Australia
Julián Ortiz, Academic, Mining Engineering Department, Universidad de Chile
Sergio Pichott, Senior Advisor in Geometallurgy, División Los Bronces, Anglo American, Chile
Enrique Rubio, Academic, Mining Engineering Department, Universidad de Chile
Peter Stone, Manager Optimization R&D, BHP Billiton, Australia
Guillermo Turner-Saad, Global Vice-President Metallurgy and Mineralogy, SGS Minerals Services, Canada
Sebastiaan Van Dorp, Lead Process Engineer, SKM Consulting, Australia
Ernesto Villaescusa, Professor, Industry Chair in Mining Rock Mechanics, Western Australian School of Mines, Curtin University of Technology, Australia
Jeff Whittle, President, Whittle Consulting, Australia

Editorial Committee
Raúl Castro, Universidad de Chile
Xavier Emery, Universidad de Chile
Romke Kuyvenhoven, Gecamin, Chile

Book Designers
Paula Barahona and Pablo Baratta, Gecamin, Chile

Media Designers
Alicia Bonilla and Magdalena Serrano, Gecamin, Chile

foreword

Diego Hernández
Chairman
MININ 2010
4th International Conference on Mining Innovation

The 4th International Conference on Mining Innovation, MININ 2010, finds us in the midst of a deep and interesting analysis of the industry, as we are at a historically unprecedented moment. During the coming decades, and only for the case of copper, consumption is estimated to exceed the accumulated consumption in the whole history of humanity so far.
The considerable challenge of expanding production capacity to
satisfy the estimated demand for minerals is critically demanding
a more active technological development that will allow the
industry to respond to market needs in a sustainable manner and
in a context of greater social and environmental considerations.
In effect, it is envisioned that the complexity of challenges in
the mining sector will increase in such a way that technological
capacities in management and innovation will have to be
significantly higher. Only through this, will we be able to
approach projects in a competitive manner.
For example, in Chile, a country accountable for a third of world copper production, a considerable 35% of its mines have been in operation for over half a century and 70% of its productive capacity is already mature. Among other aspects, this means decreasing ore grades, longer hauling distances, lower availability of secondary enrichment and leaching material, and a greater focus on primary minerals of lower grade. In summary, increasing costs and declining competitiveness.
In this context, the mining industry requires, among other things: to improve the predictability of mining plan models; to face the growing complexities and uncertainties of ore bodies; to improve the maintenance of fleets and equipment; to optimise the training of maintenance personnel and operators; to improve the knowledge of rock fragmentation so as to move towards more intelligent blasting; to develop new paradigms in mineral transportation that lead us to think of better conveyor and truck fleet configurations; and, why not, to look to other industries for the transfer of practices and knowledge.
Thus, we live in a time characterised by important innovation and growth opportunities. To make maximum use of this potential we must also keep in mind how innovation processes are currently organised in mining. For several decades, these processes have been developed in complex networks articulated by large mining companies that have organised innovation and knowledge systems at local and international level, which potentiate and complement each other. Similarly, part of the innovation processes that in the past were developed within mining companies are today carried out by technology- and knowledge-intensive service suppliers, which interact with the mining companies and play a key role in the competitiveness of the industry.
We know of many examples in the history of mining of
successful technological development processes that have been
articulated around the needs of the industry. Today, once again
and with a real sense of urgency, we must trigger a new
innovation wave that will allow us to increase mineral production
and in this way contribute to the wellbeing of millions of people,
whose countries are well into the process of development.
acknowledgements

Organising Committee
MININ 2010
4th International Conference on Mining Innovation

The organisation of the MININ 2010 Conference and these Conference Proceedings is the combined effort of many individuals who have put in long hours of hard work, dedication and talent. We would like to extend profound thanks to all those involved in the Conference organisation for their contributions of time, advice and expertise to this project. We are particularly grateful to:

• The authors for their invaluable contributions, monumental


efforts of meeting deadlines, and willingness to share their
knowledge and experience.
• The technical reviewers for their willingness to invest personal time in correcting articles, a critical process to ensure the quality of this publication.
• The following sponsors (as of 27 May 2010, in alphabetical order)
for their generous support:

Gold Sponsor
BHP Billiton

Silver Sponsors
ABB, Datamine, Geovariances, Hewlett Packard Chile,
Leapfrog

Social Sponsor
BHP Billiton and Geoinnova Consultores

Official Material
Golder Associates and Maptek

Student Sponsor
BHP Billiton

Institutional Sponsors
Consejo Minero, Chile; COSMO Stochastic Mine Planning Laboratory, McGill University, Canada; InnovaChile, CORFO; MIRARCO, Canada; Sociedad Nacional de Minería (SONAMI), Chile; The Brazilian Mining Association (IBRAM); The Sustainable Minerals Institute (SMI), and The University of Queensland, Australia.

• The following media partners for their assistance in promoting


the conference and bringing the updated information to you in
a timely manner:

Official National Media Sponsor


Minería Chilena

Media Partners
El Inversor Energético & Minero, Maney Publishing, Minergía and Panorama Minero


• The Gecamin team for their hard work, professionalism and


continuous commitment to making this conference a success.
• Technical and Advisory Committees for their helpful advice and
assistance in promoting the conference.

And last but not least, we would like to thank you, readers and
participants, whose interest and enthusiasm made this event so
versatile and the whole experience so rewarding and enriching.
Economic Ranking of Copper Mining
Projects at Exploration and Early
Engineering Stages

abstract
Rodrigo Riquelme
GeoInnova Consultores, Chile

Roberto Fréraut
División Codelco Norte, Chile

Mining companies usually have several projects at different stages, from exploration targets to mines in production. Once a project has completed the advanced exploration stage, it proceeds to an engineering stage (conceptual, prefeasibility and feasibility) with suitable studies to support its economic potential, and is added to the business development plan. Based on those studies, it is possible to rank the projects and make investment decisions about them. In contrast, for projects at exploration or early stages, the available information and knowledge are weak, there is more uncertainty about economic feasibility, and prioritising them is more complex.

For a mining project to successfully become a mining operation, knowledge of several factors is required, such as mineral resources, geological and geotechnical issues, and energy and water supply. This work presents a simple methodology for the economic prioritisation of mining projects at early stages, based on public information on copper deposits in Chile. Each deposit is characterised by its mineral resources, considering its tonnage and grade, for which the marginal distributions are modelled and then combined to define a bivariate density function of grades and tonnages. From the bivariate density function, it is possible to obtain the probability of occurrence of a deposit with a certain tonnage and grade. This allows comparing and contextualising new deposits, in terms of size, with respect to current deposits, and could be used to guide the investment strategy by supporting the decision-making process for the projects involved. The methodology can be extended to consider other factors beyond the resources: the depth of emplacement, coproducts, mineralisation style and distance between deposits, among others.

In Chile, there are more than 586 million known tonnes of copper; Andina and Rio Blanco at Los Bronces are currently the largest deposit and district in the world, respectively. The largest amount of copper is located in the Middle Miocene – Early Pliocene metallogenic belt in the central Andes mountain range.

introduction
Mining companies usually have several projects at different stages, from exploration targets to mines in production. Once a project has completed the advanced exploration stage, it passes to an engineering stage (scoping, prefeasibility and feasibility) with suitable studies to support its economic potential, and is added to the business development plan. Based on those studies, it is possible to rank the projects and make investment decisions about them.

In contrast, for projects at exploration or early engineering stages there is less available information (samples, studies, knowledge, etc.), which increases the uncertainty related to their economic feasibility; it is therefore complex, at this point, to establish a profit/risk ranking of the projects.

For a mining project to successfully pass to a mine operation, knowledge of several factors is required, such as: mineral resources, contaminants, mineralisation style, ore body geometry, accessibility, elevation, geotechnical issues, geometallurgy, metallurgy, process design, tailings disposal, mining equipment and process supplies, operational services, energy and water supplies, human capital, manpower, environmental conditions (e.g. glacier locations), social and governmental factors, political conditions, etc.

This work proposes a simple approach to ranking prospects and early-stage projects, based on the tonnage and copper grade of currently known deposits at different stages, from prospect to operating mine. The mineral resources information for the main deposits in Chile was compiled from public sources of the last two years, such as annual reports, financial statements, NI 43-101 reports, CEO presentations, geological congress papers, public communications and web data.

No distinction has been made between 'discovered' or 'known' deposits, i.e. those fully delimited by exploration, and orebodies called 'open', where it is possible to find more resources in certain directions [5].

methodology
Collected copper mineral resources data
A list of the main copper mineral resources has been compiled with information on deposits with different characteristics, such as stage (from prospect to operating mine), size, shape, region, age, mineralisation type and geological context. Although the list does not include every copper deposit in Chile, it considers the most important copper resources today. A total of 61 deposits have been collected, of which 46 are porphyry copper (Table 1). The mineral resources data include all classifications: measured, indicated and inferred. For some prospects, the resource information has been reported as ranges of tonnages and copper grades in the official reports; in these cases the averages of each range were considered. The copper mineral resources of División Salvador (Inca, Damiana and San Antonio) and Cluster Toki (Toki, Genoveva, Quetena and Opache) are publicly reported only as aggregate figures, without detail by individual deposit.

Table 1 Copper mineral resources by deposit in Chile

Deposit name | Mton ore | Cu grade (%) | Metal Cu content (Mton) | Deposit classification | Geological belt | Stage | Source
Andina | 16908 | 0.63 | 106.4 | porphyry Cu | Middle Miocene – Early Pliocene | mine/feasibility | Annual Report 2008
Antakena (Madrugador y Elenita) | 38 | 0.83 | 0.3 | strata-bound | Jurassic – Early Cretaceous | prospect | NI 43-101
Antucoya | 590 | 0.38 | 2.2 | strata-bound | Jurassic – Early Cretaceous | feasibility | Annual Report 2008 range
Aurora | 7 | 1.24 | 0.1 | strata-bound | Paleocene | prospect | Annual Report 2008 range
Brujulina | 65 | 0.59 | 0.4 | exotic | Middle Miocene – Early Pliocene | prospect | Annual Report 2008 range
Candelaria | 391 | 0.55 | 2.2 | IOCG | Jurassic – Early Cretaceous | mine | Annual Report 2008
Caracoles | 1100 | 0.50 | 5.5 | porphyry Cu-Au-Mo | Paleocene | prospect | Annual Report 2008
Caracoles | 900 | 0.55 | 4.9 | porphyry Cu | Paleocene | prospect | Annual Report 2008 range
Caserones | 1350 | 0.33 | 4.5 | porphyry Cu | Early – Middle Miocene | prefeasibility | EIA report
Casualidad Virgo | 400 | 0.55 | 2.2 | IOCG | Late Eocene – Early Oligocene | prospect | Annual Report 2008
Centinela | 80 | 0.70 | 0.6 | porphyry Cu | Paleocene | prospect | Annual Report 2008 range
Cerro Casale | 1285 | 0.35 | 4.5 | porphyry Cu-Au | Early – Middle Miocene | feasibility | Camus, F., 2005
Cerro Colorado | 372 | 0.63 | 2.3 | porphyry Cu | Paleocene | mine | Annual Report 2008
Chimborazo | 236 | 0.6 | 1.4 | porphyry Cu | Paleocene | prospect | Long, K.R., 1995
Chuquicamata | 6535 | 0.61 | 39.9 | porphyry Cu | Late Eocene – Early Oligocene | mine/feasibility | Slides CEO
Cluster Toki | 2648 | 0.49 | 12.9 | porphyry Cu | Late Eocene – Early Oligocene | scoping study | Slides CEO
Conchi | 550 | 0.61 | 3.4 | porphyry Cu | Middle Miocene – Early Pliocene | prospect | Annual Report 2008 range
El Abra | 1120 | 0.45 | 5.0 | porphyry Cu | Late Eocene – Early Oligocene | mine/prefeasibility | Annual Report 2008
El Morro | 487 | 0.56 | 2.7 | porphyry Cu-Au | Late Eocene – Early Oligocene | prospect | Slides CEO
El Soldado | 71 | 0.56 | 0.4 | strata-bound | Late Eocene – Early Oligocene | mine | Annual Report 2008
El Telegrafo | 1600 | 0.44 | 7.0 | porphyry Cu-Au-Mo | Paleocene | prospect | Annual Report 2008
El Teniente | 16898 | 0.55 | 93.6 | porphyry Cu | Middle Miocene – Early Pliocene | mine/feasibility | Annual Report 2008
El Tesoro | 287 | 0.57 | 1.6 | exotic | Paleocene | mine | Annual Report 2008
Escondida | 8913 | 0.63 | 56.2 | porphyry Cu-Mo | Late Eocene – Early Oligocene | mine/feasibility | Annual Report 2008
Esperanza | 1204 | 0.45 | 5.4 | porphyry Cu-Au-Mo | Paleocene | construction | Annual Report 2008
Franke | 73 | 0.70 | 0.5 | IOCG | Jurassic – Early Cretaceous | start up | NI 43-101
Hypogene Project Andacollo | 464 | 0.36 | 1.7 | porphyry Cu-Au | Middle Cretaceous | construction | Minería Chilena
Inca de Oro | 345 | 0.47 | 1.6 | porphyry Cu-Au | Paleocene | prefeasibility | Annual Report 2008
Llano Paleocanal | 115 | 0.46 | 0.5 | exotic | Paleocene | prospect | Annual Report 2008 range
Lomas Bayas | 287 | 0.27 | 0.8 | porphyry Cu | Paleocene | mine | web site
Los Bronces | 2472 | 0.39 | 9.6 | porphyry Cu-Mo | Middle Miocene – Early Pliocene | mine/feasibility | Annual Report 2008
Los Pelambres | 4860 | 0.56 | 27.2 | porphyry Cu-Mo | Middle Miocene – Early Pliocene | mine | Annual Report 2008
Los Sulfatos | 1200 | 1.46 | 17.5 | porphyry Cu-Mo | Middle Miocene – Early Pliocene | prospect | web site
Mantos Blancos | 138 | 0.66 | 0.9 | strata-bound | Jurassic – Early Cretaceous | mine | Annual Report 2008
Mantoverde | 155 | 0.51 | 0.8 | IOCG | Jurassic – Early Cretaceous | mine | Annual Report 2008
Michilla | 62 | 1.46 | 0.9 | strata-bound | Jurassic – Early Cretaceous | mine | Annual Report 2008
Mina Sur | 23 | 0.49 | 0.1 | exotic | Late Eocene – Early Oligocene | mine | Slides CEO
Minera Gaby | 1195 | 0.37 | 4.4 | porphyry Cu | Late Eocene – Early Oligocene | mine | Annual Report 2008
Mirador | 28 | 0.72 | 0.2 | porphyry Cu | Paleocene | prospect | Annual Report 2008 range
Miranda | 600 | 0.45 | 2.7 | porphyry Cu | Late Eocene – Early Oligocene | prospect | Geological Congress
MM Central | 1310 | 0.96 | 12.6 | porphyry Cu-Ag | Late Eocene – Early Oligocene | feasibility/construction | Geological Congress
MM Sur | 47 | 1.48 | 0.7 | porphyry Cu-Ag | Late Eocene – Early Oligocene | prospect | Slides CEO
MMN | 214 | 0.87 | 1.9 | porphyry Cu-Ag | Late Eocene – Early Oligocene | scoping study | Slides CEO
Mocha | 250 | 0.5 | 1.3 | porphyry Cu | Late Eocene – Early Oligocene | prospect | web site
Polo Sur | 375 | 0.46 | 1.7 | porphyry Cu | Paleocene | prospect | Annual Report 2008 range
Putilla Galenosa | 540 | 0.25 | 1.4 | porphyry Cu | Jurassic – Early Cretaceous | prospect | Annual Report 2005
Quebrada Blanca | 1030 | 0.50 | 5.2 | porphyry Cu | Late Eocene – Early Oligocene | mine/prospect | NI 43-101
Relincho | 521 | 0.45 | 2.3 | porphyry Cu | Paleocene | prospect | NI 43-101
Rencoret | 20 | 1.11 | 0.2 | strata-bound | Paleocene | prospect | Annual Report 2008 range
Rosario | 2664 | 0.89 | 23.7 | porphyry Cu-Mo | Late Eocene – Early Oligocene | mine | Annual Report 2008
Rosario Oeste | 746 | 1.06 | 7.9 | porphyry Cu-Mo | Late Eocene – Early Oligocene | scoping study | Annual Report 2008
RT | 7039 | 0.37 | 25.9 | porphyry Cu | Late Eocene – Early Oligocene | mine/prefeasibility | Slides CEO
Salvador | 2526 | 0.45 | 11.3 | porphyry Cu | Late Eocene – Early Oligocene | mine/prefeasibility | Annual Report 2008
San Enrique Monolito | 3000 | 0.70 | 21.0 | porphyry Cu-Mo | Middle Miocene – Early Pliocene | prospect | web site
Sierra Gorda | 2080 | 0.416 | 8.7 | porphyry Cu-Mo | Paleocene | prospect | NI 43-101
Spence | 371 | 0.94 | 3.5 | porphyry Cu | Paleocene | mine | Annual Report 2008
Telegrafo Norte | 400 | 0.41 | 1.6 | porphyry Cu | Paleocene | prospect | Annual Report 2008 range
Telegrafo Sur | 900 | 0.46 | 4.1 | porphyry Cu | Paleocene | prospect | Annual Report 2008 range
Ujina | 1762 | 0.65 | 11.5 | porphyry Cu-Mo | Late Eocene – Early Oligocene | mine | Annual Report 2008
Vizcachitas | 1087 | 0.363 | 3.9 | porphyry Cu-Mo | Middle Miocene – Early Pliocene | scoping study | NI 43-101
Zaldivar | 234 | 0.46 | 1.1 | porphyry Cu | Late Eocene – Early Oligocene | mine | Annual Report 2008

Copper metal content approach

The contained copper metal in a mineral deposit is frequently used to rank orebodies or to compare them. However, this approach has a drawback: it does not convey the economic potential of the deposit, i.e. it does not describe the quality of the mineral concentration. It is therefore necessary to consider both the tonnage and the copper grade of each deposit in a first comparison, because the same contained copper metal does not guarantee the same economic feasibility. As an example, Figure 1 presents a log-probability plot of contained copper metal and shows two projects of the Codelco Norte district plotting at nearly the same position. However, the two deposits, MM and Toki, are clearly different in economic terms: the MM resources represent the uppermost portion of a porphyry deposit, with apical high-grade breccias, while the Toki resources represent the wide, pervasive, low-grade disseminated ore characteristic of porphyries. Resources and Net Present Value (NPV) for MM and Toki are compared in Table 2.

Figure 1 Metal copper content distribution in Chilean deposits.


Table 2 Copper mineral resources and NPV of deposits in Chile

Ore deposit | Mton ore | Cu % | Mton fine Cu | NPV (MUS$)
Toki | 2648 | 0.49 | 13 | <0 – 200
MM | 1310 | 0.95 | 12.4 | 600 – 900
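
The contained metal in Table 2 is simply the product of ore tonnage and grade, which is why two deposits of very different quality can carry nearly the same metal inventory. As a quick check of the figures above (the rounding is ours):

Metal Cu (Mton) = Ore (Mton) × Cu grade (%) / 100
Toki: 2648 × 0.49 / 100 ≈ 13.0 Mton of fine copper
MM: 1310 × 0.95 / 100 ≈ 12.4 Mton of fine copper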

The spatial distribution of copper grade depends on whether the mineralisation is disseminated through the emplaced stocks or concentrated in high-grade mineralised bodies, for example where the metal-bearing mineralisation passes through veins or breccias.

analysis and results


Tonnages and grades – ranking of prospects
The deposits presented in Table 1 are used to plot tonnage against average copper grade (Figure 2). The lack of relation between the two variables suggests that the ore tonnages and the average grades of the deposits are independent.
Chuquicamata and Radomiro Tomic (RT) are represented both separately and in conjunction because, from the geological point of view, they are similar, but they have different development projects for the future (underground mine versus open pit, respectively).
Andina, El Teniente, Chuquicamata-RT and Escondida are by far the largest copper resource deposits. Rio Blanco at Los Bronces (Andina, Los Bronces, Los Sulfatos and San Enrique Monolito), as a mining district, currently represents the greatest concentration of copper in the world; however, not all of the resources can become ore reserves, due to several factors.

Figure 2 Tonnage versus average copper grade of ore deposit in Chile.



Modelling the distributions

Two approaches were developed: an empirical one, in which the observed frequencies of tonnages and average copper grades are assumed to represent their probabilities, and a model-based one, in which a probability density function is fitted to the data. An exponential distribution was used for the tonnages and a lognormal distribution for the copper grades, as shown in Figure 3.

Figure 3 Histograms and fitted distributions for tonnage (right) and copper grades (left).

For the tonnage, the probability of a deposit being larger than a tonnage x is given by the expression:

P(X ≥ x) = exp(−x / θ)     (1)

where X stands for a random variable related to the tonnage, expressed in million tonnes (Mton) of ore, and θ is the mean of the fitted exponential distribution.

For the grades, the probability of a deposit having a grade greater than y is given by the expression:

P(Y ≥ y) = 1 − Φ((ln y − μ) / σ)     (2)

where Y stands for a random variable related to the grades, expressed in percentage, Φ is the standard normal cumulative distribution function, and μ and σ are the parameters of the fitted lognormal distribution.

Using both expressions and assuming independence between X and Y, the probability of a deposit having a tonnage larger than x and a grade greater than y is given by the combined probability function:

P(X ≥ x, Y ≥ y) = P(X ≥ x) · P(Y ≥ y)     (3)

Based on the combined probability, the copper mineral resources of the deposits were sorted in descending order (Table 3). The empirical probability has also been calculated. The resulting order of the deposits makes sense; however, it evidences some limitations when comparing massive and concentrated deposits. For example, MM Sur, a deposit characterised by high-grade breccia bodies separated by weakly mineralised host rock, with the copper porphyry at depth, shows a high ranking. The ranking also shows a group of deposits, with tonnages greater than 2,000 million tonnes of ore and copper grades greater than 0.5%, that have a combined probability lower than 5%.
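
As an illustration of the procedure (a sketch under our own assumptions, not the authors' code), the following Python fragment fits the two marginal distributions with scipy and sorts a small subset of the Table 1 deposits by the combined exceedance probability of Equation (3); the library calls, parameterisation and subset of deposits are choices made only for this example.

import numpy as np
from scipy import stats

# (name, tonnage in Mton of ore, average Cu grade in %), a few rows of Table 1
deposits = [
    ("Andina", 16908, 0.63),
    ("El Teniente", 16898, 0.55),
    ("Los Sulfatos", 1200, 1.46),
    ("Cluster Toki", 2648, 0.49),
    ("Mina Sur", 23, 0.49),
]
tonnage = np.array([d[1] for d in deposits], dtype=float)
grade = np.array([d[2] for d in deposits], dtype=float)

# Fit the marginal distributions (location fixed at zero).
_, tonnage_mean = stats.expon.fit(tonnage, floc=0)      # exponential for tonnage
g_shape, _, g_scale = stats.lognorm.fit(grade, floc=0)  # lognormal for grade

def combined_probability(t, g):
    # Equation (3): P(X >= t) * P(Y >= g), assuming independence.
    return stats.expon.sf(t, scale=tonnage_mean) * stats.lognorm.sf(g, g_shape, scale=g_scale)

# Rank: the smaller (rarer) the combined probability, the higher the ranking.
for rank, (name, t, g) in enumerate(sorted(deposits, key=lambda d: combined_probability(d[1], d[2])), 1):
    print(f"{rank}  {name:12s}  combined probability = {combined_probability(t, g):.4f}")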


Table 3 Ranking of Copper mineral resources of deposits in Chile

Name | Mton ore | Cu grade | Metal Cu content | Empirical probability: Tonnage (a), Grade (b), Combined (a)×(b) | Fitted probability model: Tonnage (c), Grade (d), Combined (c)×(d) | Ranking

Andina 16908 0.63 106.4 1.61% 32.26% 0.5% 0.0% 39.2% 0.0% 1
El Teniente 16898 0.55 93.6 3.23% 46.77% 1.5% 0.0% 52.0% 0.0% 2
Escondida 8913 0.63 56.2 4.84% 29.03% 1.4% 0.5% 39.0% 0.2% 3
Los Sulfatos 1200 1.46 17.5 30.65% 4.84% 1.5% 49.2% 0.8% 0.4% 4
MM Sur 47 1.48 0.7 90.32% 1.61% 1.5% 97.3% 0.7% 0.7% 5
Michilla 62 1.46 0.9 88.71% 3.23% 2.9% 96.4% 0.8% 0.7% 6
Chuquicamata 6535 0.61 39.9 8.06% 33.87% 2.7% 2.1% 42.2% 0.9% 7
RT 7039 0.37 25.9 6.45% 87.10% 5.6% 1.6% 86.4% 1.3% 8
Aurora 7 1.24 0.1 98.39% 6.45% 6.3% 99.6% 2.3% 2.3% 9
Rosario 2664 0.89 23.7 12.90% 14.52% 1.9% 20.7% 12.3% 2.5% 10
Los Pelambres 4860 0.56 27.2 9.68% 43.55% 4.2% 5.6% 50.9% 2.9% 11
Rosario Oeste 746 1.06 7.9 43.55% 9.68% 4.2% 64.3% 5.4% 3.5% 12
MM Central 1310 0.96 12.6 25.81% 11.29% 2.9% 46.1% 8.8% 4.0% 13
Rencoret 20 1.11 0.2 96.77% 8.06% 7.8% 98.8% 4.2% 4.2% 14
San Enrique Monolito 3000 0.70 21.0 11.29% 22.58% 2.5% 17.0% 29.2% 5.0% 15
Spence 371 0.94 3.5 64.52% 12.90% 8.3% 80.3% 9.6% 7.7% 16
MMN 214 0.87 1.9 75.81% 16.13% 12.2% 88.1% 13.5% 11.9% 17
Ujina 1762 0.65 11.5 20.97% 27.42% 5.7% 35.3% 36.0% 12.7% 18
Cluster Toki 2648 0.49 12.9 14.52% 61.29% 8.9% 20.9% 64.7% 13.5% 19
Antakena (Madrugador y Elenita) 38 0.83 0.3 91.94% 17.74% 16.3% 97.8% 16.1% 15.7% 20
Salvador 2526 0.45 11.3 16.13% 77.42% 12.5% 22.5% 72.2% 16.2% 21
Los Bronces 2472 0.39 9.6 17.74% 83.87% 14.9% 23.2% 82.8% 19.2% 22
Sierra Gorda 2080 0.416 8.7 19.35% 80.65% 15.6% 29.2% 78.3% 22.9% 23
Mirador 28 0.72 0.2 93.55% 19.35% 18.1% 98.4% 27.3% 26.9% 24
Franke 73 0.70 0.5 83.87% 21.97% 17.6% 95.8% 29.1% 27.8% 25
Centinela 80 0.70 0.6 82.26% 24.19% 19.9% 95.4% 29.8% 28.4% 26
El Telegrafo 1600 0.44 7.0 22.58% 79.03% 17.8% 38.8% 73.9% 28.7% 27
Conchi 550 0.61 3.4 48.39% 35.48% 17.2% 72.2% 42.2% 30.5% 28
Cerro Colorado 372 0.63 2.3 62.90% 30.65% 19.3% 80.3% 39.0% 31.3% 29
Caracoles 900 0.55 4.9 41.94% 58.06% 24.3% 58.7% 53.7% 31.5% 30
Mantos blancos 138 0.66 0.9 79.03% 25.81% 20.4% 92.2% 34.5% 31.8% 31
Caracoles 1100 0.50 5.5 35.48% 51.61% 18.3% 52.2% 62.3% 32.5% 32
Quebrada Blanca 1030 0.50 5.2 38.71% 54.84% 21.2% 54.4% 62.3% 33.9% 33
Esperanza 1204 0.45 5.4 29.03% 74.19% 21.5% 49.1% 72.0% 35.3% 34
El Abra 1120 0.45 5.0 33.87% 75.81% 25.7% 51.6% 72.0% 37.1% 35
El Morro 487 0.56 2.7 53.23% 45.16% 24.0% 75.0% 50.9% 38.2% 36
Chimborazo 236 0.6 1.4 72.58% 37.10% 26.9% 87.0% 43.9% 38.2% 37
Caserones 1350 0.33 4.5 24.19% 95.16% 23.0% 45.0% 91.5% 41.2% 38
El Tesoro 287 0.57 1.6 69.35% 40.32% 28.0% 84.4% 49.1% 41.4% 39
Cerro Casale 1285 0.35 4.5 27.42% 93.55% 25.7% 46.8% 89.0% 41.6% 40
Casualidad Virgo 400 0.55 2.2 58.06% 48.39% 28.1% 78.9% 52.7% 41.6% 41
Telegrafo sur 900 0.46 4.1 40.32% 67.74% 27.3% 58.7% 71.0% 41.7% 42
Candelaria 391 0.55 2.2 59.68% 50.00% 29.8% 79.4% 52.7% 41.9% 43
Minera Gaby 1195 0.37 4.4 32.26% 88.71% 28.6% 49.3% 86.8% 42.8% 44
Brujulina 65 0.59 0.4 87.10% 38.71% 33.7% 96.2% 45.6% 43.9% 45
Vizcachitas 1087 0.363 3.9 37.10% 90.32% 33.5% 52.6% 87.1% 45.8% 46
El Soldado 71 0.56 0.4 85.48% 41.94% 35.8% 95.9% 50.4% 48.3% 47
Miranda 600 0.45 2.7 45.16% 72.58% 32.8% 70.1% 72.0% 50.5% 48
Relincho 521 0.45 2.3 51.61% 70.97% 36.6% 73.5% 72.0% 52.9% 49
Mocha 250 0.5 1.3 70.97% 56.5% 40.1% 86.3% 62.3% 53.7% 50
Mantoverde 155 0.51 0.8 77.42% 53.23% 41.2% 91.3% 60.3% 55.1% 51
Inca de Oro 345 0.47 1.6 66.13% 62.90% 41.6% 81.5% 68.1% 55.5% 52

Polo sur 375 0.46 1.7 61.29% 69.35% 42.5% 80.1% 71.0% 56.9% 53
Antucoya 590 0.38 2.2 46.77% 85.48% 40.0% 70.5% 84.5% 59.6% 54
Zaldivar 234 0.46 1.1 74.19% 64.52% 47.9% 87.1% 70.0% 61.0% 55
Telegrafo norte 400 0.41 1.6 56.45% 82.26% 46.4% 78.9% 79.4% 62.7% 56
Mina Sur 23 0.49 0.1 95.16% 59.68% 56.8% 98.6% 63.6% 62.8% 57
Llano Paleocanal 115 0.46 0.5 80.65% 66.13% 53.3% 93.4% 70.0% 65.4% 58
Hypogene Project Andacollo 464 0.36 1.7 54.84% 91.94% 50.4% 76.0% 87.5% 66.5% 59
Putilla Galenosa 540 0.25 1.4 50.00% 98.39% 49.2% 72.7% 98.1% 71.3% 60
Lomas Bayas 287 0.27 0.8 67.74% 96.77% 65.6% 84.4% 97.1% 81.9% 61

This ranking allows comparing and contextualising a new deposit, in terms of size, in relation to current deposits. The combined probability can also be interpreted as the probability that an undiscovered deposit has a tonnage and a copper grade greater than certain values.
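
For instance, to contextualise a hypothetical new prospect, the fitted curves can be queried directly. The short sketch below is illustrative only: instead of the authors' fitted parameters (which are not reported here), it moment-matches the exponential and lognormal distributions to the "all groups" summary statistics of Table 5 and evaluates Equations (1) to (3) for an assumed 500 Mton, 0.6% Cu prospect.

import math

MEAN_TONNAGE = 1691.0                 # Mton of ore, Table 5, all groups
GRADE_MEAN, GRADE_STD = 0.57, 0.17    # % Cu, Table 5, all groups

# Lognormal parameters derived from the arithmetic mean and standard deviation.
sigma2 = math.log(1.0 + (GRADE_STD / GRADE_MEAN) ** 2)
mu, sigma = math.log(GRADE_MEAN) - 0.5 * sigma2, math.sqrt(sigma2)

def standard_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def combined_probability(tonnage_mton, grade_pct):
    p_tonnage = math.exp(-tonnage_mton / MEAN_TONNAGE)                       # Equation (1)
    p_grade = 1.0 - standard_normal_cdf((math.log(grade_pct) - mu) / sigma)  # Equation (2)
    return p_tonnage * p_grade                                               # Equation (3)

# Hypothetical prospect: 500 Mton of ore at 0.6% Cu.
print(f"P(larger and richer than the prospect) = {combined_probability(500, 0.6):.3f}")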

Metallogenic belts
Chile is located along the subduction zone between the Nazca plate and the South American plate. The collision between the plates has generated magmatic and volcanic belts over time, which may generate mineral deposits depending on several variables.
The metallogenic belts provide a spatial and temporal context for the occurrence of mineral deposits. The principal metallogenic belts [2, 4] in Chile (Figure 4) are:
• Jurassic – Early Cretaceous: this belt includes the copper veins in plutons and the strata-bound copper deposits located mainly in the Coastal Cordillera of Northern Chile; some examples are Michilla and Mantos Blancos. It is related to a succession of porphyritic stocks and dikes emplaced within Jurassic andesitic volcanics.

• Middle Cretaceous: includes copper, iron, apatite, gold, silver and manganese veins, as well as copper skarns, large iron deposits, certain strata-bound deposits and scarce copper porphyries such as Andacollo.

• Paleocene: this belt includes gold, silver and copper veins, as well as copper breccias and porphyries: Sierra Gorda, Lomas Bayas and Spence (Sierra Gorda district).

• Late Eocene – Early Oligocene: the large porphyry copper deposits of Northern Chile, for example the districts of Chuquicamata, Escondida and Collahuasi.

• Early – Middle Miocene: the Maricunga belt, associated with the emplacement of gold deposits and copper-gold porphyries.

• Middle Miocene – Early Pliocene: the porphyry copper deposits of Central Chile, such as Andina, El Teniente, Los Bronces and Los Pelambres.


Figure 4 Metallogenic belts and copper deposits in Chile. Adapted from Sernageomin [4].

Basic statistics by geological belt are shown in Table 4. The belts with the most contained copper metal are the Late Eocene – Early Oligocene and the Middle Miocene – Early Pliocene (Figure 5); it is remarkable that in the latter the average deposit size is more than double that of the Late Eocene – Early Oligocene belt. The deposits of the Middle Miocene – Early Pliocene belt, located in Central Chile, are thus more massive and more concentrated than those of the other belts, making this belt more attractive from the exploration point of view.

Table 4 Copper mineral resources by metallogenic belt

Geological belt | Deposit number | Average tonnage (Mton) | Average Cu (%) | Metal Cu content (Mton)
Jurassic – Early Cretaceous | 8 | 248 | 0.46 | 9.2
Middle Cretaceous | 1 | 464 | 0.36 | 1.7
Paleocene | 20 | 561 | 0.48 | 54.2
Late Eocene – Early Oligocene | 21 | 1896 | 0.58 | 229.4
Early – Middle Miocene | 2 | 1318 | 0.34 | 9.0
Middle Miocene – Early Pliocene | 9 | 5227 | 0.60 | 283.0
Grand total | 61 | 1691 | 0.57 | 586.4

The compiled deposit list totals at least 586 million tonnes of copper in currently known resources in Chile.
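
The belt summary of Table 4 can be reproduced from the deposit list of Table 1 with a simple aggregation. The sketch below is illustrative only (just a few of the 61 rows are typed in, and pandas is assumed to be available); the average grade is computed as total contained metal over total ore, which is consistent with the figures of Table 4.

import pandas as pd

# A few rows of Table 1: deposit, metallogenic belt, Mton of ore, Cu grade (%), contained Cu (Mton).
table1 = pd.DataFrame(
    [("Andina", "Middle Miocene - Early Pliocene", 16908, 0.63, 106.4),
     ("El Teniente", "Middle Miocene - Early Pliocene", 16898, 0.55, 93.6),
     ("Chuquicamata", "Late Eocene - Early Oligocene", 6535, 0.61, 39.9),
     ("Escondida", "Late Eocene - Early Oligocene", 8913, 0.63, 56.2),
     ("Spence", "Paleocene", 371, 0.94, 3.5)],
    columns=["deposit", "belt", "tonnage_mton", "cu_grade_pct", "metal_cu_mton"],
)

summary = table1.groupby("belt").agg(
    deposit_number=("deposit", "count"),
    average_tonnage_mton=("tonnage_mton", "mean"),
    total_tonnage_mton=("tonnage_mton", "sum"),
    metal_cu_mton=("metal_cu_mton", "sum"),
)
# Tonnage-weighted average grade: total contained copper over total ore.
summary["average_cu_pct"] = 100 * summary["metal_cu_mton"] / summary["total_tonnage_mton"]
print(summary[["deposit_number", "average_tonnage_mton", "average_cu_pct", "metal_cu_mton"]])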

Figure 5 Metal copper content separated by Metallogenic Belts – Copper deposit in Chile.

Broad copper deposit classification


The common types of Chilean copper deposits are:

Porphyry copper

These deposits contain mainly copper, associated with molybdenum, gold, silver and other elements. In general they are centred on cylindrical porphyry stocks, at least 100 metres in diameter, that correspond to porphyritic apophyses above granitic pluton domes. The stocks are the result of overprinting intrusive fluids associated with certain types of molten igneous rock called magma. The host rocks are also frequently altered and mineralised. The mineralisation occurs in stockworks, with multidirectional sulphide veins and quartz-sulphide veins accompanied by potassic-silicate alteration. Sericitic alteration, defined by quartz, sericite and pyrite, is commonly superimposed on all or part of the potassic zone and in many cases produces total or partial removal of the metal [2]. An important part of the world's copper production comes from porphyry copper deposits.

Iron oxides copper and gold (IOCG)

The IOCG conceptual deposit model arose with Olympic Dam. In Chile, Candelaria, Manto Verde and San Antonio have been classified as IOCG deposits. The igneous rocks that host the iron deposits may continue their hydrothermal evolution with copper and gold available, adding, in later paragenetic stages, copper and gold sulphides [3]. These ore deposits of the Coastal Cordillera were emplaced in several pulses; Lower Cretaceous plutonic complexes are associated with both the magnetite-apatite and the iron oxide copper-gold deposits [2].

Strata-bound deposits or Chilean manto-type

The strata-bound copper deposits, with associated silver, are hosted in volcanic rocks. These deposits are at least one order of magnitude smaller than the porphyry copper deposits. The copper mineralisation can currently be associated with the emplacement of intrusives in the volcanic sequence. Strata-bound deposits occur in volcanic rocks and intrusive veins of mid to upper Jurassic age in the coastal mountains of the Antofagasta region, and of Early Cretaceous age in central Chile [2].
The porphyry copper deposits have an average of 2,190 Mton of ore at a 0.57% copper grade (Table 5). The other deposit types have tonnages in the range of 6.5 Mton to 440 Mton.

Table 5 Copper mineral resources by deposit classification

Deposit classification | Variable | Average | N | Std. Dev. | Min. | Max. | P.10 | P.90
Porphyry Cu | Mton ore | 2190 | 46 | 3684 | 28 | 16908 | 234 | 6535
Porphyry Cu | Cu grade (%) | 0.57 | 46 | 0.17 | 0.25 | 1.48 | 0.36 | 0.94
Porphyry Cu | Metal Cu (Mton) | 12.5 | 46 | 21.9 | 0.2 | 106.4 | 1.1 | 27.2
Strata-bound | Mton ore | 132 | 7 | 206 | 7 | 590 | – | –
Strata-bound | Cu grade (%) | 0.55 | 7 | 0.30 | 0.38 | 1.46 | – | –
Strata-bound | Metal Cu (Mton) | 0.7 | 7 | 0.7 | 0.1 | 2.2 | – | –
Exotic | Mton ore | 122 | 4 | 116 | 23 | 287 | – | –
Exotic | Cu grade (%) | 0.54 | 4 | 0.05 | 0.46 | 0.59 | – | –
Exotic | Metal Cu (Mton) | 0.7 | 4 | 0.7 | 0.1 | 1.6 | – | –
IOCG | Mton ore | 255 | 4 | 166 | 73 | 400 | – | –
IOCG | Cu grade (%) | 0.55 | 4 | 0.04 | 0.51 | 0.70 | – | –
IOCG | Metal Cu (Mton) | 1.4 | 4 | 0.9 | 0.5 | 2.2 | – | –
All groups | Mton ore | 1691 | 61 | 3311 | 7 | 16908 | 62 | 3000
All groups | Cu grade (%) | 0.57 | 61 | 0.17 | 0.25 | 1.48 | 0.37 | 0.96
All groups | Metal Cu (Mton) | 9.6 | 61 | 19.6 | 0.1 | 106.4 | 0.4 | 23.7


discussion
For this work, the current mineral resources based on public information were used; however, it is also necessary to consider the complete or global (pre-mining) resources of each deposit, i.e. the resources already mined, residuals such as tailings, and the current resources, especially for the giant ore deposits. The current and global resource approaches are complementary because:

• The current resources describe the present mining scenario, the potential for the copper industry, and the copper asset accounting for the Chilean state.

• The global resources describe and quantify the copper resources of deposits in Chile from a historical perspective and provide an exploration-geology scenario of Chilean copper deposits. They may be used as input data in the construction of predictive models for the estimation of potential undiscovered deposits.

Variables such as the depth of emplacement and the cover rock or overburden of the deposits are key factors in their economic feasibility, and should therefore be considered along with copper grades and tonnages when ranking the deposits. Once these variables and the known profitable/non-profitable deposits are incorporated, it will be possible to draw bands of economic and uneconomic projects in the scatter plot of Figure 2.
The current prospects in Chile present low copper grade values, which creates opportunities to consider the future exploitation and reprocessing of old tailings.
This bivariate approach, the data sources used and the global (pre-mining) resources approach could be incorporated into future studies of undiscovered deposits, like the joint study by the Geological Surveys of Argentina, Chile, Colombia, Peru and the United States [1]. Geostatistical cosimulation within metallogenic belts could also be considered in future studies of undiscovered deposits, as a way to integrate different information sources such as the location of deposits, geological maps and discarded zones.

conclusions
This work presents a quick and simple approach for the economic prioritisation of mining projects at early stages, based on the publicly available information on the current mineral resources of copper deposits in Chile. It could help a board of directors to support exploration expenses by providing an index for decisions on when to cease exploration or to continue the effort, allowing a new deposit to be compared and contextualised, in terms of size, in relation to current deposits.
These tools may be applied not only to variables such as grades and tonnage, but also to the depth of emplacement, coproducts (molybdenum, silver and gold), mineralisation style (oxides, sulphides) and distance between deposits, among others.
In Chile, there are more than 586 million known tonnes of copper, with Andina and Rio Blanco at Los Bronces currently being the largest deposit and district in the world, respectively. The largest amount of copper is located in the Middle Miocene – Early Pliocene metallogenic belt in the central Andes mountain range.

acknowledgements
The authors thank Enrique Chacon from Codelco Norte for the authorisation to publish this work. We would also like to thank José Cáceres from Rio Tinto Iron Ore and Pedro Carrasco of Gabgeo for their comments and recommendations.

references
Geological Surveys of Argentina, Chile, Colombia, Peru, and the United States (2008) Quantitative Mineral Resource Assessment of Copper, Molybdenum, Gold, and Silver in Undiscovered Porphyry Copper Deposits of the Andes Mountains of South America (Report 2008-1253). U.S. Geological Survey, Reston, Virginia. [1]

Maksaev, J. (2001) Reseña Metalogénica de Chile y de los Procesos que Determinan la Metalogénesis Andina. [2]

Oyarzún, J. (2007) El Modelo IOCG y el Potencial de Exploración Cuprífera de la Cordillera de la Costa del Norte de Chile. Universidad de La Serena, Chile. [3]

Vivallo, W. (2007) Yacimientos Metalíferos de Rocas y Minerales de la Región de Tarapacá. Sernageomin presentation, http://www.sernageomin.cl. [4]

Singer, D., Berger, V. & Moring, B. (2008) Porphyry Copper Deposits of the World: Database and Grade and Tonnage Models (Report 2008-1155). U.S. Geological Survey, Menlo Park. [5]

Glaciers in High Mountain Mine
Explorations and Project Planning

abstract
Cedomir Marangunic
Geoestudios Ltda., Chile

Explorations and mine projects in high mountains are often located in areas with some of the various glacier types. Since glaciers are considered, among other things, valuable water resources, the tendency of authorities in Chile, and in the world, is to regard them as untouchable, even with respect to distant actions which might only be dust sources. The recent Glacier Protection and Conservation Policy issued by Chile's National Environment Commission extends this concern even to small snow fields. Also, the presence of glaciers may pose various hazards, and corresponding risks, to mine infrastructure, such as an advance (surge) of the ice mass, sudden high-flow discharges from proglacial lakes (GLOF phenomena), debris-laden lahar flows from glacier-covered erupting volcanoes, and catastrophic, sudden slides of entire glaciers. Because of all the above, the possible presence of glaciers must be considered in mining explorations and projects in high mountains. The presentation briefly shows what rock glaciers are. The general procedure to evaluate the occurrence of glaciers is described, as well as how to estimate the influence area of an exploration or project. The content of the required glacier base line studies is indicated, as per recent documents from Chilean authorities. Finally, recent advances in glacier management are shown, which can mitigate or compensate mining effects on glaciers.

introduction
An increasing number of explorations and mining projects extend into the high mountains of the Chilean Andes, where glaciers are a common natural feature. In the Chilean Andes over 3,700 of the well-known white types of glaciers have been inventoried so far, along with over 2,200 rock glaciers. Extensive explorations performed so far on rock glaciers (see Figure 1) show them all to be of the ice-cored type [1], which consists of an ice mass with a rock debris cover from fractions of a metre to a few metres thick, and not, as assumed by Brenning et al., a permafrost phenomenon [2, 3].
As in most of the world, a growing tendency to protect natural resources demands legislation from the authorities in order to make glaciers an untouchable water resource. In this context, a 2008 modification of the Chilean regulations for the Environmental Impact Evaluation System (SEIA, its Spanish acronym) requires that all projects affecting glaciers must submit an environmental impact evaluation to CONAMA, Chile's national environment commission [4]. Furthermore, a recent CONAMA document [5] on the country's policy on glaciers defines a glacier as "... all perennial masses of ice, formed by snow accumulation, regardless of their sizes or forms ... (and) may show flow by deformation ...", citing an old definition which even its author later modified [6, 7].

Figure 1 Trench at the surface of Monolito rock glacier in the Andes of central Chile,
flowing from upper right to lower left.

The above peculiar definition of glaciers, quite different from what the United Nations organisations consider to be glaciers [8], implies that even small snow fields (Figure 2) surviving for a few years (there is no definition of what perennial means) must now be considered glaciers. This is more so because the CONAMA document [5] removes the requirement, present in the United Nations definition, that a glacier shows internal deformation (under its own weight, by the influence of gravity). Such deformation requires a minimum ice thickness (roughly from 5.5 m to 16.7 m, depending on the temperature of the ice mass) to overcome the 50 to 150 kPa shear strength of temperate to cold polythermal ice of 900 kg/m3 density, or a lesser ice thickness if the glacier contains rock debris, as in rock glaciers, which increases its unit weight.
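
The quoted thickness range can be reproduced with a simple back-of-the-envelope estimate (our own assumption, not stated in the source): take the minimum thickness as the one at which the ice overburden stress reaches the yield shear strength,

h_min = τ / (ρ g)

With ρ ≈ 900 to 917 kg/m3 and g = 9.81 m/s2, a strength of τ = 50 kPa gives h_min ≈ 5.5 to 5.7 m, and τ = 150 kPa gives h_min ≈ 16.7 to 17.0 m, which brackets the 5.5 m to 16.7 m quoted above.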

Figure 2 Snow fields in the Andes Mountains of central Chile, Barriga valley, Aconcagua River basin.

Glaciers can be affected in various ways. For example:

• By a direct contact that changes the glacier mass, such as removal of part of the ice mass, emplacement of loads such as waste dumps, sinking of under-glacier cavities, placing roads on the surface of rock glaciers, and others.

• Changes in the characteristics of the glacier surface, which commonly result from the deposition of anthropogenic dust (Figure 3) from sources that may be even tens of kilometres distant (roads, construction works, pits, dumps, etc.).

• Changes in the local topography, thus altering wind and snow deposition patterns,
glacier surface insolation, nourishment from avalanching snow, etc.

• Changes in the drainage pattern (water entering the glacier, sub-glacial drainage, or water issuing from the glacier), as they modify the water regime of the glacier, its mass balance, the sliding velocity at the base of temperate glaciers, the intra-glacier phreatic levels and the overall glacier stability.

• Induced ground accelerations (from large nearby blasting), which may cause general instability and make a glacier slide violently.

• Changes in the local micro-climate induced by mining or related activities, such as the formation of a tailings dam.

• Changes induced by forming pro-glacial lakes.


Figure 3 Increases of the ablation rate on the surface of firn snow artificially dusted with various concentrations of mainly silt-size particles.

In Chile, large interventions of glaciers began in 1969 [9], when a joint effort by ENDESA (the National Electric Company) and the Universidad de Chile increased the surface ablation of part of the Coton glacier by 59%, and with it the flow of a tributary of the Cachapoal river in the Región del Libertador Bernardo O'Higgins, after two years of intensive drought. This was achieved by the aerial dusting (Figure 4) of a complex compound based on black smoke. A second significant glacier intervention was initiated in 1981-1982 while opening the Andina mine open pit, in the Aconcagua river basin, which meant removing parts of a rock glacier. Both glacier interventions were hailed at the time, in Chile and internationally, as breakthrough developments achieved by Chilean engineering. Other glacier interventions followed, such as at the Los Bronces and Los Pelambres mines [2, 10] and at other mines and projects. But in 2006 [11] the Región de Atacama COREMA (Regional Environmental Commission) prohibited the Pascua-Lama mining project from intervening small glaciers located within its mine area; the mine layout had to be modified, and an intensive monitoring plan was demanded from the company [12] to certify that glaciers are not affected by mine activities.

Figure 4 Aerial dusting in 1969 of a complex, black-smoke based compound on Chile's Coton glacier, Región del Libertador Bernardo O'Higgins, to increase surface ablation and river run-off.

Most glaciers in the Chilean Andes, as in other mountains of the world, have been receding [13] at various and changing rates since the peak of the last glaciation about 18,000 years ago, because of well-known natural causes of climatic change, to which anthropogenic causes have been added, mainly since the industrial revolution started in the middle of the 19th century. In this scenario, for a mine not affecting nearby glaciers it is, at best, quite difficult to prove that no part of a glacier's recession is caused by local, mining-induced circumstances, particularly since other anthropogenic causes also intervene, like the smog from large urban conglomerates, or emissions or micro-climate changes derived from other, non-mining activities, which the wind blows into the mountains and onto the glaciers.
Glaciers also pose hazards and risks to any activity in the high mountains. Large, sudden and catastrophic slides of most, if not all, of a glacier have occurred in the mountains, affecting whole valleys many kilometres downstream from the glacier site [14, 15]. GLOF phenomena (Glacier Lake Outburst Floods) are also well known in the Chilean mountains; one of them [16] affected the Copiapó river basin, and several others occurred in past [17] and recent years in Southern Chile, as well as in the recent past at other locations in the mountains of Central Chile and Argentina. Similar sudden floods, related to volcanic eruptions causing lahar flows (dense currents because of their debris load), are also well documented, as when the village of Coñaripe was nearly destroyed in 1964 by a lahar from the Villarrica volcano, which in 1971 again affected the outskirts of the city of Pucón. These and other glacier-related hazards, such as fast advances (glacier surges) and related secondary effects, must be considered in any mining project in mountain areas where glaciers exist.

suggestions for an environmental impact study where glaciers exist
The evaluation of a project's impacts on glaciers requires full information on their base line. But producing such information is costly because of the extreme environmental conditions under which it must be obtained. It is also a lengthy procedure, as it usually requires at least one annual cycle of monitoring to obtain, for example, data on glacier mass balance. Because of this it is important to limit the study area, but not in such a way as to skip what may be a glaciated area. We suggest the following procedure:

• Draw an area of about 15 km radius (to include even distant areas of eventual dust
deposition) around all project facilities: roads, camps, plants, power lines, pipelines,
pits, dumps, tailings, dams, borrow material pits, etc.

• On such a plot, draw the contour line that is 500 m below the lower limit of known
glaciers in the area (or on nearby areas). This reduced area, above the contour line, will
be the target area in which to identify glaciers, and should be considered a preliminary
project influence area for glaciers.

• Using aerial or satellite images, identify and inventory all glaciers (white glaciers, rock glaciers, glacieretes, etc., and even perennial snow fields) present in such a preliminary influence area. The inventory must comply with unesco recommendations [18, 19, 20, 21], including a recent one that facilitates the use of digital techniques [22]. unesco recommendations are the world standard for glacier inventories, also used in Chile by the Chilean Water Directorate. The use of the glims (Global Land Ice Measurements from Space [23]) alternative for performing glacier inventories is not recommended, as it is based on satellite images that may not have the required accuracy to identify small mountain glaciers and snow fields.


• Next, make a preliminary evaluation of all likely impacts of project activities during exploration, construction, operation and closure on each particular glacier identified within the preliminary influence area. The effects may be of various types: dust deposition, vibrations, excavations, dumps, drainages, etc.

• Once the glaciers likely to be influenced by project activities have been identified, a base line study of each such glacier should be planned and conducted, together with planning for eventual mitigation and compensation measures and for glacier control and monitoring. The content conama requires of glacier base line studies, as already indicated to various mining projects, is summarised below. A more detailed description of the base line contents can be found in the recent glaciology manual [24] of the Chilean Water Directorate (Dirección General de Aguas), the government institution responsible for evaluating glacier studies for conama.

A summary of what a full base line study of a glacier should contain is:

• General description:
––As per unesco recommendations for glacier inventories.
––Temperature of the ice mass (cold, temperate, polythermal) – from temperature
sensors in drill holes.
––General stratigraphy – from drill cores (including sub-glacial material); mainly in
rock glaciers.

• Ice/snow (mass) balance:


––Glacier surface – from a net of control stakes, and snow pits for density, observed
at least twice a year.
––Glacier base – as per data from heat balance.

• Heat balance:
––Glacier surface – at least one point in the glacier, with data from a
meteorological station.
––Glacier base – with data on local geothermal gradient, and from glacier's
friction (estimated from sliding velocity at the base and load).

• Water balance: with liquid precipitation data, evaporation-condensation-sublimation data, water discharge from the glacier, ground infiltration, freezing within the glacier mass, and changes in intra-glacier phreatic level.

• Velocities of movement and surface strain:


––Glacier surface movement – with changes of position of a net of stakes,
measured twice per year.
––Glacier basal sliding – with repeated observations of changes of a bore-
hole's inclination.
––Glacier surface strain condition – measuring deformations within ice
strain nets.

• Glacier thickness: from geophysical or electromagnetic exploration data, corroborated with at least one bore-hole.

• Glacier general stability – with common geotechnical stability analysis methods, and
data on geomechanical properties of glacier's bed material.

• Biodiversity within the glacier and its surroundings.



• Glacier variations:
––Recent  –  deduced from available aerial and satellite images, maps, and field work.
––Quaternary – from glacial geology studies within the glacier's basin.

solutions to environmental conflicts


Some impacts on glaciers are simple to manage, such as dust arising from mining activities, which can be controlled by spraying water on works and transportation vehicles (or even covering them). Others may add substantial costs, such as relocating dumps to non-glaciated terrain, but will not prevent a project from developing. What may prevent a project from developing, given the present tendency not to authorise affecting glaciers, is when it becomes necessary to remove part of, or a full, glacier to uncover the underlying rocks.
A probable solution to the above dilemma can be found in research presently conducted at codelco's (Chile's Copper Corporation) Andina Division, oriented to mitigating and compensating the effects of impacting glaciers. This research is part of a wider programme named glacier management research. The most pertinent research lines are:

• Relocating ice masses in such a way as to prevent the formation of hot spots, typical of uncontrolled ice deposits, which tend to greatly increase melting rates within the relocated mass.

• Protecting the surface of a relocated ice mass in such a way as to achieve the extremely low ablation rates typical of ice-cored rock glaciers.

• Artificially increasing snow accumulation on a small white glacier, or a snow field, in order to prevent its extinction.

• Generating an entirely new, self-sustaining glacier where none exists.

In the summer of 2007 about 32,000 tonnes of ice were relocated to a 16 m deep deposit (Figure ➎) on a previously prepared bed about 2,400 m² in extent. For the relocation, standard methods of blasting and trucking ice material, developed after years of experience in ice removal, were employed. The ice losses measured during the relocation (excavation and transportation) amounted to less than 1% of the relocated mass. The formation of hot spots, usually developed by bacterial leaching within ice deposits containing low-grade mineralised particles, was prevented with a lower layer of inert rock debris, obtained locally from suitable talus material, while the surface of the ice mass was covered with a similar layer of inert rock debris about 1 m thick. The 1 m thick top layer was decided upon after extensive testing of the thermal conductivity of the debris material, of heat transmission through debris layers of various thicknesses, and of data on the heat balance at the test glacier surface. Temperature sensors were placed within the relocated ice mass, as well as stakes to measure the reduction of surface elevation; the stakes were placed in holes drilled to the base of the dedicated deposit. After some initial rapid reduction of the surface elevation, caused by densification of the relocated mass, a total annual surface reduction of 16 cm/year (0,16 m/year) was achieved in the 2008–2009 ablation period, similar to the value observed in the 1998–2009 period around several collars of 1998 drill holes placed in the Monolito rock glacier, in the vicinity of the test area (see Figures ➏ to ➑). Thus, the relocated ice mass will survive for about a century, providing a steady water supply to the basin, as would a similar ice mass within the Ablation Zone of an ice-cored rock glacier.
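As an added rough check (assuming melting continues at the observed rate and ignoring further densification): expected life ≈ deposit thickness / annual surface descent = 16 m / 0,16 m per year ≈ 100 years.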


Figure 5 Control stakes at the surface of the special, 16 m thick ice deposit
covered with 1 m thick selected rock-debris layer, Andina mine.

Figure 6 Summer 2007 – summer 2009 surface descent on a relocated 16 m thick ice mass, covered with a 1 m thick layer of selected rock debris.


Annual surface descents (cm)

Stake 2007–2008 2008–2009

N1 47,50 23,90
N2 63,00 23,59
N3 59,00 10,62
N4 23,00 7,62
N5 49,60 14,28
Mean 48,42 16,00

Figure 7 Annual surface descents within a relocated ice mass, resulting from ice melting under the selected rock debris covering its surface.

Figure 8 1998–2009 average surface descent of the Monolito rock glacier, resulting from melting of ice
under the rock-debris cover, as measured around collars of the casings of 1998 drilled holes.

Artificial increase of snow accumulation on a snow field is presently being tested by constructing a 4 m high, 50% permeability snow fence at the upwind extreme of the snow field. The location and the fence characteristics were decided after one winter of measuring snow accumulation, snow densities and weather conditions (with an automatic weather station) at the Potrero Escondido valley, a tributary of the Rio Blanco, in the Aconcagua river basin.
To generate a new self-sustaining glacier, the basic requirement is to accumulate enough snow during a winter period for it to survive the summer ablation season. The research at Andina is focused on using the extensive existing technology for controlling snow avalanches to achieve this by simple means: diverting various avalanche paths into a single deposition area with an earth trench-wall system. During 2008, avalanche studies were conducted in three mountain valleys to select an appropriate area to implement the system (considering avalanches, access, snow conditions, available borrow material, and other factors), and during the winter of 2009 snow and weather controls were performed. A design of the avalanche-diverting structures will be completed during 2010, and their construction, as a small pilot scheme, will take place in the summer months of 2011, thus initiating the formation of a new glacier.
Other research lines at Andina oriented to glacier management are to achieve an economic way of increasing surface ablation on a snow field (such that the cost of the water produced is lower than the cost of producing it), and to evaluate overall glacier stability (considering the likelihood of a catastrophic glacier slide) in order to assess the Safety Factor of a glacier ice mass on an inclined slope. Most of the latter relates to the little-known geomechanical properties of the material at the glacier bed (moraine material under active glaciers, of various extents (Figure ➒) depending on the bed slope and mass zone of the glacier, has a low friction angle, of 11°, and an extremely low cohesion, of 0,1 kg/cm²), to the water levels (pore pressure) within a temperate glacier, and to the various particle-acceleration values (seismic and anthropic) involved.
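To illustrate the type of stability check involved, the following minimal sketch evaluates an infinite-slope Safety Factor with a standard Mohr-Coulomb formulation, using the bed-material parameters quoted above (friction angle 11°, cohesion 0,1 kg/cm² ≈ 9,8 kPa); the ice density, slab thickness and bed slope in the example are assumed values, and this simplified planar-bed formulation is only an illustration, not the analysis method actually applied at Andina.

import math

def infinite_slope_fs(thickness_m, slope_deg, ru=0.0,
                      cohesion_kpa=9.8,      # ~0,1 kg/cm2, quoted for the bed moraine
                      friction_deg=11.0,     # friction angle quoted for the bed moraine
                      ice_density=900.0):    # kg/m3, typical glacier ice (assumed)
    # Mohr-Coulomb infinite-slope Safety Factor for an ice slab on moraine:
    # FS = (c + (sigma_n - u) * tan(phi)) / tau, where sigma_n and tau are the
    # normal and shear stresses on the bed from the weight of the ice column and
    # u is the pore-water pressure (expressed through the ratio ru = u / overburden).
    g = 9.81
    beta = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    overburden = ice_density * g * thickness_m / 1000.0    # kPa
    sigma_n = overburden * math.cos(beta) ** 2             # normal stress on the bed, kPa
    tau = overburden * math.sin(beta) * math.cos(beta)     # driving shear stress, kPa
    u = ru * overburden                                    # pore pressure, kPa
    return (cohesion_kpa + (sigma_n - u) * math.tan(phi)) / tau

# Example: a 40 m thick temperate ice mass on an 8 degree moraine bed,
# dry versus with a high intra-glacier water level.
print(round(infinite_slope_fs(40.0, 8.0, ru=0.0), 2))   # ~1.6
print(round(infinite_slope_fs(40.0, 8.0, ru=0.5), 2))   # ~0.9

The drop of the Safety Factor below 1 when pore pressure rises illustrates why the intra-glacier phreatic level listed among the base line contents is a key input.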


SLOPE OF GLACIER BED           PORTION RESTING ON ROCK

ABLATION ZONE: 0º–19º          No contact with rock
ABLATION ZONE: 20º–26º         10% in contact with bedrock
ABLATION ZONE: 27º–80º         50% in contact with bedrock
ABLATION ZONE: >80º            75% in contact with bedrock
ACCUMULATION ZONE: all         100% in contact with bedrock

Figure 9 Portions of the glacier bed resting on rock; the rest rests on moraine material. Based on the results of over 100 exploration holes drilled to the bed of glaciers.

There is very little research worldwide so far on managing glaciers. In Europe and in Greenland, some small-scale tests have been made protecting summer glacier surfaces with plastic covers to increase the surface albedo, and in China and the USA to increase snow precipitation in mountain areas.
Although snow fences are also being tested at the Pascua-Lama mining project [12], the work at Andina can be described as the world's leading glacier management research [25].

conclusions
In Chile, the development of mining projects within high mountain environments is constantly increasing. Also increasing is the citizens' demand for the government to prohibit projects that may affect glaciers. In this scenario, a project owner must consider, at the earliest stage of project development, whether glaciers are present within the project areas, identify those glaciers, avoid affecting them (even during basic exploration stages) and, later on, produce the base line studies of those glaciers likely to be affected. In addition, to avoid delays in the project, it is convenient to explore, as soon as possible, alternatives to mitigate effects on glaciers, and eventual ways to compensate for those effects that are unavoidable, such as the glacier management techniques described above, or other techniques still to be developed.
In turn, it should be the duty of governmental organisations, such as conama, or the new and still-forming Ministry of the Environment, and the dga (General Water Directorate), to resolve the confusing aspects of Chile's recent (2009) policy on glacier protection and conservation, stating clearly what, if anything, is permissible, what is not accepted, and what is acceptable as mitigation and compensation measures.

acknowledgements
The author thanks codelco's Andina Division for permission to use information for
this publication.

references
Marangunic, C. & Marangunic, P. (2010) Physical characteristics of rock glaciers in the mountains
of central Chile. Internat. Glaciolog. Conference vicc 2010 Ice and Climate Change: A view from
the South, Valdivia, Chile. Abstract Book, #48(60), poster presentation (see poster in www.
geoestudios.cl\extras\descarga). [1]

Brenning, A. & Azocar, G. F. (2009) Impactos de la minería en glaciares rocosos chilenos. XII
Congreso geológico chileno, Santiago. S1_028, p. 4. [2]

Brenning, A., Bodin, A., Azocar, G. F. & Rojas, F. (2009) Importancia y monitoreo de glaciares rocosos
en los Andes chilenos. XII Congreso geológico chileno, Santiago. S1_029, p. 4. [3]

Diario Oficial de la República de Chile (2008) Modifica el Artículo 2º del Decreto Nº 95, de 2001,
que aprueba el texto refundido, coordinado y sistematizado del Reglamento del Sistema de
Evaluación de Impacto ambiental. Edición del 29 de Noviembre de 2008, p. 17. [4]

CONAMA (2009) Política para la protección y conservación de glaciares. Santiago. p. 9. [5]

Lliboutry, L. (1956). Nieves y glaciares de Chile: fundamentos de glaciología. Edic. U. de Chile,


Santiago. p. 471. [6]

UNESCO (1973) Hidrología de nieve y hielos en América Latina. Oficina de Ciencias de la unesco
para América Latina. Montevideo. p. 162. [7]

Global Terrestrial Observing System (2007) Assessing the status of the developments of standards for
the essential climate variables in the terrestrial domain. Progress Report to the 26th Meeting
of the Subsidiary Body for Scientific and Technological Advice (sbsta). p. 24. [8]

Marangunic, C. (1971) Artificial increase of glacier surface ablation by aerial dusting, Coton
glacier, Chile (Abstract). XV General Assembly of iugg, Moscow, Symposium on Snow and Ice
in Mountain Regions, Vol. 2, paper 54. [9]

Azocar, G. F. & Brenning, A. (2008) Interventions in rock glaciers in the Los Pelambres mine,
Región de Coquimbo, Chile. University of Waterloo, Ontario, Canada, Technical Report, Dept.
of Geography and Environmental Management. p. 14. [10]

COREMA (2006) Resolución de calificación ambiental 024. corema Región de Atacama, 15 de Febrero
2006. [11]

Barrick (2008) Plan de monitoreo de glaciares: Proyecto Pascua-Lama. Revisión 3. p. 50 + annexes. [12]

Dyurgerov, M. (2002) Glacier mass balance and regime: data of measurements and analysis. Institute
of Arctic and Alpine Research, Univ. of Colorado. Occasional Paper No. 55, p. 88 + annexes. [13]

Casassa, G. & Marangunic, C. (1993) The 1987 Rio Colorado rockslide and debris flow, Central Andes,
Chile. Bull. Assoc. Engineering Geologists, Vol. 30 No. 3, pp. 321–330. [14]

Marangunic, C. (1997) Deslizamiento catastrófico del glaciar en el Estero Aparejo. Actas del 4º
Congreso Chileno de Geotecnia, Vol. 2, pp. 617–626. [15]

Peña, H. & Escobar, F. (1987) Análisis del aluvión de Mayo 1985 del Río Manflas, cuenca del río
Copiapó. Publicación Interna 87/3, Sub-Depto. de Estudios Hidrológicos, Dirección General de
Aguas, Ministerio de Obras Públicas, p. 19. [16]

Peña, H. & Escobar, F. (1983) Análisis de las Crecidas del Río Paine, Región de Magallanes. Publicación
Interna N° 83/7, Sub-Depto. Estudios Hidrológicos, Dirección General de Aguas, Ministerio de
Obras Públicas, p. 78. [17]

UNESCO (1970) Perennial ice and snow masses – a guide for compilation and assemblage of data
for the World Glacier Inventory. Technical Papers in Hydrology No. 1. [18]

Müller, F., Caflisch, T. & Müller, G. (1977) Instructions for the compilation and assemblage of data
for a world glacier inventory. iahs(icsi)/unesco report, Temporal Technical Secretariat for the
World Glacier Inventory (tts/wgi), eth zurich, Switzerland. p. 23. [19]

Müller, F. (1978) Instructions for the compilation and assemblage of data for a world glacier
inventory; Supplement: Identification/glacier number iahs(icsi)/unep/unesco report,
Temporal Technical Secretariat for the World Glacier Inventory (tts/wgi), eth Zurich,
Switzerland. p. 8 + maps. [20]

Scherler, K. (1983) Guidelines for preliminary glacier inventories. iahs(icsi)/ unep/unesco


report, Temporal Technical Secretariat for the World Glacier Inventory (tts/wgi), eth Zurich,
Switzerland. p. 16. [21]

Paul, F. (2009) Guidelines for the compilation of glacier inventory data from digital sources. World
Glacier Monitoring Service. p. 20. [22]

Racoviteanu, A. E., Paul, F., Raup, B., Singh Khalsa, S. J. & Armstrong, R. (2009) Challenges and
recommendations in mapping of glacier parameters from space: results of the 2008 Global
Land and Ice Measurements from Space (glims) workshop, Boulder, Colorado, USA. Annals of
Glaciology 53: pp. 53–69. [23]


Geoestudios (2008) Manual de glaciología. Dirección General de Aguas, Ministerio de Obras


Públicas. p. 341. [24]

Marangunic, C. (2010) Management of glaciers: experiences and results in Chile. Internat. Glaciolog.
Conference vicc 2010 Ice and Climate Change: A view from the South, Valdivia, Chile. Abstract
Book, #49(61), oral presentation (see presentation in www.geoestudios.cl\extras\descarga). [25]
SOMI: A Standard Data
Representation for Mining Industry

abstract
Patricio Inostroza
Andrea Nieto
Ana Pezo
Universidad de Chile

Juan Claudio Navarro
Independent Consultant, Chile

This document shows the advances in the development of a standard data representation for the mining industry: somi (Standard Objects for Mining Industry); its technical basis, general characteristics, development methodology and lines of future work.
Several industries have opted to use standards for data representation, standards that have brought benefits: lower transaction costs, greater market power over suppliers and customers, new alliances, reduced r & d risk, and others. In order to obtain these benefits and more, somi has been created as a specific standard for the mining industry. With this standard, a common data representation is defined for the information handled by mining applications and equipment. This advance at the data level entails high-impact benefits in operations. It is intended to tackle problems at the it and management levels in general terms, allowing interoperability between systems and streamlining all integrations. It makes information available in a timely manner, facilitates the standardisation of processes and systems, and simplifies it integrations even with proprietary applications or pre-existing systems, bringing as a consequence cost reductions and improvements in management and mine production.
After the “Prospection for Standardising Mining Objects”, industry representatives have substantiated the need for standardisation at the data level. somi then establishes the format for data representation, while also determining the methodology that shall allow the standard to grow towards all stages of the mining production process.

introduction
Part of the objectives of mining is to reduce the costs of production, increase worker and equipment productivity, and open new deposits while keeping existing deposits operating [1].
In order to achieve these objectives despite constant market fluctuations and existing resource limitations, mining companies have had to incorporate state-of-the-art technology, develop new processes and pursue the timely use of information.

presentation of the problem


Most mining companies grapple with a vast array of niche mining technical systems, either applications or equipment, to support core functions including, inter alia, exploration, mineral resource management, mine planning and design, production planning and control, dispatch, sample analysis and reporting, metal accounting and plant information [2].
Companies usually acquire or develop each technical system in order to overcome local problems in the production process. Old, mainframe-based (legacy) applications and system lock-in, in conjunction with this local vision of a problem and its consequent solution, have created considerable technical complexity in their information technology structures. A set of concerns therefore arises regarding relevant operational issues, such as an architecturally sound, principle-based approach to developing strategy, selecting software products, integrating systems and delivering meaningful management information.
In a heterogeneous computing environment, there is no commonality of data
representation. Even systems from the same producer may have dissimilar data formats.
In addition, different programming languages have different ways of representing simple
data types and aggregate data types. If file systems are to be shared among heterogeneous
systems and if processes on heterogeneous systems are to communicate, then there must
be a common data representation between systems and languages [3] .

SOMI: Searching for interoperability


As a response to this continuing integration problem in heterogeneous environments, especially in the mining industry, somi was created. somi is a project that defines a standard data representation which facilitates interoperability between diverse technological solutions in the mining industry, regardless of the technologies and technological platforms being used. The somi standard thus develops a unique and extensible pattern for data representation, whereby such data may be written or read by equipment and it solutions that are compatible with the standard.
The somi standard may be adopted gradually, since its partial use does not prevent interaction with non-compatible applications or systems.

Benefits of the use of data standardisation


Benefits derived from using a standard data representation can be divided into direct it benefits and indirect benefits.
A din (German Institute for Standardisation) study shows that industries that have adhered to standards have benefited from cost reductions, increased commercial power, new alliances, decreased r & d risk and lower accident rates, among others [4].

If one only considers direct benefits, the impact in numbers can seem negligible. As an example, a comparison study carried out by Codelco's Executive Board of Directors in 2009 points out that the ratio of the budget assigned to ict (information and communication technologies) to the total budget of companies in the mining sector fluctuates between 1.3% and 1.6%, significantly below the investments made in industries such as finance and services, where it fluctuates between 4% and 8%. Other studies carried out internationally on global companies ratify this gap, showing figures of 2% and 7.4% for the aforementioned industries [5].
Sizable improvements may be achieved in the medium and long term through the following benefits:

• Use of timely information: The use of timely information (precise, reliable and on
time) facilitates making accurate and timely decisions, which in turn reduces the
opportunities for operational failures, allows for the optimum use of resources, and
promotes policies for preventive maintenance, among others.

• Enhancing process and system standardisation: Handling data under a standard format allows standardisation of processes and systems at two levels. Internally, in-house developments of mining companies may be replicated rather easily at diverse stages of the process or may be used at different divisions. At the industry level, standard practices and developments may also be easily exported.

• Facilitating integration: Integration between equipment and technological solutions is facilitated by established patterns for data transmission. Different systems may read data from diverse applications, including proprietary systems and/or commercial software.

• Creating independence from proprietary solutions: Several mining companies require applications to assist them in the productive and administrative work of their operations. Many of these use proprietary data and closed information structures, which become a barrier when data needs to be exported from such applications or integrated with other systems.

A mature example of data standardisation is the hl7 (Health Level Seven) standard for the health industry.
Prior to the existence of hl7, each interface between systems in the health sector was custom designed, which in turn required complicated integration processes between applications from diverse suppliers. In the absence of a set of standards for the exchange of data, high integration costs were incurred and information was handled unsatisfactorily.
With the introduction of hl7 in 1987, the content and format of the messages that flow between diverse applications were established, thus minimising the incompatibility between health information systems and increasing the productive exchange of data between heterogeneous applications, without concern for their technological platforms [6].
Today, this standard is broadly recognised: it is present in 57 countries and has more than 2,300 members. It has been developed, tested and validated by diverse stakeholders of the industry, who recognise in it the versatility required to respond to the needs of hospitals independently of their level (hospital, municipal or provincial) or area (patient administration, laboratory, pharmacy, etc.) [7].
The use of open and broadly disseminated standards generates, as a collateral benefit, the development of the software market in the industry. This is exactly what has happened with hl7; it is therefore estimated that somi will generate a similar effect in the mining software market.


How SOMI can facilitate the integration


somi facilitates the integration of mining technological solutions through a standard data representation that enables information exchange among those solutions. This integration may take place from different perspectives: vertical integration of processes, functional integration, and machine-to-machine (m2m) integration.
Vertical integration of processes: somi considers data representations at the level of individual mining processes and also, from a global perspective, specifies the format of the data exchange between the major stages of the mining process: from geology to mine planning, from mine planning to extraction, from extraction to the plant and from the plant to sales. For example, the exchange of production plans between the production planning and extraction stages has also been specified by somi. This interoperability throughout the productive chain, facilitated by somi, provides the tools to achieve a global vision of the mining process, which finally contributes towards the global production optimum [8].
Functional integration: Currently, one option for companies to achieve an important competitive advantage has been to integrate mes (Manufacturing Execution Systems) with erp (Enterprise Resource Planning) systems [9]. somi also enables this type of integration: from the shop floor to the top floor. An example in this context is the somi specification of consolidated information about the production of shifts and operators during a certain period, sent to the human resources system of the mining company.
Figure   ➊ shows the applications classified functionally in a scenario without
standards. Figure  ➋ shows the scenario with standards.

Figure 1 Interoperability of applications without a standard.

Integration m2m: somi is focused on integration between it solutions, wherever they may reside. Following this idea, somi may specifically be used as a model for the exchange of information between mining equipment (m2m integration). An example of the use of somi with this focus is the prevention of collisions between operating mining equipment: trucks, shovels and light vehicles.

Figure 2 Interoperability of applications with the standard.

Given what has been presented, one could conclude that the use of a mining data standard is broad, covering the stages of the productive chain, the functional levels and diverse devices. Furthermore, if the communication platform beneath somi were also standard, then the market could readily supply mining equipment and applications whose installation would require no configuration, or a minimum of it (Plug & Play).

construction methodology
The mining industry uses a wide range of information (Exploration, Mine Planning, Extraction, Maintenance, Commercialisation, etc.). Given this, the definition of this kind of standard is a long and complex project. For the standard to be understandable and easy to use, the information has been structured in hierarchical modules (packages). An iterative and incremental methodology has been adopted: the project has been organised as a set of iterations, each one having as its objective the definition of a subset of the modules.

Figure 3 Activities of an iteration of construction of SOMI.

Figure  ➌ represents the activities developed in each iteration of somi, which are
described as follows:

• Identify Data Requirements: the elements of information that are necessary in the mining processes are identified; this takes place through working sessions with representatives of the mining industry, together with the study of existing documentation.

• Model Information: information models (uml) based on the requirements identified in the previous activity are generated. The information areas are structured in packages (Prospecting, Extraction, Processing, Commercialisation, etc.).


As an example, Figure  ➍ shows the packages in which the mining process has been
organised:

Figure 4 Diagram of packages for the mining process.

The information entities are represented as classes (Equipment, VitalSign, VitalSignValue, etc.), while the basic information elements are represented as attributes of the classes (code, name, description, etc.). Figure ➎ depicts a summarised view of the class diagram for equipments and vital signs.

Figure 5 Class diagram for equipments and vital signs.

It is possible to represent a hierarchy of objects using associations between classes. For instance, a piece of equipment (a truck, an lhd, etc.) consists of parts (an engine, an on-board computer, etc.), which are also represented as equipments that can, in turn, consist of parts as well.
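To make the composite structure just described concrete, a minimal Python sketch follows; the class names (Equipment, VitalSign, VitalSignValue) and the idea that an equipment consists of parts which are themselves equipments come from the text and Figure ➎, but the specific attributes and the dataclass realisation are illustrative assumptions, not the normative somi data dictionary.

from dataclasses import dataclass, field
from typing import List

@dataclass
class VitalSign:
    # code/name/description follow the basic attribute examples given above.
    code: str
    name: str
    description: str = ""

@dataclass
class VitalSignValue:
    vital_sign: VitalSign
    value: float
    measure_unit: str
    timestamp_ms: int        # milliseconds since 01/01/1970, as in the csv format below

@dataclass
class Equipment:
    code: str
    name: str
    description: str = ""
    vital_signs: List[VitalSign] = field(default_factory=list)
    parts: List["Equipment"] = field(default_factory=list)   # equipments made of equipments

# A truck whose engine is itself an equipment with its own vital sign:
engine = Equipment("ENG1", "Diesel engine",
                   vital_signs=[VitalSign("ENSP", "Engine speed")])
truck = Equipment("TR1", "Haul truck", parts=[engine])
print(truck.parts[0].vital_signs[0].code)   # ENSP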

• In addition to the modelling, this activity takes into account the documentation of
the information (packages, classes, attributes and associations), which results in a
data dictionary.

• Define Information Representation: the format for representing the information of the entities modelled in the previous activity is defined, which is the ultimate purpose of the standard. The design of somi considers use cases of the standard that involve online transactional data exchange (extraction data, vital signs, etc.) and offline exchange of structured and compound information (block diagrams, production plans, production reports, ore structure, etc.). For the exchange of transactional information, the use of a format that allows information structures (associations between elements, hierarchy, etc.) to be expressed handily is contemplated.

• Validate Standard: The definitions obtained from the previous activities are validated
with industry representatives.

The methodology described (Figure  ➌) defines in sequence the necessary activities for
one iteration of the development of somi. Two basic guidelines have been defined for the
modelling activity: modular structuring and extensibility.

Modular Structuring: Each package of Figure ➋ is a somi module. Structuring the information in individual, highly cohesive modules simplifies its understanding and utilisation. Figure ➏ shows that a somi module can be utilised in different contexts: the equipment vital signs information (Fuel Level, Temperature, etc.) is relevant for the production systems (fleet control, production control, etc.) as well as for the maintenance ones.

Figure 6 Data standardisation used by applications in more than one mining activity.

Extensibility: The fact that the standard is extensible is a desirable characteristic for somi
to be adaptable to the evolution of technology and mining processes. For example, the
inclusion to the standard of a new vital sign in mining equipment is a task that must
be simple to carry out.

Information representation formats


As has been pointed out, somi today comprises the use of different information formats, in order to respond appropriately to the diverse information requirements.
Transactional Information: the csv (Comma-Separated Values) format is considered. In this case, information is structured in rows (records) with columns (fields) separated by commas. The first row can optionally be used to show the column titles (which provides more clarity and flexibility of information). Figure ➐ shows the Vital Signs representation in csv format (the first column corresponds to date and time, represented as the number of milliseconds elapsed since 01/01/1970 at 00:00 hrs). The csv format allows efficient information transference, which is vital when transferring transactional information. Furthermore, somi shall allow the use of compression mechanisms, increasing the efficiency of information exchange.

date,equipment,vitalSign,operator,value,measureUnit
1258481936345,LU5,FULE,103973,68,LTR
1258481936655,TR1,ENSP,142822,1864,RPM

Figure 7 Information representation in CSV format.
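As an illustration of how a consumer of this transactional format might read such records, the sketch below parses the rows of Figure ➐ with the standard Python csv module; the field names and the millisecond epoch convention come from the figure and the text, while the parsing code itself (and the UTC assumption for the timestamp) is only an assumed example, not part of the somi specification.

import csv
import io
from datetime import datetime, timezone

# The sample records from Figure 7, header row included.
sample = """date,equipment,vitalSign,operator,value,measureUnit
1258481936345,LU5,FULE,103973,68,LTR
1258481936655,TR1,ENSP,142822,1864,RPM
"""

def read_vital_signs(text):
    # 'date' holds the milliseconds elapsed since 01/01/1970 at 00:00 hrs.
    for row in csv.DictReader(io.StringIO(text)):
        row["date"] = datetime.fromtimestamp(int(row["date"]) / 1000.0, tz=timezone.utc)
        row["value"] = float(row["value"])
        yield row

for record in read_vital_signs(sample):
    print(record["date"].isoformat(), record["equipment"],
          record["vitalSign"], record["value"], record["measureUnit"])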

Structured Information: the xml (Extensible Markup Language) format is considered. This format adequately supports the requirements of structured information: it allows object structures, relationships, hierarchies, etc. to be expressed. The example in Figure ➑ depicts the representation of a shift Production Report, described in xml format.

<?xml version="1.0" encoding="UTF-8"?>
<ShiftProduction xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="..\SOMI - Schemas\SOMI-ShiftProduction.xsd">
  <ShiftProductionDetail sectorCode="FG" date="2009-11-17"
      shiftCode="B" equipmentCode="TR1" operatorCode="4568933" source="09N-0"
      destination="09-OP-8" bucketsNumber="3" observation="7"/>
  <ShiftProductionDetail sectorCode="MI" date="2009-11-17"
      shiftCode="3" equipmentCode="LU5" operatorCode="105044" source="N1-H"
      destination="5A-AN" bucketsNumber="13" observation="1"/>
</ShiftProduction>

Figure 8 Information representation in XML format.
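Similarly, a consumer-side sketch for the structured format: the element and attribute names below are taken from Figure ➑ (the schema reference is omitted for brevity), and the reading code is only an illustrative assumption, not part of the standard.

import xml.etree.ElementTree as ET

xml_text = """<?xml version="1.0" encoding="UTF-8"?>
<ShiftProduction>
  <ShiftProductionDetail sectorCode="FG" date="2009-11-17" shiftCode="B"
      equipmentCode="TR1" operatorCode="4568933" source="09N-0"
      destination="09-OP-8" bucketsNumber="3" observation="7"/>
  <ShiftProductionDetail sectorCode="MI" date="2009-11-17" shiftCode="3"
      equipmentCode="LU5" operatorCode="105044" source="N1-H"
      destination="5A-AN" bucketsNumber="13" observation="1"/>
</ShiftProduction>
"""

# Parsing from bytes so the XML declaration in the sample is accepted.
root = ET.fromstring(xml_text.encode("utf-8"))
for detail in root.findall("ShiftProductionDetail"):
    # Attribute names follow Figure 8; the int() conversion is just an example
    # of consumer-side handling of the bucketsNumber field.
    print(detail.get("equipmentCode"), detail.get("source"),
          detail.get("destination"), int(detail.get("bucketsNumber")))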

case study
One of the areas that somi has approached is information standardisation for the control of mining fleets. The generic case is made up of mining equipment in operation that transmits Operations and Vital Signs information to a proprietary Fleet Control application, which filters and processes such data and makes the information required by the mining company available. This architecture is shown in Figure ➒.

Figure 9 Simplified computerised architecture for the control of mining fleet.



For this case, the somi standard is used to provide a format for the data exchanged between the mining equipment and the proprietary fleet control application, and between the latter and the internal servers and applications of the mining company. In general terms, this information is stored in some type of repository, which allows it to be queried historically.
The data that is transferred in this case includes:

––Vital Signs of equipment: such as temperatures, speeds, pressures, etc.


––Daily production plan: draw points, grades, buckets number, priorities, etc.
––Shift production report: equipment information, operators, tonnages, cycles, start
and end dates of cycles, etc.

conclusions and future work


As a result of intensive and successful prospecting work [10], somi has been able to develop a robust and extensible system of classes for describing mining objects. The need for data standardisation was identified and verified directly with mining companies and with suppliers of applications and it services. The project has now gone from a theoretical to a practical approach. The model of classes was also validated with the participating mining companies Codelco and Freeport-McMoRan Copper & Gold.
Mining objects were defined for operating processes, including: equipment, vital signs, events, draw points, shifts and operators, cycles, production plans and reports, among others. Based on this development experience, it is estimated that somi could have approximately 15 full mining object packages in the market within a period of two years, covering areas such as: ore structure, maintenance, blasting, loading, mine-to-mill information, etc.
Future somi developments are grouped in three areas:

• To continue with the standard definition, to develop the current achievements and to
extend the standard scope to all mine processes.

• somi Communication platform definition. This activity will be oriented to integration with existing platforms.

• Invitation to other companies/organisations to participate in the development of the standard.

acknowledgements
The prospection stage of somi (Prospection for Mining Objects Standardisation, Corfo Code: 08CM01-30) was financed by Corfo-Innova, Corporación Nacional del Cobre de Chile (Codelco) and Freeport-McMoRan Copper & Gold Inc. Today, the project is sponsored by mining companies, under the supervision of the Universidad de Chile. The mining companies also contribute expert knowledge in the sector. In addition, we would like to thank Alberto Caneo and Ezequiel Muñoz from Codelco Chile, and Aldo Gómez and Sergio Silva from Freeport-McMoRan Copper & Gold, for their special collaboration and comments in the creation of this work.


references
Peterson, D. J., LaTourrette, T. & Bartis, J. (2001) New Forces at Work in Mining: Industry Views of Critical
Technologies. Rand Corporation, United States. [1]

On-line resource Achieving high performance in mining, Accenture, from www.accenture.com


(Accessed 30th January 2009), © 1996–2010 Accenture. [2]

Barkley, J. F. & Olsen, K. (1989) Introduction to Heterogeneous Computing Environments. nist Special
Publication 500–176, Nov 1989. [3]

Knoop, H. (2000) Economic Benefits of Standardisation – Result of a German Scientific Study. German
Institute for Standardisation. [4]

MIT (2009) IT Portfolio Investment Benchmarks, Center for information Systems Research. [5]

On-line resource HL7 Overview, Available from : http://www.neotool.com/resources/HL7-Overview


(Accessed 30th January 2009) Copyright © 2009 Corepoint Health. [6]

Health Level Seven org. (2009), HL7 Announces Newest Affiliate – HL7 Hong Kong, Press release, 8th
October, 2009, Health Level Seven, Inc. [7]

Burke, E. & Kendall, G. (2005) Search Methodologies: Introductory Tutorials in Optimisation and
Decision Support Techniques, Springer Science+Business Media, Inc. [8]

Salazar, V. (2009) Análisis de la Integración de los Sistemas MES – ERP en industrias de manufactura, Universidad Nacional Experimental de Guayana, Puerto Ordaz, Venezuela, Seventh laccei Latin American and Caribbean Conference for Engineering and Technology. [9]

Inostroza, P., Pezo, A. & Nieto, A. (2009) somi: Towards a Standard for Mining Objects, ifacmmm2009
proceedings. [10]
Optimising Fragmentation for
Productivity and Cost

abstract
Douglas Chapman
AMEC International, Chile

With the development and integration of technologies such as gps
equipment control systems, electronic detonators, fragmentation
measurement software and fleet management systems, it has
become much easier to develop, implement and measure the
benefit of drill and blast fragmentation optimisation programs.
These technologies improve the quality control of drill hole
placement, blast pattern timing, fragmentation consistency, and
mine equipment productivity, as well as aid in relating these
variables to operating costs, downstream process efficiency, and
element recovery.
The main item which will be reviewed in this paper is the
correlation between blast fragmentation size and its impact on
downstream productivity, and whether there is a point at which
it becomes uneconomical to put more energy and resources into
the drill and blast process as it relates to fragmentation reduction.
It is said that “the most economical crushing is done in the
pit during blasting”. This statement is true, to a point, and this
paper will look at “if and when” there is a break-even point where
the reduction of burden and spacing dimensions, the addition of
drilling resources, the addition of explosives and consumables,
and the addition of associated manpower, provides little or no
economical benefit to the related downstream processes.
With the understanding that with each of the above mentioned
variables there are many related factors which impact their
individual performance, a “best case scenario” production blast
will be reviewed and will not take into account specific and ever-
present environmental, geological and mine design restrictions.

introduction
The hard rock mining production cycle begins with drilling and blasting. As the first step
in the mining process it requires a high degree of planning, execution and continuous
improvement in order to optimise this activity with the overall goal of providing a
consistently fragmented material, with a designed P-80 size (80% passing size) which
will maximise overall mine operation productivity, while also minimising the cost of
drilling and blasting. The importance of completing this task properly is that large
cost benefits can be obtained which flow through each subsequent step of the mining
and milling process. On the contrary, poor fragmentation has an even greater negative
economic impact in the form of lower fleet production rates, additional costs from
secondary blasting and crushing circuit power consumption, and a possible reduction
in process throughput and element recovery. This paper will review the steps involved
with the drilling and blasting process and look at the key elements required to maximise
the efficiency of an operational blasting program to ensure the overall drill and blast
process is optimising fragmentation, for productivity and cost. From a mining operations
standpoint, if a few key items are constantly refined, then the drill and blast process
will maximise the entire mining operation, and thus improve revenue flow and the roi.

methodology
Optimising fragmentation for cost
This act of drilling and blasting might appear simple enough. Essentially it hasn't changed
much since 1627, which is the first recorded use of black powder for rock blasting [1] .
Holes are made in a rock mass, a chemical agent is added and fired, and the end result
is many smaller rocks. The overall process may not have changed, however the level of
detail has. We now fully understand the importance of drilling the proper sized hole in
the correct location, loading it with an accurate quantity and type of explosive product,
and initiating the blastholes with the appropriate timing sequence. If all of these steps
are completed correctly, we should obtain the desired size range of rock fragments
which we have planned for. The overall process is, however, much harder than it initially sounds when you consider ever-changing geological rock types and structures, aggressive
production schedules, expensive and demanding equipment fleets, inherently dangerous
blasting products, abrasive mining conditions, and lastly the need for a large manpower
requirement to design the blast, prepare the pattern in the field, drill the holes, load the
explosives and fire the shot. It is these variables that make it difficult to complete the
process quickly, accurately and consistently while optimising the overall cost on a day
to day basis.
The science of refining the drill and blast process to optimise the economics has been
studied for many years. In the 1960s, A.S. MacKenzie presented his classic concept curves
which outline the cost effect on unit mine operations of improved blasting procedures and
fragmentation results [2] . These clearly relate improved shovel productivity, increased
truck load factors and reduced crushing requirements to improved fragmentation.

Figure 1 The effect of fragmentation on unit operation costs.

These six graphs of MacKenzie's reference the two unit operations of drilling and blasting and indicate how modifying their cost, and the related material fragmentation size, affects the downstream performance of loading, hauling and crushing. The summation of these graphs provides the starting point for analysing where the optimum fragmentation range lies, and how much cost should be allocated to achieving this size.
In 1975, Harries and Mercer released their refined version of the combined graphs, without crushing, and detailed the need to sum the cost implications of drilling and blasting and of loading and hauling, to pinpoint the particle size which would provide the lowest cost of those two processes, indicated by the minimum cost point on the "Total" graphed line [3].
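To make the location of that minimum concrete, the short sketch below sums two cost-versus-P-80 curves and picks the size with the lowest total, in the spirit of Figure ➋; the two cost functions are invented illustrative shapes (drill and blast cost rising as the target P-80 shrinks, load and haul cost rising as the muckpile coarsens), not the actual Harries and Mercer curves.

def drill_and_blast_cost(p80_cm):
    # Assumed shape: $/tonne rises steeply as the target fragment size shrinks.
    return 0.15 + 4.0 / p80_cm

def load_and_haul_cost(p80_cm):
    # Assumed shape: $/tonne rises as the muckpile gets coarser and dig rates fall.
    return 0.60 + 0.025 * p80_cm

sizes = [s / 2.0 for s in range(10, 81)]        # candidate P-80 values, 5 - 40 cm
total = {s: drill_and_blast_cost(s) + load_and_haul_cost(s) for s in sizes}
best_p80 = min(total, key=total.get)
print(f"minimum total cost {total[best_p80]:.3f} $/t at P-80 = {best_p80:.1f} cm")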

Figure 2 Determining the optimum fragmentation size for drilling, blasting, loading and hauling costs.

Technology to improve fragmentation


Since the development of the fragmentation optimisation graphs, there have been a
number of very important tools which have been released into the mining industry. Many
of these applications have measurably improved the overall performance and efficiency
of the drilling and blasting process and are directly responsible for improving blast
fragmentation. The most important consideration of these tools is that they allow the


drill and blast engineer to design, implement and shoot a blast pattern and measure
the results of this blast on downstream processes, on a real-time basis. Of these recent
inventions, there are a few noteworthy examples which must be mentioned as they are
generally considered a requirement for most mid to large scale mining operations and
should be continuously used to optimise the drill and blast operation.
In 1979, Modular Mining Systems Inc. released their first Dispatch® Mine Management
system which has evolved over time to measure the exact real-time hp-gps (High-
Precision Global Positioning System) location and dig rate of a shovel in the dig face,
thus providing direct comparison of blast designs and performance to shovel, truck
and crusher productivity. In 1992, the Aquila™ Drill System became the first product
to provide real-time gnss (Global Navigation Satellite System) positioning of mining
machines [4] , which has saved countless survey hours, improved drillhole collar and
toe accuracy and provided detailed information on penetration rates and production
usage in many different geological conditions. In 1997 Split Engineering developed the
commercially available Split™ image processing program [5] which is a fragmentation
analysis software for analysing photos taken at the dig face and crusher dump pockets
in order to accurately report P-80 and top size. This makes it very easy for the engineer
to measure fragmentation directly at the dig face and track material size variances
within the shot. However, one of the most important products of recent time which
was developed specifically for blasting is the programmable electronic detonator which
became commercially available in the late 1990s and provides user-defined millisecond
timing of all blastholes. One of the key benefits of this system is in the ability to develop
a precise and consistent “millisecond per meter of burden” blast initiation design which
can improve the fragmentation of a poorly drilled blast pattern, or maximise the effects
of a properly drilled blast pattern. Each detonator vendor also supplies design software
to assist with timing the pattern and to provide a permanent digital record of the blast
parameters which can be used to track blasting improvements. In all of these recent
inventions there is one important concept which each tool is designed for, and that is
to utilise the latest available technology to improve the accuracy of the drill and blast
process and thus improve fragmentation.

Optimising fragmentation size


In the field of drilling and blasting there are a number of standard equations and “Rules
of Thumb” which the mining engineer can utilise to develop a suitable blast design
for breaking rock and providing diggable material. These equations can be based on
geological information, mining method and design, mining equipment limitations,
fragmentation requirements, explosive properties, cost, plus other physical and
environmental controlling factors. Since the majority of these equations have been developed through many years of trial and error, they are viewed as the initial point for designing a safe and efficient blast pattern. For the purposes of this discussion, and to efficiently analyse the data which was provided for this report, we will review a few of the basic Rules of Thumb which should be used to design a blast pattern which can optimise fragmentation, and then be modified to obtain the desired results; a short worked sketch applying these rules follows the list below.

• Bit diameter (mm) = 17 x bench height (recommend 171 mm for 10 m bench)


• Burden (m) = 30 x bit diameter (recommend 5.25 m)
• Spacing (m) = 1.15 x burden (recommend 6.0 m)
• Sub-drill (m) = 7 x bit diameter (recommend 1.5 m)
• Stemming height (m) = 24 x bit diameter (recommend 4.0–4.5 m).
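As a small worked sketch, the rules of thumb above can be evaluated directly for a given bench height; the factors (17, 30, 1.15, 7, 24) are those listed above, and the only addition is expressing the bit diameter in metres before applying the burden, spacing, sub-drill and stemming factors.

def rule_of_thumb_design(bench_height_m):
    # Evaluate the blast-design rules of thumb listed above.
    bit_diameter_mm = 17 * bench_height_m          # ~171 mm for a 10 m bench
    bit_diameter_m = bit_diameter_mm / 1000.0
    burden = 30 * bit_diameter_m                   # ~5.1 m (recommend 5.25 m)
    spacing = 1.15 * burden                        # ~5.9 m (recommend 6.0 m)
    sub_drill = 7 * bit_diameter_m                 # ~1.2 m (recommend 1.5 m)
    stemming = 24 * bit_diameter_m                 # ~4.1 m (recommend 4.0-4.5 m)
    return {"bit diameter (mm)": round(bit_diameter_mm),
            "burden (m)": round(burden, 2),
            "spacing (m)": round(spacing, 2),
            "sub-drill (m)": round(sub_drill, 2),
            "stemming (m)": round(stemming, 2)}

print(rule_of_thumb_design(10.0))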

With all of these available design considerations there is one important question which the engineer must consider: what is this blast being designed for? In order to optimise the fragmentation for a specific purpose, the engineer needs to consider which downstream processes will benefit the most from properly fragmented material, and each blast must be designed specifically to fit these needs, in order to optimise fragmentation for productivity.
Below is a list of equipment and processes which will benefit from properly fragmented material, in order of importance.

• Loading and hauling units


• rom (Run of Mine) leach process
• Crusher feed
• Mill process feed
• Waste material.
Once the blast has been fired, a loading unit will need to dig it and a hauling unit
will need to move it. This is a primary condition which must be considered in order to
maximise the production schedule of the load and haul fleet. If a large electric shovel
is to be used with very large trucks, then the design fragment size can be much larger
than if a small loader is digging the muckpile. This design criterion must always be
considered when designing a blast with continual refinement to maintain and improve
mine production schedules.
If the blasted material will bypass a primary crushing circuit and be shipped Run
of Mine (rom) directly to a leach pad then it is critical that the ore fragmentation be
amenable to the diffusivity of the rock and leach solution in order to maximise element
recovery. Many instances have been documented where rom leach ore has been shipped
to the pad knowing that as little as 10–20% of the element could be recovered, due to the
large fragment size and poor quality of the blasted material. The blasting of rom leach ore
requires extra effort, quality control, tighter patterns and high explosive energy to ensure
the correct fragmentation particle size is achieved, since there is very little mechanical
energy being imparted on the fracturing process. Essentially, the blasting of the rock
will offer the only opportunity for providing the correct fragment size to maximise
ore recovery. After a P-80 vs. cost graph is developed, similar to that of Figure ➋, incorporating the heap leach process costs, permeability and porosity, solution penetration depth and recovery percentage, the proper fragmentation size for rom heap leach material can be determined. Blasting for leach
must also take into consideration the quantity of fines developed in the blast as this can
be detrimental to the overall leach pad performance.
The third item in terms of importance with respect to fragmentation is crusher feed.
The crusher is generally considered the bottleneck between the mine and the process
plant and is the unit that suffers the largest production shortfalls when it comes to
poorly fragmented material. The delays encountered when dealing with oversize create
the need for added equipment, manpower, stockpiles and secondary reduction of oversize
through chemical or mechanical means. When designing a blast for crusher feed, it is often thought that smaller is better; however, there is a limit to the efficiency which can be gained from creating smaller material. As indicated in Figure ➌ below, which compares crusher throughput to P-80 for a heap leach gold mine, there is really no visible trend of crusher throughput improvement across P-80 sizes between 10 cm and 16 cm. The reason for this is that the crusher setting size is 175 mm and all P-80 measurements are smaller than this value. These values will be used later in the paper and related to drilling and blasting costs

to emphasise the major cost increase to incrementally reduce fragments to this size level,
for no measurable throughput increase.

Figure 3 P-80 vs. crusher throughput.

Regarding the mill process feed, there are numerous documented benefits of improving mill throughput by reducing the fragment size of the blasted ore product; however, due to intermediate mill stockpiles and ore bins, it is difficult to campaign a blast through the complete comminution process and measure the absolute benefits. It is known that
by reducing the P-80 fragment size of the blasted material it is possible to reduce the size
setting of the primary crusher and therefore reduce the workload required by the process
grinding units and increase mill throughput. If sag mills are being used for grinding
within the process stream, as is common these days, then the process throughput can
experience substantial increases from the generation of smaller material from blasting
[6, 7] . It is safe to say that the drill and blast process should be viewed holistically as
the primary step in the entire mining comminution process.
With respect to the blast design for waste material, the fragment size should be maximised to the largest size which the loading and hauling units can efficiently move without impacting the scheduled production rate. Waste material
should have its own P-80 vs. drill and blast, load and haul cost graph developed, similar
to Figure  ➋, to accurately determine the correct size for the lowest cost per tonne. There
are numerous cost benefits which can be obtained by spreading the pattern, reducing
drilling requirements and utilising lower cost explosives and initiating systems.

results and discussion


Calculations
For a practical example of these concepts, consider a production blast of 100,000 tonnes.
For the purpose of this analysis, consider all things to be equal, except for the burden
and spacing dimensions and therefore powder factors. All blast design criteria such as
hole diameter, sub-drill depth, stemming height and blast timing will remain constant.
The ground will be solid and dry, the geological rock type will remain constant, all hole
collar coordinates will be at the proper location and all holes will have been drilled to
the correct depth. Additionally, there will be no variance in the density of the explosive product, which is assumed to be a heavy anfo emulsion blend.

Drilling and blasting cost

For this analysis, as the burden and spacing is expanded, the P-80 fragment size will
be calculated and the associated cost of drilling and blasting will be calculated for the
entire 100,000 tonne pattern.
The values for this cost calculation are:

• Explosive cost = $0.54/kg


• Initiation cost = $15.00/hole
• Drilling cost = $6.00/meter
• Labour cost = $15.00/hole
• Miscellaneous cost = $1.00/hole.

Loading & hauling cost

Based on Figure   ➌, consider the average P-80 value of 12.31 cm as being associated
with the average dig rate of 2,136 tonnes per hour for a Caterpillar 994 loader [8] . The
burden and spacing pattern design which closely provides a calculated P-80 of 12.31 cm is
5.0 m x 5.75 m. With each 0.25 metre change in burden from this point, and the associated change in spacing (based on S = 1.15 x B), the dig rate for the 994 is varied by 1%, and the operating hours vary accordingly. For this analysis there will be no variance in haulage
fill factors based on fragment size.
The operating costs used for the loading and hauling calculation are:

• Loading cost = $0.283/tonne


• Hauling cost = $0.673/tonne
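
To illustrate how the unit costs above roll up for a single 100,000 tonne blast, the short Python sketch below estimates the drill and blast cost of one burden and spacing combination and scales the load and haul cost with the 1% dig-rate rule described above. The in-situ rock density, the column-charge assumption and the way the dig-rate factor is applied are illustrative assumptions not stated in the paper, so the figures are indicative only and will not reproduce Tables 1 and 2 exactly.

import math

# Blast geometry from the design criteria of this paper; ROCK_DENSITY is an
# assumed value used only for illustration.
BENCH, SUBDRILL, STEMMING = 10.0, 1.0, 4.0   # m
HOLE_DIA = 0.251                             # m (251 mm bit)
EXPL_DENSITY = 1250.0                        # kg/m3 (heavy anfo emulsion blend, 1.25)
ROCK_DENSITY = 2.4                           # t/m3 -- assumption, not stated in the paper

COST_EXPLOSIVE = 0.54                        # $/kg
COST_DRILLING = 6.00                         # $/m
COST_PER_HOLE = 15.00 + 15.00 + 1.00         # initiation + labour + miscellaneous, $/hole

def drill_and_blast_cost(burden, spacing, tonnes=100_000):
    """Total drill and blast cost for one pattern (illustrative sketch)."""
    hole_length = BENCH + SUBDRILL
    charge_length = hole_length - STEMMING                        # column charge below stemming
    charge_kg = math.pi / 4 * HOLE_DIA ** 2 * charge_length * EXPL_DENSITY
    holes = (tonnes / ROCK_DENSITY) / (burden * spacing * BENCH)   # holes needed for the volume
    per_hole = charge_kg * COST_EXPLOSIVE + hole_length * COST_DRILLING + COST_PER_HOLE
    return holes * per_hole

def load_and_haul_cost(burden, tonnes=100_000, base_burden=5.0,
                       base_cost_per_tonne=0.283 + 0.673):
    """Load and haul cost scaled by the 1% dig-rate change per 0.25 m of burden.
    The direction and form of the adjustment are assumptions for illustration."""
    dig_rate_factor = 1.0 - 0.01 * (burden - base_burden) / 0.25
    return tonnes * base_cost_per_tonne / dig_rate_factor

if __name__ == "__main__":
    for b in (5.00, 6.50):
        s = 1.15 * b
        db, lh = drill_and_blast_cost(b, s), load_and_haul_cost(b)
        print(f"B={b:.2f} m, S={s:.2f} m: D&B ${db:,.0f}, L&H ${lh:,.0f}, "
              f"total ${(db + lh) / 100_000:.2f}/t")

Sweeping the burden from 2.75 m to 10.00 m with this kind of calculation is what generates the cost curves discussed in the results section below.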

Blast design criteria (from a current gold operation)


• Blast Size: 100,000 tonnes
• Bit diameter: 251 mm (9 7/8”)
• Explosive product: heavy anfo emulsion blend, density of 1.25, rws 115 [9]
• Bench Height: 10.0 m
• Sub-Drill: 1.0 m
• Stemming Height: 4.0 m
• Burden: Varied from 2.75 m – 10.00 m (by 0.25 m increments)
• Spacing: Varied from 3.00 m – 11.50 m (S = 1.15 x B)
• Design: Slightly rectangular staggered pattern
• Rock Factor: 2.6

Fragmentation prediction equations


• Rock Factor: A=0.06 x (rmd + jps + jpa + rdi + hf) [2]
• rmd = Rock Mass Description
––Powdery/Friable = 10
––Blocky = 20
––Totally Massive = 50

m i n i n 2 0 1 0 •  s a n t i a g o , c h i l e
48 Optimi sing Fragmentation for Produ c tivit y and Cos t

• jps = Joint Plane Spacing


––Close (< 0.1m) = 10
––Intermediate (0.1 – 1.0m) = 20
––Wide (> 1.0m) = 50

• jpa = Joint Plane Angle


––Horizontal = 10
––Dip Out of Face = 20
––Strike Normal to Face = 30
––Dips in to Face = 40

• rdi = Rock Density Influence


= 25 x Density (t/m3) – 50

• hf = Hardness Factor
––Young's Modulus < 50: HF = Y/3
––Young's Modulus > 50: HF = UCS/5

Uniformity Index

n = (2.2 – 14 x B/D) x √(1 + (S/B – 1)/2) x (1 – W/B) x (L/H) x PS [2]

B = burden
D = blasthole diameter
S = spacing
W = standard deviation of drilling accuracy
L = charge height
H = bench height
PS = 1.1 for staggered pattern

Mean fragmentation size (Kuznetsov's Equation)

X50 = A x (Qe / Vo)^(-0.8) x Qe^(1/6) x (115/RWS)^0.633 [2]

A = rock factor
Qe = mass of explosive
Vo = volume of rock broken per blasthole (burden x spacing x bench height)
rws = relative weight strength of explosive

P-80 Fragmentation Size


X80 = X50 / (0.4306)^(1/n) [2]
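
The three relations above chain together: the rock factor and powder factor give the mean size X50, the uniformity index n fixes the shape of the Rosin-Rammler curve, and the P-80 follows from both. A minimal Python sketch of that chain is given below using the blast design criteria listed earlier; the charge mass per hole, the drilling deviation W and the convention of B in metres and D in millimetres are assumptions for illustration, so the output is indicative rather than a reproduction of Table 1 or Table 2.

import math

def kuz_ram_p80(burden, spacing, hole_dia_mm=251.0, bench=10.0, subdrill=1.0,
                stemming=4.0, expl_density=1250.0, rock_factor=2.6,
                rws=115.0, drill_sd=0.0, pattern=1.1):
    """Chain the Kuznetsov, uniformity index and Rosin-Rammler relations to a P-80 (cm).
    Charge mass, drilling deviation and B/D units are illustrative assumptions."""
    hole_length = bench + subdrill
    charge_length = hole_length - stemming                         # assumed column charge, m
    qe = math.pi / 4 * (hole_dia_mm / 1000.0) ** 2 * charge_length * expl_density  # kg/hole
    vo = burden * spacing * bench                                   # m3 broken per hole

    # X50 = A x (Qe/Vo)^(-0.8) x Qe^(1/6) x (115/RWS)^0.633, in cm
    x50 = rock_factor * (qe / vo) ** -0.8 * qe ** (1.0 / 6.0) * (115.0 / rws) ** 0.633

    # n = (2.2 - 14 B/D) x sqrt(1 + (S/B - 1)/2) x (1 - W/B) x (L/H) x PS
    n = ((2.2 - 14.0 * burden / hole_dia_mm)
         * math.sqrt(1.0 + (spacing / burden - 1.0) / 2.0)
         * (1.0 - drill_sd / burden)
         * (charge_length / bench)
         * pattern)

    # X80 = X50 / 0.4306^(1/n)
    x80 = x50 / 0.4306 ** (1.0 / n)
    return x50, n, x80

if __name__ == "__main__":
    for b in (5.00, 6.50):
        x50, n, x80 = kuz_ram_p80(b, 1.15 * b)
        print(f"B={b:.2f} m: X50={x50:.1f} cm, n={n:.2f}, P80={x80:.1f} cm")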

results
After calculating the fragmentation results for blast patterns which vary in size from
2.75 m x 3.00 m (B x S) through 10.0 m x 11.50 m, by 0.25 m increments, and determining
a cost for each individual blast pattern, the Total Cost of Drill and Blast data was plotted
on a Total Cost versus P-80 (cm) graph. The loading and haulage costs for various
fragmentation-based production improvements were then calculated for a Caterpillar
994 loader and Caterpillar 793 truck fleet combination as Total Cost of L&H, and plotted on
the same graph. The summation of these two graphed lines was calculated and plotted
as the Total Cost of Drill and Blast and Load and Haul as indicated in Figure   ➍.
As detailed by Harries and Mercer in 1975, the lowest point on the Total Cost of Drill and
Blast and Load and Haul will determine the optimum fragmentation size for providing
the lowest cost in the drill and blast and load & haul process. In this case it is calculated
to be a product of a 6.50 m x 7.50 m pattern design, thus providing a P-80 of 18.02 cm.

Figure 4 P-80 vs. D&B, L&H cost.

As mentioned in the calculation assumptions and as displayed in the mine data, the average crusher feed P-80 was measured at 12.31 cm. If this value is put into the
calculations above, it can be determined that the combined drill and blast and load and
haul cost per tonne required to produce this size is $1.40, as interpolated from Table 1 .

Table 1 Calculated cost for 12.31 cm P-80 vs. drill and blast, load and haul cost

Burden (m) | Spacing (m) | P-80 (cm) | D&B Cost | L&H Cost | Total | $/tonne
4.75 | 5.50 | 11.80 | $53,324 | $89,086 | $142,410 | $1.42
– | – | 12.31 | $50,698 | $89,968 | $140,667 | $1.40
5.00 | 5.75 | 12.81 | $48,125 | $90,833 | $138,958 | $1.39

The lowest point in Figure ➍ corresponds to the optimised fragmentation size for drill and blast and load and haul, giving a lowest combined cost of $1.31 per tonne, as shown in Table 2.

Table 2 Optimum range for P-80 vs. drill and blast, load and haul cost

Burden (m) | Spacing (m) | P-80 (cm) | D&B Cost | L&H Cost | Total | $/tonne
6.00 | 6.90 | 15.69 | $33,420 | $98,564 | $131,984 | $1.32
6.25 | 7.20 | 16.84 | $30,747 | $100,706 | $131,506 | $1.31
6.50 | 7.50 | 18.02 | $28,381 | $102,944 | $131,420 | $1.31
6.75 | 7.75 | 19.17 | $26,449 | $105,284 | $131,690 | $1.32

Therefore, it can be determined that for the annual crusher throughput of ore, which was
15,263,000 tonnes in 2009, the potential cost savings in drill and blast and load and haul
is roughly 1.2 million dollars for providing properly fragmented material. Now, the next
step is to incorporate the crushing cost per tonne with the mill grinding cost per tonne
to further refine the fragmentation size requirement for the entire comminution process.


conclusions
This analysis demonstrates that, using standard blast design calculations and actual drill and blast and load and haul costs, it is fairly straightforward to calculate the particular P-80 fragmentation size which will provide the lowest combined drill and blast and load and haul cost. Using available tools and technology, these values can be
verified in the field and the required calculation constants can be refined to develop an
accurate drill and blast fragmentation prediction model to help further optimise this
process. From this model it is also easy to see that there is a point where smaller fragment
sizes do not provide any further economical benefit to the drill and blast and load and
haul combined operations due to the very large incremental cost associated with small
size fragmentation production. With the inclusion of process information such as crusher power costs, rom leach recoveries, mill grinding information, mill throughput values and recovery curves, a specific optimised fragmentation size can be determined which optimises the entire comminution process and which can be obtained through proper blast design and technique. It is also possible to optimise blast designs while reducing the
major capital outlay for machinery and consumables by properly managing blast hole
diameters with the burden and spacing design. In some cases the complete removal of a
primary crushing unit can also be achieved.
With the ability to fine tune drill and blast parameters and obtain detailed operations
cost and productivity data on a day to day basis, the drill and blast engineer has the
power to make a substantial difference in the productivity and profitability of the mine
operation and should do so on a daily basis. The most important factor to realise, apart
from safety, when blasting to optimise fragmentation is that quality counts, and if the
blast pattern is drilled and loaded exactly as it was designed, you should get the results
you are looking for.

references
Society of Explosives Engineers, Inc., The History of Explosives, retrieved 9 December 2009 from http://www.explosives.org/HistoryofExplosives.htm. [1]

Hustrulid, W. (1999) Blasting Principles for Open Pit Mining, Volume 1 – General Design Concepts, crc Press, Taylor & Francis Group, Boca Raton, Florida, USA, pp. 43, 109–115. [2]

Duncan, W. & Mah, C. (2004) Rock Slope Engineering – Civil and Mining, 4th Edition, Spon Press, New York, ny, USA, p. 247. [3]

Caterpillar, AQUILA Drill Spec, retrieved 10 December 2009 from http://www.cat.com/cda/files/600041/7/AQUILADrillSpecalog(pdf).pdf. [4]

Split Engineering llc, Origins, retrieved December 2009 from http://www.spliteng.com/company.asp. [5]

Grundstrom, C., Kanchibotla, S., Jankovich, A. & Thornton, D. (2001) Blast Fragmentation for Maximising the sag Mill Throughput at Porgera Gold Mine, 28th isee Proceedings, International Society of Explosives Engineers, Cleveland, Ohio, USA. [6]

Mosher, J. B. (2005) Comminution Circuits for Gold Ore Processing, Developments in Mineral Processing, Mike B. Adams (Editor), Elsevier B.V. [7]

Caterpillar Inc. (2008) Caterpillar Performance Handbook, Edition 38, Caterpillar Inc., Peoria, Illinois, USA, pp. 12–129. [8]

Orica Mining Services, Fortan™ Advantage System, retrieved 18 December 2009 from http://www.oricaminingservices.com/ContentPage.aspx?SectionID=10&PageID=47&CultureID=3. [9]
Truck Driver Training at
Esperanza Project

abstract
Rodrigo díaz
Minera Esperanza, Chile

Esperanza, as the first copper-gold project in Chile to process low grade sulphide ore, considers that developing an effective training process for unskilled labour is a strategic task for constructing its company culture and achieving productivity targets.
The training process at Esperanza Project includes extensive use of simulators and relies heavily on feedback relative to behaviour and culture from the instructor to the apprentices. This paper presents the results obtained from the instruction of a group of truck drivers. The process took six and a half months and its success rate was 81%.

introduction
Esperanza Project is owned by Antofagasta Minerals (70%) and Marubeni Corporation
(30%). Antofagasta Minerals is a large Chilean mining group listed in London as
Antofagasta plc and part of the ftse100 index. The company is committed to communities
and environment in a sustainable mining strategy. This includes using sea water pumped
inland from the coast through a 140 km pipeline and hiring local labour. The company,
as the first low-grade sulphide copper producer in Chile, considers that training a high
performance workforce is a strategic task in order to achieve a competitive position in
terms of labour productivity.
This paper presents the results obtained from the instruction of 59 apprentices in truck
operation. At the end of the training, 48 students qualified as truck drivers capable of
operating with minimum supervision, 5 students in the group needed close supervision
to operate (but they are expected to succeed by extending training time) and 6 students
failed to qualify. The training process included extensive use of techniques such as
standardised supervision feedback and simulation.

training methodology
The training process at Esperanza Project is based on three main tasks: Knowledge and
Skills Development, Behaviour Development and Simulator Use. In addition, the training process was preceded by a recruiting stage based on a competency model.
The completion of these three main tasks takes up to six months. At the end of
this period, apprentices are able to command a truck in production under no direct supervision. Furthermore, apprentices are then hired as mine operators, with their initial six month contract extended with no expiration date.

recruiting
Recruiting took place before training. It included filters, technical instruction and a
personality assessment as shown in Figure  ➊.
Figure 1 Recruiting process flow sheet.

The process started with cv reviewing and filtering because of the large number of candidates. The search was focused on graduates of Chilean technical high schools from
the same region as the mine location and whose health condition was compatible with a mining job, that is, a health and physical condition suitable for working 12 hours a day on a seven-days-on, seven-days-off roster at 2,300 metres of altitude. Candidates with selected cvs were asked to undergo psychological testing covering iq and ability to learn, safety proneness and personality.
After psychological testing, candidates went through eight weeks of technical courses on hydraulics, electricity and diesel engines at a mining technical education institute. Esperanza voluntarily signed an agreement with ctm, a training institution from Arturo Prat University; this is part of Esperanza's commitment to communities and its voluntary initiative for local hiring and training. At the same time, a detailed personality assessment and a panel interview conducted by staff from mine operations and a psychologist led to the final apprentice selection.
A competency dictionary was used for recruiting. This tool allowed Mine Operations
and Human Resources departments to agree on what personal skills to search for in
apprentices. This process was supported by a psychology professional, whose support was essential for describing value-based behaviour in a manner that could be observed in practice with limited subjective judgment.
Instructors were specifically trained by an organisational development professional
in the use of the competency dictionary for behaviour evaluation.

Truck Driving Training


After recruiting, the training in truck operation is divided into two stages. The first stage goes from zero to 600 hours of operation and the second goes from 600 up to 2000 hours. The focus of the first 600 hours is on developing the basic knowledge and skills to operate 380-tonne haul trucks in productive tasks to the required safety standard. A flow sheet
of the training process is presented in Figure  ➋.
Figure 2 Truck driver certification diagram (knowledge, skills and behaviour milestones from 0 to 180, 600 and 1500–2000 hours of operation).

After the first 600 hours the emphasis is upon increasing operator productivity. The students that complete a skills checklist qualify as certified truck operators. The time to achieve the certification varies because learning speed differs among people, but it is expected to occur when students reach 2000 hours of operation. The skills checklist is focused on productivity and includes:

• Perform manoeuvres at both sides of the shovel at the loading face in less than 15 seconds.

• Understand and explain the systems of the truck and how they work.

• Understand and explain how production is measured and how to control it.

• Explain how production is related to tire lifetime.


The discussion in this paper is focused on the first 600 hours of training because in this
stage the students learn, from an almost zero starting point, how to operate a truck in
productive operations.
Operator training is based on three tasks. The last two of them run in parallel:

Knowledge and skill development

Technical knowledge is acquired using our training manual. This manual integrates the
oem technical guide with Esperanza operation procedures and safety culture. Skills are
developed through supervised practical experience. Initially, an apprentice will spend some hours in the truck spare seat, but will gradually move to the pilot's seat, under the supervision of an experienced operator, after gaining experience at the simulator and completing a test drive in a real truck at the training course. Even so, apprentices will not operate the truck without supervision until they complete their simulator assignment and the instructor assesses their practical skills with a checklist.

Operation at simulator cabin

At the simulator, apprentices gain experience using trucks in productive operations, but in a virtual world. This provides a chance to increase operator self-confidence in positioning the truck at the shovel and in dumping. Students practise at the simulator through
many sessions during their training process. In order to successfully complete their
assignment, students must control operation emergencies and they must score zero
critical errors.
The emergency control part of the assignment consists in successfully overcoming a brake failure, an auto-retarder failure and an engine fire. The second part of the assignment consists in reducing critical errors to zero during a given simulator session. To understand what a critical error is, it is necessary to explain that the simulator software contains a list of warnings that are triggered during the simulation to alert that the truck is operating outside normal parameters. Critical errors are the subset of errors classified as critical because they can produce a major accident or breakdown. Critical errors are written on a list that is known by the students in advance.

Behaviour development

The instructors evaluated and provided personal feedback to all their students using
a behaviour development checklist. This feedback is provided through interviews that
take place in the middle of the training process and at the end of it, and is based on 20
specific behaviours related to safety, learning skills and teamwork. This strategy gives
the students the opportunity to work on their personal gaps. All instructors were trained by a professional in order to improve their feedback skills, so that feedback is based strictly on facts and observations while also being provided in an empathic way.

results
Knowledge and skill development
All the students scored 100% at the technical knowledge tests. The necessary technical
knowledge is contained in our manuals and it covers topics including safety, resting
in a mining lifestyle, truck technical specifications, controls and instruments and operation procedures. The 100% score was obtained using an iterative process. After a
first evaluation, the instruction is focused on subjects not successfully completed. Then
a new evaluation is performed until the students reach the maximum score. This process
ensures that all the apprentices have sufficient knowledge before they sit in the pilot position and that they learn only what Esperanza has defined as relevant, rather than other information from uncontrolled sources.
Development of practical skills is achieved through supervised operation as shown
in Table 1 :

Table 1 Training time

Item unit value

Number of apprentices operating under minimum supervision at 780 hours of training apprentices 44
Average simulator sessions per apprentice session 13.1
Average time per apprentice of lectures in the classroom hour 144
Average time per apprentice at truck spare seat hour 231
Average time per apprentice of driving under direct supervision hour 253
Average time per apprentice operating with no supervision hour 152
Average total training time per apprentice hour 780

During practical training hours, mine instructors use a skill checklist to observe the
students in a standardised and objective way while they drive the trucks. This checklist
controls more than one hundred items related to:

• Safety, health and environment


• Tire usage and care
• Truck safety devices
• Instruments and indicators
• Truck systems and controls
• Brake system knowledge and usage
• Hauling
• Loading and dumping cycle
• Cycle optimisation
• Jigsaw system usage
• Parking
• Shift change time.
This tool also makes it possible to provide precise and focused feedback to students in relation to their skill gaps.

Simulator tool
Skill development can be derived from Figure  ➌ by comparing the average number of
errors per session scored by students during the simulated truck operation. Average total
errors decreased from 30 at the first three sessions to six at the last three. Furthermore,
critical errors per session fell from 6.4 to 1.3.


Figure 3 Error evolution per simulator session (average critical and total errors over the first three and the last three sessions).

It is important to mention that the 1.3 value for critical errors in the last three sessions is more a breakdown symptom than an error. It was generated when the students experienced an auto-retarder failure, which could only be detected through an abnormal increase in the engine rpm.

Figure 4 Learning speed at simulator (total errors per session: 53-student average and selected individual students).

Differences in learning velocity can be derived from Figure ➍. The Y-axis shows the number of total errors and the X-axis the progress through sessions. The bold line shows the students' average total errors per session. Different coloured lines under the bold one, at the lower left corner of Figure ➍, show the performance of some of the best students individually. This is a powerful tool to identify the best performing students in terms of relative learning velocity. Figure ➍ also shows evidence that, on average, the complete group of students improved its performance at the simulator.
In Figure ➎, the bold line shows the decrease in average total errors per session for all students. In contrast, thin lines represent the simulator results of some of the worst performing students. Although the figure shows that lower performing students made progress over time, their results at the simulator tended to be more erratic compared to those of their peers.
Figure 5 Student erratic performance at simulator (total errors per session: 53-student average and selected low-performing students).

Behaviour assessment
Figure  ➏ shows the results of the behaviour assessment carried out in December 2008
and its evolution compared to March 2009 (bold line). The X-axis shows the 20 different
behaviours that were evaluated by mine instructors. The Y-axis shows the frequency with which the desired behaviours were observed and must be read as follows:

• Number 4 means that the desired conduct was observed at least 80% of the times
• Number 3 means that the conduct was observed at least 50% of the times but less than 80%
• Number 2 means that the conduct was observed at least 20% of the times but less than 50%
• Number 1 means that the conduct was observed less than 20% of the times.
Results are presented as the average score obtained by all the students.

Figure 6 Behaviour evolution (observed frequency per behaviour, December 2008 vs. March 2009).

From Figure ➏, most of the students, on average, showed improvement in their behaviour and culture after the instructor provided them with standardised feedback.


conclusions
The final result for the group of apprentices was:

• 6 students did not complete the process successfully and were not selected for
permanent hiring.
• 53 students completed the process successfully and were hired permanently.
The six apprentices that were not selected showed poor results compared to those that were hired, as shown in Figure ➐. In addition, there is a correlation between their behaviour and other results: students who were not selected showed the worst performance at the simulator (erratic and with many errors, as shown in Figure ➎), and most of them did not qualify at the simulator to drive a truck by themselves without supervision.

Figure 7 Behaviour of students not selected for permanent hiring (observed frequency per behaviour type, March 2009: 6-student average vs. 53-student average).

Behaviour and feedback are important components of a company culture. Providing supervision with standardised feedback tools supplies a significant source of evidence for the training process.
Regarding the use of simulators, we believe this technique contributes to improving truck driver training for the following reasons:

• It increases student confidence in a virtual environment before operating a real truck.


• It allows students to practice and measure truck operating procedures before driving
a real truck.
• It exposes apprentices to operation emergencies that cannot be reproduced in a
real truck.
• It improves truck operation practices by decreasing the number of errors in operation.
• It provides a reliable source of evidence for the labour training process.

Thermal Fragmentation: Reducing
Mining Width when Extracting
Narrow Precious Metal Veins

abstract
Donald brisebois
Jean-Philippe brisebois
Rocmec Mining, Canada

The mining of high-grade, narrow vein deposits is an important field of activity in the precious metal mining sector. The principal factor that has undermined the profitability and effectiveness of mining such ore zones is the substantial dilution that occurs when blasting with explosives during extraction.
In order to minimise dilution, the Thermal Fragmentation Mining Method enables the operator to extract a narrow mineralised corridor, 50 cm to 1 metre wide (according to the width of the vein), between two sub-level drifts. By inserting a strong burner powered by diesel fuel and compressed air into a pilot hole previously drilled directly into the vein, a thermal reaction is created, spalling the rock and enlarging the hole to 80 cm in diameter. The remaining ore between the thermal holes is broken loose using low powered explosives, leaving the waste walls intact. This patented method produces highly concentrated ore, resulting in 400%–500% less dilution when compared to conventional mining methods.
The mining method reduces the environmental effects of mining operations since much smaller quantities of rock are displaced, stockpiled, and treated using chemical agents. The fully mechanised equipment operated by a two-person team (one thermal fragmentation operator, one drilling operator) maximises the effectiveness of skilled personnel, and increases productivity and safety.
The Thermal Fragmentation Mining Method is currently employed in three mining operations in North America.

introduction
The mining of high-grade, narrow vein deposits is a predominant field of activity in the
precious metal sector. These types of deposits are located throughout the globe and have
a significant presence in mining operations. The principal factor that has undermined
the profitability and effectiveness of mining such ore zones is the substantial dilution
that occurs when blasting with explosives during extraction and the low productivity
associated with today's common extraction methods. The Thermal Fragmentation Mining
Method has been conceived to mine a narrow mineralised corridor in a productive and
cost efficient manner in order to solve these particular challenges. The following describes
this mining method in depth and outlines its successes in improving the extraction
process of such ore bodies.

description of technology
A strong burner powered by diesel fuel is inserted into a 152 mm pilot hole drilled into
the vein (Figure ➊) using a conventional longhole drill. The burner spalls the rock
quickly, increasing the diameter of the hole to 30–80 cm (Figure ➋) producing rock
fragments 0–13 mm in size. The leftover rock between fragmented holes is broken loose
using soft explosives and a narrow mining corridor with widths of 30 cm to one metre
is thus extracted (Figure ➌). Since the waste walls are left intact, the dilution factor
and the inefficiencies associated with traditional mining methods are greatly reduced.

Figure 1 The method (creating the opening).


Figure 2 Fragmented hole (60 cm wide).

Figure 3 Stope (80 cm wide).

The burner
The burner (Figure ➍), powered by diesel fuel and compressed air, creates a thermal
cushion of hot air in the pilot hole, which produces a thermal stress when coming in
contact with the rock. The temperature difference between the heat cushion and the
mass of rock causes the rock to shatter in a similar manner as putting a cold glass in hot
water. A spalling effect occurs [2] and the rock is scaled off the hole walls and broken
loose by the compressed air.


Figure 4 The burner.

The fragmented rock


The process of fragmenting the rock is optimal in hard, dense rock. The spalling
process produces rock fragments 0–13 mm in size. Figure ➎ illustrates the size of the
fragmented ore. The finely fragmented ore requires no crushing before entering the
milling circuit and can be more efficiently transported since it consumes less space
than ore in larger pieces.

Figure 5 The fragmented rock.

Tonnage comparison with alternative method


The method produces highly concentrated ore, resulting in 400%–500% less dilution
when compared to conventional mining methods. Table 1 compares the quantity of
rock extracted when mining a 50 cm-wide vein using the thermal fragmentation mining
method as opposed to a shrinkage mining method.

Table 1 Tonnage calculation; comparing thermal fragmentation and shrinkage methods

Tonnage Calculation (40 m by 20 m Ore Block) | Thermal Fragmentation | Shrinkage
Width in situ (m) | 0.5 | 0.5
Mining Width – Final Result (m) | 0.5 | 1.8
Planned Dilution | 0% | 260%
Height (m) | 20 | 20
Length (m) | 40 | 40
Density (t/m3) | 2.8 | 2.8
Total Volume (t) | 1120 | 4032
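
The last row follows directly from the block geometry and rock density, for example:

1120 t = 0.5 m x 20 m x 40 m x 2.8 t/m3 (thermal fragmentation)
4032 t = 1.8 m x 20 m x 40 m x 2.8 t/m3 (shrinkage)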

The table above shows that approximately four times less rock needs to be mined for the
equivalent mineralised content. This method of extraction allows mine operators to solely
extract mineralised zones, thus significantly reducing dilution factors and optimising
mine operations as a result. The technology enables the operator to mine ounces and
not tonnes.

Drift development and stope layout


Drift development is performed directly into the ore at intervals of 15 to 20 metres
(Figure ➏) in accordance with the geology of the ore body. Using a re-suing method,
the ore is blasted and recovered in the first cut, then, the waste is blasted and hauled
away in the second cut.
Following the creation of two sub-level drifts, a pilot hole is drilled between the two
levels and enlarged by way of thermal fragmentation. The unit is designed to operate
in a compact underground environment, in a drift as small as 1.8 m wide by 2.8 m high
(Figure ➐). The company also produces a unit measuring 0.8 m wide by 2.1 m high,
which is capable of working in smaller sublevels.

Figure 6 Stope layout (mining width: 50 cm; total: 1062 tonnes).


Figure 7 Equipment dimensions.

Other applications - drop raising


The thermal fragmentation equipment is also used to create the centre cut in traditional drop raising. The burner can enlarge a 152 mm pre-drilled pilot hole into an 80 cm cut over a 20 metre length in approximately four hours in total. By creating this large centre cut quickly and efficiently, larger sections can be blasted with minimal vibrations (Figure ➑), thus avoiding damage to the surrounding rock (Figure ➒). The number of blast holes and the amount of explosives needed are reduced, and the risk of freezing the raise is minimised.


Figure 8 Thermal cut (80cm) with blast hole. Figure 9 Drop raise (20 m x 0.9 m x 1.2 m).

environmental impact analysis


There is a growing need to develop sustainable mining methods that minimise the
environmental footprint left behind by mining operations. While developing the Thermal
Fragmentation Mining Method, important efforts were made to address and reduce the
environmental effects that mine operations have on the surrounding areas. Using the
method, mine development is performed directly into ore, resulting in less waste rock
being extracted and displaced to the surface. By solely extracting the mineralised zone,
only the necessary excavations are made. As shown in Table 1 , four times less rock needs
to be mined for the equivalent mineral content.

As a result of less rock being mined, fewer tonnes need to be processed at the mill
to extract the precious metals. The quantity of chemical agents needed in the process
is greatly reduced and the quantity of energy needed to process the ore is also greatly
diminished. The reduced quantity of energy for hauling and processing the ore results
in fewer greenhouse gases being emitted. The mining residue that remains once the
precious metal contents are removed is four times less abundant, using the example
above, meaning much smaller tailing areas need to be constructed, maintained, and
rehabilitated once mining operations have ceased. The space needed to host the mine
site is greatly reduced, the alterations to the landscape are significantly diminished, and
the result is a cleaner and more responsible approach to mine operations.

Productivity and safety


The shortage of skilled personnel in the mining community has made it essential to find
ways to increase productivity per worker while improving working conditions in order to
attract and retain skilled miners.

Productivity

The work group required to operate one thermal fragmentation unit consists of a two
person team (one thermal fragmentation operator, one drilling operator). Table 2 shows
the time needed to extract an ore block using the thermal fragmentation mining method
in comparison to using a shrinkage mining method.

Table 2 Tonnage calculation; comparing thermal fragmentation and shrinkage methods

Tonnage Calculation (40 m by 20 m Ore Block) | Thermal Fragmentation | Shrinkage
Width in situ (m) | 0.5 | 0.5
Mining Width – Final Result (m) | 0.5 | 1.8
Planned Dilution | 0% | 260%
Height (m) | 20 | 20
Length (m) | 40 | 40
Density (t/m3) | 2.8 | 2.8
Total Volume (t) | 1120 | 4032
Number of Personnel | 2 | 2
Productivity per 12 hrs Shift (t) | 30 | 30
Tonnes Extracted per 24 hrs | 60 | 120
Days Required to Extract Ore Block | 18.7 | 33.6
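
The last row follows from the two rows above it: 1120 t / 60 t per 24 hrs ≈ 18.7 days for thermal fragmentation, and 4032 t / 120 t per 24 hrs = 33.6 days for shrinkage.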

The table above demonstrates that, for the equivalent amount of mineral content, it takes approximately half the time to mine the ore zone using the thermal fragmentation mining method as it does using a shrinkage mining method. Furthermore, since less
rock needs to be mucked and hauled from the stope, fewer personnel are needed for
handling the ore.


Mechanisation and employee safety

Each unit is completely mechanised, reducing the risk of injuries and strain caused
by manual manipulation of heavy equipment. The operator stands at a safe distance
from the stope, virtually eliminating the risk of flying debris and falling loose rock
from the waste walls. Furthermore, unlike shrinkage mining methods, smaller
excavations are made (0.5 m compared to 2 m) so the occurrence of falling loose rock
is greatly diminished.

economic analysis
By rendering a greater number of narrow mineralised zones economical to extract, the mining method has the potential to convert a substantial portion of the
mineral resources of an operating company into mineral reserves. A large number of
mines currently in operation today contain narrow, precious metal veins throughout
the ore body, but unless these veins are of significant width (usually 1 m or greater) or
very high grade they are often overlooked. As the mine operator develops the zones to be
extracted, high grade, narrow ore bodies are often uncovered, but not extracted since it is
uneconomical to mine such ore bodies using conventional mining methods (shrinkage,
long hole, room and pillar, etc.) Table 3 below demonstrates the cost savings per ounce of
using the thermal fragmentation mining method in comparison to the long-hole method.
The study was done by the Canadian Institute of Mining using 2001 exchange rate figures [1].

Table 3 Estimated cost comparison between underground thermal fragmentation and long hole

Tonnage calculated on the basis of a 60 m by 60 m reserve block | Thermal drilling (3,024 t) | Long-hole drilling (3,024 t)
Grade in situ (g/t) | 35.00 | 35.00
Width in situ (cm) | 30 | 30
Minimum width (cm) | 30 | 140
Planned dilution | 0% | 367%
Geological reserves (t) | 3,024 | 14,112
Reserve grade (g/t) | 35.00 | 7.50

Mining
Wall dilution | 5% | 35%
Stope recovery | 79% | 90%
Ore development | 544 | 2,540
Planned mining reserve (t) | 1,961 | 14,606
Grade (g/t) | 33.25 | 4.88
Mill recovery | 96% | 96%
Produced ounces | 2,013 | 2,198

Thermal Drilling (Unit Cost $/m | Total Cost) | Long-hole Drilling (Unit Cost $/m | Total Cost)
Development
Drifts | 1,000.00 | 180,000.00 | 1,000.00 | 180,000.00
Subdrifts | 1,000.00 | 120,000.00 | 1,000.00 | 120,000.00
Raises | 1,000.00 | 60,000.00 | 1,000.00 | 120,000.00
Drawpoint | 1,000.00 | – | 1,000.00 | 60,000.00
Mining cost ($/t) | 113.50 | 222,600.00 | 19.00 | 277,516.00
Mucking | 8.00 | 15,690.00 | 4.00 | 58,424.00
Transportation | 12.00 | 23,535.00 | 6.00 | 87,636.00
Milling | 16.00 | 31,380.00 | 20.00 | 292,122.00
Environment | 2.00 | 3,922.00 | 2.00 | 29,212.00
Backfilling | – | – | 5.00 | 73,030.00
Total | – | 657,127.00 | – | 1,297,940.00
$ per tonne | – | 335.06 | – | 88.86
$ per ounce | – | 326.49 | – | 590.59
US$ per ounce | 0.65 | 212.22 | – | 383.88
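
The bottom rows of the table follow directly from the figures above them: $ per ounce is the total cost divided by the produced ounces (about $326 and $591 respectively), and the US$ figures follow from the 0.65 exchange rate shown (326.49 x 0.65 = 212.22 and 590.59 x 0.65 = 383.88). The roughly 45% saving quoted below corresponds to 1 – 212.22/383.88.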

As the analysis above shows, it is approximately 45% less costly to mine a narrow vein
ore body using the Thermal Fragmentation Mining Method than using a conventional
mining method. Overall profitability of mine operations is increased since more precious
metals can be economically mined for the same level of development expenditures.

conclusion
Many variations and adjustments have been made to conventional methods of mining
narrow precious metal veins, but the serious shortfalls brought upon by dilution remain.
The Thermal Fragmentation Mining Method is a new and innovative way of mining
narrow vein ore bodies and a foremost solution to solving the problem of ore dilution by
reducing it by a factor of four to one. It uses a unique tool, a powerful burner, to mine a narrow mineralised corridor with precision, in an effective and productive manner.
The technology is positioned to meet the growing challenges of skilled labour shortages,
tougher environmental guidelines, and the depletion of traditional large scale ore deposits
mined using conventional methods. As the technology continues to develop and spread
through the mining community, the objective remains to optimise the productivity
and profitability of mining narrow, high-grade, precious metal ore bodies and to make
a substantial, lasting contribution to this sector of activity.

references
Canadian Institute of Mining (2003) Thermal Rock Fragmentation – Applications in Narrow-vein Extraction, CIM Bulletin, Vol. 96, #1071, Canada, pp. 66–71. [1]

Calaman, J. J. & Rolseth, H. C. (1968) Surface Mining, First Edition, Chapter 6.4, Society for Mining, Metallurgy and Exploration Inc., Colorado, USA, pp. 325–337. [2]

A Review of Controlled Recirculation
and Its Potential Use for Deep
Underground Block Caving Mines

abstract
Ernesto arancibia
Codelco, Chile

Raúl castro
Claudio gutiérrez
Universidad de Chile

In the next years various block caving underground mines in Chile and around the world will start to operate at greater depth as ore reserves near the surface are exhausted. The new block caves will be subjected to more challenging atmospheric conditions, including increases in temperature and in mine resistance to the flow of air. Those conditions mean that ventilation may have a higher impact on mine costs due to increased energy consumption and the cost of building the infrastructure to carry fresh air to the production faces. In recent years, and based on deep coal mine ventilation systems, there have been attempts to use alternative approaches to air circulation in block caves, including controlled recirculation and air filtering (El Salvador Mine) [1]. In this paper we present a methodology to calculate the airflows and costs of controlled recirculation for metalliferous mines, which allows different re-circulated system solutions for a block cave to be compared.

introduction
In the next years various block caving underground mines in Chile and around the world will go deeper as ore reserves near the surface are exhausted. That is the case for some of Codelco's mines, such as Chuquicamata underground, the New Mine Level and Sur-Sur in Andina. This is also the case for many other operations around the globe, which means that mine conditions will be challenging for the current and next generations of mining engineers.
From a ventilation point of view, as mines start to operate at depth the energy requirements increase, since the air needs to cover a longer path to its destination. On the other hand, the infrastructure required to move the same amount of air becomes more expensive, as shafts and ramps specially designed for ventilation purposes become a large part of the mine investment. All this must be taken into account as energy costs and infrastructure costs have been steadily increasing since 2005.
In other parts of the world (outside Chile) the ventilation problems of deep underground mines are of a different nature. In the case of South African mines, depth brings problems related to increased temperature. In those conditions fresh air from the surface is subjected to high pressure, which increases its temperature significantly, so the air has to be refrigerated to be usable in any production area, at a high ventilation cost. A solution to reduce the amount of energy spent on this process was to use controlled recirculation of the refrigerated air [2]. In the case of coal mines in the United Kingdom during the mid-eighties, mining was carried out as far as 11 kilometres from the seashore. The large distance posed serious difficulties for any increase in production or further deepening, given the large investments required in ventilation [3]. The solution was to use controlled recirculation to increase the flow of air at the production face and produce turbulent conditions. This was required to avoid forming layers of high methane concentration without incurring high-investment solutions.
As noted in the literature, recirculation of air has been used when there is a high cost associated with ventilation. In the case of Chilean block caving mines, it is expected that the new mines will be located deeper, so ventilation costs, both in infrastructure and in operation, will become more relevant to future operations. However, before it can be used in large metal mines, first as a concept and then in an engineering project, the fundamentals of controlled recirculation need to be presented and investigated.

current state of ventilation


In order to establish the benefits of controlled recirculation for block caves it is necessary to define the current methodologies used to determine the amount of fresh air. In the case of Chilean mines the state regulation defines the limits for pollutant concentration. In engineering practice the amount of air for a production drift in a block cave is dictated by the diesel and dust concentrations. In the case of diesel equipment the required air is calculated using 2.83 m3/min per hp, which is equivalent to 100 cfm per hp [4]. To control the dust concentration, the criterion is to calculate the amount of air needed to dilute the dust, taking care that the air speed is not high enough to re-entrain dust into the flow. The small particles are considered as gases which can only be diluted with fresh air. In practice the speed for minimal dust concentration is in the order of 300 to 400 fpm [4]. Using both criteria, the amount of air needed to ventilate a production drift is 23,100 cfm for gases and 31,500 cfm for dust, considering a 231-hp lhd. This indicates that due to dust an extra 8,400 cfm of fresh air are required per production drift.
If the dust is removed from the air through filters, there is room for decreasing the
amount of fresh air in the system. In this work we postulate that, by using controlled recirculation, there is room for engineering improvements, considering the regulations used by the industry.

Basic recirculation model


The basic controlled recirculation model is due to McPherson [5]. As indicated in Figure ➊, in a recirculated circuit there are five airflows to be determined: a main intake (Q1), a mixed intake to the face (Q2), a return (Q3), a recirculation cross-cut (Q4) and a main return (Q5). There are also two possible positions for the fan sites: in the cross-cut (A) and in the return (B).

Figure 1 Basic recirculation model.

For the case of block caves we considered that the main pollutants are dust and CO while
the main requirement is oxygen. In the following the main calculations are presented.

Dust concentration

The dust concentration is given by:

(1)

(2)

where:
Q1: Intake fresh air flow, m3/s.
C1: Intake dust concentration, mg/m3.
C2: Dust concentration to the face mg/m3.
C4: Dust concentration after the emission source mg/m3.
G: Emission source mg/s.
E: Dust filtering efficiency (% in mass).
F: Recirculated fraction defined as Q4/Q2, having values between 0 and 1.
P4: Contaminant penetration in branch 4, equal to 1-E.
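
Equations (1) and (2) express the steady-state dust balance for this circuit. A hedged Python sketch of that balance, based on the basic recirculation model of McPherson [5] and the variables defined above, is given below; it assumes Q2 = Q1/(1 – F), that the emission G reports to the face return, and that only the cross-cut flow passes through the filter (penetration P4 = 1 – E). It is an interpretation for illustration, not necessarily the exact published formulation.

def dust_concentrations(q1, f, g, c1=0.0, eff=0.95):
    """Steady-state dust concentrations at the face (C2) and after the source (C4),
    mg/m3, for the basic controlled recirculation circuit -- an illustrative sketch.
    q1  : fresh intake flow Q1, m3/s
    f   : recirculated fraction F = Q4/Q2 (between 0 and 1)
    g   : dust emission rate G, mg/s
    c1  : intake dust concentration C1, mg/m3
    eff : dust filtering efficiency E (fraction of dust captured in the cross-cut)
    """
    p4 = 1.0 - eff                    # contaminant penetration through branch 4
    q2 = q1 / (1.0 - f)               # mixed intake to the face
    # Mixing at the face entry: Q2*C2 = Q1*C1 + Q4*P4*C4, with C4 = C2 + G/Q2
    c2 = ((1.0 - f) * c1 + f * p4 * g / q2) / (1.0 - f * p4)
    c4 = c2 + g / q2
    return c2, c4

Setting eff = 0 in this sketch gives a return concentration of C1 + G/Q1, independent of the recirculated fraction, which is consistent with the behaviour described for the gases in the CO and O2 sections below.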

In a traditional ventilation circuit, to control the dust the engineer has to decide on the fresh air at the intake considering the emission source. In a recirculation system there are more variables, such as the recirculated fraction and the filter efficiency. Depending on those variables the dust concentrations could be larger or smaller than in traditional ventilation [6].
Wu et al. [6] used the same model but considered the filtering and fan in-line with the flow (case B in Figure ➊). In their approach the focus was to find the critical filtering efficiency at which, regardless of the re-circulated fraction, the concentration at the face (C2) was smaller than in the traditional case. The objective was then to find a filtering system with a similar efficiency. Stachulak [2] used the same approach for metalliferous mines in Canada. However, none of the authors analysed the case of a better filtering efficiency.

Carbon monoxide concentration

Let us consider a diesel engine which generates CO. In this case we considered two branches at the face: one that directs the air to the engine and another that passes through the drift. The CO concentration is then given by:

(3)

(4)

Where:
Cq: CO concentration in the lhd exhaust (ppm by volume).
Q: Airflow consumed by the lhd in m3/s.
C1: CO concentration at the mine intake in ppmv.
C2: CO concentration at the face in ppmv.
C4: CO concentration after the source emission in ppmv.
F and the airflows Q as defined above.

Oxygen concentration

In this case the O2 concentration is given by:

(5)

(6)

Where:
C1: O2 concentration at the intake in ppmv
C2: O2 concentration at the face in ppmv
C4: O2 concentration after the source emission

A difference with respect to the CO case is that, in the O2 case, as the recirculation increases the O2 concentration decreases, while the concentration at the return is independent of the fraction. In traditional ventilation systems the O2 concentration takes values near those at the surface, so the O2 requirement does not determine the flow. For the re-circulated system the same principle applies.

methodology for calculating


recirculation parameters
In the case of recirculation the method to calculate the required flow is to set C4 equal to the regulatory limit. This requires calculating the source emission and defining the recirculation fraction and the intake flow.
In order to establish the source emissions we considered the case of a production drift with a production rate of 2,000 tonnes per day and an operating 231-hp lhd. In this case we built a simplified model in which it was assumed that the calculated flow was enough to comply with the regulations for CO and dust. The dust emissions are shown in Table 1, while the engine gas emission rates (CO and dpm) are shown in Table 2; a sketch of this calculation is given after Table 2 below.

Table 1 Emissions for total dust and under 10 μm dust

EPM5 | 1400 mg/t
Etotal | 3000 mg/t

Table 2 Emissions of gases

CO generation | 0.0004 m3/s
Diesel exhaust fumes flow | 0.2172 m3/s
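
Following the methodology above (set C4 equal to the regulatory limit and solve for the fresh intake), a sketch of that inversion is given below. It reuses the hedged mass balance sketched earlier; the dust emission rate is derived from Table 1 together with the 2,000 t/d production rate, while dust_limit is a hypothetical placeholder value, not a figure from this paper or from the Chilean regulation.

def required_intake(c4_limit, f, g, c1=0.0, eff=0.95):
    """Fresh intake Q1 (m3/s) such that the concentration after the source, C4,
    equals the chosen limit -- inversion of the sketch given earlier."""
    p4 = 1.0 - eff
    return g * (1.0 - f) / (c4_limit * (1.0 - f * p4) - (1.0 - f) * c1)

# PM5 dust emission for the 2,000 t/d drift, from Table 1: 1,400 mg/t
g_pm5 = 1400.0 * 2000.0 / 86400.0      # mg/s, roughly 32.4 mg/s

dust_limit = 3.0                       # mg/m3 -- hypothetical placeholder limit
for f in (0.0, 0.3, 0.6):
    q1 = required_intake(dust_limit, f, g_pm5)
    print(f"F = {f:.1f}: Q1 = {q1:.1f} m3/s")

As in Figure ➋, the required fresh intake falls as the recirculation fraction rises, because part of the dilution is done by filtered, re-circulated air.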

m i n i n 2 0 1 0 •  s a n t i a g o , c h i l e
74 A R e vie w of Controlled R ecirculation and It s Potential Use...

For the case of dust (pm5), Figure ➋ shows the possible solutions for a drift considering a dust removal efficiency of 95%. As the recirculation factor increases the intake flow reduces and the mixed intake flow increases.

Figure 2 Airflow for different recirculation factors in the case of dust.

For the case of CO, Figure ➌ shows the possible solutions for a drift. As noted, the intake flow is constant while the mixed intake flow and the crosscut flow increase with the recirculating fraction.

Figure 3 Airflow for different recirculation factors for the case of CO.

Combining the two graphs, it is possible to define the solution that considers both the CO and the dust concentrations.

Figure 4 Airflow requirement considering dust and CO. The intercept is the optimal airflow.

case study
The case study for the application of recirculation corresponds to a block caving system. For the analysis we used a production block having five drifts located at different distances from the intake. The production drifts are 120 m long and in each drift a 231-hp lhd is operating. The ventilation circuits considered are a base case (typical) and a recirculated system for a production level, as shown in Figures ➎ and ➏.
For the re-circulated system we used the above equations to estimate the intake flow and recirculation fraction which comply with the regulations for dust, CO and DPM. For the base case this was done in the usual way. Afterwards we simulated the ventilation circuits in the software VenetPC to estimate the pressure head required to move the air.
In the re-circulated system the dust filtering device was located in the branch noted as re-circulated flow in Figure ➏. It is assumed that this device is in series with the flow, adding resistance to the circuit. This resistance is obtained by dividing the pressure head loss caused by the filtering device at the required airflow by the square of that airflow (R = Δp/Q²). Afterwards we simulated the airflow, adding a fan to the circuit in order to calculate the annual costs as in a typical ventilation engineering study.


Figure 5 Isometric view of a typical ventilation circuit.

Figure 6 Isometric view of a controlled recirculated system.

For this exercise we considered a total of 21 combinations for three different distances from the intake. The first case was a production level at 600 m, the second at 2,200 m and the third at 4,000 m from the intake. For each case we considered the traditional ventilation solution and six different ways of filtering the dust from the air.
The solutions for filtering the dust were first classified by particle size efficiency. Thus, solutions that were more effective in capturing coarse dust (> 10 μm) were considered as pre-cleaning stages, while devices having 50% efficiency for dust smaller than 10 μm were considered as definitive dust filters. The systems considered for pre-cleaning stages were cyclones and settling chambers. The systems considered for the control of under-10 μm particles were bag-house filters, cartridge filters and electrostatic precipitators (esp).

results
The results of the simulations are shown in Table 3. This table indicates the feasibility of using recirculation and the estimated energy costs for the different solutions. For a particular application, the decision as to which alternative is the best solution will require estimating the costs or savings in ventilation infrastructure.

Table 3 Summary of results

conclusions
Controlled recirculation systems are currently not in use in Chilean underground mines. It is expected that for future mines controlled recirculation will have to be evaluated to alleviate ventilation costs, which will become more relevant at depth. In this paper we present the equations for calculating the pollutant concentrations, which could help to determine the intake flow and recirculation fraction in a recirculated system. Solutions for capturing the dust were considered, including cyclones, settling chambers and dust filters.


acknowledgements
This research was funded by Codelco through the chair of Mining Technology at the
Universidad de Chile. Thanks to Dr. A Tamburrino at the Universidad de Chile for
reviewing the equations contained in this paper.

references
Gonzalez, G. & Arancibia, E. (2000) Limpieza y recirculación de aire de ventilación, Informe Final, im2 s.a., pp. 1–48. [1]

Stachulak, J. (1991) Controlled Air Recirculation Consideration for Canadian Hard Rock Mining, PhD Thesis, Department of Mining and Metallurgical Engineering, McGill University, Montreal, pp. 13, 16–17, 74–93, 306–312. [2]

Robinson, R. & Harrison, T. (1987) Controlled Recirculation of Air at the Wearmouth Colliery, British Coal Corporation, 3rd US Mine Ventilation Symposium, pp. 3–11. [3]

Morales, A. (2003) Control de polvo en mina El Salvador, Memoria para optar al título de Ingeniero Civil de Minas, Universidad de Chile, p. 25. [4]

McPherson, M. (1993) Chapter 4: Subsurface Ventilation Systems, Chapter 9: Ventilation Planning & Chapter 20: The Aerodynamics, Sources and Control of Airborne Dust, Subsurface Ventilation Engineering. [5]

Wu, H. W., Gillies, A. D. S. & Nixon, A. C. (2001) Trial of controlled partial recirculation of ventilation air at Mount Isa Mines, Mining Technology: imm Transactions Section A, pp. 86–96. [6]
Free and Semi Controlled Splitting
Network Optimisation Using GAs to
Justify the Use of Regulators

abstract
Enrique Acuña
Stephen Hall
mirarco – Laurentian University, Canada

Ian Lowndes
University of Nottingham, UK

Systematically reducing ventilation costs without impinging on production can
significantly improve the profitability of an underground operation. The main
ventilation systems of a mine are usually designed using a combination of main fans,
booster fans and a set of regulators. Depending on how these ventilation devices are
located and operated within the mine, a cost effective distribution of the air can be
achieved to supply the airflow requirement of the planned activities at minimum cost. A
common design decision faced by ventilation engineers is where it is appropriate to
install regulator doors in concert with booster fans to effect an optimal distribution of
air within the ventilation circuit. This paper presents an application of genetic
algorithms to evaluate and optimise a mine ventilation network employing a free
splitting (non-regulated) and a semi controlled (regulated) approach to determine a
lower and an upper bound solution for the main ventilation system. This enables the
analysis needed to decide where to locate the regulators and whether they are cost
effective. A case study is presented to test the proposed methodology and to generate
an estimation of the benefit of the use of regulators.

introduction
The objective of a ventilation system is to provide safe, healthy and acceptable working
conditions within underground development and production zones. Mine ventilation
systems are usually divided into two areas known as the primary and the secondary
ventilation systems. The primary or main systems are responsible for bringing the air
from surface to the vicinity of the working faces and then back to surface. Normally the
main system is composed of the main and booster fans. The secondary or auxiliary
ventilation systems are responsible for delivering fresh airflow from the primary system
to the ‘dead-ends' where most of the mining production and development occurs.
Ventilation is one of the key components of the mining system, enabling planned
activities to be performed safely according to the planner's schedules.
Ventilation, cooling and heating for a deep and extensive mechanised underground
operation can account for up to 40% of the total energy budget of a mine [3] .
Consequently, ventilation is coming under tight cost control within underground
operations, not only because of its cost but also because of the availability of energy,
which can be an issue depending on the mine site.

background
Mine ventilation optimisation is currently a manual, computer assisted task performed
through the iterative use of ventilation solvers. This approach is widely used within the
mining industry, but the application of this trial and error method cannot guarantee an
optimal solution. Decisions about mine ventilation systems are commonly based on
effectiveness rather than on effectiveness and efficiency at the same time. From
the effective or feasible solutions obtained, the one reporting the lowest cost is termed
the optimal solution. Even if the main approach to mine ventilation optimisation today
is closer to an art than a science, a few attempts have been made to generate a more
rigorous mathematical approach by combining the use of ventilation solvers with
operations research techniques [1, 2, 4] . These approaches have found good feasible
and, when possible, optimal solutions within certain constraints.
The flow distribution of air within mine ventilation circuits can be classified into
three main types: free splitting, semi-controlled and controlled networks. Free splitting
networks are where only the pressure of the fans is allocated to supply the airflow
requirements at a minimum cost. Semi-controlled networks are where the pressure of the
fans and additional regulator resistances are allocated to supply the airflow requirement
at minimum cost. Finally, controlled networks are where the direction of the airflow and
the amount of airflow is known in every single branch of the network and pressures of
fans and regulators are determined to provide the defined airflow at minimum cost.
Operational mine ventilation systems are often a hybrid of free splitting and semi
controlled networks. Consequently, the objective of the study presented in this paper was
to present a comparative analysis of the performance of two representative model mine
networks employing in turn free splitting and semi controlled flow distributions. This
study identified the upper and lower bounds of the optimised energy cost and the
operational conditions of the mine needed to properly deliver the airflow requirements
at the working faces at minimum cost, and provided an evaluation of the contribution of
regulators to the achievement of a practical and optimal solution.
In previous research studies [2, 4] a semi controlled network approach was considered
to optimise a mine ventilation system, whereas Acuña et al. (2009) [1] focused on the
application of a free splitting approach to determine an optimised solution. Unfortunately,
each of these studies employed different model networks, which makes it difficult to
produce a direct comparison of the promising results produced by each study. As Calizaya
et al. (1987) and Lowndes et al. (2005) [2, 4] claimed optimality in their results, their
networks will be used in the development of this study.

statement of problem
The mine ventilation network optimisation problem may be formulated as a non-linear
optimisation problem where the objective function (value function) is to minimise the
airpower or the energy cost [6] as presented in Equations (1) to (7) .

\[ \min_{P \in F \subseteq S} \; \sum_{j=1}^{b} \frac{P_j\,Q_j}{1000\,\eta_j} \times 24 \times \mathrm{Days} \times EC \tag{1} \]

where b is the number of branches in the network; Days is the number of days that the
fan is going to be working, usually on a yearly basis, i.e., Days = 365. EC is the energy cost
($/kWh). 24 is used to transform the days into hours and 1000 to transform W into kW
because of the EC units, P = (P1, …, Pb) is the vector of pressures, Q = (Q1, …, Qb) is the
vector of airflow resulting from the pressures, η = (η 1, …, η b) is the vector of efficiencies
for a combination of pressure and airflow if a fan is located in the branch j; F is the
feasible region, and S is the whole search space;
subject to

\[ \sum_{j=1}^{b} a_{ij}\,Q_j = 0, \qquad i = 1, \ldots, n \tag{2} \]

where Qj is the airflow quantity through branch j, n is the number of nodes in the
network, aij = 1 if branch j is connected to node i and the airflow goes away from node i,
aij = -1 if branch j is connected to node i and the airflow goes into node i, and aij = 0 if
branch j is not connected to node i;

\[ H_{Lj} = R_j\,Q_j^{2}, \qquad j = 1, \ldots, b \tag{3} \]

\[ \sum_{j=1}^{b} b_{ij}\left(H_{Lj} + H_{Rj} - H_{Fj} - H_{Nj}\right) = 0, \qquad i = 1, \ldots, m \tag{4} \]

\[ H_{Fj} = P_j \ \text{if a fan is located in branch } j, \qquad H_{Fj} = 0 \ \text{otherwise} \tag{5} \]

where HLj is the friction pressure drop for branch j, Rj is the resistance factor for branch
j, HRj is the pressure drop of the regulator in branch j, HFj is the fan pressure in branch
j, HNj is the natural pressure in branch j, m is the number of chords (meshes) in the
network (m = b – n + 1), bij = 1 if branch j is contained in mesh i and has the same direction,
bij = -1 if branch j is contained in mesh i and has the opposite direction, and bij = 0 if
branch j is not contained in mesh i;

\[ L_j \leq Q_j \leq U_j, \qquad j = 1, \ldots, b \tag{6} \]

\[ H_{Rj} \geq 0, \qquad j = 1, \ldots, b \tag{7} \]

where Lj and Uj are the lower and upper bounds for the airflow in each branch of the
network and HRj is the additional resistance that can be installed in each branch.


The objective function and the pressure balance constraints (as defined by Kirchhoff's
second law) are nonlinear. The objective function is cubic in terms of the airflow and
the pressure balance constraint is quadratic. In practical terms, the vector P is the set of
pressure values for the fans that will deliver at least the required airflow for the mine. In
other words the goal is to find the set of pressures and locations for fans and regulators
that deliver the airflow requirement at minimum cost.
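To illustrate how the objective of Equation (1) is evaluated for a candidate solution, the short Python sketch below computes the annual energy cost of a set of fan duties. The duties in the example are the sample-network values reported later in Table 2; the 70% fan efficiency and the 0.10 $/kWh energy cost are assumptions made only for illustration.

```python
def annual_energy_cost(pressures_pa, airflows_m3s, efficiencies,
                       ec_usd_per_kwh=0.10, days=365):
    """Equation (1): sum over branches of P*Q/(1000*eta) in kW, over 24 h per day."""
    cost = 0.0
    for p, q, eta in zip(pressures_pa, airflows_m3s, efficiencies):
        airpower_kw = p * q / (1000.0 * eta)   # Pa * m3/s = W; divide by 1000 for kW
        cost += airpower_kw * 24 * days * ec_usd_per_kwh
    return cost

# Main fan 1.19 kPa at 43.14 m3/s and booster 0.4 kPa at 29.45 m3/s, 70% efficiency assumed
print(round(annual_energy_cost([1190, 400], [43.14, 29.45], [0.7, 0.7]), 0))
```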
As formulated by Wu and Topuz (1998) [6] the problem corresponds to a semi controlled
network formulation where regulators can be allocated in all the branches of the
network. In the particular case of this study, and given the approaches developed by
Calizaya et al. (1987) and Lowndes et al. (2005) [2, 4] , regulators are only allocated where
airflows are fixed. This constraint is inherited from the use of the ventilation solver for
the optimisation process.
If Equation (8) is added to the semi controlled formulation then it becomes the free
splitting problem. The condition imposed is to forbid the installation of any regulator.
Unfortunately, Equation (8) is also the reason why the free splitting approach is
expected to perform poorly compared to the semi controlled one: with an additional set
of constraints the solution can only be equal to or worse than the unconstrained one in
terms of the objective function, and therefore more costly in terms of energy.

\[ H_{Rj} = 0, \qquad j = 1, \ldots, b \tag{8} \]

where HRj is the additional resistance that can be installed in each branch.

genetic algorithm (ga) model


Genetic Algorithms (gas) are search algorithms inspired by Darwin's theory of evolution.
To apply a ga , an initial population of individuals is created at random, with each
individual representing a solution to the problem. Next, pairs of individuals are combined
by means of a crossover operation to produce offspring for the next generation. This leads
to an intense search in one area of the solution space. In order to explore more areas
of the solution space, a mutation process is also used to randomly modify some of the
individuals of each generation. The process is repeated until the stopping criterion is
reached, and the best solution found is retained.
As presented by Acuña et al. (2009) [1] , there are three main parameters that need to
be defined within a free splitting mine ventilation network:

• Number of fans to install in the mine


• Operational point or duties (pressures and airflows delivered) of the fans
• Location of the fans within the potential locations.
As presented by Calizaya et al. (1987) [2], the same number of decisions has to be made, but
now for the regulators, in order to obtain a semi controlled mine ventilation network.
In terms of the ga representation, or chromosome, of the problem there is no
difference, as presented in Table 1 . For both the free splitting and the semi controlled
approaches the representation is the same, because the ga handles the choice of the
pressures and the ventilation solver the choice of the regulators. This is the case only for
regulators that have to be located where a fixed airflow quantity is set. If a regulator had
to be allocated in a branch where the airflow quantity is not fixed, then the ventilation
solver would not be able to handle the regulator of that branch.
In the case of the ga the value function, also called the fitness function, is the airpower
cost of the solution; if the solution is not feasible then a large preset value is added to
force the search towards the feasible side of the solution space. For each generation
(iteration) of the ga a new population of solutions with the same size as the original
population is created. The original population and the new population compete and only
the best solutions are kept to form a new population of the original size. The ga code
was developed using the galib genetic algorithm package, written by Matthew Wall at
the Massachusetts Institute of Technology, and integrated with the ventilation solver
3-d-canvent developed by Natural Resources Canada.

Table 1 Chromosome representation

852 (Pa)                      200 (Pa)                      0 (Pa)                          300 (Pa)
Fan pressure at position 1    Fan pressure at position 2    Fan pressure at position p – 1    Fan pressure at position p
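To make the chromosome representation of Table 1 and the penalty-based fitness concrete, the following is a minimal, self-contained Python sketch. It is an illustration under stated assumptions, not the authors' galib/3-d-canvent implementation: solve_network() is a hypothetical stand-in for the ventilation solver, and the pressure range, step size and population settings are simply example values.

```python
import random

PRESSURE_STEPS = list(range(0, 2001, 50))   # candidate fan pressures (Pa), 50 Pa step
N_POSITIONS = 4                             # number of candidate fan locations

def solve_network(pressures):
    """Hypothetical stand-in for the ventilation solver: returns (airpower in kW, feasible)."""
    airpower = sum(p * 0.03 for p in pressures) + 50.0   # fake response surface
    feasible = sum(pressures) >= 1000                    # pretend airflow targets are met
    return airpower, feasible

def fitness(chromosome):
    """Airpower cost of the solution, plus a large preset penalty when infeasible."""
    airpower, feasible = solve_network(chromosome)
    return airpower + (0.0 if feasible else 1e6)

def crossover(parent_a, parent_b):
    cut = random.randint(1, N_POSITIONS - 1)
    return parent_a[:cut] + parent_b[cut:]

def mutate(chromosome, rate=0.1):
    return [random.choice(PRESSURE_STEPS) if random.random() < rate else gene
            for gene in chromosome]

def run_ga(pop_size=30, generations=100):
    population = [[random.choice(PRESSURE_STEPS) for _ in range(N_POSITIONS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(crossover(*random.sample(population, 2)))
                     for _ in range(pop_size)]
        # parents and offspring compete; only the best individuals survive
        population = sorted(population + offspring, key=fitness)[:pop_size]
    return population[0], fitness(population[0])

if __name__ == "__main__":
    best, cost = run_ga()
    print("best fan pressures (Pa):", best, "fitness:", round(cost, 1))
```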

case study
Two case studies are presented in this study. The first one corresponds to the sample
ventilation network presented by Calizaya et. al. (1987) [2] with a size of 17 branches
and two fan locations. Apparently it does not correspond to any real mine and the
network schematic is presented in Figure  ➊. The second one corresponds to the network
presented by Lowndes et. al. (2005) [4] with a size of 242 branches and 16 fan locations.
The network schematic is presented in Figure  ➋.
The first case study represents a simple mine ventilation network with one intake and
one exhaust. The main fan is located in the exhaust and a booster fan can be located
underground in a single available position. Three working faces are identified with three
different airflow requirements varying from 10 m3/s to 12 m3/s. The objective was to decide
whether the booster fan should be allocated and what the operational points of both fans
should be in order to deliver the airflow requirement at minimum cost.
The second case study is based on the mine ventilation network of the former El Indio
Mine, located east of La Serena in the Andes Mountains in the north of Chile. The mine
was owned by a subsidiary of Barrick Gold Corporation and produced mainly gold, but
also copper and silver. The mine is currently closed after shutting down during 2001. The
mine had seven adits, of which six were used as intakes and one as the main exhaust [5] .
There were nine working faces (also called stopes), labelled S1 to S9, each with an airflow
requirement dependent on the activities taking place within it, which could be either
9.6 m3/s or 23.2 m3/s. The objective was to find the best locations and operational points
for booster fans to support the operation of the main exhaust fan by better distributing
the airflow and reducing the operational cost. Sixteen locations, labelled L1 to L16, were
available for the additional booster fans, of which only three could be selected [4] .
The execution of the combined ga/canvent algorithm was tested using the two mine
ventilation networks. The advantage of these networks is that they had a previously
determined optimal solution using regulators only in the working faces where fixed
requirements were set because of mining activities, which can be used as the upper bound
of the benefit that can be obtained. This allows two things: firstly, to check the goodness of
the solutions generated by the free splitting approach and, secondly, to compare with the
semi controlled approach in order to establish the value added by inserting regulators
in the system.


Figure 1 Sample mine ventilation network diagram.

Figure 2 El Indio mine ventilation network diagram.

presentation and discussion of results


Free splitting results
For both networks a series of solution executions was performed and the best solutions
identified and presented in Tables 2 and 3 . In the case of the sample network the
runs were developed using a pressure ranging from 0 to 2 kPa for each fan with 1 Pa
increments. The incremental step is very small to better search the solution space,
because the small size of the problem allows the algorithm to do a more intensive search
within a reasonable time. In the case of the El Indio network the runs were developed
using the same pressure range but with a step of 50 Pa, and only three locations were
available for the fans: L3, L10 and L12, in order to obtain the benefit of the regulators
for that solution [4] .
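For reference, discretising each fan pressure between 0 and 2 kPa in 1 Pa increments gives 2,001 possible settings per fan, so even the small two-fan sample network has

\[ 2001^{2} = 4{,}004{,}001 \approx 4.0 \times 10^{6} \]

candidate pressure combinations, which is why a guided search such as the ga is used rather than exhaustive enumeration.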

Table 2 Free splitting solution for sample network

Fan Pressure (kPa) Airflow (m3/s) Airpower (kW)


Main 1.19 43.14 51.34
Booster 0.4 29.45 11.78
Total 63.12

Table 3 Free splitting solution for El Indio network

Fan Pressure (kPa) Airflow (m3/s) Airpower (kW)


Main 0.852 149.54 127.41
L3 0.3 104.31 31.29
L10 0.4 121.34 48.54
L12 0.7 40.32 28.22
Total 235.46
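As a check on the tabulated figures, airpower is the product of fan pressure and airflow (a kPa times a m3/s gives kW); for the El Indio main fan in Table 3, for example:

\[ \text{Airpower} = P \times Q = 0.852\ \text{kPa} \times 149.54\ \text{m}^{3}/\text{s} \approx 127.4\ \text{kW} \]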

Semi controlled results


Tables 4 and 5 present the results obtained in the initial studies [2, 4] . As expected,
both solutions have a lower airpower cost than the free splitting approach. The reason is
the additional capability of the ga to allocate regulators at the working faces and thus
better distribute the airflow through the network in a more cost effective way.
Contrary to the common belief that a network with more resistances is more expensive to
operate, if the additional resistances (in this case regulators) are properly allocated then
the overall operational cost of the main ventilation system may be improved.

Table 4 Semi controlled solution for sample network

Fan Pressure (kPa) Airflow (m3/s) Airpower (kW)


Main 1.128 41.72 47.06
Booster 0.374 28.13 10.52
Total 57.58

Table 5 Semi controlled solution for El Indio network

Fan Pressure (kPa) Airflow (m3/s) Airpower (kW)


Main 0.852 147.45 125.63
L3 0.2 89.62 17.92
L10 0.15 45.82 6.87
L12 0.5 34.41 17.21
Total 167.63

Comparison in terms of airpower cost


From an analysis of the airpower costs presented in Table 6 it is concluded that the
improvement that can be generated by the optimisation of a ventilation network comes
not only from the determination of the fan operational points, but also from the
identification of the correct regulators installed in the vicinity of the working faces,
which can have a significant impact on the operational cost of the main ventilation
system of a mine. For the simple network the improvement achieved was nearly 9% and
for the more complex El Indio network nearly 29%. The improvements were measured in
terms of airpower consumed by the mine, which is equivalent to measuring them in terms
of the cost of the energy that has to be provided for the fans to operate.


Table 6 Improvements generated by regulators

Network Free splitting Semi controlled % Improvement


Sample 63.12 57.58 8.8 %
El Indio 235.46 167.63 28.8 %
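The improvement percentages in Table 6 are simply the relative reductions in total airpower; for the El Indio network, for example:

\[ \frac{235.46 - 167.63}{235.46} \times 100\% \approx 28.8\% \]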

Although these improvements are impressive and encouraging, they are not free. In
order to properly set the regulators in the working faces with the right pressure drop
across them, several constraints have to be satisfied. First, the working faces or the serial
branches next to them have to make it possible to install the regulator as required and,
second, it must be guaranteed that the regulator will not be damaged during operation.
Assuming that the first constraint is satisfied for all working faces, the second one is the
key point to actually obtain the calculated improvements.
To get the regulator operating as required two main activities are needed: first, to
build the regulator and, second, to maintain it with a suitable frequency that will allow
it to perform properly. As a third alternative to reduce the maintenance cost, training
can be offered to operators of the diesel and other equipment that can come into contact
with the regulator, so they can understand why it is important that they take care of it.
As observed underground, the real impact of this type of measure can be very questionable.
In terms of the business case to decide whether or not to install the regulators
underground, the costs have to be estimated as usual, but now a measure of the
improvements can be obtained to evaluate the benefits of such modifications to the
ventilation network. Given the costs and the benefits, a measure of the value of adding
regulators, such as the net present value (npv), can be calculated and a decision made
with the right information.
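As a hedged illustration of such a business case, the sketch below discounts the annual energy savings implied by the airpower reduction of Table 6 against the cost of building and maintaining the regulators. The energy price, capital cost, maintenance cost and discount rate are hypothetical placeholders, not values from this study.

```python
def annual_energy_saving_usd(airpower_saving_kw, energy_cost_usd_per_kwh=0.10):
    """Energy saved per year when the fans draw less airpower, as in Equation (1)."""
    return airpower_saving_kw * 24 * 365 * energy_cost_usd_per_kwh

def npv_of_regulators(airpower_saving_kw, capex_usd, annual_maintenance_usd,
                      years=5, discount_rate=0.10):
    """Net present value of installing regulators over a fixed horizon."""
    saving = annual_energy_saving_usd(airpower_saving_kw)
    npv = -capex_usd
    for t in range(1, years + 1):
        npv += (saving - annual_maintenance_usd) / (1 + discount_rate) ** t
    return npv

# El Indio case: the semi controlled solution saves 235.46 - 167.63 = 67.8 kW of airpower.
print(round(npv_of_regulators(67.8, capex_usd=100_000, annual_maintenance_usd=20_000)))
```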

conclusions
This study presented two optimisation approaches for the mine ventilation network
optimisation problem, based on a genetic algorithm integrated with a ventilation solver.
Both are particular instances of the general mine ventilation network optimisation
problem formulated. It has been shown that the application of genetic algorithms to the
two model networks employed by this study is able to solve the problem and identify
good feasible solutions for both the free splitting and the semi controlled approach. The
difference between these two solutions can be used as a measure of the benefit of adding
regulators in a mine and then used to evaluate if they are cost effective or not. Further
work is required to generate a technique that can allocate some or all the regulators in
the available locations, whether these are fixed quantity branches or not.

acknowledgements
This research was partially supported by mitacs accelerate, Canada. The authors would
like to thank Stephen G. Hardcastle and Gary G. Li from Natural Resources Canada for
providing the 3-d-canvent ventilation solver for this study. The genetic algorithm was
developed using the ga lib genetic algorithm package, written by Matthew Wall at the
Massachusetts Institute of Technology.

references
Acuña, E., Hardcastle, S., Maynard, R., Fava, L., Hall, S. & Dunn, P. (2009) The Application of Genetic
Algorithms to Multiple Period Ventilation Systems for Multi-level Mine Operations. Proceedings of the
Orebody Modelling and Strategic Mine Planning Symposium, pp. 265–270. [1]

Calizaya, F., McPherson, M. J. & Mousset-Jones, P. (1987) An Algorithm for Selecting the Optimum
Combination of Main and Booster Fans in Underground Mines. Proceedings of the 3rd US Mine Ventilation
Symposium, pp. 408–417. [2]

Hardcastle, S. & Kocsis, C. (2007) The Ventilation Challenge – A Canadian Perspective of Maintaining a
Good Working Environment in Deep Mines. Challenges in deep and high stress mining, pp. 519–525. [3]

Lowndes, I. S., Fogarty, T. & Yang, Z. Y. (2005) The Application of Genetic Algorithms to Optimise the
Performance of a Mine Ventilation Network: the Influence of Coding Method and Population Size. Soft
Computing, Vol. 9, pp. 493–506. [4]

Lowndes, I. S., & Yang, Z. Y. (2004) The Application of GA Optimisation Method to the Design of Practical
Ventilation Systems for Multi-level Metal Mine Operations. Mining Technology, Vol. 113, pp. 43–58. [5]

Wu, X. & Topuz, E. (1998) Analysi s of Mine Ventilation Systems Using Operations R esearch Me thod s.
International Transactions in Operational R esearch, Vol. 5(4), pp. 245–254. [6]

The Economic Optimisation of
Advanced Drilling Grids for Short
Term Planning and Grade Control
at El Tesoro Copper Mine

abstract
Eduardo Magri
Julián Ortiz
Universidad de Chile

Ricardo Líbano
Minera El Tesoro, Chile

Blast hole sampling is often done very poorly, mainly because drilling rigs are designed
to drill efficiently rather than to allow for high quality sampling. Furthermore, sampling
may interfere with the attainment of production targets. Poor blast hole sampling
generates misclassification of the blasted material, and significant losses per annum.
Nevertheless, almost every mining operation continues to sample their blast holes using
a variety of ill conceived methods.
The use of advanced reverse circulation (rc) drilling for short term mine planning has
the following advantages: (1) Sampling is done well in advance with proper sampling
equipment, thus reducing sampling errors and biases considerably; (2) Short term
planning can be done well ahead of blasting, therefore ore/waste classification can be
done with more sophisticated methods, incorporating additional variables; and (3) The
advanced drilling grid can be sparser than the blast hole grid, requiring less sample
preparation and analysis. The only apparent disadvantage is the associated cost of the
extra rc drilling required, which is very small compared to the extra profits obtained by
reducing misclassification.
El Tesoro mine's standard short term planning practice consists of the use of advanced
rc drilling in an 8 x 8 m grid. With the deepening of the open pit, the moisture content of
the rock has increased, causing sample recovery problems with rc drilling. The drilling
contractor has been forced to replace rc with conventional drilling (dth) and therefore
the sampling error has increased, questioning the validity of the current drilling grid.
The optimum dth drilling grid is investigated in this paper.
The methodology applied indicates that the optimum advanced drilling grid continues
to be 8 x 8 m and that the extra sampling error caused by conventional drilling causes
misclassification losses of the order of 5 million dollars over the next four to five years.

introduction
Short term planning must reconcile the operational performance of a mine with
the medium and long term plans. It is fed by the results of grade control, where the
destination of the blocks to the processing plant, low grade stock or waste dump is
defined, depending on their grade, mineral characteristics and other geometallurgical
variables. Usually, grade control is based on blast hole sampling [1] . F. Pitard [2, 3]
mentions many weaknesses of this type of sampling, among them:

• Poor sample recovery in the first part of the blast hole due to fracturing caused by the
subdrill of the bench above.

• Sampling of the subdrill, which is considered a delimitation error.


• Blast hole sampling usually takes second priority relative to production as it interferes
with the drilling operation. Sampling procedures are not properly carried out resulting
in poor quality samples with associated high levels of errors and often systematic
biases. This results in significant losses, due to misclassification, which unfortunately
are not reflected in financial balances, as these losses are invisible.

Poor blast hole sampling has been studied in several applications and the losses involved
may reach several million dollars per year [4] .
There are several ways of solving this problem: firstly, one could assign to the sampling
procedures the importance they deserve and apply rigorously the best practices and
procedures defined by the sampling theory [2, 3] . Also, sophisticated methods can be
used to account for the variability in the sample information to optimize grade control
practices [1, 5] . Alternatively, assuming blast hole sampling will never receive the
necessary attention, one could consider looking at the grade control problem from a
different perspective. We propose and analyse the results of using advanced rc drilling
for sampling purposes. If the operation is done in advance, sample quality can be better
controlled, many variables can be estimated for planning purposes, and a sparser grid is
necessary to satisfy the requirements of short term planning.
In the next sections we describe a mining operation where this is implemented, the
methodology applied and the results that were obtained. We conclude the paper with
some discussion about the conditions under which this methodology can be applied and
potential extensions of the study presented here.

case study
El Tesoro is located 190 km East of Antofagasta and 70 km South of Calama in the
Región de Antofagasta, Chile. El Tesoro Mine (met) extracts and processes 800,000 tonnes
of ore monthly. The process consists of open pit mining using 7.5 m benches followed by
heap leaching in dynamic heaps and solvent extraction combined with electro-winning.
Production started in 2001. Short term planning was based on blast hole sampling
using an automatic sampler manufactured by Metal Craft, equipped with dust recovery
and a static cone divider. This methodology was far superior to conventional blast hole
sampling with radial buckets, tubes, etc. It was replaced in 2002 by advanced rc drilling
since it decreased the productivity of the blast hole drilling equipment. rc was used for
drilling one or two benches well ahead of production. Short term planning models are
built including a total of eight variables that permit the calculation of parameters such
as acid consumption, recovery, etc.
At present, short term mine planning at met is based on the information of advanced
drilling samples, made specifically for this purpose, in a grid wider than the one for
blasting. Currently the spacing of the advanced drilling grid is 8 × 8 m, and uses reverse
circulation and conventional (down the hole) drilling, depending on the presence
or absence of moisture. This is performed by a contractor. Since July 2008, with the
deepening of the open pit, moisture has caused serious rc sample recovery problems
and therefore conventional (dth) drilling with sample capturing and riffle splitting is
being carried out. The current sampling methodology implies higher levels of error
relative to rc sampling.
The aim of this paper is to define an optimum advanced drilling grid applicable to the
current conditions and compare results to those that could be obtained by improving rc
sampling to cater for the extra moisture content of the rock. In this paper, the impact of
the increased sampling error on the economic performance of the mine is evaluated and
the optimum spacing for the advanced drilling grid is determined in order to maximise
the profit of the operation. Geostatistical simulation is used to construct possible
scenarios over which the evaluation of different sampling grids and errors is performed.
A portion of the met main ore body, corresponding to the next five production years
was considered for this study. Conditional simulation models were built for total copper
(CuT) and carbonate (CO3) grades within their corresponding geological and geotechnical
units (Table 1).

Table 1 Geological and geotechnical units definition

Geological Unit Code   Geological Unit Description   Geotechnical Unit Code   Geotechnical Unit Description
0                      CuT < 0.1                     0 (GE)                   Barren Gravel
2                      0.1 < CuT < 0.3               1 (CC)                   Calcareous Conglomerate
3                      0.3 < CuT < 2.0               2 (GM)                   Marginal Gravel
4                      CuT > 2.0                     3 (FM)                   High Grade Fines
                                                     4 (GAL)                  High Grade Gravel

methodology
In order to assess the economic impact of assigning an incorrect destination to selective
mining units during short term planning, from a specific advanced drilling grid,
multiple scenarios replicating the information provided by the rc or dth samples are
required. These scenarios are used to replicate the planning procedures, which consist
of taking advanced drilling samples within a given mesh, and assessing the benefits of
sending each block to one of the possible destinations.
Assigning a block to a destination (plant or waste dump) generates a possible error,
since the block grade and geometallurgical variables must be estimated from the limited
information provided by the advanced drilling samples. The only way to calculate the
proportion of blocks misclassified and associated losses is by knowing their true grades.
This is not feasible in practice, thus we rely on simulated models built with geostatistical
tools. These models are generated by considering a dense simulation grid (at a 2 m
spacing), which reproduces the actual spatial continuity of the grades and preserves their
geological characteristics, since they are computed within the geological units defined
by the geology team of the deposit in the long term model.
Samples at different grids are extracted from these models, emulating the advanced
drilling. Conventional drilling sampling error is added for the different evaluations.
Estimation of the block grades and classification to different destinations is done based
on the sample information, which can be compared against the correct destination,

obtained from the block averaged values of the densely simulated grids, providing a basis
for comparison of the different grids and errors.
This procedure allows quantifying losses in the short term plan due to:

• Information density: spacing of the samples from the advanced drilling campaign.

• Information quality: presence of sampling error in samples from the advanced drilling
campaign.

Details are provided next. A volume equivalent to approximately five years of production
was defined for the study, between the following coordinates: East 491300–492300, North
7461400–7462700 and elevation 2015–2120.
The available sample data from the long term drill holes is shown in Figure   ➊. These
data are used to condition the dense simulation.

Figure 1 Long term sample data in the study area. Left: CuT composites; Right: CO 3 composites.

Ten models are built at a resolution of 2.0 × 2.0 × 7.5 m for CuT and CO3, according to the
geological and geotechnical units defined in the long term geological model, using the
Sequential Gaussian Simulation program (sgsim) from gslib [6] .
For CuT, grades are only simulated in the geological units two, three and four, as unit
zero represents waste. CO3 is simulated considering geotechnical unit one (high grade)
and all other lower grade units are grouped into one population.
Each simulated model is built following the steps presented next [6, 7, 8] :

• For each geological unit (except unit zero):


––A representative distribution is built for CuT grades by cell declustering.
––The representative distribution is transformed into a standard Gaussian distribution.
––A variogram is calculated and modelled for the main anisotropy directions.
––Ten realisations are constructed by conditional simulation.

• For geotechnical unit one and for the remaining units, a similar procedure is followed
for the CO3 grade.

• The simulated models are combined using the geological model that describes the
extent and distribution of the geological units. Depending on the unit at a given
location, the simulated grade is assigned from the model corresponding to that same
geological unit.

The simulated grades within each unit are then combined into a single model, according
to their extent in the long term model. These models are then block averaged to
6.25 × 6.25 × 7.5 m, which corresponds to the selective mining unit size of the model. All
realisations are conditional to the long term ddh and rc drilling composites available.

Simulated values are obtained from the dense grid emulating different regular
sampling grids of 6 × 6, 8 × 8, 10 × 10, 12 × 12, and 14 × 14 m (see Figure ➋ for an example
of the result).
A normally distributed relative error of 16% is added to the CuT values and of 20% to
the CO3 values. These relative errors were obtained from the analysis of field duplicate
samples as performed currently by the drilling contractor.
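A minimal sketch of this step, assuming the dense 2 m model is held as a NumPy array, is shown below; it is illustrative only (the array shape, the lognormal stand-in for the simulated grades and the node-selection rule are assumptions) and is not the gslib workflow actually used in the study.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# dense_cut: simulated CuT grades on a 2 m grid for one bench (stand-in values)
dense_cut = rng.lognormal(mean=-0.5, sigma=0.4, size=(500, 650))

def sample_grid(dense, cell_size=2.0, spacing=8.0):
    """Pick the nodes of the dense model that fall on a regular sampling grid."""
    step = int(round(spacing / cell_size))
    return dense[::step, ::step]

def add_relative_error(samples, rel_error=0.16):
    """Add a normally distributed relative error (e.g. 16% for CuT, 20% for CO3)."""
    noise = rng.normal(loc=0.0, scale=rel_error, size=samples.shape)
    return samples * (1.0 + noise)

rc_like = sample_grid(dense_cut, spacing=8.0)           # 'error-free' rc scenario
dth_like = add_relative_error(rc_like, rel_error=0.16)  # dth scenario with 16% error
print(rc_like.shape, float(np.mean(rc_like)), float(np.mean(dth_like)))
```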

Figure 2 Simulated CuT values with error representing advanced


drilling samples at a 14 × 14 m grid over the study area.

Based on the information from each grid including the sampling error, the block grades
are estimated by ordinary kriging in 6.25 × 6.25 × 7.5 m units using the estimation
parameters normally used for the short term (Figures ➌ to ➎). Each block is then
evaluated based on the estimated CuT and CO3 grades to decide its destination to the
processing plant or waste dump.
The profit of each block is computed accounting for the income of processing it and the
costs involved, which include the acid consumption, expected recovery, operational costs
at the mine, at the processing plant and other costs involved in the sales.
The revenue (Rev) is calculated as a function of CuT:

Where:
• Volume of a block, Vol = 6.25 × 6.25 × 7.5 m3
• Density of the ore, Dens = 2.27 tonne/m3
• Copper price, PCu (US$/lb)


Figure 3 Block averaged


simulated CuT grades
representing the truth.

Figure 4 Samples
representing the advanced
drilling at a grid of 14 × 14 m
with sampling error.

Figure 5 Estimated model


from the advanced drilling
samples, used for short
term planning.

And the metallurgical recovery is a function of CuT and CO3 grade, calculated with the
following expression:

Where R ∞, k, m, C1, C2, and C3 are constants obtained from a regression model (Table 2).
Costs of processing a block are calculated as:

Where the cost per pound (in cUS$/lb) is a combination of the following costs:

These costs are calculated as:

Where WO ratio is the Waste-to-Ore ratio of the mine, CMine is the operational cost of
extracting one ton of material from the pit, CProc is the cost of processing one ton of ore, which
includes primary and secondary crusher, agglomeration, stocking, leaching and exhausted
heap disposal. CAcid is the cost of one ton of sulphuric acid. Finally, csxew represents the
solvent extraction, electro-winning, management, general expenses, royalty and sales costs
(Table 2). Acid consumption is calculated as a function of CuT and CO3 grades (Table 3).

Table 2 Economic and process parameters for profit calculation

Parameter Value

R∞ 82.1217
k 2.7292
m 1.1016
C1 0.2173
C2 1.5954
C3 3.8137
WO ratio 3.0
CMine 1.02 (US$/ton)
CProc 2.9564 (US$/ton)
CAcid 145.0 (US$/ton)
CSXEW 46.1 (cUS$/lb)


Table 3 Calculation of acid consumption (kg/ton)

                     CO3 < 0.4%                      0.4% ≤ CO3 < 0.8%                                CO3 ≥ 0.8%
CuT < 1.0%           19                              22+16*(CO3-0.4%)                                 27+16*(CO3-0.4%)
1.0% ≤ CuT < 1.3%    23                              27.5+16*(CO3-0.4%)                               30+16*(CO3-0.4%)
CuT ≥ 1.3%           27.5+15.4*0.8*(CuT-1.3%)        29.5+15.4*0.8*(CuT-1.3%)+16*(CO3-0.4%)           32+15.4*0.8*(CuT-1.3%)+16*(CO3-0.4%)

For each block, estimated CuT and CO3 grades are calculated by block kriging and the
profit assessed. If the profit is positive, the block is assigned to the processing line;
otherwise, it is sent to the waste dump.
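To illustrate the destination logic, the sketch below codes the acid consumption rules of Table 3 and a simplified profit test using parameters from Table 2. Grades are in percent; the recovery value and the revenue and cost aggregation are simplified placeholders, since the full regression and cost formulas are not reproduced here.

```python
def acid_consumption_kg_per_ton(cut, co3):
    """Acid consumption following the piecewise rules of Table 3 (grades in %)."""
    extra_co3 = 16 * (co3 - 0.4) if co3 >= 0.4 else 0.0
    if cut < 1.0:
        base = 19 if co3 < 0.4 else (22 if co3 < 0.8 else 27)
    elif cut < 1.3:
        base = 23 if co3 < 0.4 else (27.5 if co3 < 0.8 else 30)
    else:
        extra_cut = 15.4 * 0.8 * (cut - 1.3)
        base = (27.5 if co3 < 0.4 else (29.5 if co3 < 0.8 else 32)) + extra_cut
    return base + extra_co3

def block_destination(cut, co3, pcu_usd_per_lb=2.0):
    """Simplified profit test; recovery and cost aggregation are placeholder assumptions."""
    tonnes = 6.25 * 6.25 * 7.5 * 2.27             # block volume (m3) times density (t/m3)
    recovery = 0.75                               # placeholder for the regression recovery model
    lb_cu = tonnes * (cut / 100.0) * recovery * 2204.62
    revenue = lb_cu * pcu_usd_per_lb
    acid_cost = tonnes * acid_consumption_kg_per_ton(cut, co3) / 1000.0 * 145.0   # CAcid
    mine_proc_cost = tonnes * ((1 + 3.0) * 1.02 + 2.9564)   # (1 + WO ratio)*CMine + CProc, assumed
    sxew_cost = lb_cu * 0.461                                # CSXEW, 46.1 cUS$/lb
    profit = revenue - acid_cost - mine_proc_cost - sxew_cost
    return ("plant" if profit > 0 else "waste dump"), profit

print(block_destination(cut=0.8, co3=0.5))
```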
This result is compared against the maximum possible profits obtained from the true
block grades. In this case, these are obtained by averaging all the densely simulated
values excluding sampling error. Although this assessment is not achievable in practice,
since we never know the true block grades, the densely simulated models provide possible
truths, which allow us to calculate a maximum referential value for comparison purposes.
The final profit for each sampling grid and sampling error is obtained by subtracting
the drilling and chemical analysis costs, which will depend on the grid used for the
advanced drilling campaign.

results
A comparison of the profits achieved considering different advanced drilling grids is
shown in Table 4 and Figure ➏, under two scenarios: assuming that samples do not
have an additional sampling error, which corresponds to rc drilling; and considering
a sampling error (16% relative error for CuT and 20% for CO3), which corresponds to
the current dth drilling. The first scenario shows the effect of information quantity,
while the second scenario adds the effect of poor data quality. For comparison purposes,
the maximum achievable profit is computed from the dense simulations, to provide a
maximum (unattainable) value.
The current status is represented by an 8 × 8 m grid with dth sampling error, as this is
the current practice of the contractor in charge of the advanced drilling.
Results show that just by improving the sampling quality, the profit could increase
from 669.32 to 674.50 million dollars, which amounts to approximately 5.2 million dollars
over some four to five years of production. Furthermore, it is apparent that the current
8 × 8 m sampling grid is appropriate to reach the maximum profit.

Table 4 Profits achieved under different sampling grids and considering or not a sampling error (in million dollars)

Scenario                          Advanced drilling grid (m): 6 × 6     8 × 8     10 × 10     12 × 12     14 × 14
Optimum (unattainable) 709.06 709.06 709.06 709.06 709.06
Samples without added error 671.89 674.50 666.57 657.22 640.56
Samples with added error 667.58 669.32 658.01 649.49 633.14

Other simulation results indicate that the variability of daily and monthly production
units is 19% and 11% respectively for the rc scenario. The existence of dth sampling
errors increases these uncertainties by 0.5% and 0.2% for the daily and monthly cases
respectively.

Figure 6 Profit calculation for the different sampling grids and errors.

conclusions
Grade control is a critical step in short term planning and usually provides an opportunity
for easy improvements that translate into millions of dollars. In this paper, we have
discussed the use of advanced drilling as a tool to improve the grade control practices.
Furthermore, we have shown a practical methodology to assess the impact of the
grade control practice based on the drilling grid spacing and the sample data quality.
Geostatistical techniques of simulation have proven mature to provide the answers to
this evaluation and should be increasingly incorporated into the evaluation of mining
practices and as a decision-making tool.
In the case study presented at El Tesoro Mine, the sampling error of conventional
drilling samples generates a significant impact on the economic performance of the
mine, since it contributes to increasing the misclassification of blocks: blocks that should
be sent to the plant end up in the waste dump, and blocks that should be classified as
waste are processed. The cost amounts to over five million dollars, as compared to the
same amount of information, but without the error associated with the dth samples.
In addition to this conclusion, it could be confirmed that the 8 x 8 m grid is appropriate
for grade control, given the heterogeneity of the CuT and CO3 grades in the deposit.
The main recommendation resulting from this study is to return to rc drilling, and
try to solve the moisture related sampling problems. Also, it should be emphasised that
quality control of the samples should always be a priority in order to ensure that the
maximum achievable profit is obtained from the operation.

acknowledgements
Authors are indebted to the enlightened management of the El Tesoro Copper mine
for its concern about quality sampling and best short term planning practices. The
permission to publish this work is gratefully acknowledged. Authors also are thankful
to the Department of Mining Engineering at Universidad de Chile.


references
Deutsch, C. V., Magri, E. & Norrena, K. (1999) Optimal grade control using geostatistics and economics:
methodology and examples. sme Annual Meeting & Exhibit. [1]

Pitard, F. F. (1993) Pierre Gy's Sampling Theory and Sampling Practice – Heterogeneity, Sampling Correctness
and Statistical Process Control. Second Edition, crc Press. [2]

Pitard, F. F. (2009) Pierre Gy's Theory of Sampling and C. O. Ingamells' Poisson Process Approach – Pathways
to Representative Sampling and Appropriate Industrial Standards. Doctoral Thesis, Aalborg University,
Campus Esbjerg, Denmark. [3]

Magri, E. & Ortiz, J. (2000) Estimation of Economic Losses due to Poor Blast Hole Sampling in Open Pits, in
Geostatistics 2000. Proceedings of 6th International Geostatistics Congress, Kleingeld, W. J. and
Krige, D. G., eds., Cape Town, South Africa, Vol. 2, pp. 732–741, 10–14. [4]

Glacken, I. M. (1997) Change of support and use of economic parameters for block selection. Geostatistics
Wollongong '96, Kluwer Academic Publishers, Vol. 2. [5]

Deutsch, C. V. & Journel, A. G. (1998) gslib – Geostatistical Software Librar y and User's Guide. Oxford
University Press, Second Edition. [6]

Goovaerts, P. (1997) Geostatistics for Natural Resources Evaluation. Oxford University Press. [7]

Isaaks, E. H. & Srivastava, R. M. (1989) An Introduction to Applied Geostatistics. Oxford University


Press. [8]
Probabilistic Risk Analysis of Mine
Production Plans

abstract
José Castro
Mauricio Barraza
Mauricio Larraín
Codelco, El Teniente Division, Chile

The mine planning process requires a lot of technical and economic data, models and
assumptions to estimate ore reserves and set the production plans. The decision whether
to follow one mining strategy or another is not only a matter of which alternative offers
a greater expected value, but also of the chances of achieving that result. In other words,
what decision makers need to know to make a well informed decision is the complete
frequency distribution of results rather than a single value.
This paper reports the use of a probabilistic risk analysis methodology applied to
assess the copper production of mining plans, taking into account the mine layout
capacities and geotechnical hazards as the main uncertainties.

introduction
Mine planning supports the transformation of mineral resources into ore reserves,
with special importance placed on generating the production schedule that determines
the projected benefits of the mining business [1] . The mine planning process, from a
technical point of view, can be broken down into three steps: resources estimation, project
management and operations management [2] . The resource estimation stage presents
procedures for classifying geological mineral resources and defines levels of acceptable
risk of ore reserves into a mining plan [1, 3] . In addition, there are studies about the
effect of uncertainty in the estimation of mineral resources and economic valuation of
mining plans [4] . However, the project and operations management are less standardised
and have less developed tools to address how these risks affect the business outcome.
In recent years there has been a strong development and application of operations
research tools aimed at the economic optimisation of production schedules [5–7] . These
tools allow for production schedules that tend towards the economic optimum in the ore
reserves definition and the use of existing infrastructure for a set of input parameters.
These tools therefore assume that the parameters are deterministically known. However,
when the model is influenced by random or imperfectly known parameters there is
no guarantee that the optimised results will in fact be obtained in reality [11] . The
projected production capacity does not consider the variability of the production processes
or the effect of geotechnical-geomechanical hazards on the outcome of the business. On
the other hand, these models need to reflect the ability of production systems and
facilities to absorb this kind of deviation from the production schedule undertaken.
For these reasons it is necessary to incorporate in the mine production schedule the
concept of risk analysis, for which there are widely known methodologies available [8 –10].
It is possible to use the Monte Carlo simulation techniques to evaluate the mine production
scheduling using random input parameters. Thus, the probabilistic analysis of production
schedules and at-risk production estimation is presented as a powerful tool to evaluate
business scenarios and enable better decision making.

methodology
Risk analysis
The risk assessment process is described in Figure  ➊. In the first stage we make a
preliminary deterministic evaluation of the production schedule. Then we identify and
analyse the main technical risks from a qualitative point of view. If it is possible to avoid
or minimise some risks by changing planning criteria or assumptions, we add into the
mining plan some actions to control both the probability of occurrence and the effect
of the identified events. In this sense, open area extraction rates, mining macrosequence,
ground support, rock mass conditioning, infrastructure maintenance and other measures
are considered. The quantitative risk evaluation then allows us to estimate the residual
risk of the production schedules.

Figure 1 Risk assessment methodology.

Later, we make a stochastic evaluation of the production schedule. The main results are
summarised in two indexes: ore reserves sent to milling plants (measured in t/d) and
annual production of copper content in concentrate (measured in tonnes).
The first index reflects the risk associated with milling plants ore supply. In this case,
if the results are not acceptable, the mitigation plan is to add short scale mining projects
into the production schedule using the existing infrastructure. The quantitative risk
assessment allows us to estimate the time period and production capacity required for
the mitigation projects that support the milling plants ore supply.
The latter index reflects the risk associated with the production of copper content in
concentrate, for which no mitigation measures exist because there are no higher copper
grade reserves than previously incorporated in the production schedule. In this case, the
quantitative risk assessment allows us to estimate the range of possible production outcomes
and determine, in statistical terms, the confidence level of the different feasible values.
Finally, we apply this methodology to evaluate some different mining plans according
to the strategic business planning.

Risks identification and analysis


In the first stage, we identified the main technical productive risks and opportunities for
the five-year mining plan. We then prioritised the most relevant ones for the quantitative
risk evaluation stage. This result is provided in Tables 1 and 2 .


Table 1 Main technical productive risks of the five-year mining plan

Source | Description | Controls | Effect
Geotechnical-geomechanical | Rock Burst and Seismicity | Draw rates, mining sequence, ground support | Detention of drawbell incorporation; repair of damages
Geotechnical-geomechanical | Pillar Collapses | Ground support, mining sequence | Infrastructure and reserves loss; detention of drawbell incorporation
Operational | Drawbell incorporation delay | Project management | Open area lower than planned
Operational | Haulage and processing system reliability | Maintenance management, addition of new resources (trains, wagons) | Production capacity variability
Operational | Mine production system reliability | Maintenance management | Mine production capacity variability (drifts, crushers, ore passes)
Project | Start-up delay | Project management | Drawbell incorporation lower than planned

Table 2 Main technical production opportunities of the five-year mining plan

Source | Description | Controls | Effect
Resources | Marginal economic resources | Drawpoint sampling, ground support status | Additional open area, but lower copper grades
Operational | Sewell Milling Plant Operation | Infrastructure maintenance | Higher overall milling capacity, greater operational flexibility

According to Table 1 , the main sources of risk are geotechnical-geomechanical (collapses,


rockburst), project management (start-up delays) and operational (mine growth,
equipment and production systems performance). According to Table 2 the main sources
of opportunities are the mineral resources (marginal economic resources) and operational
flexibility associated with the Sewell milling plant operation.

Risk evaluation
The quantitative risk assessment was done using a network flow representation of the
global production process. A simple schematic illustrating the production process of the
El Teniente mine is provided in Figure  ➋.

Figure 2 Simple schematic of El Teniente mine process.

The mine is composed of several productive sectors that use panel caving mining. There
are two haulage systems: the Teniente 5 Norte Railway to Sewell milling plant (20,000
t/d capacity) and the Teniente 8 Railway to Colón milling plant (131,000 t/d capacity). All
the pulp is floated in the Colón plant and the copper concentrate is smelted and refined in
the Caletones smelter. A portion of the product is sold as copper concentrate.

Figure 3 Simple schematic of the quantitative risk evaluation.

A simple schematic illustrating the quantitative risk evaluation is provided in Figure  ➌.


To represent the production process we built a mathematical model with the same basic
constraints as the deterministic production scheduling model used to simulate the mining
capacities [12] , adding the main haulage and milling capacity constraints. We take
into account all the sources of uncertainty shown in Tables 1 and 2 by adding randomness
or probabilistic distribution functions to the input parameters. These parameters and
functions are shown in Table 3. We then use the Monte Carlo simulation method to
generate 300 realisations of each production plan.


Table 3 Main parameters of stochastic evaluation

Description | Effect | Model parameters | Comments
Rock Burst and Seismicity | Detention of drawbell incorporation; repair of damages | Frequency: 0–2 events per sector in 5 years; effect: 3–9 months of detention | Bernoulli process; independent process for each mining sector
Pillar Collapses | Infrastructure and reserves loss; detention of drawbell incorporation | Probability of occurrence: 10–60% | Defined in geotechnical vulnerable zones only
Drawbell incorporation delay | Open area lower than planned | Schedule fulfillment: 70–110% | Triangular distribution; independent process for each mining sector
Haulage and processing system reliability | Production capacity variability | T8-Colón system: 125–131–133 Kt/d; T5 Norte-Sewell: 10–15–20 Kt/d | Triangular distribution; independent process for each haulage-processing system
Mine production system reliability | Mine production capacity variability (drifts, crushers, ore passes) | Drifts: 1,500–3,500 t/d; crushers: 8,000–11,000 t/d | Triangular distribution; independent process for each system
Project start-up delay | Delay in drawbell incorporation | 1 month (advancement) to 12 months (delay) | Triangular distribution; applied to mining projects in planning and implementation stages
Marginal economic resources | Additional open area, but lower copper grades | Height of draw higher than planned: 250–450 m | Marginal economic criteria, limited by technical considerations
Sewell milling plant operation | Higher overall milling capacity, greater operational flexibility | No change | Sewell plant lifetime extension
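As a simplified illustration of how such realisations can be drawn, the sketch below samples a few of the Table 3 distributions and summarises the expected value and an at-risk figure. The event probability, the capacity aggregation, the number of sectors and the definition of at-risk production as the gap to the 5th percentile are assumptions made for the example, not the actual model used in this study.

```python
import random

def one_realisation(years=5, sectors=6):
    """One Monte Carlo realisation of tonnage sent to the mills (Mt over the plan)."""
    total_mt = 0.0
    for _ in range(years):
        # haulage/milling capacities (Table 3): T8-Colon plus T5 Norte-Sewell, triangular
        capacity_ktpd = (random.triangular(125, 133, 131) +
                         random.triangular(10, 20, 15))
        # rock burst / seismicity: Bernoulli event per sector, 3-9 months of lost incorporation
        lost_fraction = 0.0
        for _ in range(sectors):
            if random.random() < 0.2:                       # assumed event probability
                lost_fraction += random.uniform(3, 9) / 12 / sectors
        effective_ktpd = capacity_ktpd * (1 - lost_fraction)
        total_mt += effective_ktpd * 365 / 1000             # Kt/d over a year, in Mt
    return total_mt

results = sorted(one_realisation() for _ in range(300))
expected = sum(results) / len(results)
p5 = results[int(0.05 * len(results))]
print(f"expected: {expected:.1f} Mt, at-risk (95% confidence): {expected - p5:.1f} Mt")
```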

case study: el teniente division five-year


mining plan
The five-year plan of El Teniente mine presents two business scenarios: a full underground
operation called s3 (131,000 t/d) and a joint underground open pit operation called m2
(138,000 t/d).
The second scenario corresponds to the incorporation of the Rajo Sur project (37 Mt @
0.7%Cu ore reserves) to the mining plan, which allows the ore supply of Colón and Sewell
milling plants using the Teniente 5 Norte haulage system.
In both scenarios the mitigation projects Extensión Norte Sur Andes Pipa and
Extensión Fw Pipa Norte (7,000 t/d capacity) are incorporated to secure the milling plants
ore supplies from 2012. Then we estimate the copper content in concentrate production
in the period 2010–2014 for both scenarios, as shown in Figure  ➍.

Figure 4 Histogram of copper production of the five years plan for full underground
scenario (S3) and joint underground open pit scenario (M2).

Figure  ➍ shows that copper production has an expected value of 2,136 Kt and 2,200 Kt
for the s3 scenario and the m2 scenario respectively. The preliminary production schedule
estimation (deterministic, optimised and without risk considerations) is 2,203 Kt and
2,254 Kt respectively. Therefore, the quantitative risk evaluation shows that these values
correspond to the maximum potential output of each scenario and not the expected value.
Moreover, the distribution of values is asymmetrical with respect to the expected value
because the effect of some events (especially geotechnical-geomechanical) generates only
production losses to the mining plan. Also, the strategic bottleneck of the system has a
capacity close to the one used in the deterministic-optimised schedule.
The distribution range of values for the s3 scenario is double the range of the m2
scenario (200 Kt and 100 Kt respectively) and the at-risk production (95% confidence level)
is 55 Kt and 28 Kt respectively. It is possible to conclude that the mixed underground open
pit scenario has an expected copper production 64 Kt higher, and an at-risk production
27 Kt lower, than the full underground scenario.

conclusions
The current methods of production scheduling rely on economic optimisation tools that
do not consider the risks in the performance of the production processes. It becomes
necessary to establish methodologies and develop tools to estimate and incorporate the
effects of these risks in the production schedules.
The methodology used to assess the technical productive risk in El Teniente mine allowed
us to estimate the at-risk production from an objective and systemic perspective,
comparable to that used in the standard deterministic production scheduling tools.
In summary, the main benefit associated with the mining plan risk assessment and
the quantitative risk evaluation is that it allows us to compare and analyse alternative
mining plans in terms of expected value and at-risk production, providing better evidence
to decision-makers and defining more robust productive undertakings.


acknowledgements
The authors extend their sincere thanks to all their colleagues of El Teniente mine that
helped them during the development of this work. Also, the authors want to acknowledge
the permission given by Codelco Chile to publish this technical paper.

references
Norma Codelco Chile N°31. (2007) Categorización de Recursos y Reservas Mineras, Internal Report. [1]

Horsley, T. P. (2002) Dollar Driven Planning: The Corporate Perspective to Operational Mine Planning. amc
Reference Material. [2]

The JORC Code (2004 Edition) Australasian Code for Reporting of Exploration Results, Mineral
Resources and Ore Reserves. [3]

Dimitrakopoulos, R., Martinez, L. & Ramazan, S. (2007) Optimizing Open Pit with Simulated Orebodies
and Whittle Four-X – A Maximum Upside/Minimum Downside Approach, Orebody Modelling and
Strategic Mine Planning, pp. 201–206. [4]

Araneda, O., Gaete, S., De la Huerta, F. & Zenteno, L. (2003) Optimal Long Term Planning at El Teniente
Mine. Copper 2003, Santiago, Chile. [5]

Chanda, E. K. (2007) Network Linear Programming Optimization of an Integrated Mining and Metallurgical
Process, Orebody Modelling and Strategic Mine Planning, 2nd edition, pp. 149–155 [6]

Trout, L. P. (2005) Underground Mine Production Scheduling Using Mixed Integer Programming,
25th APCOM Conference, Brisbane, Australia, pp. 395–400. [7]

Directriz Corporativa para la Gestión de Riesgos en los Planes Mineros (2007) Codelco Chile Internal
Report. [8]

AS/NZS 4360. (2004) Risk Management. [9]

Stamatelatos, M. (2000) Probabilistic Risk Assessment: What Is It and Why Is It Worth Performing It?
nasa Office of Safety and Mission Assurance. [10]
Carvalho Junior, J., Costa, J. & Koppe, J. (2007) A Probabilistic Approach in a Linear Programming Model
Applied to Mine Planning, Proceedings 33rd apcom, Santiago, Chile, pp. 245–252. [11]

Castro, J. (2007) Simulation Model of Block/Panel Caving Production Capacity at El Teniente Mine.
Proceedings 33rd apcom, Santiago, Chile, pp. 207–211. [12]
Optimisation of Construction
and Production in a Block Cave
Operation

abstract
Bryan Maybee
Stephen Hall
MIRARCO – Mining Innovation, Laurentian University, Canada

The scheduling of mining activities is a complex task. Due to the requirement to satisfy a large number of interdependent constraints, the optimisation of this daunting endeavour has the potential to significantly increase the value of a project. This is
especially evident in the planning and construction of large scale
mining operations, where bringing the project into production
even a single year earlier can increase the value by hundreds of
millions of dollars. This paper describes a Schedule Optimisation
Tool (sot) that has been developed for underground mine planning,
and demonstrates its applicability to large scale mining operations
by means of a case study. The case study has particular application
to development scheduling in a block cave operation, as sot has
the potential to save valuable time by automating many of the
mundane scheduling tasks.
This paper will demonstrate the applicability of sot as a tool
that can be used to smooth development schedules for block
caving projects. If the caving sequence is predetermined, different
constraints can be placed on the various development types and
run as a batch optimisation process, and results can be analysed
to determine the most suitable development strategy. The result is
a tool that automates the smoothing of the development schedule
for a block caving production sequence that has been derived using
a stand-alone cave sequencing program. Alternatively, if the
caving sequence is not predetermined (such as for a conceptual
stage evaluation), both production and development can be
constrained in different configurations and sot used to identify
the combination resulting in the maximum value. Once an
acceptable production profile is identified, further optimisation
runs are created using combinations of multiple constraints for
the various development types to derive schedules that may better
satisfy practical development limits.

introduction
Optimisation is the process of maximising or minimising a function. In the context
of underground mine scheduling, optimisation usually involves maximising the Net
Present Value (npv) of the project. Most of the optimisation work in block cave operations
has focussed on the scheduling of production draw to maintain a consistent flow of
material. However, even the smallest change in the time at which production can start
also has a significant impact on the value of the project. As a result, the optimisation of
the construction schedule can have a significant influence on the project value.
This paper describes the Schedule Optimisation Tool (sot) that has been developed at
mirarco for use in underground mine scheduling. Through the use of sot, thousands
of feasible mine schedules under various strategies can be evaluated, providing helpful
information to decision-makers in the planning process. A case study is presented to
illustrate how this advanced computing power and automation can be used within the
scheduling process for a block cave mining operation, with specific emphasis being placed
on the scheduling of construction activities.
The optimal solution of scheduling problems is a topic of interest to many in the
operations research field, with the solution methods being categorised as either exact or
heuristic. Exact optimisation strategies can be described as “brute force” methods that
iterate through all of the potential solutions. However, when the search space is immense,
optimising with any exact technique quickly becomes infeasible [1] .
To cope with the immense search space of the underground mine scheduling problem,
heuristic optimisation methods have been introduced. One form of heuristic solution to
mine scheduling has been the use of rules. However, these solutions are pre-determined
in many cases, and the solution provided is merely what the process dictated. More
advanced solutions are found using heuristic searches rather than heuristic rules. One
of the simplest heuristic search techniques is hill climbing, which evaluates solutions in
the neighbourhood of the current solution for their ability to increase value. While this
method offers advantages over other methodologies, its main drawback is the number of
cases that need to be evaluated with an increase in the strategic decision parameters [2] .
Alternatively, a linear programming problem involves a finite number of continuous
decision variables, and can be solved by simple methods that are quite efficient at finding
a solution. There are two models that comprise the scheduling problem: the sequencing
model and the multi-period planning model. The sequencing model addresses the problem
of scheduling a finite number of activities by relating the activities to one another
through dependencies, as one activity will depend upon the initiation or completion of
another. The multi-period planning model addresses the problem of resource allocation
over time. In this model, there are a finite number of time periods of equal length,
and the decisions about resource inputs are made at all these periods, with the set of
acceptable decisions at each period being defined by resource constraints. Unfortunately,
these models suffer from their inability to handle large numbers of integer variables.
Finally, Genetic Algorithms (ga s) have been identified as one of the best techniques
currently available for getting close to an optimum where there is no analytical solution [3].
A ga can be viewed as a generalisation of the hill climbing technique. It addresses the
problem of climbing to a local optimum by simultaneously considering a number of
solutions in different ‘neighbourhoods’ of the search space. Although this does not
guarantee arriving at the global optimum, it typically improves upon a simple model by at
least 10 to 15% [4]. Further, gas were identified as a viable tool for optimising underground
mine plans through the development of a parallel ga to assist the mine planning process
in finding alternative schedules with a higher likelihood of optimality [5].
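As a rough illustration of the ideas above (not the SOT implementation or the parallel GA cited in [5]), the Python sketch below evolves activity orderings with tournament selection and order crossover, scoring each ordering by the NPV of a simple serial schedule. The activity data, discount rate and single-crew scheduling rule are all invented for the example.

```python
import random

random.seed(0)

# Toy activity set: id -> (duration in years, cash flow at completion, predecessors).
ACTS = {
    "dev1": (1.0, -40, []), "dev2": (1.0, -35, []), "dev3": (0.5, -20, ["dev1"]),
    "stope1": (1.0, 120, ["dev1"]), "stope2": (1.0, 150, ["dev2", "dev3"]),
    "stope3": (1.0, 180, ["dev3"]),
}
RATE = 0.10  # assumed discount rate

def npv(priority):
    """Build a serial schedule from a priority ordering (respecting precedence)
    and discount each activity's cash flow to time zero."""
    done, t, val = {}, 0.0, 0.0
    remaining = list(priority)
    while remaining:
        # highest-priority activity whose predecessors have all been completed
        nxt = next(a for a in remaining if all(p in done for p in ACTS[a][2]))
        remaining.remove(nxt)
        dur, cf, _ = ACTS[nxt]
        t += dur
        done[nxt] = t
        val += cf / (1.0 + RATE) ** t
    return val

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = p1[i:j]
    rest = [a for a in p2 if a not in child]
    return rest[:i] + child + rest[i:]

def mutate(p, rate=0.2):
    p = p[:]
    if random.random() < rate:
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
    return p

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=npv)

# Evolve a small population of activity orderings towards higher NPV.
pop = [random.sample(list(ACTS), len(ACTS)) for _ in range(30)]
for _ in range(40):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(30)]

best = max(pop, key=npv)
print("best ordering:", best, " NPV:", round(npv(best), 1))
```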

Schedule Optimisation Tool (SOT)


Through applied research projects with industry partners, a Schedule Optimisation Tool
(sot) has been developed at mirarco [6] . sot is a planning tool that allows mine planners
to quickly identify feasible and high-quality mine schedules. In its optimisation process,
sot uses a powerful ga to search thousands of feasible scheduling alternatives to identify
those that maximise the npv of the project. It creates these schedules by completely re-
ordering the activities that are to be sequenced as it evolves towards “better” scheduling
alternatives for the given design. Alternate designs can be considered in the same
manner, and as a result, a full scenario evaluation is possible.
The main features of sot that contribute to npv improvements include the
application of constraints, just-in-time development, heuristic guidance and learning.
Figure  ➊ illustrates the optimisation process flow that is used. sot adheres to all
operational constraints that have been established, including precedence constraints and
resource capacity constraints. sot produces smoothed schedules, i.e., schedules adhering
to all identified capacity constraints on activity properties expressed as tonnages or
lengths. For example, when an ore tonnage cap (resource capacity constraint) is used,
not only does sot ensure that this limit is never exceeded, but it also strives to achieve
the specified level as closely as possible. In an attempt to identify better schedules, sot
then performs a sliding process, in which costly development activities are pushed off
to just-in-time without breaking caps or required scheduling lags. This process not only
has the potential to defer costs, but also boost revenues by reallocating the resources.
The result is an optimised schedule that adheres to multiple scheduling constraints.

Figure 1 SOT process.
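The value of the just-in-time sliding step described above can be seen with a one-line discounting argument: deferring a development cost, without delaying the production it enables, lowers the present value of that cost. The discount rate and cost figure below are assumptions for illustration only.

```python
# Present value of a US$10 M development cost incurred in year 2 vs. year 4,
# at an assumed 10% discount rate: deferring the spend lowers its PV cost.
rate, cost = 0.10, 10.0
for year in (2, 4):
    pv = cost / (1 + rate) ** year
    print(f"cost in year {year}: present value = US${pv:.2f} M")
# year 2 -> US$8.26 M, year 4 -> US$6.83 M: about US$1.4 M of value is
# gained simply by sliding the activity to just-in-time, all else equal.
```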

The search for improved mine schedules can begin from a random set of schedules
generated by sot. However, because the search space is so large, this may lead to an
unnecessarily long run time. Thus, heuristic guidance of the ga was incorporated into
sot as a ranking. This ranking is then used to prioritise the scheduling of both the
stoping and required development activities. This approach was found to provide a good
initial starting point for the optimisation. It effectively narrows the search space to be
investigated and directs the search towards schedules with higher npvs, which resulted
in a much reduced search time. Heuristic guidance of the planning process potentially
sacrifices the globally optimum npv, but directs the search to areas where it can quickly
find high quality schedules based on a simple ranking scheme.


sot repeatedly produces a set of valid schedules and then combines them to ‘learn’ better schedules. The learning (by means of the ga) uses a tournament selection process to build on the schedules developed through constraint adherence, guidance and sliding of activities. Through this process, the best scheduling alternatives are selected and combined to form new alternatives, constructing progressively better schedules.
While the many features within sot allow it to create improved scheduling options,
there are also many ways in which a user can manipulate the tool to provide answers
to their questions.
This document describes a study to show that mirarco's Schedule Optimisation Tool
(sot) can be applied to a block cave data set, with particular focus on the smoothing of
development schedules. It has been demonstrated through industry funded research
projects that sot has the potential to save valuable time for the planning professional,
as it automates some of the mundane scheduling tasks.

methodology
A stand-alone block cave schedule comprising approximately 150 activities was used for
this case study, providing a manageable data set for performing runs quickly. This data
set comprises the following development activity types:

• 8 m x 6 m, 5,600 m total, scheduled at 75 m/month per heading


• 5 m x 5 m, 81,900 m total, scheduled at 150 m/month per heading
• 4 m x 4 m, 9,800 m total, scheduled at 300 m/month per heading.
For the purpose of this study, costs for the different development activities were kept
constant (irrespective of annual development metre totals) and non-development capital
expenditures were not considered. This approach was taken, as the objective was to
investigate whether sot could be used to smooth block cave development profiles rather
than find the schedule that provides the maximum value. (Note: sot is also able to
accommodate different capital investment strategies.)
Two scheduling situations were considered in this study. First, it was assumed that
a third party cave production scheduling software (such as Gemcom's pc-bc production
scheduling product) had been run to identify the optimum/preferred cave extraction
sequence for this project. This scenario is termed With Cave Sequence. For this study, a cave
sequence that produces approximately 20 million tonnes per year of ore was assumed,
and was linked to the development activities by adding suitable predecessor-successor
relationships. sot was then run on the combined data set, with no constraints on annual
development, to derive the theoretical maximum Net Present Value (npv). Next, two
constraints were applied to the 5x5 m development type (8,000m/y and 4,000m/y), with
500 possible schedules generated in each case to compare npvs and schedule profiles.
The second application of sot was to assume that no cave sequence exists for this
orebody. This scenario is termed Without Cave Sequence. Under this assumption, activities
are linked by predecessor-successor logic only, allowing sot to investigate different cave
sequences (assuming the mineable reserve remains unchanged) along with associated
development. sot was run under this approach to find the theoretical maximum npv
before having the 5x5 m development constrained to 4,000m/y. Finally, the 4x4 m
development type was constrained to 2,500 m/y and sot re-run, providing 500 possible
schedules for review. (Note: The 8x6 m development type was not constrained in this
study due to the relatively low, unsmoothed annual metre profiles.)

Results With Cave Sequence


As mentioned, the first case assumed that a cave sequence is defined for the project.
Figure ➋ shows the cave production tonnage for this sequence, constrained to maintain
a steady-state cave production tonnage of approximately 20 million tonnes per year. If
no constraints exist for annual development, this tonnage profile is achievable, with
the development and tonnage profiles found using sot shown in Figure  ➌  and ➍
respectively. These profiles result from a schedule with a maximum npv of $7.8 billion
over a 27-year mine-life, with a 5x5 m development requirement of 15,000 metres in year
nine and 13,000 metres in year 15.

Figure 2 Optimised cave production tonnage using third party software.

Figure 3 Development profiles - npv $7.8 billion.

Figure 4 Production profiles - npv $7.8 billion.


Comparing these profiles to those originally created by the mine planner in Figure  ➎
and  ➏, it can be seen that the original schedule does not have a smoothed production
profile, initially peaking at more than 30Mt/y and declining thereafter. If the effects
of stockpiling are disregarded, the financial viability of this schedule is questionable,
particularly if the process plant and other infrastructure is sized for the maximum
throughput (30 Mt/y). This schedule also requires 15,000 metres and 20,000 metres of 5x5 m development in years nine and 14 respectively. These levels are considered impractical, particularly when coupled with the overall 5x5 m development type profile.

Figure 5 Development profiles - original mine planner.

Figure 6 Production profiles - original mine planner.

Since these profiles are considered to be both impractical and financially unattractive,
a constraint of 8,000 metres per year was applied to the 5x5 m development type. This
constraint was input to sot and run for two hours. From this run, 500 feasible schedules
adhering to the development constraint and predecessor-successor rules were produced,
ranging in npv from $0.2 billion to $7.5 billion, with an average of $7.1 billion; the development and production profiles for the $7.5 billion alternative are shown in Figures ➐ and ➑ respectively. From these figures, it can be seen that the profile for the 5x5 metre
development type is beginning to look more realistic. However, while the production
profile is still achieving its 20 million tonne per year target through the majority of the
schedule, it is becoming constrained by the amount of development that can be feasibly
achieved in a year.

Figure 7 Development profiles - npv $7.5 billion.

Figure 8 Production profiles - npv $7.5 billion.

To investigate further smoothing of the development profiles, a constraint of 4,000 metres per year was then placed on the 5x5 metre development and run through sot using the same process as before, with the resulting development profile for the schedule with the maximum npv shown in Figure ➒.

Figure 9 Development profiles - npv $5.9 billion.

It can be seen in this figure that assuming a 4,000 metres per year constraint for the
5x5 m development results in a fairly constant 3,500 metres per year being scheduled.
This is a result of the relatively coarse activity data set – just 150 activities representing
a complete block cave operation. By defining more suitably-sized (less coarse) activities,
sot would be able to create schedules that meet the constraint more closely.


From the 500 feasible schedules that were investigated, the results have an npv range of
$0.2 billion to $5.9 billion, with an average of $5.5 billion. This signals that as we further
constrain the development activities, it becomes more and more difficult to obtain the
production profile found with the third-party scheduling software. To achieve this level
of production, the development constraint would need to be relaxed in the early years
and sot would need to be re-run to identify the level of annual development required to
maintain this cave production profile.

Results Without Cave Sequence


The second scenario in this case study assumed that no cave sequence was derived from
a third party software package. As such, a large number of cave sequences are possible,
only being constrained by predecessor-successor logic. Figures 10 and 11 respectively
show the development and production profiles that can theoretically be achieved by this
block cave, assuming no constraints are placed on development or production.

Figure 10 Development profiles - npv $9.2 billion.

Figure 11 Production profiles - npv $9.2 billion.

By running this scenario through sot, we identify the maximum value case against
which our further analysis can be compared. These results show an npv of $9.2 billion with
a 24-year mine life. However, mining the cave at this rate requires 15,000 metres of the
5x5 m development type in years nine and 15, and almost 10,000 m in years 10 and 14.
Assuming the development profiles in Figure 10 are impractical, the 5x5 m development
type is constrained to 4,000 metres per year. Running sot for two hours with this new
constraint produces 500 possible schedules, ranging in npv from $0.2 billion to $5.7
billion with an average of $5.0 billion. The maximum value schedule has a 34-year life,

with its development and production profiles shown in Figures 12 and 13 respectively.
It should be noted that it was difficult to achieve a satisfactory production profile given
the relatively coarse data set used in this case study (just 150 activities representing the
entire block cave project). Using a data set with less coarse activities would allow sot to
further smooth profiles.

Figure 12 Development profiles - npv $5.7 billion.

Figure 13 Production profiles - npv $5.7 billion.

As with the previous scenario, it is seen that constraining the 5x5 m development to this
level severely restricts the productive potential of the block cave project. Also, the removal
of a strict cave sequence creates more possible scheduling options for sot to evaluate,
resulting in a lower average npv compared to results that utilise a predetermined cave
sequence. As a result, guidance should be used more effectively to focus the search on
schedules that offer higher values.
It can also be seen that a significant peak in the 4x4 m development type now occurs
in years eight and nine. With the automated power of sot, this development type can
easily be constrained to 2,500 metres per year. From this analysis, a schedule results
with a npv of $5.4 billion and 34-year life, which is only a slight decrease in value
compared with the result when this constraint is not in place. This process could then
be repeated for any combination of constraints for these two (or more) development
types to find practical schedules that meet production requirements for the orebody.
This would provide planners with a full understanding of the level and timing of annual
development required to achieve various production profiles.


conclusions
This study has demonstrated that the Schedule Optimisation Tool (sot) can be used
to smooth construction schedules for block cave projects. If the cave sequence is
predetermined, different constraints can be placed on the various development types
and run as a batch optimisation process. The results can then be analysed to determine
the most suitable construction schedule(s) for the project in question, as well as how they
will impact the production target. If the cave sequence is not predetermined (such as for
a conceptual stage evaluation), both production and development can be constrained in
different configurations and sot can be used to identify the combination that gives the
maximum value.

references
Blattman, M. (2003) Creation of an Underground Mine Development Sequencing Optimizer. Proceedings,
2003 sme Annual Meeting, February 24-26, Cincinnati, oh, Preprint 03–064. p. 8. [1]

Hall, B. (2003) How Mining Companies Improve Share Price by Destroying Shareholder Value. Paper for
presentation at cim Mining Conference and Exhibition, Montreal. Retrieved January 2005 from
www.amcconsultants.com.au, p. 31 . [2]

Hall, B. & Stewart, C. (2004) Optimising the Strategic Mine Plan – Methodologies, Findings, Successes
and Failures. In R. Dimitrakopoulos (Ed.), Orebody Modelling and Strategic Mine Planning
Symposium: Spectrum Series 14. Orebody Modelling and Strategic Mine Planning: Uncertainty
and Risk Management, Perth: Ausimm, pp. 281–288. [3]

Michalewicz, Z. (1992) Artificial Intelligence. Berlin, Germany, Springer-Verlag. [4]

Fava Lindon, L., Goforth, D., van Wageningen, A., Dunn, P., Cameron, C., & Muldowney, D. (2005)
A Parallel Composite Genetic Algorithm for Mine Scheduling. In A.P. del Pobil (Ed.), The 9th iasted
International Conference on Artificial Intelligence & Soft Computing. Artificial Intelligence
and Soft Computing, Anaheim: acta Press, pp. 245–250. [5]

Maybee, B., Fava, L., Dunn, P., Wilson, S., & Fitzgerald, J. (2009) Towards Optimum Value in Underground
Mine Scheduling. Paper presented at the cim agm, 10–13 May, Toronto, Canada. [6]
Computer Simulation – An Aid to Determining Fleet Sizes

abstract
Raymond Suglo
University of Mines and Technology, Ghana

The ability of a mining contractor to meet contractual production requirements depends on the type of equipment and the fleet sizes used. This paper employs computer simulation techniques
to model, formulate and determine the fleet requirements of a
mining contractor in the third and fourth year of operation. This
approach incorporates the stochastic nature of the problem by
using the Monte Carlo simulation techniques of Visual Simulation
Language for Alternative Modelling (slam) with AweSim. The
approach has been used to assess the performance of an existing
materials handling system in a mine and to predict the additional
truck units required. The uncertainties associated with the
predicted productivities have also been calculated. The results
show that the mining contractor should buy or lease at least three
cat 775e trucks in the third year and one more truck unit in the
fourth year in order to maintain the current rate of production.
This will enable him to produce 14,516 tonnes ± 253 tonnes and
14,602 ± 284 tonnes of ore in years three and four, respectively.

introduction
In the past, all mining operations were undertaken by mining companies using their own men and equipment (referred to as owner mining). Since the early 1990s, there has been an increasing tendency for management to contract the various mining operations to mining contractors (contract mining). This often enables the mines to achieve their production targets at lower rates per unit of material excavated and handled, owing to the economies of scale in the use of larger equipment by such specialised contractors. It is likely that by the year 2020, over 90% of mining companies worldwide will be contracting their operations to mining contractors. Contracting out mining operations has several advantages, including reduction of front-end capital, a well-defined fixed cost, greater flexibility for the mine owner, risk sharing and a more efficient system due to the contractor’s experience [1]. However, for the mine owner, obtaining the best out of the contractor is not a simple issue [2, 3].
In this paper, a mining contractor is required to produce 14,400 tonnes of ore per shift using a number of cat 777d rear dump trucks in a medium scale mine. He has decided to employ simulation methods, using the Visual slam with AweSim software, to determine the optimum fleet of trucks that will enable him to meet his production targets in ore mining.

statement of problem
A mining contractor is required to produce ore at a constant rate of 14,400 tonnes per shift
over the five-year duration of the contract. The contractor is currently able to achieve that
production rate with five cat 777d trucks, each with a carrying capacity of 100 tonnes in
one pit. The contractor, however, is concerned about whether the current production rate can be maintained in the third and fourth years of the contract as the equipment ages and the pit dimensions (depth and width) increase. This concern is based on the knowledge that haulage unit production is a function of the pit’s dimensions, haul route profile, age of loading and hauling equipment, and nature of broken materials [4]. It is known that the haul route profile will change substantially during the third and fourth years. In addition, the mechanical availabilities and utilisations of the production equipment (loaders and haulers) will decrease with age. The contractor plans to augment production from the fleet of five cat 777d trucks with cat 775e trucks (each with a carrying capacity of 70 tonnes) if the need arises.
The contractor wants to know in advance the number of cat 775e units that will be required to meet the production rate in the third and fourth years. This will enable him to arrange either to purchase or lease the required number of trucks to supplement his initial fleet in years three and four of the operations. A model of the current scenario, as well as of the scenarios in the third and fourth years, is required. It is important that the model takes into account the differences in travel speeds, capacities and dumping times of the two types of trucks over the years.

input data
The input data for the simulation models in this work include travel times over various
sections of the haul roads, loading and dumping times, and the speeds of the two types of
truck units. Tables 1 to 3 contain a summary of the results of time and motion studies
on the current scenario. The nature of the haul route sections jnd 1 to 5 is summarised
in Table 1 . Table 2 shows the detailed components of the statistical distribution of the
times on the sections of the haul road, while Table 3 shows the statistical distribution
of shovel cycle and truck dumping times. The cat 775e trucks are assumed to have 1.09
times the speed of the cat 777d and 76% of the dumping time. It is also estimated that
the cat 775e trucks will be loaded by the excavator in four passes. These estimates were
calculated from the manufacturer’s specifications of the units [5] . The cycle times of
the trucks and excavators were increased by 15% every year to account for the ageing of
the equipment. Other data on the trucks are given in Table 4 .

Table 1 Travel times of loaded CAT 777D trucks

Haul Road Section | Distance (m) | Grade (%) | Travel Time (s): Distribution | Parameters
In-pit travel | – | 0 | Gamma | α = 20.87, β = 3.66
JND 1 | 350 | 9.8 | Gamma | α = 442.62, β = 0.25
JND 2 | 184 | 8.0 | Gamma | α = 93.22, β = 0.46
JND 3 | 171 | 4.2 | Normal | μ = 24.73, σ = 2.13
JND 4 | 70 | 6.0 | Gamma | α = 271.86, β = 0.11
JND 5 | 50 | 0 | Gamma | α = 327.21, β = 0.08

Table 2 Travel times of empty CAT 777D trucks

Haul Road Section | Travel Time (s): Distribution | Parameters
In-pit travel | Gamma | α = 86.54, β = 0.73
JND 1 | Normal | μ = 35.41, σ = 5.56
JND 2 | Gamma | α = 28.87, β = 0.46
JND 3 | Gamma | α = 25.91, β = 0.46
JND 4 | Normal | μ = 16.47, σ = 0.23
JND 5 | Gamma | α = 38.57, β = 0.43

Table 3 Statistical distribution of shovel cycle and truck dumping times

Activity | Duration (s): Distribution | Parameters
Cycle time of shovel | Gamma | α = 35.92, β = 0.92
Dumping time of CAT 777D | Normal | μ = 206.32, σ = 11.21

Table 4 Truck types and their attributes

Truck Type | Capacity (tonnes) | Number of Shovel Passes
CAT 777D | 100 | 5
CAT 775E | 70 | 4
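To give a sense of how the fitted distributions in Tables 1 to 3 drive the model, the sketch below samples one unhindered CAT 777D cycle (load, haul loaded, dump, return empty) in Python, assuming the Gamma parameters are shape (α) and scale (β) in seconds; with that reading, the mean sampled cycle lands in the same range as the roughly 15-minute cycle time used later for validation. Queuing at the shovel and crusher is ignored here.

```python
import numpy as np

rng = np.random.default_rng(42)

# (distribution, first parameter, second parameter) per leg, loaded direction
# (Table 1). Gamma parameters are read as shape (alpha) and scale (beta), in seconds.
LOADED = [("gamma", 20.87, 3.66),   # in-pit travel
          ("gamma", 442.62, 0.25),  # JND 1
          ("gamma", 93.22, 0.46),   # JND 2
          ("normal", 24.73, 2.13),  # JND 3
          ("gamma", 271.86, 0.11),  # JND 4
          ("gamma", 327.21, 0.08)]  # JND 5
EMPTY = [("gamma", 86.54, 0.73), ("normal", 35.41, 5.56), ("gamma", 28.87, 0.46),
         ("gamma", 25.91, 0.46), ("normal", 16.47, 0.23), ("gamma", 38.57, 0.43)]

def draw(dist, a, b):
    return rng.gamma(a, b) if dist == "gamma" else rng.normal(a, b)

def cycle_time_s():
    """One unhindered CAT 777D cycle: load (5 passes), haul loaded, dump, return."""
    load = sum(rng.gamma(35.92, 0.92) for _ in range(5))  # shovel cycles, Table 3
    haul = sum(draw(*leg) for leg in LOADED)
    dump = rng.normal(206.32, 11.21)                      # dumping time, Table 3
    back = sum(draw(*leg) for leg in EMPTY)
    return load + haul + dump + back

samples = np.array([cycle_time_s() for _ in range(5000)])
print(f"mean cycle time ~ {samples.mean() / 60:.1f} min")
```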

visual slam models of current scenario


Currently, the mine uses a combination of one hydraulic shovel and five cat 777d trucks.
The cat 777d trucks are loaded in five passes by the shovel. These are modelled as entities
which are introduced into the model at the beginning of simulation by a create node.
The shovel and the dump trucks are modelled as resources with capacities of one and two
respectively. The restriction in dumping is that only two trucks can dump simultaneously
at the crusher location.


The travel speeds over the various sections of the haul route have been calculated from
the distances and the travel time distributions. These are assigned to a truck once it leaves
the pit area at an ASSIGN node [6] . Each entity (truck) then goes round a loop five times
to mimic the five sections of the haul route. The duration of the activity is modelled as
the quotient of distance and speed. After exiting this loop the entities proceed to wait at
the crusher to dump their load. After dumping, the cycle time, current total production
and hourly productivity are recorded. The trucks are then dispatched to the next available
shovel in the pit by going through another loop representing the travel back empty. The
duration of the shift (hence that of each simulation run) is seven hours.
The current scenario is modelled such that all trucks already within the pit and
waiting at the shovel location are allowed to complete the subsequent operations and to
dump their loads before exiting from the system. Also all dump trucks already returning
to the pit as well as all the loaded trucks going to the crusher or already waiting in the
crusher queue to dump their load are allowed to complete the subsequent cycles before
exiting from the system. This is in line with production routines at the mine of allowing
an extra 30 minutes after the shift to enable all operations to be fully rounded up. After
the last truck exits the system, the total production at the end of the shift is computed.
The network diagram of the model is given in Figure ➊. The control statements are
used to define equivalences and initialise the global variables to zero at the beginning
of each simulation. It was found that using more than 30 simulation runs led to only a 1.5% increase in total production, which was considered marginal. Thus the model was run 30 times for each of the scenarios.
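The structure described above (one shovel, a two-truck dumping limit at the crusher, trucks cycling over a seven-hour shift, 30 replications) can be mimicked with any discrete event simulation library. The sketch below uses Python's SimPy package rather than Visual slam/AweSim, and collapses the loaded and empty haul legs into assumed uniform travel times, so it is an analogue of the model rather than a reproduction of it.

```python
import random
import simpy   # third-party package: pip install simpy

SHIFT_MIN = 7 * 60   # one 7-hour shift, in minutes
random.seed(1)

def truck(env, shovel, crusher, stats, payload_t=100, passes=5):
    """One haul truck cycling between the shovel and the crusher dump."""
    while True:
        with shovel.request() as req:                 # queue for the single shovel
            yield req
            yield env.timeout(passes * random.gammavariate(35.92, 0.92) / 60)
        yield env.timeout(random.uniform(4, 6))       # haul loaded (assumed, minutes)
        with crusher.request() as req:                # at most two trucks dump at once
            yield req
            yield env.timeout(random.normalvariate(206.32, 11.21) / 60)
        stats["tonnes"] += payload_t
        yield env.timeout(random.uniform(2, 3))       # return empty (assumed, minutes)

def run_shift(n_trucks=5):
    env = simpy.Environment()
    shovel = simpy.Resource(env, capacity=1)
    crusher = simpy.Resource(env, capacity=2)
    stats = {"tonnes": 0}
    for _ in range(n_trucks):                         # five CAT 777D trucks
        env.process(truck(env, shovel, crusher, stats))
    env.run(until=SHIFT_MIN)
    return stats["tonnes"]

runs = [run_shift() for _ in range(30)]               # 30 replications, as in the paper
print(f"mean production per shift ~ {sum(runs) / len(runs):,.0f} t")
```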

Embellishments
In modelling the proposed system, an additional create node is used to introduce the
cat 775e trucks into the system. Since the number of additional trucks is varied in the analysis, the number of 775E trucks to be created is specified as the variable qty. The
variable is then defined using the equivalence statement in the control statements.
The attribute capacity (atrib [1]) is used to distinguish the two kinds of trucks. Thus,
depending on the capacity of the truck, it is loaded in four or five passes by the shovel.
Figure ➋ shows the embellished model of the system. In the control statements, Array
one (distance) is also changed to reflect the lengths of the different sections of haul
routes with time. These lengths are estimated from mine plans for the current pit and
future ones taking into consideration the changing elevations of the benches and pit
floor annually.

Figure 1 Visual SLAM network model of original system.

Figure 2 Visual SLAM network model of embellished model for third and fourth year.


Model Verification and Validation


The model was verified by extensive use of the interactive execution facility in AweSim.
After gaining confidence in the model logic, validation was carried out using the cycle
time and the production per shift. Table 5 summarises the results of the simulation
runs for validation using Visual slam with AweSim. The errors in the cycle time and the
production per shift were calculated to be 2.04% and 3.8% respectively. These are within
the acceptable error range (≤ 5%). Therefore the model is considered good for analysis.

Table 5 Simulation results for validation

Parameter | Simulated Mean | Uncertainty (±) | Mean from Data
Cycle Time (min) | 14.7 | 1.04 | 15
Production Per Shift (tonnes) | 14,453 | 165 | 15,000
Shovel Utilisation (%) | 91.6 | 1.5 | –

analysis of results
The observed statistics for Scenario Option 1 of the AweSim multiple run report for the
basecase are given in Table 6 . The results show that the mean total production per shift
was 12,377 tonnes, the fleet of trucks produced between 1,522.48 t/hr and 1,585.71 t/hr, that
the average queue length at the shovel location was 0.282 and that there were virtually no
queues at the dump location. The average utilisation of the shovels was 79.3%. Figure ➌
shows a general drop in production per shift and shovel utilisation over the years. This is
due to the increased queuing at the servers, the longer haul routes due to the widening
and deepening of the pit and the ageing of the production equipment. In spite of this, the shovel’s utilisation was generally good (≥ 68%). From the trend in production,
it is very unlikely that the contractor will be able to achieve the production target per
shift after year two. This means that he has to increase the fleet sizes in years three and
four to meet the production target of 14,400 tonnes per shift.

Table 6 Observed statistics for Scenario Option 1 of the AweSim multiple run report for basecase

Observed Statistics for Scenario Option 1
Label | Mean Value | Standard Deviation | Standard Error | Minimum Average Value | Maximum Average Value
Cycle time | 1,173.44 | 7.45 | 1.36 | 1,158.16 | 1,186.76
Total Production (t/shift) | 12,377.00 | 99.66 | 18.20 | 12,210.00 | 12,610.00

Time-Persistent Statistics for Scenario Option 1
Label | Mean Value | Standard Deviation | Standard Error | Minimum Value | Maximum Value
Production/hour (t/hr) | 1553.43 | 16.81 | 3.07 | 1522.48 | 1585.71

File Statistics for Scenario Option 1
File Number | Resource Label | Average Length | Standard Deviation | Standard Error | Maximum Average Length
1 | Shovel | 0.282 | 0.026 | 0.005 | 0.331
2 | Dump | 0.000 | 0.000 | 0.000 | 0.000

Resource Statistics for Scenario Option 1
File Number | Resource Label | Average Utilisation | Standard Deviation | Standard Error | Average Available
1 | Shovel | 0.793 | 0.011 | 0.002 | 0.207
2 | Dump | 0.983 | 0.007 | 0.001 | 2.017

Figures ➍ and ➎ show the results of the simulation analysis for the third and fourth
year of production. The values for shovel utilisation and production per shift are the
average values from the multiple run summary reports for 30 runs. The graphs show a
general increase in production per shift and shovel utilisation as more cat 775e units
are added to the operation. However, the utilisation and production rates tend to level off as more and more 775E units are added, due to increased queuing at the shovel and dump locations. Thus, as additional truck units are added, there are only marginal increases in productivity and shovel utilisation. Figure ➍ shows that the mining contractor can
meet the production target of 14,400 tonnes of ore per shift by employing at least three
cat 775e units in year three and at least four cat 775e units in year four. This is because
the production curves for the fleet of trucks for years three and four exceed the targeted 14,400 tonnes of ore per shift at those fleet sizes. Thus the base case utilisation is exceeded
after three units of cat 775e are added in both the third and fourth year (see Figure ➎).

Figure 3 Performance of original system.


Figure 4 Production per shift with additional 775E trucks.

Figure 5 Shovel utilisation with additional 775E trucks.

conclusions
From the analysis in this paper, it can be concluded that for the mining contractor to
achieve the required production target of 14,400 tonnes per shift, he needs to buy or lease
at least three cat 775e trucks in the third year and one more truck unit in the fourth
year. This will enable him to produce 14,516 ± 253 tonnes and 14,602 ± 284 tonnes of ore
in years three and four, respectively.

references
Haas, D. (1987) How to Contract for Your Surface or Underground Mine. Contract Mining Workshop,
Northwest Mining Association, Spokane, Washington, USA, p. 23. [1]

Redpath, J. S. (1977) Mining Contractors – Necessary Evil or an Increasingly Valuable Service to Industry,
CIM Bulletin, Vol. 70, Is. 778, Canadian Inst. of Mining and Metallurgy, Montreal, Canada,
pp. 51–54. [2]

Redpath, J. S. & Delavergne, J. N. (1980) Securing Maximum Effectiveness from Mining Contractors. CIM
Bulletin, Vol. 73, Is. 821, Canadian Inst. of Mining and Metallurgy, Montreal, Canada, pp. 85–89. [3]

Hays, R. M. (1990) Trucks, Ch. 6.5.2 of Surface Mining, 2nd ed. B. A. Kennedy (ed.), Soc. for Mining,
Metallurgy, and Exploration, Inc., Littleton, CO, pp. 677–691. [4]

Anon. (2003) Caterpillar Inc., USA, retrieved August 23, 2009 from http://www.cat.com. [5]

Pritsker, A. A. B., O’Reilly, J. J. & LaVal, D. K. (1997) Simulations with Visual slam and AweSim. Systems
Publishing Corp., Indiana, p. 818. [6]

Mine Planning Considering
Uncertainty in Grades and
Work Index

abstract
Rodrigo Contreras
Julián Ortiz
Universidad de Chile

Claudio Bisso
Minera Esperanza, Chile

Planning is a critical step in the mining business, because at this stage the economic potential of the deposit is materialised in expected cash flows. Therefore, this potential must be quantified as accurately as possible by including all relevant variables that influence the creation of added value. Moreover, it is necessary to assess the uncertainty associated with these variables
and account for it in the decision making process during mine
planning. The proposed work consists of developing a quarterly plan for a year of production in an open pit mine that includes grade uncertainty and ore hardness, the latter through the Tonnes Per Hour of ore that can be processed at the plant (tph). This plan
is compared with a conventional plan that only considers
the grades and their respective recoveries for the elements of
interest. The applied methodology considers two stages. The first
consists in geostatistically simulating Work Index, Cu and Au
grades, using information from long term drillholes, within the
estimation units defined by the geologic model in order to obtain
twenty possible scenarios of these variables. The second stage
consists of planning the selected volume quarterly, through an
optimisation algorithm, using two different decision variables.
The first uses the metal content as the decision variable in the optimisation, while the second considers the metal per hour that the grinding line is able to process at the plant. Results on costs, profits and geometry by period are obtained for each planning methodology and then compared. Finally, plans are built for the different scenarios created during the first stage, using the metal-per-hour optimisation criterion. As a result, planning that includes tph generates a 9% higher benefit than the conventional approach, and the geometry of each schedule differs between the two methodologies. It can be concluded that, from an economic perspective, the most convenient planning methodology implemented and developed is the one that includes the tph variable in the decision to send a block to the dump, the plant or the stock.

introduction
The mining business involves many areas, from the resource estimation, mine planning
and metallurgical process, to the marketing and sales of the final product. Each one of
these areas has an important role to increase the economic value of the deposit.
Mine planning is a very relevant area, since it defines which, how and when the
reserves are to be extracted. It defines a production plan, where tonnages and grades are
specified for each stage of the extraction, in order to maximise the economic value of the
mining business [1–3] . The assessment and planning should incorporate all variables
that are relevant to the final result. Conventionally, only mining variables are accounted
for in order to ensure a given production and grade. However, many recent studies have
shown that adding metallurgical variables to the planning increases the value of the
project significantly [4, 5] .
This paper shows the results of mine planning on a quarterly basis over a year for the
Esperanza Mine, owned by Antofagasta Minerals, accounting for the uncertainty in the
grades distribution, and the grindability of ore, through the use of the Tonnes Per Hour
(tph) that can be processed at the plant. These results are compared with a conventional
plan that only accounts for grades and metallurgical recovery.

case study
Minera Esperanza is a copper and gold deposit located in Sierra Gorda, II Region de
Antofagasta in Chile. The mine considers an open pit with an estimated production
of 750,000 tonnes of copper concentrate and 200,000 tonnes of fine copper per year.
Production is expected to run for 25 years at a production rate of 95,000 tonnes per day
of ore. The project is currently under construction and pre-stripping is ongoing. The
processing plant is expected to start processing sulphides at the end of 2010.
The case study was done considering the volume corresponding to the production for
2011. The available information is:

• Samples with grades for CuT [%] and Au [ppm]


• Samples of Wibo [kWh/tc] from diamond drillhole composites of 30 m
• Two simplified geological units for Wibo
• Ten estimation units for the grades.

The metallurgical parameters used to compute the profit of each block are:

• tph, which predicts the mass flow that can be processed in the comminution stage at
the processing plant, for each geological unit, through the model of specific energy
consumption.

• Grades: Cu [%] and Au [ppm]

• Equivalent copper grade [%]

• Metallurgical recovery.
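The link from ore hardness to plant throughput is what makes tph a planning variable: the harder the block, the fewer tonnes the grinding line can treat per hour. A minimal sketch of that link is given below, using Bond's standard specific-energy equation with assumed feed and product sizes, installed power and block tonnage; the paper's actual specific energy consumption model for the sabc-b circuit (and the short-ton unit conversion implied by kWh/tc) is not reproduced here.

```python
def bond_specific_energy(wi, f80_um, p80_um):
    """Bond's equation: specific energy (kWh/t) to grind from F80 to P80 (microns)."""
    return 10.0 * wi * (1.0 / p80_um ** 0.5 - 1.0 / f80_um ** 0.5)

def block_tph(wi, mill_power_kw, f80_um=8000.0, p80_um=180.0):
    """Throughput limited by the grinding power available for this ore type."""
    return mill_power_kw / bond_specific_energy(wi, f80_um, p80_um)

power_kw = 30_000                 # assumed installed grinding power
block_tonnes = 100_000            # assumed block tonnage, for illustration
for wi in (11.9, 12.9):           # roughly the range simulated for the two units
    tph = block_tph(wi, power_kw)
    hours = block_tonnes / tph    # processing hours consumed by this block
    print(f"Wi = {wi:.1f} -> {tph:,.0f} t/h, {hours:.1f} h per block")
```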

Currently, mine plans are based on cutoff grades for each phase, year and period, defining
the minimum grade to send the block to the processing plant or to the stock. The remaining
material is considered waste. The cutoff is based on the equivalent copper grade in order
to take into account the gold content. Although Esperanza also has some oxides, these
are processed in the Tesoro Processing plant and were not considered in this work. The
processing plant for sulphides has a circuit with configuration sabc-b, composed of a sag
mill, a sieve, two ball mills and a pebble crusher (Figure ➊). The economic parameters
used are the copper and gold price, and mining, processing, and sales costs.

Figure 1 Processing plant configuration.

methodology
The work was done in two steps:

1. Reserves model
Geostatistical simulation is used to generate realisations of the Work Index, CuT grade
and Au grade over the domain (volume to be produced in 2011). This is done for each
estimation unit, including an exploratory data analysis, variogram calculation and
modelling, simulation and validation of results. An estimated model is obtained by
averaging the simulated grades on each block.
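A hedged sketch of that averaging step: with the realisations stacked block by block, the estimated model is simply the per-block mean of the simulated values (an e-type estimate), and the per-block spread comes for free as an uncertainty measure. The array shape and grade values below are illustrative only.

```python
import numpy as np

# realisations: one row per simulated scenario, one column per block, e.g. 20
# simulated CuT grade models over the 2011 volume (random values, for shape only).
rng = np.random.default_rng(0)
realisations = rng.lognormal(mean=np.log(0.15), sigma=0.3, size=(20, 11_709))

etype_model = realisations.mean(axis=0)   # block-by-block average: the "estimated" model
block_spread = realisations.std(axis=0)   # per-block variability, usable for risk analysis
print(etype_model[:3], block_spread[:3])
```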

2. Mine planning
Mine planning is done on a quarterly basis over the year's production volume. The plan
is computed using tph and CuT equivalent grade, with an optimisation algorithm
programmed in ampl-cplex. The following cases are evaluated:

• Base case: Accounts only for equivalent CuT grade for the reserves model obtained
as the average value of each block from the simulated realisations. This average is
equivalent to using a kriging estimate to build the model. The value is obtained
by maximising the fines produced, in terms of equivalent CuT. Therefore, only the
tonnage, equivalent CuT grade and metallurgical recovery are used for mine planning.

• Case tph: Accounts for equivalent CuT grade for the reserves model obtained as the
average value of each block from the simulated realisations, but also includes the
processing capacity of the plant, which is a function of the ore type, and is measured
as the time required to process (crush) a given block. Thus, fines of equivalent CuT
per hour are maximised in the optimisation. The value of each block is computed in
reference to an average block, which would take 3.89 hours to be processed.

• Case tph scenarios: Each simulated realisation is used as a reserves model, and
the optimisation is done as in the previous case. Results provide an assessment of
the uncertainty related to the economic profit and to the geometry of the extracted
volumes each quarter.


results
The domain is presented in Figure   ➋. Some basic statistics are presented in Table 1 .

Figure 2 Volume and units considered.

Table 1 Basic statistics of mineral type and geological units

Mineral type:
Type | Blocks | Tonnes [MT] | %
Sulphides | 3895 | 57.9 | 37.5
Oxides | 7813 | 97.0 | 62.5
Undefined | 1 | 0.0 | 0.0
Total | 11709 | 155 | 100

Simplified geological unit:
Type | Code | Blocks | Tonnes [MT]
Porphyry | 2 | 352 | 5.4
Andesites | 1 | 11274 | 149.2
Undefined | -99 | 83 | 0.4
Total | — | 11709 | 155

Simulation of work index is done for the porphyry and andesites units. Figure   ➌ shows
one realisation and the summary statistics of 20 realisations. Although more realisations
would be desirable, there were time constraints to perform the work and 20 is considered
a reasonable number to obtain good estimates of mean values and standard deviations.
Precision of the results could be improved by considering more realisations.

[Figure: Work Index [kWh/tc] per realisation (1–20) for the Andesites and Porphyry units, together with the unit averages.]

Figure 3 One realisation of Wi and summary statistics over 20 realisations.

Cu and Au grades are simulated on each geological unit, six for sulphides and four for
oxides. The correlation coefficient between Cu and Au grades is higher than 0.7 in the case
of sulphide units; therefore a bivariate statistical analysis and direct and cross-variograms are computed and modelled, and cosimulation is performed. In the case of oxide units, the
correlation is low; therefore Cu and Au are modelled independently. Figure  ➍ shows two
realisations of CuT and the summary statistics of 20 realisations of Cu and Au grades.

[Figure: CuT and Au grade realisations – per-realisation average (CuT in %, Au in ppm) and variance over the 20 realisations.]

Figure 4 Two realisations of CuT grades and summary statistics over 20 realisations for CuT and Au grades.

Planning is fed with models of equivalent CuT grade, fines of equivalent CuT, tph and
processing time per block (Figure   ➎).

Figure 5 Average values of variables required for planning for each realisation.


A linear programming problem is set up to maximise a decision variable under constraints. The decision variable is CuT fines or CuT fines per hour, and the constraints are:

• Maximum and minimum mine movement capacity


• Maximum processing capacity of the plant
• Respect the precedence of extraction of blocks
• Advance in a bench in a radial fashion from a predefined access
• Processing only of mined out blocks
• Extract and process blocks only once.

The constraints are specified in Table 2 .

Table 2 Constraints for optimisation model

Mine movement of material:
Constraint | Daily | Period 1 [90 days] | Period 2 [91 days] | Period 3 [92 days] | Period 4 [92 days]
Max Mine Capacity [t] | 425,000 | 38,250,000 | 38,675,000 | 39,100,000 | 39,100,000
Max Plant Capacity [t] | 100,000 | 9,000,000 | 9,100,000 | 9,200,000 | 9,200,000
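For illustration, the sketch below sets up a much-reduced, single-quarter version of such a model in Python with the PuLP library (the authors used ampl-cplex): binary mine/process decisions per block, mine and plant capacity limits, simple precedence, and processing allowed only for mined blocks. The block data and capacities are invented, and maximising fines within a plant-hours budget stands in for the fines-per-hour criterion; the multi-period structure, stockpiles and radial bench advance of the real model are omitted.

```python
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, value

# Toy block model: tonnes, equivalent CuT fines (t) and plant hours per block (invented).
blocks = {
    "b1": {"t": 120_000, "fines": 900, "hours": 30, "pred": None},
    "b2": {"t": 110_000, "fines": 700, "hours": 45, "pred": "b1"},
    "b3": {"t": 130_000, "fines": 850, "hours": 28, "pred": None},
    "b4": {"t": 100_000, "fines": 400, "hours": 40, "pred": "b3"},
}
MINE_CAP_T, PLANT_HOURS = 400_000, 90          # one quarter, assumed figures

prob = LpProblem("quarterly_plan", LpMaximize)
mine = LpVariable.dicts("mine", blocks, cat=LpBinary)
proc = LpVariable.dicts("process", blocks, cat=LpBinary)

# Maximise equivalent CuT fines sent to the plant; combined with the plant-hours
# budget below, this favours blocks with high fines per processing hour.
prob += lpSum(proc[b] * blocks[b]["fines"] for b in blocks)

prob += lpSum(mine[b] * blocks[b]["t"] for b in blocks) <= MINE_CAP_T        # mine movement
prob += lpSum(proc[b] * blocks[b]["hours"] for b in blocks) <= PLANT_HOURS   # plant capacity
for b, data in blocks.items():
    prob += proc[b] <= mine[b]                       # process only mined-out blocks
    if data["pred"]:
        prob += mine[b] <= mine[data["pred"]]        # extraction precedence

prob.solve()
print({b: (int(value(mine[b])), int(value(proc[b]))) for b in blocks})
```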

When comparing the base case with the case tph, an 8.8% increase in profit is achieved (Figure ➏).
[Figure: Comparison of profit by period – profit [MMUS$] for periods 1 to 4 and their sum, base case vs. case tph.]

Figure 6 Comparison of profit by period - base case vs. case TPH.

The geometry of the optimised plan for the base case and the case tph is assessed in
terms of the blocks that coincide in both plans for every period. These are summarised
in Table 3 .

Table 3 Geometric coincidence - base case vs. case TPH

Number of coincidences: Base Case vs. Case TPH
Period | Mined | Processed | Stock
1 | 2503 | 411 | 373
2 | 2501 | 423 | 97
3 | 2463 | 483 | 313
4 | 2621 | 513 | 398
Total | 10088 | 1830 | 1181
Coincidence (Base Case) [%] | 86.2 | 77.3 | 88.3

This means that roughly 20% of the plan differs between the two cases. This is also reflected in the coincidence of processed blocks, which is about 85%; that is, accounting for tph changes around 15% of the blocks sent to the plant.
Regarding the case tph scenarios, each one of the 20 simulated scenarios is optimised to assess the expected variability in profit (Figure ➐). Total profits fluctuate from US$ 495 million to US$ 559 million, with an average of US$ 531 million and a standard deviation of US$ 15.8 million. These results provide valuable information for decision making and for understanding the risk involved.

Figure 7 Profits per period for each one of the optimised scenarios.

conclusions
Mine planning must account for all relevant parameters and variables that have an
impact in the final result. In the case study presented, we show that not taking into
consideration the fact that some ores are easier to process than others, which translates
into different processing times, generates a plan that does not take into account all the
value of the ore. Accounting for the tph implies that keeping the plant at its maximum capacity and selecting the easier-to-process ores increases the profit of the operation by up to 8.8% for a year of production.
Furthermore, the use of geostatistical techniques allows assessing the variability of
the ore and its properties and provides a quantification of the risk involved in the mine
plan. Geometallurgical modelling should aim at incorporating more and more of these
variables in order to refine the forecasting of the actual benefit that the extraction may
report, yet knowing the variability that one should expect in the result.
Finally, it is important to emphasise that mine plans that incorporate geometallurgical
variables are different in their geometry to the conventional ones. Efforts should be made
to provide tools to optimise the plans that incorporate as many variables as possible and
account for the uncertainty of their distribution in the long, medium and short-term.


references
Araneda, O. (2008) Economía Minera (mi65a). Departamento de Ingeniería de Minas, Universidad
de Chile. [1]

Rubio, E. (2007) Tópicos Especiales de Planificación Minera (mi75e). Departamento de Ingeniería de


Minas, Universidad de Chile. [2]

Vasquez, A. & Galdames, B. (2007) Diseño de Minas a Cielo Abierto (mi58a). Departamento de Ingeniería
de Minas, Universidad de Chile. [3]

Cáceres, J., Pelley, C., Katsbanis, T. & Kebelek, Y. (2004) Integrating Work Index into Mine Planning at
Large Scale Mining Operations. Massmin. [4]

Rubio, E. (2006) Mill Feed Optimisation for Multiple Processing Facilities using Integer Linear Programming.
In: Proceedings of the Fifteen International Symposium on Mine Planning and Equipment
Selection. M. Cardu, R Ciccu, E. Lovera, E. Michelloti (eds.), Imprenta ciudad, Fiordo S.r.I. Italia,
Torino, pp. 1207–1213. [5]
Combining Optimisation and
Simulation to Model a Mining
Supply Chain from Pit to Port

abstract
Peter Bodon
Chris Fricke
Tom Sandeman
TSG Consulting, Australia

Chris Stanford
PT Kaltim Prima Coal, Indonesia

PT Kaltim Prima Coal (kpc) operates a coal mine near Sangatta in East Kalimantan, Indonesia. Coal is mined at various grades and blended through intermediate stockpiles before being loaded as multiple products on to ships.
To determine the potential consequences of increased production and alternative expansion options, kpc required an understanding of the interaction between throughput and quality, and how these
were impacted by any changes in infrastructure and/or operating
policy. To help in this understanding, kpc commissioned tsg
Consulting to construct a simulation model of their operation.
The complicating factor in carrying out a capacity analysis
using simulation for an operation such as kpc is the requirement
to meet contracted coal quality on each vessel loaded. Failure to
properly capture this constraint in a simulation model could lead
to an overstatement of the potential capacity of the system. To
ensure that the simulation model developed for kpc could plan
the operation in a manner that closely matched that used on site,
an optimisation model of the planning process was developed.
The optimisation replicates the planning activity that is regularly
performed on site, and enables the simulation model to operate
accurately for an extended duration (anywhere from one week
to several years). The integration of the optimisation within the
simulation allowed kpc to link the effects of real life uncertainties
to the strategic plans being developed.
The addition and integration of the planning engine within
the simulation model was a complex task. In isolation, the
three key elements of this model (quantity model, quality model
and planning) are well established; however the incremental
addition of each into a simulation system exponentially increases
the model complexity. The insights gained from this complex
modelling system hold the potential to revolutionise the way the
kpc operation is planned and operated.

introduction
An export supply chain —beginning with the extraction of ore from a pit and ending
with the loading of this ore on to vessels at a port— is a key component of many mining
operations. These supply chains are comprised of a series of complex operations, such
as mining, ore processing, transportation, stockyard management and vessel loading.
Two differentiating features of mining supply chains are the length of time over which
they operate and the many degrees of uncertainty that affect each link in the chain.
Mining, by its very nature, is a capital intensive industry with relatively long investment
cycles. Mine production life can typically last 30 years or more, which itself is preceded
by significant lead times between orebody discovery and initial production. Added to
this, two key areas of uncertainty play a driving role in any mining project: uncertainty
in the geology of the orebody (supply) and uncertainty in the market price of the final
product (demand). These factors —coupled with the size of the capital and operational
investments required in any mining project— make risk analysis and management a
core function for any company participating in the mining industry.
Typically, the operation and performance of each component of a mining supply chain
is analysed in isolation, with little consideration given to its interaction with upstream
and downstream processes. In reality, stochastic and dynamic influences that affect one
component of the chain can have significant flow on effects to other sub-systems in the
supply chain. Hence, evaluation of the performance of the total integrated system needs
to capture the interaction of these sub-systems. Discrete Event Simulation (des) has
proved to be a powerful tool in modelling supply chains, capturing the system dynamics
and interactions, and allowing the overall performance of the integrated system to be
rigorously evaluated.
The application of des to model mining supply chains is particularly beneficial
when used in the strategic planning process, to aid decision making for the long term.
These decisions are typically associated with significant capital expenditure, and may
form part of a pre-feasibility or bankable feasibility study. In a greenfield environment,
strategic planning focuses on project design, including issues such as infrastructure
requirements, plant design, equipment configuration and capacity, and evaluation of
different options for operating principles and processes. Once a project is up and running,
strategic planning is used to consider and evaluate major capacity expansion options,
and identify system bottlenecks.
While the primary objective of mining export supply chains is typically to maximise
production capacity, that is, tonnes of ore loaded on to vessels at the port, in some
mining operations, the extracted ore is blended into a variety of products with differing
characteristics before being exported. This can be the case for ores such as coal, iron
and manganese. In these operations, an additional objective, in the form of achieving a
pre-determined quality of material on the vessels, is an equally important measure of
system performance. The objective of delivering a certain quality of product often directly
conflicts with the objective of maximising production capacity, resulting in an increased
level of complexity across the supply chain. In these supply chains, the decision making
process of planning the movement and blending of ore through the system is paramount
to the overall system performance. Capturing this complex planning process in a des
modelling language is possible, but proves to be a very difficult and time consuming
task. Since planning problems are often modelled and solved using an optimisation
framework, an alternative approach is to decouple the decision making process from the
simulation model, develop a standalone optimisation model for it, and then integrate
the two to create a holistic model of the supply chain. The optimisation model is itself
a decision making tool that can be applied to problems covering a much shorter time
horizon than a typical des model, for example decisions to be made on a weekly or
monthly basis. The integration of an optimisation model into a des framework results
in the creation of a supply chain model that captures the effect of system uncertainty
across multiple time horizons, and allows deeper insights to be gained into the system
behaviour under different configurations.
This paper describes a method of incorporating an optimisation model that captures
a complex planning process within a des model of an export supply chain, and presents
a case study of a successful implementation on the export supply chain of PT Kaltim
Prima Coal (kpc) in Indonesia.

methodology
Discrete Event Simulation (des) modelling is the process of emulating real world operations in a controlled environment on a computer. des provides a rational and quantitative process for increasing understanding of the potential consequences of alternative proposals, ranging from a change in operational philosophy through to the commissioning of new infrastructure. Hence des modelling can be a useful tool to aid both long term strategic decision making and short term planning and operational decisions.
des models are constructed by considering each physical item (train, car dumper,
reclaimer, ship, etc.) as a discrete entity, with its own uniquely defined set of properties or
attributes (speed, material type, reliability, carrying capacity, etc.) [1] . These entities act
out the operational activities that make up the processes being modelled. They consume
discrete periods of time for each activity and incur delays that can be logically induced
(e.g. bin empty, no rake, etc.) and also use stochastic methods to generate randomly
induced delays (e.g. breakdown, failures, etc.), all of which are dependent on the data
and operational rules that are defined for that particular process or piece of equipment.
This combination of logical and random events is designed to reflect the most likely
operational environment.
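As a small illustration of this entity-and-attribute view, the sketch below models a single ship loader entity whose loading is interrupted by the logically induced delay of waiting for a vessel and by randomly induced breakdowns. It is written in Python with the SimPy library purely for illustration; the paper does not state which simulation tool was used, and the rates, failure distribution and vessel data are invented.

```python
# Minimal sketch (not the authors' model): one "ship loader" entity with a loading
# rate attribute, a logical delay (wait for a vessel) and random delays (breakdowns).
import random
import simpy

LOAD_RATE_TPH = 5000          # assumed loading rate, tonnes per hour
MTBF_H, MTTR_H = 60.0, 4.0    # assumed mean time between failures / to repair

def ship_loader(env, vessel_queue):
    while True:
        vessel = yield vessel_queue.get()          # logical delay: no vessel -> wait
        remaining = vessel["tonnes"]
        while remaining > 0:
            time_to_fail = random.expovariate(1.0 / MTBF_H)
            load_time = remaining / LOAD_RATE_TPH
            if time_to_fail < load_time:           # random delay: breakdown mid-load
                yield env.timeout(time_to_fail)
                remaining -= time_to_fail * LOAD_RATE_TPH
                yield env.timeout(random.expovariate(1.0 / MTTR_H))   # repair time
            else:
                yield env.timeout(load_time)
                remaining = 0
        print(f"{env.now:8.1f} h: finished loading {vessel['name']}")

def vessel_arrivals(env, vessel_queue):
    for i in range(3):                             # three illustrative vessels
        yield env.timeout(random.uniform(20, 40))  # random arrival spacing
        yield vessel_queue.put({"name": f"vessel-{i}", "tonnes": 70000})

env = simpy.Environment()
queue = simpy.Store(env)
env.process(ship_loader(env, queue))
env.process(vessel_arrivals(env, queue))
env.run(until=500)
```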
Each system within a des model has individual operating rules and parameters which
need to be accurately defined. In the context of a mining operation, an export supply
chain involves the movement of ore from pit to port, via any number of sub-systems. In
many mining supply chains, the operational rules regarding the movement of ore are
simply defined (e.g. lump material goes to a lump stockpile, fines material goes to a fines
stockpile). The lack of product diversification in these instances means that there are
little or no blending requirements throughout the supply chain. These simple operational
rules are able to be incorporated into des models of the mining supply chain relatively
easily, allowing the des model to provide a realistic representation of the export supply
chain as a whole. However, in some mining supply chains, the process of moving ore
from pit to port is significantly more complicated. This is particularly the case when
the ore is blended into a variety of products with differing characteristics before being
exported, which can be the case for ores such as coal, iron and manganese. For operations
such as these, day to day movements of ore are typically planned and executed by groups
of experienced individuals, who match current mining stocks and stockpile levels with
a shipping plan. The decision process by which they do so is complex, and cannot be
described using a simple set of rules. This limits the ability of a des modelling language
to precisely replicate the decision making process that is used in practice, and hence
provide an accurate representation of the export supply chain.
Optimisation modelling is ideally suited for analysing complex decision making
processes, where any number of (possibly conflicting) objectives have been identified as being desirable, subject to constraints such as system capacity, operational limitations
and time [2] . One of the most powerful features of an optimisation model is its ability
to consider hundreds of thousands of possibilities and determine the optimal decision
in a very short period of time. In the mining industry, optimisation modelling has
been widely applied in long term mine planning, particularly production scheduling
problems and ultimate pit design [3] . It is also possible to apply optimisation modelling
to the problem of planning the movement and blending of ore through a complex export
supply chain, such as those described above. This optimisation component can then
be integrated into a des model of the entire supply chain, enabling a holistic model of
the system to be developed. There are a number of advantages to modelling a complex
export supply chain in this manner. Automating the process of generating the plans
and carrying them out in the simulation reduces the need for human input, and aids in
the process of knowledge capture and retention. In addition, a standalone optimisation
model provides the ability to easily modify and test alternative planning strategies in
isolation. Optimisation models also have the ability to evaluate multiple criteria (e.g.
product quality, demurrage, amount of rom rehandling) as well as explore the effect of
changing priorities on each of these objectives.
This paper describes the development of an automated optimisation planning engine
for use in des models of complex mining export supply chains. The planning engine
plans the movement and blending of ore through the system, and interacts with the des
model, which attempts to enact this plan under realistic conditions, and hence provide
an accurate representation of the system dynamics of the export supply chain. The key
elements of the planning engine are described below.

Time horizon
Generally, a des model of a supply chain will consider the performance of the system
over a one year time period, using a mine plan and shipping plan for one year as inputs.
The planning engine is used to plan material movements on a more frequent basis, such
as fortnightly, weekly, or a number of days in advance. The time horizon used for the
planning process is an important factor in determining the complexity of the planning
problem, and hence the computation time required to solve a problem instance with the
planning engine. Once a short term plan is produced, it has to be translated into tasks to
be carried out by the simulation. The des model then attempts to carry out these tasks
as close as possible to the plan, subject to real life conditions and variability. A small
amount of intelligence is required within the simulation for dealing with unexpected
occurrences such as bad weather shutting down pits or pieces of equipment failing. At
the end of the planning period, control is passed back to the optimisation model with
an updated set of inputs for the next planning period. This process is then repeated.
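The hand-over between the planning engine and the simulation described above is, in effect, a rolling-horizon loop. The sketch below shows only that control flow; the two stub functions are hypothetical stand-ins for the real optimiser and des model, and the numbers are invented.

```python
# Rolling-horizon control flow between the optimisation "planning engine" and the
# DES model. The two stubs are hypothetical stand-ins, used only to make the
# hand-over between planning and simulation explicit.

def build_short_term_plan(stockpile_state, week):
    """Stand-in for the planning engine: returns planned movements for one period."""
    return {"week": week, "planned_tonnes": 250_000, "stockpiles": dict(stockpile_state)}

def simulate_period(plan, stockpile_state):
    """Stand-in for the DES model: enacts the plan subject to (here, fixed) losses."""
    shipped = plan["planned_tonnes"] * 0.95          # pretend 5% is lost to delays
    updated_state = dict(stockpile_state)            # updated levels and qualities
    return {"week": plan["week"], "shipped": shipped}, updated_state

def run_integrated_model(initial_stockpiles, horizon_weeks=52):
    state, results = dict(initial_stockpiles), []
    for week in range(horizon_weeks):
        plan = build_short_term_plan(state, week)     # 1. optimise the next period
        result, state = simulate_period(plan, state)  # 2. simulate the plan under variability
        results.append(result)                        # 3. return control with updated inputs
    return results

print(run_integrated_model({"stockpile_A": 1.0e6})[:2])
```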

Inputs
The aim of the planning engine is to plan movements of ore from pit to ship via
intermediate components of the supply chain such as processing plants, transportation
systems (rail networks or conveying systems) and stockyards. A feature of export supply
chains in the mining industry is the inclusion of buffers (stockpiles and queues) between
these sub-systems to mitigate the impact of sub-system performance variability on
overall system performance. Figure  ➊ is a schematic illustration of a generic mining
export supply chain. It highlights the separate sub-processes (mines, rail, port) that are
necessary to extract ore from the ground and ultimately load it onto customers' ships. It
also includes the buffers (stockpiles and queues) that are required to mitigate the impacts
of sub-process performance variability on total system capacity. A further complication
in the case of a multi-pit, multi-product mining operation is that intermediate stockpiles
may be used to blend the various ores into homogeneous products for shipping. It follows
that inputs to the planning engine are short term mine and shipping plans and the
current levels and quality in the intermediate stockpiles.

Figure 1 Generic mining export supply chain: mines, rail, stacking, reclaiming and shipping, buffered by stockpile surges, queues of ore cars and queues of vessels.

Objective function
The planning engine is designed to determine the manner in which material is to be
moved through the system via the intermediate buffer stockpiles to attempt to satisfy
the input shipping plan. The objective is to maximise the throughput of material
while keeping shipped quality as close to target as possible, subject to equipment
availability constraints. It is formulated as a mixed integer linear program involving
multiple time periods, with a standard structure [2, 4] . As such, it has an objective
function which in this case is a maximisation, subject to a number of constraints,
of the general form:

Maximise   z = cᵀx   (1)

Subject to   Ax ≤ b,  x ≥ 0,  x_j integer for j in J   (2)

The model is implemented in the optimisation language Lingo [5] . The following is a
list of objectives that the planning engine has to optimise against given the constraints
listed below:

• Maximise throughput (tonnes shipped);
• Minimise deviation of stockpile quality from the assigned lower and upper bounds;
• Minimise deviation from each vessel's target quality;
• Minimise deviation of each vessel's actual loading time from its assigned loading time.
The importance of each of these objectives is controlled by weighting multipliers in
the solver.
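To illustrate how such weighted objectives can be combined in a mixed integer linear program, the fragment below builds a toy single-vessel blending objective with the open-source PuLP library. The weights, stockpile data and variable names are assumptions made for the example; the actual kpc model is implemented in Lingo and is considerably more detailed.

```python
# Toy single-vessel blending objective in PuLP, illustrating how throughput and
# quality-deviation terms are combined with weighting multipliers. All numbers
# and names are assumptions for the example, not the KPC model.
import pulp

stockpiles = {"SP1": {"gcv": 5800, "tonnes": 40000},   # assumed stockpile data
              "SP2": {"gcv": 5200, "tonnes": 60000}}
target_gcv, vessel_tonnes = 5500, 70000
W_TONNES, W_QUALITY = 1.0, 50.0                         # assumed objective weights

m = pulp.LpProblem("blend", pulp.LpMaximize)
x = {s: pulp.LpVariable(f"x_{s}", 0, d["tonnes"]) for s, d in stockpiles.items()}
dev_pos = pulp.LpVariable("dev_pos", 0)                 # energy loaded above target
dev_neg = pulp.LpVariable("dev_neg", 0)                 # energy loaded below target

loaded = pulp.lpSum(x.values())
m += W_TONNES * loaded - W_QUALITY * (dev_pos + dev_neg)   # weighted objective
m += loaded <= vessel_tonnes                               # vessel tonnage
# Linearised quality balance: deviation of blended energy from the target energy.
m += (pulp.lpSum(stockpiles[s]["gcv"] * x[s] for s in stockpiles)
      - target_gcv * loaded == dev_pos - dev_neg)

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: x[s].value() for s in stockpiles}, pulp.value(m.objective))
```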

Constraints
The following is the list of constraints within which the planning engine must operate when finding a solution that optimises the objective function (a code sketch of a few of these constraint families follows the list).


• Pits
––Only mine blocks that are available, and adhere to the input mine
plan as closely as possible
––Precedence constraints between blocks are not explicitly captured. It
is assumed that precedence requirements have been handled in the
mine plan, and deviation from the mine plan is allowed only for
quality reasons
––Do not exceed shovel capacity in any pit and across all pits in any shift
––Do not mine more than the tonnage of any given block.

• Overland conveyor
––Do not exceed the maximum conveying rate.

• Stockpiles
––Do not exceed each stockpile's capacity
––Do not exceed each stockpile's maximum reclaim rate
––Aim to have each stockpile's quality within its nominated quality
range at all times
––Build stockpiles to completion before turning them over and
commencing reclaiming.

• Vessels
––Load vessels to their stated tonnage
––Do not load vessels prior to their arrival time
––Do not exceed the rated capacity of the ship loader
––Aim to have the product loaded on each vessel within its nominated quality range
––Allow through loading to occur when possible.
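As noted above, the fragment below sketches a few of these constraint families (the overland conveyor rate limit, the stockpile capacity and the maximum reclaim rate, tied together by a simple mass balance) in the same PuLP style as the earlier sketch. The capacities, rates, opening inventory and three-shift horizon are assumptions made for the example.

```python
# Sketch of a few of the listed constraint families over three shifts. Capacities,
# rates and the opening inventory are assumptions for illustration only.
import pulp

shifts = range(3)
SP_CAPACITY, MAX_RECLAIM, MAX_CONVEY = 200_000, 30_000, 25_000   # tonnes per shift

m = pulp.LpProblem("flows", pulp.LpMaximize)
stack = {t: pulp.LpVariable(f"stack_{t}", 0) for t in shifts}      # tonnes stacked
reclaim = {t: pulp.LpVariable(f"reclaim_{t}", 0) for t in shifts}  # tonnes reclaimed
level = {t: pulp.LpVariable(f"level_{t}", 0, SP_CAPACITY) for t in shifts}  # capacity bound

m += pulp.lpSum(reclaim.values())                      # e.g. maximise tonnes sent to the ship loader
for t in shifts:
    m += stack[t] <= MAX_CONVEY                        # overland conveyor rate limit
    m += reclaim[t] <= MAX_RECLAIM                     # stockpile maximum reclaim rate
    opening = level[t - 1] if t > 0 else 50_000        # assumed opening inventory
    m += level[t] == opening + stack[t] - reclaim[t]   # stockpile mass balance

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([level[t].value() for t in shifts])
```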

results and discussion


PT Kaltim Prima Coal (kpc) operates a coal mine near Sangatta in East Kalimantan,
Indonesia. Coal is mined at various grades and blended through a series of intermediate
stockpiles that are linked by a 13 km Overland Conveyor (olc) before being loaded as
multiple products on to ships. To determine the potential consequences of increased
production and alternative production scenarios, kpc required an understanding of
the interaction between throughput and quality, and how these are impacted by any
changes in infrastructure and /or operating policy. To help in this understanding, kpc
commissioned tsg Consulting to construct a des model of their operation.
The complicating factor in carrying out a capacity analysis for an operation such as
kpc is the requirement that the contracted coal quality be met on each vessel loaded.
The impact of this requirement is that the effective capacity of the supply chain as a
whole is reduced. Failure to acknowledge this factor and properly capture it in a des
model is likely to lead to an overstatement of the potential capacity of the system, which
can have severe ramifications for operators and investors alike. To ensure that the des
model sufficiently captured the complexity of the system, and planned the operation
in a manner that closely matched that used in practice on site, an optimisation model
of the planning process was developed. The nature of planning the kpc operation to
achieve the contracted coal qualities involves multiple objectives including throughput,
blending and on time delivery on to ships. As a result, the optimisation model that was
constructed needed to incorporate these multiple objectives, as well as have the ability
to interact with the des model. The optimisation replicates the planning activity that is
regularly performed on site to enable the des model to operate for an extended duration
(anywhere from one week to several years). The integration of the optimisation within
the simulation allowed kpc to link the effects of real life uncertainties to the strategic
plans being developed. 
The addition and integration of the planning engine within the simulation model was
a complex task. In isolation, the three key elements of this model (quantity model, quality
model and planning) are well established; however, the incremental addition of each of
these elements into an integrated des and optimisation model exponentially increases
the model complexity. The insights gained from this complex modelling system hold the
potential to revolutionise the way the kpc operation is planned and operated.

Overall benefit
The integrated des model has helped kpc in making strategic long term decisions and short term planning decisions, and also offers the possibility of aiding operational decision making through the creation and evaluation of weekly plans.

Strategic decision making

The primary purpose of building the integrated des model was to help kpc understand
the likely impacts of increased production, identifying bottlenecks in the system and
evaluating the effect and feasibility of various potential future expansions of the
operation. The ability to easily change the inputs of the model enables kpc to quickly
understand the effect of upgrades to equipment such as increasing crusher capacity,
improving conveyor rates and reliability, and increasing ship loading capacity. In
addition, the integrated des model provides the flexibility to test different stockpiling
configurations and examine the effect of reducing the amount of through loading.

Short term planning

The ability to take a short term marketing plan, in terms of shipping demand for tonnes and quality, and evaluate the potential of the operation to supply these tonnes given a mine plan has been of great benefit to kpc. The des model allows kpc to see which plans are harder to achieve, as well as indicating the potential bottlenecks and identifying the areas that cause the problems.

Operational decision making

The optimisation component of the integrated des model is able to operate in standalone
mode. This provides the ability to easily modify and test alternative planning strategies
in isolation. In addition, further detailed modelling of the planning process that occurs
on a day-to-day or even hour-to-hour basis is able to be included in this standalone model.
This provides sufficient detail to enable the optimisation component to be used in the
weekly planning sessions that are held on site, where the daily movements of ore for the
next week are determined.


Scenario Analysis
des modelling is a complex process that has the ability to generate a significant quantity
of output results. The interpretation of these results requires an in depth knowledge of the
model and its outputs. To best describe the performance of the operation under varying
operating scenarios requires a statistical comparison of results and an understanding
of statistics in general. To simplify this process, a reduced number of Key Performance
Indicators (kpis) have been identified that enable a simplified and more manageable
understanding of the outputs.
Just as the real system has variability associated with its performance (due to varying
equipment reliability, varying times to perform tasks and variances in the quality of
ore), so too does the des model. For this reason, no two actual or simulated years will
ever be the same. To cope with this fact it is necessary to run the des model for the same
simulated period a number of times and then calculate the mean and standard deviation
of the results for this period. Calculating the standard deviation quantifies the effect of
variability on the process and enables the range of results that could be expected to be
produced by the operation under similar circumstances to be gauged.
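A minimal sketch of this replication summary is shown below; the ten annual throughput figures are invented purely to demonstrate the calculation.

```python
# Summarising replications of the same simulated year: mean, standard deviation
# and an indicative range for a KPI. The replication values are invented.
import statistics

# Annual tonnes shipped from, say, ten replications of the same scenario.
tonnes_shipped = [41.2e6, 40.1e6, 42.3e6, 39.8e6, 41.0e6,
                  40.6e6, 41.9e6, 40.3e6, 41.5e6, 40.9e6]

mean = statistics.mean(tonnes_shipped)
stdev = statistics.stdev(tonnes_shipped)          # sample standard deviation
print(f"mean = {mean/1e6:.1f} Mt, st. dev. = {stdev/1e6:.2f} Mt")
print(f"expected range (2 st. dev.): {(mean - 2*stdev)/1e6:.1f} to {(mean + 2*stdev)/1e6:.1f} Mt")
```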
System performance is a combination of the kpis that describe the performance of the
system in terms of its ability to load coal on to ships in the correct quantity and quality
within reasonable time frames. Hence system performance cannot be described alone by
any one performance indicator but rather is a combination of kpis. Furthermore, testing
the system at a single throughput is not sufficient to enable the successful identification
of system bottlenecks and inefficiencies. To enable a complete understanding of the
system performance the combination of the following two kpis is the focus of the
integrated des model of the kpc operation:

• Quantity: tonnes moved from mine to ship and the utilisation of the intermediate
stockpiles.

• Quality: the match between the customers' contracted shipments and the coal that is
actually loaded on to their vessels. Quality is measured by the Gross Calorific Value
(gcv) of the coal.

Quantity

des modelling allows the user to quantify the amount of extra production from a
proposed capital expansion, as well as providing valuable insights into the auxiliary
effects from any actions. This is particularly valuable in systems that contain many
interacting components, such as the kpc operation. Testing the sensitivity of the system
performance to varying equipment rates allows the potential benefit of changing the
operating philosophies used in this area of the supply chain to be determined, and also
provides a means of evaluating whether each piece of equipment is or could potentially
be a restriction or bottleneck on the overall system.

Quality

In the kpc operation, the quality of coal is measured by Gross Calorific Value (gcv). The
integrated des model incorporates the tracking of coal quality through the system and
hence has the ability to measure the quality of coal loaded on to ships. To quantify the
effectiveness of the operation under various scenarios, it is necessary to compare the
contracted consignment quality of coal with the loaded quality.

Example

The following is an example of the analysis that is able to be undertaken using the
integrated des model of the kpc coal chain. This example examines the effect of a
potential capital investment to upgrade the ship loader to enable it to operate at an
increased rate, both in isolation and in conjunction with the option of making an
additional investment to upgrade the olc to enable it to also operate at a higher rate.
To establish system performance sensitivity, the model is run with a range of
throughput levels, that is, using mine and shipping plans with differing levels of demand
for, and supply of, coal. This establishes a response curve for the system, which describes
the system performance as the demands on it are increased. Figure  ➋ presents an
examination of the impact that the proposed capital upgrades have on the quantity
achieved by the operation, measured by the kpi of tonnes shipped across a year. The
Base Case line shows that the current operation has little excess capacity to cope with
any increase in demand. Upgrading the olc or the ship loader in isolation enables some
gains to be achieved in throughput, while spending the additional capital to upgrade both
items clearly has the greatest impact for the potential throughput of the operation when
shipping demand is increased by 5%. The tailing off in improvement in all scenarios when
the shipping demand is increased by 10% indicates a potential shift in the bottleneck that
is constraining the operation, possibly to mining rate or stockpile size. The model can
then be used to investigate a new set of scenarios that consider alternate capital upgrades
in these areas of the operation. Price forecasts and cost estimates are then applied to
determine which of the potential capital upgrades under consideration are financially
viable and lead to the greatest return on investment.

Figure 2 Effect of capital upgrades on quantity shipped.

To assess the impact of the proposed capital upgrades on coal quality, a chart such as
that presented in Figure  ➌ is produced for each combination of demand and equipment
configuration. This chart compares the contracted consignment quality of coal with
the actual loaded quality. Each point on the chart represents a loaded vessel. The Y-axis
measures the difference between the contracted and actual quality of coal loaded by each
vessel. Therefore a positive Y-value indicates that the model has exceeded the contracted
coal quality for that particular vessel. The increase in the magnitude of coal quality
error towards the end of the year indicates a mismatch between the mine plan and the
marketing (shipping) plan developed for the year.
A kpi measuring the percentage of vessels loaded to within a defined percentage of
their contracted coal quality is used to assess the performance of the operation with
regards to coal quality. The benefits of capital upgrades such as increased stockpile
capacities can be demonstrated in a chart such as Figure  ➌ by a reduction in the times
across the year where coal quality error exceeds a predetermined level, for example +/- 10%.

Figure 3 Error in loaded coal quality over time.
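A sketch of how the KPI described above could be computed from per-vessel model output follows; the vessel records and the 10% tolerance are invented for the example.

```python
# Percentage of vessels loaded within a given tolerance of their contracted GCV.
# Vessel records and the +/- 10% tolerance are assumptions for illustration.
vessels = [
    {"name": "V1", "contracted_gcv": 5500, "loaded_gcv": 5580},
    {"name": "V2", "contracted_gcv": 5700, "loaded_gcv": 5050},
    {"name": "V3", "contracted_gcv": 5200, "loaded_gcv": 5230},
]
TOLERANCE = 0.10   # +/- 10% of contracted quality

within = [v for v in vessels
          if abs(v["loaded_gcv"] - v["contracted_gcv"]) <= TOLERANCE * v["contracted_gcv"]]
kpi = 100.0 * len(within) / len(vessels)
print(f"{kpi:.0f}% of vessels loaded within {TOLERANCE:.0%} of contracted quality")
```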

conclusions
The application of a properly developed des model provides a range of significant benefits
in assessing an integrated export supply chain. These benefits include the ability to assess
alternative operating practices, including maintenance options, through quantification
of performance. In addition to operating practices, various capital expenditures can be
compared to determine the best infrastructure for a given system. By analysing all of
these options, the optimal capacity of the supply chain can be determined, along with the
robustness of this capacity under uncertainty. The ability of des to investigate outcomes
over a range of different situations makes it ideal for risk analysis. Finally, quantification
removes the gut feel approach and replaces it with what if fact-based scenario analysis.
In the case of operations that have multiple, conflicting objectives, such as delivering
a certain quality of product while also maximising production capacity, an increased
level of complexity is added to the export supply chain. This additional constraint can
have a significant impact on overall system capacity, and is a difficult factor to capture
in any supply chain model. In a mining context, examples of ores for which this may
be the case include coal, iron and manganese. The decision making process of planning
the movement and blending of ore through the mining supply chain is paramount to
the overall system performance for operations such as these. Capturing this complex
planning process in a des modelling language is possible, but proves to be a very difficult
and time consuming task. Since planning problems are often modelled and solved using
an optimisation framework, an alternative approach is to decouple the decision making
process from the simulation model, develop a standalone optimisation model for it, and
then integrate the two to create a holistic model of the supply chain.

A method of incorporating an optimisation model that captures a complex planning
process within a des model of an export supply chain has been presented in this paper. A
case study of a successful implementation on the export supply chain of PT Kaltim Prima
Coal in Indonesia shows the potential benefits of taking this approach to modelling for
project evaluation and strategic mine planning purposes.

references
Zeigler, B. P., Praehofer, H. & Kim, T. G. (2000) Theory of Modeling and Simulation, Second Edition, Academic Press, San Diego, USA, pp. 176–180. [1]

Winston, W. L. (1987) Operations Research: Applications and Algorithms, Duxbury Press, Boston, USA, pp. 465–553. [2]

Hustrulid, W. & Kuchta, M. (2006) Open Pit Mine Planning & Design, Second Edition, Taylor & Francis, London, U.K., pp. 482–640. [3]

Wolsey, L. A. (1998) Integer Programming, Wiley-Interscience, New York, USA, pp. 1–19. [4]

Schrage, L. E. (2003) Optimization Modeling with Lingo, Lindo Systems Inc., Illinois, USA, pp. 197–320. [5]

Creating Mining Cuts Using
Hierarchical Clustering and
Tabu Search Algorithms

abstract
Hooman askari-nasab
Mohammad tabesh
Mohammad badiozamani
University of Alberta, Canada

Open pit mine plans define the complex strategy of displacement of ore and waste over the mine life. Various Mixed Integer Linear Programming (milp) formulations have been used for production scheduling of open pit mines. The main problem with the milp
models is the inability to solve real size mining problems. The
main objective of this study is to develop a clustering algorithm,
which reduces the number of binary integer variables in the milp
formulation of open pit production scheduling. To achieve this
goal, the blocks are aggregated into larger units referred to as
mining-cuts. A meta-heuristic approach is proposed, based on
Tabu Search (ts), to aggregate the individual blocks to larger
clusters in each mining bench, representing a selective mining
unit. The definition of similarity among blocks is based on rock
types, grade ranges, spatial location, and the dependency of
extraction of each mining-cut to the mining-cuts located above
it. The proposed algorithm is tested on synthetic data.

introduction
Approaches to solving optimisation problems are classified into three main categories: exact, heuristic, and meta-heuristic. In all three approaches, the size and complexity of the problem is a key consideration. Exact algorithms are based on mathematical programming and find the optimal solution using the various methods introduced in the literature. Since most real world problems are large scale and complicated, non-exact algorithms have come into existence; instead of looking for the optimal solution, these algorithms simply try to find a good solution within a reasonable time and with reasonable resource usage. Resource usage is a great concern when using both exact and non-exact algorithms, and different methods have been proposed for reducing the amount of resources required to solve optimisation problems.
One known large-scale problem which cannot be solved using the exact optimisation
procedures is the open pit production scheduling. The number of variables and constraints
in this problem is related to the size of the deposit and the number of blocks in the
model. Open pit mine block models usually include millions of blocks, which makes the
exact optimisation method intractable for the open pit production scheduling problem.
For the purpose of modelling open pit mine production scheduling, block models are used as the input. Production scheduling formulations contain variables, parameters and constraints regarding the extraction and processing of these blocks. The typical mathematical model used to optimise open pit mine production scheduling is Mixed Integer Programming (mip).
If blocks are considered to be large in size in order to reduce the number of blocks,
the precision required to model the overall pit slopes is lost. On the other hand, smaller
blocks may result in more variables and constraints which make the problem unsolvable
in a reasonable time. Therefore, methods have been proposed in the literature that introduce new modelling and solution procedures capable of creating the production plan while overcoming the curse of dimensionality in the aforementioned mips.
The objective of this study is to develop an algorithm based on hierarchical clustering
and Tabu Search to aggregate blocks into larger formations referred to as mining-cuts.
Blocks within the same level or mining bench are grouped into clusters based on their
attributes, spatial location, rock type, and grade distribution. Similar to blocks, each
mining-cut has coordinates representing the centre of the cut and its spatial location.
Introduction of mining-cuts into Mixed Integer Linear Programming production
scheduling models will reduce the number of variables and constraints and as a result
the production scheduling problem could be solved using current optimisers.
In the next section, the literature of production scheduling is reviewed. In the second
section, the development of the clustering methodology based on hierarchical clustering
and Tabu Search is presented, followed by the results of an illustrative example from
implementing the proposed algorithm. Then, the proposed algorithm is applied to a case
study and the results are reported, followed by the conclusion.

literature review
Martinez [1] has worked on improving the performance of a Mixed Integer Production scheduling model and has implemented his findings in a case study in Sweden. He studied an underground iron ore mine and presents a combined (short and long-term) resolution model, using Mixed Integer Programming. In order to decrease the solution time, he develops a heuristic consisting of two steps: (1) solving five sub problems and (2) solving a modified version of the original model based upon information gained from the sub problem solutions. His heuristic method results in a better solution in less time compared to solving the original milp problem.
Newman and Kuchta [2] studied the same case as Martinez [1] , an underground iron ore mine. They have designed a heuristic algorithm based on solving
a smaller and more tractable model than the original model, by aggregating time
periods and reducing the number of variables. Then, they solve the original model using
information gained from the aggregated model. For the worst case performance of this
heuristic, they compute a bound and show that their presented procedure produces good
quality results while reducing the computation time.
Ramezan [3] uses an algorithm entitled the Fundamental Tree Algorithm, which is developed based on linear programming. This algorithm aggregates blocks of material into clusters and, as a result, decreases the number of integer variables as well as the number of constraints in the mip formulation. He proposes a new Fundamental Tree algorithm for optimising the production scheduling of open pit mines. The economic benefit of the proposed
algorithm compared to existing methods is demonstrated through a case study.
Gaupp [4] presents three approaches to make the milp more tractable: (1) reducing deterministic variables to eliminate blocks from consideration in the model; (2) strengthening the model's formulation by producing cuts; and (3) using Lagrangian
techniques to relax some constraints and simplify the model. By using these three
techniques, he determines an optimal (or near-optimal) solution more quickly than
solving the original problem.
Askari-Nasab and Awuah-offei [5] have proposed two milp formulations for the long-term, large-scale open pit mine production scheduling problem. They used Fuzzy C-means clustering to aggregate the blocks in each elevation and reduce the number of blocks to a smaller number of aggregated blocks. As a result, they reduced the number of variables in the proposed milp. They have implemented the proposed milp theoretical frameworks for large-scale open-pit production scheduling.
Amaya [6] uses the concept of a local-search based algorithm in order to obtain near optimal solutions to large problems in a reasonable time. The authors describe a heuristic methodology to solve very large scheduling problems with millions of blocks. They start from a known feasible solution, which they call the incumbent. Their proposed algorithm seeks to find a solution similar to the current one with an improved objective function value. To do so, the algorithm, by means of a random search, looks for solutions that partially coincide with the incumbent. Whenever any improvement in the objective function is found, the incumbent is updated, and the process is repeated until its stopping criteria are reached.
Boland et al. [7] propose a method in which the mining and processing decisions are made based on different decision variables. They use aggregates of blocks when scheduling the mining process. Then, an iterative disaggregation method is used which refines
the aggregates up to the point where the refined aggregates result in the same optimal
solution to the relaxed lp, considering individual blocks. Then, their proposed algorithm
uses individual blocks for making decisions on processing. They have proposed several
strategies to create refined aggregates. These refined aggregates provide high quality
solutions in terms of Net Present Value (npv) in reasonable time.
From the literature review, it can be inferred that most researchers have focused on heuristic and meta-heuristic approaches to solve the mine production planning problem, which do not guarantee the optimality of the solution. There are also some publications on penalty function and Lagrangian relaxation methods to scale down the problem and make it solvable with mathematical programming approaches. Only a few studies have been done to aggregate the blocks and reduce the number of variables to make the problem tractable. Among the latest aggregation approaches employed to solve the problem is the one Askari-Nasab and Awuah-offei [5] proposed, using Fuzzy C-means clustering to make mining cuts. In the next section the theoretical framework used in developing the algorithm is reviewed.

Theoretical framework and models


Clustering

According to the Handbook of Applied Algorithms, clustering is the process of grouping together objects that are similar, and the groups formed by clustering are referred to as clusters [8] . This process is usually performed by calculating a measure of similarity (or dissimilarity) between different pairs of data. In addition to this measure, the purpose of clustering may also affect the result of the clustering process. In other words, the same set of data can be clustered in two different ways when it is to be used for different purposes. The characteristics of the data taken into account when calculating the similarity measure, as well as the clustering method used, can also lead to different results.
There have been many clustering algorithms proposed in the literature which are
based on grouping data according to their different characteristics. Some of these groups
are mentioned in this section.
Exclusive vs. Non-exclusive: an Exclusive clustering technique ensures that the resulting clusters are disjoint, in contrast with Non-exclusive algorithms, which produce overlapping clusters. Most of the current well-known algorithms belong to the first group.
Intrinsic vs. Extrinsic: clustering techniques are also divided into two categories according to whether external parameters matter. Intrinsic algorithms are unsupervised and create the clusters based on the data itself. On the other hand, where clustering is based on external sources of information, such as predetermined objects which should or should not be clustered together, the process is called Extrinsic. Intrinsic clustering algorithms are more common.
Hierarchical vs. Partitional: Hierarchical algorithms are the ones which create a sequence of partitions of the data, and they are divided into two groups. Hierarchical agglomerative algorithms start by creating clusters of a single object each and combine them in order to find the final clusters. The opposite direction is used in hierarchical divisive algorithms, where a single large cluster containing all the objects is created first and is segmented into smaller clusters at each step. Partitional clustering methods create partitions based on the similarities and dissimilarities defined between objects, such that objects that are more similar, according to the pre-determined properties, are grouped together. A common example of these algorithms is the k-means algorithm and its many extensions.
Since the focus of this paper is on a Hierarchical clustering algorithm, a review of this algorithm is presented here. According to [9] , Hierarchical clustering has the following steps:

• Start by assigning each item to a cluster, so that if you have N items, you now have N clusters, each containing just one item. Let the distances (similarities) between the clusters be the same as the distances (similarities) between the items they contain.

• Find the closest (most similar) pair of clusters and merge them into a single cluster,
so that now you have one cluster less.

• Compute distances (similarities) between the new cluster and each of the old clusters.

• Repeat steps 2 and 3 until all items are clustered into a single cluster of size N.

Penalty function

While clustering reduces the number of decision variables, the penalty function commonly eliminates some or all of the constraints from a problem. Penalty functions are defined as follows: A technique used in solving constrained optimisation problems, often used to restrict the solution search to designs that meet all criteria. As the name implies, a penalty is assigned to the figure of merit or merit function if a constraint is violated during optimisation [10] . If the objective function is considered to be a cost minimisation function, the penalty value is the additional cost applied because of the violation of each constraint.
Two different kinds of constraints can be defined. The first group is called hard constraints, which cannot be violated during the optimisation procedure. The second group, which can be violated in order to obtain better results, is called soft constraints. In contrast to soft constraints, the penalty for hard constraints is set so large that the solution procedure never violates the constraint.
The simplest way of representing the penalty function is a normalised weighted sum of the deviations from the design value of each constraint. The weights can be determined for both hard and soft constraints. This can be shown as a definition of the Figure of Merit represented by Equation (1):

FOM = Σ_{i=1..N} w_i |c_i - d_i| / d_i   (1)

Where:
d_i = the design objective of constraint i
c_i = the current value of constraint i
w_i = the weight of constraint i
N = the number of constraints

Although it seems easy and generic to implement penalty functions, they are difficult to use in practice because different penalty functions have to be determined for different problems.
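As a small illustration of the idea (not taken from [10]), the snippet below evaluates a figure of merit of the kind described above for a mix of soft and hard constraints; the constraint values and weights are invented, and the very large weight on the last constraint is what makes it behave as a hard constraint.

```python
# Figure-of-merit style penalty: normalised weighted sum of constraint deviations.
# A very large weight makes a constraint effectively "hard". All values are invented.
def figure_of_merit(constraints):
    """constraints: list of (design_value, current_value, weight) tuples."""
    return sum(w * abs(c - d) / d for d, c, w in constraints if d != 0)

constraints = [
    (100.0, 108.0, 1.0),      # soft constraint, mildly violated
    (50.0, 50.0, 1.0),        # satisfied, contributes nothing
    (10.0, 12.0, 1e6),        # "hard" constraint: huge weight deters any violation
]
print(f"penalty = {figure_of_merit(constraints):,.1f}")
```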

methodology
The mixed integer formulation of the open pit mine planning problem consists of different integer and continuous variables and different sets of constraints. One of the sets that makes the problem large in size is the order-of-extraction constraints. In order to reduce the number of constraints in this set, a clustering procedure based on the Fuzzy C-means algorithm is proposed in [5] . These clusters are developed based on similarities between blocks regarding the physical location of each block and its grade, which is taken into account as a (zero-one) ore-waste categorisation.
In the method proposed in this study, the main idea is to create clusters in a way that reduces the number of order-of-extraction constraints in the milp formulation. The number of constraints for each mining-cut is equal to the number of clusters in the upper level which are supposed to be extracted prior to the extraction of the current mining-cut. Therefore, the algorithm considers the lower level and the dependencies between clusters while creating clusters in each bench.
The clustering procedure is a two step algorithm. In the first step an initial clustering
is provided based on a similarity factor representing the spatial distance between blocks,
the similarity of rock types, grades, and also the clusters in the lower bench. In this
step, the Hierarchical clustering approach is employed. In the next step a Tabu Search
procedure is applied to modify clusters in a way that the number of dependencies between
the two levels and accordingly the number of constraints is reduced.
The idea of using penalty values has been borrowed from [11] , where a clustering algorithm called Adaptive Mean-Linkage with Penalty is introduced. This algorithm takes an object oriented view: each object is assumed to have different properties, which can vary from simple spatial locations to complicated problem-specific computational values. These values can be divided into two main groups: categorical and numerical. The only important point in comparing categorical properties is whether the properties are equal or not, as opposed to numerical properties, where the amount of difference is of importance. While calculating the dissimilarity measure in the Adaptive Mean-Linkage with Penalty algorithm, both of these types are considered and a penalty value reflecting the difference is assigned to each compared property.
In the proposed algorithm, two numerical and two categorical properties are defined
for each block. Spatial location and grade values are the properties where the amount of
difference matters in contrast to rock type and the lower bench cluster.
The final clusters created using this procedure have to have four main characteristics:

• Each cluster has to be spatially connected, which means that if two blocks are members of a cluster, there should be a set of blocks in the same cluster connecting them.

• Ore and waste blocks should be grouped together for further planning and decision making.

• Clusters consisting of one rock type are easier to deal with in scheduling phases.

• One of the most important goals of the proposed clustering algorithm is to create clusters with reference to the lower bench in order to reduce the number of technical extraction dependencies. This is achieved by defining a property called the beneath cluster for each block. In addition, in the Tabu Search, the concept of the beneath cluster is taken into account in measuring the goodness of each state.

In the first step of the procedure, one of the parameters considered in defining the similarity measure is the similarity between the clusters beneath each block. This encourages the algorithm to group together the blocks on top of the same cluster, which reduces the number of technical extraction constraints between clusters. In the next step, the Tabu Search tries to modify the boundaries of the clusters in a way that reduces the overlap between clusters.
In order to use the Tabu Search (ts) algorithm to find a good solution for clustering in each bench, an initial state solution is required to feed into the algorithm. The initial solution can be found via several means; in this paper, the Hierarchical clustering algorithm is used to generate it. In order to use the clustering algorithm, the similarity between each pair of blocks should be defined first. To do so, the following steps are considered.

Similarity definition
1. Calculate the distance matrix between all blocks in the active bench. The distance between blocks is calculated by Equation (2):

D_ij = sqrt((x_i - x_j)^2 + (y_i - y_j)^2)   (2)

2. Calculate the rock type similarity matrix for all pairs of blocks in the active bench. Rock type similarity is calculated by Equation (3), assigning zero as the similarity between two blocks with different rock types and one as the similarity between two blocks with the same rock type:

R_ij = 1 if blocks i and j have the same rock type; R_ij = 0 otherwise   (3)

3. Calculate the grade similarity matrix for all pairs of blocks in the active bench by
Equation (4) .

(4)

Where g_i and g_j are the grades of blocks i and j, and G_ij is the grade similarity between these two blocks.

4. Calculate the beneath cluster similarity matrix for all pairs of blocks in the active bench. Beneath cluster similarity is calculated by assigning a number less than one as the similarity between two blocks which lie above different clusters (in the beneath bench) and one as the similarity between two blocks which lie above the same cluster. This is represented by Equation (5):

C_ij = 1 if blocks i and j lie above the same beneath cluster; C_ij = c (with c < 1) otherwise   (5)

5. Calculate the similarity matrix between all pairs of blocks in the active bench. The similarity between blocks i and j is defined by Equation (6).

(6)

As the result, the farther the blocks are from each other, the less similarity they have. In
addition, the similarity between rock types, grades and beneath clusters of two blocks
results in higher similarity between those blocks. By having the similarity matrix, it is
possible to move forward to find out the initial state solution.
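A compact sketch of how the pairwise similarity matrix could be assembled from these factors is given below. Because Equation (6) is not reproduced here, the way the four terms are combined (attribute similarities divided by a weighted distance), the assumed form of the grade similarity and the toy block data are illustrative assumptions rather than the authors' exact formulation.

```python
# Building a pairwise similarity matrix for one bench from the four factors above.
# The grade-similarity form and the combination rule are assumptions; the distance,
# rock-type and beneath-cluster terms follow the descriptions in the text.
import math

# Each block: (x, y, rock_type, grade, beneath_cluster) -- toy data only.
blocks = [(1, 1, 1, 0.05, 1), (2, 1, 1, 0.04, 1), (3, 1, 2, 0.01, 2)]
W_D, W_G, W_R, W_C = 1.0, 1.0, 1.0, 1.0   # weights, cf. Table 2
c = 0.2                                    # beneath-cluster similarity when clusters differ

def similarity(b1, b2):
    x1, y1, r1, g1, cl1 = b1
    x2, y2, r2, g2, cl2 = b2
    distance = math.hypot(x1 - x2, y1 - y2)              # Equation (2)
    rock = 1.0 if r1 == r2 else 0.0                      # Equation (3)
    grade = 1.0 / (1.0 + abs(g1 - g2))                   # assumed form of Equation (4)
    beneath = 1.0 if cl1 == cl2 else c                   # Equation (5)
    # Assumed combination: similar attributes increase, distance decreases, similarity.
    return (W_R * rock + W_G * grade + W_C * beneath) / (1.0 + W_D * distance)

S = [[similarity(bi, bj) for bj in blocks] for bi in blocks]
print(round(S[0][1], 3), round(S[0][2], 3))   # neighbouring similar blocks score higher
```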

Hierarchical clustering algorithm:


Using the similarity matrix, the hierarchical algorithm can be described as:

1. Start by assigning each block to a cluster, so that if there are N blocks in the bench,
there are N clusters, each containing just one block. Keep the similarities between
the clusters the same as the similarities between the blocks they contain.

2. Find the most similar pair of clusters.

3. If the clusters resulting from step 2 are neighbours (have common borders), then merge them into a single cluster, so that the number of clusters becomes one less, and go to step 4. If not, ignore these clusters and return to step 2.

4. Compute similarities between the new cluster and each of the old clusters by
considering the least similarity of new cluster members.


5. Repeat steps 2 and 3 until all blocks are clustered into a pre-defined number of clusters.

The maximum number of blocks in each cluster is controlled in order to avoid clusters of very unequal size. The Hierarchical algorithm to find an initial state solution is presented as a flow chart in Figure  ➊ (left).
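A simplified sketch of this neighbour-constrained agglomerative loop is given below. The complete-linkage style update in step 4, the neighbourhood predicate and the toy similarity matrix are simplifying assumptions, not the authors' implementation.

```python
# Simplified neighbour-constrained agglomerative clustering (steps 1-5 above).
# Uses a toy similarity matrix; neighbourhood is supplied as a predicate.
def hierarchical_clusters(similarity, are_neighbours, target_clusters, max_size):
    n = len(similarity)
    clusters = [{i} for i in range(n)]                  # step 1: one block per cluster

    def cluster_similarity(a, b):
        # Step 4: least similarity between members ("complete linkage" style update).
        return min(similarity[i][j] for i in a for j in b)

    def neighbours(a, b):
        return any(are_neighbours(i, j) for i in a for j in b)

    while len(clusters) > target_clusters:
        pairs = sorted(
            ((cluster_similarity(a, b), ai, bi)
             for ai, a in enumerate(clusters) for bi, b in enumerate(clusters) if ai < bi),
            reverse=True)
        for _, ai, bi in pairs:                         # step 2: most similar pair first
            a, b = clusters[ai], clusters[bi]
            if neighbours(a, b) and len(a) + len(b) <= max_size:   # step 3 + size control
                clusters[ai] = a | b
                del clusters[bi]
                break
        else:
            break                                       # no mergeable pair remains
    return clusters

# Toy usage: four blocks on a line, only adjacent indices are neighbours.
S = [[1.0, 0.9, 0.2, 0.1], [0.9, 1.0, 0.8, 0.2], [0.2, 0.8, 1.0, 0.7], [0.1, 0.2, 0.7, 1.0]]
print(hierarchical_clusters(S, lambda i, j: abs(i - j) == 1, target_clusters=2, max_size=3))
```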

The Tabu Search (TS)


Having the initial state solution, now the solution can be improved by applying the Tabu
Search (ts). The following steps define our solution algorithm based on ts:

1. Determine the neighbours of each state through the following steps:

1.1 Calculate the number of arcs each cluster has produced (this is determined by looking at the previous bench clusters (level 0) and counting the number of arcs each cluster in the current bench (active level) has produced with respect to the clusters of level 0). Sort the clusters with respect to this number of arcs.

1.2 Choose the m clusters which have produced the most arcs (the first m clusters from the sorted list of step 1.1).

1.3 For each block in the m clusters selected in step 1.2, calculate the number of arcs that are caused by the block and sort the blocks with respect to this number.

1.4 From the top of the sorted list of step 1.3, choose n blocks which are on the borders of their cluster.

1.5 Consider all situations in which these n blocks are disconnected from their previous clusters and connected to other neighbouring clusters (there are at most three neighbouring clusters for each bordering block of a cluster). As a result, the maximum number of neighbours of each state is m × n × 3.

2. Update the goodness measure for all new clusters in the new situation as defined by Equation (7) . This is the intra-cluster similarity measure:

ICM_i = (1 / n_i) Σ_{(j,k) in cluster i} S_jk   (7)

Where
n_i = the number of block pairs in cluster i
S_jk = the similarity between blocks j and k.

State measure is presented by Equation (8) :


(8)

Where
ICM = Average of all intra-cluster similarities
N = Total number of arcs
Ws = Weight of similarity
WN = Weight of number of arcs

3. Update the candidate list by choosing the state with the maximum measure as the
new member.

4. Repeat steps 1 to 3 until the stopping criteria are met.



Stopping criteria: the number of candidate list members reaches a pre-determined number, or the best solution remains unchanged for s consecutive states. The proposed ts algorithm is presented as a flow chart in Figure  ➊ (right).

Figure 1 Algorithm flow charts.
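Finally, the improvement phase can be sketched as follows. The state measure used here (weighted average intra-cluster similarity minus a weighted arc count, echoing the quantities defined for Equations (7) and (8)), the single-block relocation moves and the short tabu list are illustrative assumptions; the full neighbourhood construction of steps 1.1 to 1.5 and the spatial contiguity checks are omitted for brevity.

```python
# Illustrative Tabu Search improvement phase: relocate blocks between clusters to
# raise a state measure that rewards intra-cluster similarity and penalises
# precedence arcs. The measure and move scheme are assumptions, not the authors' code.
def count_arcs(clusters, beneath_of):
    # One arc for every distinct beneath-cluster each cluster sits on (cf. step 1.1).
    return sum(len({beneath_of[b] for b in cluster}) for cluster in clusters)

def mean_intra_similarity(clusters, S):
    sims = [S[i][j] for c in clusters for i in c for j in c if i < j]
    return sum(sims) / len(sims) if sims else 0.0

def state_measure(clusters, S, beneath_of, w_s=1.0, w_n=0.05):
    return w_s * mean_intra_similarity(clusters, S) - w_n * count_arcs(clusters, beneath_of)

def tabu_improve(clusters, S, beneath_of, iterations=20, tabu_len=5):
    clusters = [set(c) for c in clusters]
    best_clusters = [set(c) for c in clusters]
    best_overall = state_measure(clusters, S, beneath_of)
    tabu = []                                   # recently moved (block, source) pairs
    for _ in range(iterations):
        best_move, best_score, best_state = None, float("-inf"), None
        for src, cluster in enumerate(clusters):
            if len(cluster) == 1:
                continue                        # do not empty a cluster
            for block in cluster:
                for dst in range(len(clusters)):
                    if dst == src or (block, dst) in tabu:
                        continue
                    trial = [set(c) for c in clusters]
                    trial[src].discard(block)
                    trial[dst].add(block)
                    score = state_measure(trial, S, beneath_of)
                    if score > best_score:
                        best_move, best_score, best_state = (block, src), score, trial
        if best_move is None:
            break
        clusters = best_state                   # accept the best move, even if worse
        tabu = (tabu + [best_move])[-tabu_len:] # forbid moving the block straight back
        if best_score > best_overall:
            best_overall, best_clusters = best_score, [set(c) for c in clusters]
    return best_clusters, best_overall

S = [[1.0, 0.9, 0.2, 0.1], [0.9, 1.0, 0.8, 0.2], [0.2, 0.8, 1.0, 0.7], [0.1, 0.2, 0.7, 1.0]]
print(tabu_improve([{0, 1}, {2, 3}], S, beneath_of={0: 0, 1: 0, 2: 1, 3: 1}))
```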

Numerical example
In order to show how the proposed algorithm works, the hierarchical clustering algorithm
is applied to find the initial state solution to a simple example. The block model data is
as presented in Table 1 .

Table 1 Sample of block model database

Block ID X coordinate Y coordinate Rock type Average grade Beneath block


A 1 1 1 5.00% 1
B 2 1 1 4.00% 1
C 3 1 1 1.00% 2
D 4 1 2 0.50% 2
E 5 1 2 0.00% 2
F 1 2 1 4.50% 1
G 2 2 2 3.00% 1
H 3 2 2 2.00% 1
I 4 2 2 1.00% 2
J 5 2 3 0.90% 2
K 1 3 2 4.00% 1
L 2 3 3 5.20% 1
M 3 3 3 3.00% 1
N 4 3 2 2.00% 4
O 5 3 3 0.90% 2
P 1 4 2 3.00% 3
Q 2 4 1 4.00% 3
R 3 4 3 5.50% 4
S 4 4 2 5.10% 4
T 5 4 3 4.50% 4
U 1 5 2 1.00% 3
V 2 5 1 1.50% 3
W 3 5 1 2.00% 3
X 4 5 1 2.70% 4
Y 5 5 3 3.00% 4

For the purposes of this example, the beneath bench (level 0) is assumed to be clustered as presented in Figure  ➋ (a). It is assumed that this level has 25 blocks on a regular 5×5 grid. According to the data presented in Table 1 , there are 25 blocks in the active bench, labeled A to Y. Furthermore, like the lower bench, they are assumed to be aligned in a regular 5×5 grid on the active bench. The sketches of the grade distribution and rock types are presented in Figure  ➋ (b) and ➋ (c) respectively.
As mentioned in previous sections, some parameters are required by the hierarchical clustering algorithm. These parameters are r, c, WD, WG, WR, and WC. The parameter values used in this example are presented in Table 2 .

Table 2 Parameters

WD WG WR WC r c
1 1 1 1 0.8 0.2

The similarity matrix used in the hierarchical clustering algorithm is presented in Appendix A. The sketch presented in Figure  ➌ (a) shows the initial state solution from the hierarchical clustering algorithm; as a result of applying it, five clusters are made in the last iteration. Applying the Tabu Search to improve the initial state solution results in Figure  ➌ (b). After eight Tabu Search iterations, the total number of arcs is reduced from 20 (the result of the hierarchical clustering) to 17.

Figure 2(a) beneath clusters, (b) grade distribution, (c) rock types.

Figure 3 (a) before Tabu Search, (b) after Tabu Search.

case study
The proposed algorithm is applied to a case study. In this case, there are 2598 blocks in seven benches. The block size is 50 x 25 x 15 m. The algorithm is implemented as a matlab code. The plan views of the first and sixth benches, showing the grades, rock types and clusters, are illustrated in Figure  ➍ (a) to ➍ (f). The maximum number of clusters in each bench is set to 10.

Figure 4(a) bench 1, ore-waste distribution (represents the waste). Figure 4(d) bench 6, ore-waste distribution (represents the waste).


Figure 4(b) bench 1, rock types (represents the waste type). Figure 4(e) bench 6, rock types (represents the waste type).

Figure 4(c) bench 1, clusters. Figure 4(f) bench 6, clusters.

conclusions
A meta-heuristic approach is developed, based on Tabu Search (ts), which aggregates
the blocks into larger clusters in each mining bench representing a selective mining
unit. The main objective of the developed clustering algorithm is to reduce the number
of binary integer variables in the Mixed Integer Linear Programming formulation of the open pit production scheduling problem. To achieve this goal, the blocks are aggregated into larger units referred to as mining-cuts. The similarity among blocks is defined based on rock types, grade ranges, spatial location, and the dependency of the extraction of a mining-cut on the mining-cuts located above it. The algorithm is developed and tested
on synthetic data. As future work we will fully integrate this clustering algorithm into
a milp mine production scheduling framework. The efficiency of the new platform will
be evaluated through case studies on large-scale open pit mines.


Quantifying Multi-Element
and Volumetric Uncertainty
in a Mineral Deposit

abstract
Ryan Goodfellow
Francisco Albor
Roussos Dimitrakopoulos
McGill University, Canada

Tim Lloyd
Vale Inco, Canada

Traditional geostatistical modelling of orebodies and estimation of grade-tonnage curves do not account for the uncertainty of the orebody grades and tonnages. Geological interpretations of complex shapes are often over-constrained, and therefore do not properly identify the location of the ore. In these situations, tonnage is often under-estimated and grade is over-estimated, resulting in orebody models used for mine planning that lead
to costly business decisions. This paper presents an approach
aiming to better assess the uncertainty in an orebody model. The
approach is applied to Vale Inco’s West Orebody of the Coleman
McCreedy Mine, a poly-metallic deposit containing nickel, copper,
gold, platinum and palladium. To encapsulate the orebody's
variability and uncertainty, the nickel-copper sulphide contact
is simulated using the Single Normal Equation Simulation
(snesim) method. The realisations served as the orebody models
from which the grades of multiple elements are jointly simulated
using Min/Max Autocorrelation Factors (maf). The final result is a
series of equiprobable representations of the mineralisation that
incorporates both grade and tonnage uncertainty. The case study
indicates that had conventional orebody estimations been used,
there would have been a 10% over-estimation of orebody volume,
along with significant over-estimation of low-grade material and
under-estimation of high-grade material.

introduction
In cases where drillhole spacing is particularly tight and the geological domain is continuous, deterministic mapping of geological domain boundaries (i.e. a wireframe) is likely to be sufficient to describe the mineralised volume of a deposit [1]. Conversely, deposits containing a mixture of two or more grade populations and uncertainty in the interpretation of the edges of the geological domains lead to substantial volumetric uncertainty, and stochastic treatment of the wireframes offers a suitable alternative [2].
Curvilinear geometries, typical in mineral deposits, are not properly modelled using
traditional two-point spatial statistics such as variograms [3] . The reproduction of
geometries calls for the consideration of the joint categorical variability at three or
more points at a time [4] . Strebelle [4] proposed a stochastic simulation algorithm that
does not require variogram modelling and is based on extracting the so-called Multiple
Point (mp) statistics from a training image.
Geological deposits typically contain several variables of interest that are spatially
correlated. The use of joint geostatistical techniques that maintain spatial correlation is not new [5, 6]; however, the computational cost of the simulation increases significantly with more variables and requires modelling of cross-correlations, whose number grows substantially with the number of variables being jointly simulated. A practical alternative to the ‘direct’ joint simulation of variables is the decorrelation of the variables using Principal Component Analysis, or pca [5, 7]. The effectiveness
of this approach is limited because pca does not eliminate cross-correlations at
distances other than zero. To overcome the above limitations, Minimum/Maximum
Autocorrelation Factors, maf, [8, 9] may be used to decorrelate pertinent variables into
spatially non-correlated factors that are independently simulated and back transformed
to correlated attributes.
In the following sections, a general overview of Multiple-point simulation is given,
followed by an overview of joint simulation of multiple correlated variables using
Min/Max Autocorrelation Factors. Both methods are then applied to a case study
at Vale Inco's West Orebody at the Coleman McCreedy deposit, by first simulating
the ore envelope, and subsequently simulating the grades for gold, copper, nickel,
palladium and platinum jointly within the models. Following this, the results from
the generated grade-volume curves are discussed in terms of volumetric uncertainty
and grade variability within the generated models. Finally, conclusions from this case
study are presented.

multipoint simulation of orebody models


Definitions
It is widely known that drillhole data are often too sparse to provide Multiple-point statistics that are useful for simulation. In Multiple-point geostatistics, a training
image (i.e. a geological analogue that contains features relevant to the deposit) comprised
of closely spaced data is used to infer Multiple-point statistics that are used for further
simulations. Multiple-point (mp) statistics consider a joint neighbourhood of any number
of points, n. The geometric configuration of a Multiple-point data event D, centred at node
A, is called the template, τn , of size n.

A conditional simulation algorithm for MP Geostatistics


Consider an attribute S taking K possible categorical states {sk, k = 1, ..., K}. A data event dn of size n centred at location x is defined by the set of n vectors {hα, α = 1, ..., n} and consists of a set of data values s(x + hα) = s(xα), α = 1, ..., n. The mps simulation builds on
the sequential simulation paradigm, where once simulated, a nodal value becomes
conditioning data for other nodes to be simulated.
In the snesim (Single Normal Equation Simulation) algorithm [4] , the available
conditioning data forming the data event d n is stored in a search tree. The proportions
for building the ccdf are retrieved by searching for similar data events in the search tree and reading the related frequencies. In mp geostatistics, there is no need to approximate the
use of the global conditioning data event due to the use of a training image and the exact
calculation of the probability distribution conditional to d n . For a more detailed discussion
on the snesim algorithm, the reader is referred to [4] and [10] .
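For intuition, the brute-force sketch below computes the conditional category proportions for one data event by scanning a small training image directly; snesim itself stores these counts in a search tree rather than re-scanning the image at every node, so this is only an illustration of the single normal equation idea, using invented toy data.

import numpy as np

def conditional_proportions(training_image, template, data_event, n_categories):
    # Count training-image replicates of the conditioning data event and return
    # the conditional category proportions at the centre node. template is a list
    # of (di, dj) offsets; data_event gives the known category at each offset
    # (None where no conditioning datum is available).
    counts = np.zeros(n_categories)
    ni, nj = training_image.shape
    for i in range(ni):
        for j in range(nj):
            match = True
            for (di, dj), cat in zip(template, data_event):
                if cat is None:
                    continue
                ii, jj = i + di, j + dj
                if not (0 <= ii < ni and 0 <= jj < nj) or training_image[ii, jj] != cat:
                    match = False
                    break
            if match:
                counts[training_image[i, j]] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts

ti = np.random.default_rng(0).integers(0, 2, size=(50, 50))   # toy binary training image
template = [(0, 1), (1, 0), (0, -1), (-1, 0)]                  # four-node template
print(conditional_proportions(ti, template, [1, 1, None, 0], 2))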

joint simulation of correlated variables with minimum/maximum autocorrelation factors
Consider a multivariate, p-dimensional, Gaussian, stationary and ergodic spatial random function Z(x) = [Z1(x), ..., Zp(x)]^T. Minimum/Maximum Autocorrelation Factors are defined as the p orthogonal linear combinations of the original multivariate vector Z(x). maf are derived assuming that Z(x) is represented by a two-structure linear model of coregionalisation [7]. The maf transformation can be rewritten as

M(x) = A_MAF^T Z(x)    (1)

and the maf factors are derived from

A_MAF = Q1 Λ1^{-1/2} Q2    (2)

where the eigenvectors Q1 and eigenvalues Λ1 are obtained from the spectral decomposition of the multivariate covariance matrix B of Z(x) at zero lag distance. More specifically,

B = Q1 Λ1 Q1^T    (3)

and Q2 is the matrix of eigenvectors from the spectral decomposition

(1/2) [ΓY(Δ) + ΓY(Δ)^T] = Q2 Λ2 Q2^T    (4)

where the matrix ΓY(Δ) is an asymmetric variogram matrix at lag distance Δ for the regular pca factors Y(x) = Z(x)A, where A = Q1Λ1^{-1/2}.
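A compact numerical sketch of the two-stage decomposition in Equations (1) to (4) is given below, assuming the normal-score data are arranged as an n x p array and that sample pairs separated by the chosen lag distance have been pre-computed; the toy data and the pairing rule used here are illustrative only.

import numpy as np

def maf_loadings(Z, pairs_at_lag):
    # Z: (n, p) normal-score data; pairs_at_lag: (head, tail) index arrays of
    # sample pairs separated by the chosen lag distance.
    B = np.cov(Z, rowvar=False)                   # covariance at zero lag, Eq. (3)
    lam1, Q1 = np.linalg.eigh(B)
    A = Q1 @ np.diag(lam1 ** -0.5)                # pca sphering, Y = Z A
    Y = Z @ A
    h, t = pairs_at_lag
    d = Y[h] - Y[t]
    gamma = 0.5 * (d.T @ d) / len(h)              # variogram matrix of Y at the lag
    gamma = 0.5 * (gamma + gamma.T)               # symmetrised form used in Eq. (4)
    _, Q2 = np.linalg.eigh(gamma)
    return A @ Q2                                 # A_MAF = Q1 Lambda1^(-1/2) Q2, Eq. (2)

rng = np.random.default_rng(1)
Z = rng.standard_normal((500, 5))                 # toy "normal-score" data, 5 elements
pairs = (np.arange(0, 499), np.arange(1, 500))    # neighbouring samples as lag pairs
A_maf = maf_loadings(Z, pairs)
M = Z @ A_maf                                     # maf factors, as in Eq. (1)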
The conditional simulation of the Gaussian random function Y(x) is based on the
decomposition of the multivariate probability density function of a stationary and
ergodic random function to a product of local conditional distributions. The Generalised
Sequential Gaussian Simulation algorithm (gsgs) [11, 12] is used to simulate the N nodes
in a domain D, and is performed as follows:

• Define a random path visiting each group of N p nodes to be simulated. For maf
simulations, the random path must be the same for each of the factors.


• At each group of nodes, use Equation (7) to generate simulated values and add the
values to the data set. For maf simulations, the random seed to simulate the factors
at each node must be distinct.

• Go to the next group of nodes and repeat the previous two steps.

• Loop until all groups of nodes have been visited and all N nodes are simulated.

geology of the west orebody


Vale's West Orebody (wob), in Sudbury, Ontario, Canada, is located 1,463 metres
west of the Lower Coleman Orebody. It has a known strike length of 213 m. The ore
thickness ranges from 1.5 to 46 m with an average of 27 m. The upper portion of the
orebody dips steeply to the south at 80° before flattening to 40° at depth. The orebody
is composed of massive sulphide, stringer sulphide, and weaker disseminations within
granite breccia zones enclosed in the host footwall rocks. The host rocks are granite
gneiss, granite breccia and Sudbury breccia.

simulation of the ore envelope


TI and hard data
The snesim algorithm is used to simulate the shape of the wob orebody. A conventional
deterministic orebody model is used as a training image. This model comprises 31 x 84 x 80
blocks with dimensions of 6.1 x 4.6 x 4.6 m3. The hard data used for the simulations is
obtained from 15,319 drillhole samples composited over 1.52-metre lengths.

Orebody simulation, specifics and post-processing


To assess the geological uncertainty in terms of volume and tonnage, 20 realisations
of the orebody are generated using the snesim algorithm [4, 13] . Since this is a single
orebody, connectivity for the blocks is expected. In this application, the snesim algorithm
found it difficult to retrieve sufficient information related to the connectivity and
orientation of geological features in the orebody in order to guarantee realisations that
consisted of a single completely connected orebody. One solution employed to overcome
this issue was to increase the number of nodes in the template, thus providing more
information to a node being simulated. This, however, raised the computational costs for
the simulations. Nevertheless, the connectivity issue was not completely overcome, and
the realisations required post-processing to remove randomly allocated ore blocks in the
model. The “cleaned” realisations are referred to simply as realisations in the subsequent
sections.

Validation
Figure  ➊ shows how the simulated realisations reproduce the large-scale features of
the training image, such as the steep dip of the upper portion of the orebody
and its flattening with depth. The algorithm [10] allows the realisations to reproduce
the target waste proportion of 93.2%; the waste proportions of the twenty realisations
fluctuate between 93.2% and 94.1% with a mean of 93.9%. Although the snesim
algorithm requires no variogram modelling, the realisations reproduce the variogram
of the training image (Figure  ➋).

Figure 1 Cross-sections of the training image, cleaned maximum, median and minimum simulated models
(in terms of volume) of the deposit, from left to right.

[Plot: variogram reproduction for the snesim realisations; gamma(h) versus distance (m), comparing the simulations with the training image.]

Figure 2 Variogram reproduction of the realisations.

Assessing volumetric uncertainty


To assess the grade-tonnage uncertainty present in the modelling of the deposit, the
realisations generated by snesim are used as distinct orebody models from which nickel,
copper, gold, palladium and platinum are jointly simulated. The frequency distribution,
in terms of the percentage total volume of ore in the simulation compared to the volume
of ore in the training image, is shown in Figure   ➌. It is noted that the volume of the
ore blocks of the training image (a conventional deterministic wireframe generated by a
geologist) is approximately 10% higher than the volume of the simulations generated by
snesim. This highlights the need for the generation of multiple objective orebody models
when assessing a mineral deposit. For the purpose of this study, the simulated models
with the maximum, median and minimum ore volumes are retained for further joint
multi-element simulation, and are referred to as the Max, Med and Min orebody models.
Figure ➊ shows sample cross-sections of the selected orebody models.

Figure 3 Frequency distribution of the volume of ore for the SNESIM realisations
as a percentage of the volume of ore blocks in the training image.


joint simulation of nickel, copper, gold, palladium and platinum
Normal score transformation
Similar to traditional orebody estimation methods, the drillholes are filtered for each
orebody model, such that only the drillholes inside the model are kept. Prior to applying
the maf transformations to the spatially correlated variables, each of the elements is
transformed to a normal distribution. Normal score transformations are based on rank
ordering of the data and decrease the influence of outliers.
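As an aside, a rank-based normal score transform of this kind can be sketched as follows; the half-rank correction used below is one common convention and not necessarily the one applied in this study.

import numpy as np
from scipy.stats import norm

def normal_score_transform(values):
    # Map each datum to the standard normal quantile of its cumulative rank.
    values = np.asarray(values, dtype=float)
    ranks = values.argsort().argsort() + 1        # ranks 1..n
    p = (ranks - 0.5) / len(values)               # avoid probabilities of 0 and 1
    return norm.ppf(p)

grades = [5.0, 4.0, 1.0, 0.5, 0.0, 4.5]
print(np.round(normal_score_transform(grades), 2))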

The transformation matrix A_MAF (Equation (2)) for the median orebody is shown in Table 1.
MAF are calculated by multiplying the vector of Cu, Ni, Au, Pd and Pt by a vector of loadings
from the rows of the transformation matrix. The ∆ lag in Equation (4) used in this example
is nine metres, and is derived experimentally by testing several lag distances to assure a
suitable decorrelation and stable maf decomposition.

Table 1 MAF loading factors for Med. orebody model

Au Cu Ni Pd Pt
MAF1 0.56 -1.99 1.96 0.20 -0.62
MAF2 1.08 0.01 -0.70 1.58 -1.43
MAF3 0.24 -0.75 -0.39 -2.30 3.43
MAF4 0.02 0.56 0.92 -0.73 0.18
MAF5 -0.50 -0.50 -0.68 2.50 -0.60

Variography of MAF
The variogram models used for simulating each of the maf factors in the median orebody
model are presented in Table 2 . Each of the variogram models was assumed to contain a
nugget and a spherical structure. It is noted that maf variograms are linear combinations
of the variograms of the original (normal score) variables; the variogram models of the
maf factors do not necessarily correspond to the variography of the original variables.

Table 2 Variogram parameters for Med. MAF factors

Variogram   Major (x) AZ / DIP   Intermediate (y) AZ / DIP   Minor (z) AZ / DIP
MAF1        90° / 0°             0° / 0°                     0° / 90°
MAF2        135° / 0°            45° / -23°                  45° / 168°
MAF3        135° / 0°            45° / -68°                  45° / 68°
MAF4        135° / 0°            45° / 45°                   45° / -45°
MAF5        225° / 45°           135° / 0°                   225° / -45°

Conditional simulation of MAF


Conditional simulations for each of the factors in each of the orebody models are performed independently. The simulations are generated inside the orebody models using 6.1 x 4.6 x 4.6 m3 blocks and a 4 x 3 x 3 node discretisation density, resulting in 502,524, 446,940 and 436,536 nodes within the limits of the maximum, median and minimum simulated orebody models, respectively. Ten simulations for each of the factors are generated for each of the three simulated orebody models and are validated in detail for reproduction of data, histograms and variograms on the point support.
Figure   ➍ shows an example cross-variogram for the simulations from the median
model; it is noted that there is negligible correlation at all lags, thus the variables are
being simulated independently. The remaining cross-variograms are very similar to
the one presented, thus are omitted. A validation of variograms, cross-variograms and
histograms is presented only for the data space in the subsequent section.

Figure 4 Example cross-variogram between MAF1 and MAF2 for all simulations from the median orebody.

Back transformation of MAF and validation of results


The realisations of maf were transformed back to simulated normal score variables by
multiplying a column vector of simulated maf in each grid node with the corresponding
inverse matrix of the maf loadings (e.g. Table 1 for the median orebody model). The normal
score Ni, Cu, Au, Pd and Pt realisations are subsequently back-transformed to the data space.
Validation of the jointly simulated variables involves calculation of histograms,
experimental variograms and cross-variograms of the simulated point realisations in
the data space to ensure reproduction of original data and their spatial characteristics.
Figure  ➎ shows some samples of the variograms and cross-variograms for the original
drillhole data and conditional simulations. All results suggest that the reproduction
of the original data spatial characteristics by the simulated realisations is reasonable.
Recall that the variograms and cross-variograms of original variables are not directly
used in the joint simulation based on maf, which used the variograms of the independent
maf. Figure   ➏ shows the reproduction of the histograms of the 10 simulations and
the original drillhole data for Au, Cu, Ni, Pd and Pt in the median orebody. All graphs
indicate that the simulations reproduce the initial drillhole data well. Figure ➐ shows sample cross-sections of nickel simulated inside the Min, Med and Max orebody models. By inspection, the grades are similar in all of the models; however, there are some fairly significant differences in the shapes of the models, which highlights the benefit of generating a series of equiprobable orebody models to obtain a better understanding of volumetric variations.


Figure 5 East-West variogram and cross-variogram reproduction for the median orebody (note that the heavy lines are the variograms of the drillholes).

Figure 6 Histogram reproduction of the 10 simulations and original drillholes for Ni, Cu, Au, Pd, Pt in the
median orebody model.

Figure 7 Cross-sections of a sample nickel simulations for the Max, Med, Min orebody models (left - right).

risk analysis for decision making


To better understand the relationship between volumes and grades for the deposit, Figure ➑ shows how the volume above a nickel cutoff grade changes with the cutoff for the 6.1 x 4.6 x 4.6 m3 blocks. It is noted that the volumes above a cutoff for the Min and Med orebodies are very close, and it is quite difficult to distinguish between the sets of
simulations. The volumes above a certain cutoff grade are significantly higher for the
Max orebody simulations, which is to be expected given that the Max model has a 10%
larger volume than the Med model (whereas the Min model is only 2% smaller than the
Med model). This trend is particularly apparent for low cutoff grades for all of the metals;
at higher cutoffs, all of the curves for the Min, Med and Max orebodies converge.
As a basis for comparison, the nickel grade is estimated within the training image (a
conventional deterministic orebody model) using ordinary kriging (see ok Conventional
Model in Figure  ➑). The volume-cutoff curve for this model highlights the importance of
incorporating both volumetric and grade uncertainty into the geology models. Neglecting
to incorporate the volumetric uncertainty in this case would lead to a significant
overestimation of the orebody volume, while conventional grade estimation methods
would lead to a model that overestimates the low-grade material and underestimates
the high-grade material. It is clear that using the traditional geostatistical methods,
comprised of creating a subjective and deterministic orebody model and estimating
within that model may lead to costly business decisions in the mine planning phase.

Figure 8 Volume above Ni cutoff grade curves for the simulated orebodies with 6.1 x 4.6 x 4.6 m 3 blocks.


conclusions
This paper presents an approach to assess the grade and volumetric variability of a
mineral deposit. snesim was first used to simulate a series of objective equiprobable
models. Multiple spatially correlated elements were then simulated within the orebody
boundaries using Min/Max Autocorrelation Factors. This methodology is applied to
simulate nickel, copper, gold, palladium and platinum at Vale Inco's West Orebody at
the Coleman McCreedy deposit, Sudbury, Ontario, Canada. In the given case study, it was
discovered that, had conventional orebody modelling methods been used, there would have been a 10% over-estimation of ore volume, a significant overestimation of low-grade material and an
underestimation of high-grade material. With a series of representations that encapsulate
the volume and grade variability, mining engineers can make better informed decisions
that could ultimately lead to significant cost savings.

acknowledgements
The support of the cosmo Lab and its industry members AngloGold Ashanti, Barrick, BHP
Billiton, De Beers, Newmont, Vale and Vale Inco, as well as nserc, the Canada Research
Chairs Program and cfi is gratefully acknowledged.

references
Srivastava, R. M. (2005) Probabilistic Modeling of Ore Lens Geometry: An Alternative to Deterministic
Wireframes. Mathematical Geology, Vol. 37 (5), pp. 513–544. [1]

Dimitrakopoulos, R. & Foneca, M. (2003) Assessing risk in grade-tonnage curves in a complex copper deposit,
northern Brazil, based on an efficient joint simulation of multiple correlated grades. apcom, Capetown,
pp. 373–382. [2]

Journel, A. G. (2007) Roadblocks to the evaluation of ore reserves—The simulation overpass and putting more
geology into numerical models of deposits, The Australasian Institute of Mining and Metallurgy,
Spectrum Series, Vol. 14, 2nd Edition, pp. 103–110. [3]

Strebelle, S. (2002) Conditional Simulation of Complex Geological Structures using Multiple-point Statistics.
Mathematical Geology, Vol. 34 (1), pp. 1–21. [4]

David, M. (1988) Handbook of applied advanced geostatistical ore reserve estimation. Elsevier, Amsterdam,
p. 232. [5]

Goovaerts, P. (1998) Geostatistics for Natural Resources Evaluation. Oxford University Press, USA, p. 496. [6]

Wackernagel, H. J. (1995) Multivariate Geostatistics. Springer, Berlin, p. 403. [7]

Desbarats, A. J. & Dimitrakopoulos, R. (2000) Geostatistical simulation of regionalized pore-size distributions


using min/max autocorrelation factors. Mathematical Geology, Vol. 32(8), pp. 919–942. [8]

Boucher, A. & Dimitrakopoulos, R. (2009) Block simulation of multiple correlated variables. Mathematical
Geosciences, Vol. 41(2), pp. 215–237. [9]

Remy, N., Boucher, A. & Wu, J. (2009) Applied Geostatistics with SGeMS: a User's Guide. Cambridge:
Cambridge University Press, p. 284. [10]

Dimitrakopoulos, R. & Luo, X. (2004) Generalized sequential Gaussian simulation on group size v and screen-
effect approximations for large field simulations. Mathematical Geology, Vol. 36 (5), pp. 567–591. [11]

Benndorf, J. & Dimitrakopoulos, R. (2007) New efficient methods for conditional simulation of large
orebodies. Orebody modelling and strategic mine planning, The Australasian Institute of Mining
and Metallurgy, Spectrum Series, Vol. 14, 2nd Edition, pp. 103–110. [12]

Osterholt, V. & Dimitrakopoulos, R. (2007) Simulation of wireframes and geometric features with Multiple-
point techniques: Application at Yandi iron ore deposit. Orebody modelling and strategic mine
planning, The Australasian Institute of Mining and Metallurgy, Spectrum Series, Vol. 14, 2nd
Edition, pp. 103–110. [13]
Stochastic Mine Planning
Optimisation: New Concepts,
Applications and Financial
Contribution

abstract
Roussos Dimitrakopoulos
McGill University, Canada

Conventional approaches to estimating reserves, optimising mine planning and production forecasting result in single, often
biased forecasts. This is largely due to the non-linear propagation
of errors in understanding orebodies throughout the chain of
mining. A new mine planning paradigm is considered herein,
integrating two elements: stochastic simulation and stochastic
optimisation. These elements provide an extended mathematical
framework that allows modelling and direct integration of orebody
uncertainty to mine design, production planning, and valuation
of mining projects and operations. This stochastic framework
increases the value of production schedules in the order of 25%.
Case studies also show that stochastic optimal pit limits can
be about 15% larger in terms of total tonnage when compared to
the conventional optimal pit limits. At the same time the net
present value is about 10% higher than that reported above from
stochastic production scheduling within the conventionally
optimal pit limits. Results suggest a potential new contribution
to the sustainable utilisation of natural resources.

introduction
Geostatistical estimation methods have long been used to model the spatial distribution
of grades and other attributes of interest within the mining blocks representing a deposit.
These estimated models of orebodies serve as input to mine planning optimisation [17]
and are used to estimate reserves [2] . The main drawback of estimation techniques is
that they are unable to reproduce the in-situ variability of the deposit grades, as inferred
from the available data. Ignoring such a consequential source of risk and uncertainty may
lead to unrealistic production expectations (e.g. in [3] ). Figure ➊ shows an example
of unrealistic expectations in a relatively small gold deposit. In this example [3] , the
smoothing effect of estimation methods generates unrealistic expectations of Net Present
Value (npv), along with ore production performance, pit limits and so on, in the mine's
design. The figure shows that if the conventionally constructed open pit design is tested
against equally probable simulated scenarios of the orebody, its performance will probably
not meet expectations and the conventionally expected npv of the mine has a 2% to 4%
chance to materialise, while it is expected to be about 25% less than forecasted. Note that
in a different example, the opposite could be the case.
For over a decade now, dealing with uncertainty in the spatial distribution of attributes
of a mineral deposit and its implications to downstream studies, planning, valuation
and decision-making, a different framework than the traditional has been suggested
and is outlined in Figure ➋. Instead of a single orebody model as an input to planning
optimisation and a correct assessment of individual key project indicators, a set of models of
the deposit can be used. These models are conditional to the same available data and their
statistical characteristics and are all constrained to reproduce all available information
and represent equally probable models of the actual spatial distribution of grades. The
availability of multiple equally probable models of a deposit enables mine planners to
assess the sensitivity of pit design and long-term production scheduling to geological
uncertainty [7, 9] and, more importantly, empowers them to produce mine designs
and production schedules with substantially higher npv assessments through stochastic
optimisation. Figure ➌ shows an example from a major gold mine [8], where a stochastic
approach leads to a marked improvement of 28% in npv over the life of the mine, compared
to the standard best practices employed at the mine; note that the pit limits used are the
same in both cases and are conventionally derived through commercial optimisers [17]. The
same study also shows that the stochastic approach leads to substantially lower potential
deviation from production targets, that is, reduced risk. A key contributor to these substantial differences is that the stochastic or risk-integrating approach can distinguish the ‘upside potential’ of the metal content, and thus the economic value of a mining block, from its downside risk, and treat them accordingly, as further discussed herein.

Figure 1 Optimisation of mine design in an open pit gold mine.



Figure ➊ shows npv versus 'pit shells' and risk profile of the conventionally optimal design.

Figure 2 Traditional (deterministic or single model) view and practice versus risk-integrating (or stochastic) approach to mine modelling.

Figure ➋ applies from reserves to production planning and life-of-mine scheduling and assessment of key project indicators.

Figure 3 The stochastic life-of-mine schedule in this large gold mine has a 28% higher
value than the best conventional (deterministic) one. All schedules are feasible.

Figure ➋ represents an extended mine planning framework that is stochastic and encompasses the spatial stochastic model of geostatistics with that of stochastic
optimisation for mine design and production scheduling. Simply put, in a stochastic
mathematical programming model developed for mine optimisation, the related
coefficients are correlated random variables that represent the economic value of each
block being mined in a deposit, which are in turn generated from considering different
realisations of metal content. Note that the second key element of the risk-integrating
approaches is stochastic simulation; the reader is referred to [14] for the description
of a new general method for high-order simulations of complex geological phenomena.
To elaborate on the above, the next sections examine a key element in the risk-
integrating framework shown in Figure ➋, namely, stochastic optimisation. The
latter optimisation is presented in two approaches, one based on the technique of
simulated annealing and a second based on stochastic integer programming. Examples
follow demonstrating the practical aspects, including monetary benefits, of stochastic
mine modelling.


Stochastic optimisation in mine design and production scheduling


Mine design and production scheduling for open pit mines is an intricate, complex and
difficult problem to address due to its large scale and uncertainty in the key parameters
involved. The objective of the related optimisation process is to maximise the total net
present value of the mine plan. One of the most significant parameters affecting the
optimisation is the uncertainty in the mineralised materials (resources) available in
the ground, which constitutes an uncertain supply for mine production scheduling.
A set of simulated orebodies provides a quantified description of the uncertain supply.
Two stochastic optimisation methods are summarised in this section. The first is based
on simulated annealing [1, 8, 10] ; and the second on stochastic integer programming
[11, 13, 15, 16] .

Production scheduling with simulated annealing


Simulated annealing is a heuristic optimisation method that integrates the iterative
improvement philosophy of the so-called Metropolis algorithm with an adaptive divide
and conquer strategy for problem solving [6] . When several mine production schedules
are under study, there is always a set of blocks that are assigned to the same production
period throughout all production schedules; these are referred to as the certain or
100% probability blocks. To handle the uncertainty in the blocks that do not have 100%
probability, simulated annealing swaps these blocks between candidate production
periods so as to minimise the average deviation from the production targets for N mining
periods and for a series of S simulated orebody models, that is

O = Σ_{n=1}^{N} [ (1/S) Σ_{s=1}^{S} |θo*(n) - θo,s(n)| + (1/S) Σ_{s=1}^{S} |θw*(n) - θw,s(n)| ]    (1)

where θo*(n) and θw*(n) are the ore and waste production targets for period n, respectively, and θo,s(n) and θw,s(n) represent the actual ore and waste production of the perturbed mining sequence for simulated orebody model s.
Each swap of a block is referred to as a perturbation. The probability of acceptance or
rejection of a perturbation is given by:

Prob{accept} = 1, if Onew ≤ Oold; exp[ -(Onew - Oold)/T ], otherwise    (2)

This implies that all favourable perturbations are accepted with probability one and unfavourable perturbations are accepted based on an exponential probability
distribution, where T represents the annealing temperature.
The steps of this approach are depicted in Figure ➍ and are: (a) define ore and waste
mining rates; (b) define a set of nested pits as per the Whittle implementation [17] of
the Lerchs-Grossmann algorithm [12] or any pit parameterisation; (c) use a commercial
scheduler to schedule a number of simulated realisations of the orebody given (a) and
(b); (d) employ simulated annealing as in Equation (2) using the results from (c) and
a set of simulated orebodies; and (e) quantify the risk in the resulting schedule and key
project indicators using simulations of the related orebody.
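A minimal sketch of the perturbation and acceptance logic of Equations (1) and (2) is given below; it tracks only ore and waste block counts per period over a handful of simulated orebody models, uses invented data, and omits slope and other feasibility constraints, so it is an illustration rather than the full procedure.

import math
import random

def deviation(schedule, ore, ore_target, waste_target, n_periods):
    # Average absolute deviation from ore and waste targets over the simulated
    # orebody models, as in Equation (1); ore[s][b] is 1 if block b is ore in model s.
    total = 0.0
    for s in range(len(ore)):
        for t in range(n_periods):
            blocks = [b for b, per in enumerate(schedule) if per == t]
            ore_t = sum(ore[s][b] for b in blocks)
            waste_t = len(blocks) - ore_t
            total += abs(ore_t - ore_target[t]) + abs(waste_t - waste_target[t])
    return total / len(ore)

def anneal(schedule, ore, ore_target, waste_target, n_periods, temp=1.0, n_iter=2000):
    # Swap blocks between periods; accept moves with the Metropolis rule of Equation (2).
    obj = deviation(schedule, ore, ore_target, waste_target, n_periods)
    for _ in range(n_iter):
        b = random.randrange(len(schedule))
        old = schedule[b]
        schedule[b] = random.randrange(n_periods)               # perturbation
        new_obj = deviation(schedule, ore, ore_target, waste_target, n_periods)
        if new_obj <= obj or random.random() < math.exp((obj - new_obj) / temp):
            obj = new_obj                                       # accept
        else:
            schedule[b] = old                                   # reject and restore
    return schedule, obj

random.seed(0)
ore = [[1, 1, 0, 0], [1, 0, 1, 0]]                              # 2 simulations x 4 blocks
print(anneal([0, 0, 1, 1], ore, [1, 1], [1, 1], 2))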

Figure 4 Steps needed for the stochastic production scheduling with simulated annealing.

S1... Sn are realisations of the orebody grade through a sequential simulation algorithm;
Seq1... Seqn are the mining sequences for each of S1... Sn. Mining rates are input to the process.

Stochastic integer programming for mine production scheduling


Stochastic Integer Programming (sip) provides a framework for optimising mine
production scheduling considering uncertainty [5] . A specific sip formulation is briefly
shown here that generates the optimal production schedule using equally probable
simulated orebody models as input, without averaging the related grades. The optimal
production schedule is then the schedule that can produce the maximum achievable
discounted total value from the project, given the available orebody uncertainty described
through a set of stochastically simulated orebody models. The proposed sip model allows
the management of geological risk in terms of not meeting planned targets during actual
operation. This is unlike the traditional scheduling methods that use a single orebody
model and where risk is randomly distributed between production periods while there
is no control over the magnitude of the risks on the schedule.
The general form of the objective function is expressed as:

Max Σ_{t=1}^{p} [ Σ_{i=1}^{n} E{(NPV)_i^t} b_i^t - Σ_{s=1}^{S} ( c_{gu}^t d_{s,gu}^t + c_{gl}^t d_{s,gl}^t + c_{ou}^t d_{s,ou}^t + c_{ol}^t d_{s,ol}^t ) ]    (3)

where p is the total number of production periods, n is the number of blocks, and b_i^t is the decision variable for when to mine block i (if block i is mined in period t, b_i^t is one; otherwise b_i^t is zero). The c
variables are the unit costs of deviation (represented by the d variables) from production
targets for grades and ore tonnes. The subscripts u and l correspond to the deviations and
costs from excess production (upper bound) and shortage in production (lower bound),
respectively, while s is the simulated orebody model number, and g and o are grade and ore
production targets.
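To show the structure of such a formulation, the toy sketch below uses the open-source PuLP modeller with invented data; it keeps only ore-tonnage deviation variables with a single pair of penalty costs and a mine-at-most-once constraint, so it is a much-reduced stand-in for Equation (3) rather than the authors' model.

from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

blocks, periods, sims = range(4), range(2), range(2)
npv = {(i, t): [10, 8, 6, 4][i] / (1.1 ** t) for i in blocks for t in periods}    # E{NPV} per block and period
ore = {(s, i): [[1, 1, 0, 0], [1, 0, 1, 0]][s][i] for s in sims for i in blocks}  # ore indicator per simulation
ore_target, c_dev = 1, 5.0                                     # ore blocks per period, deviation cost

m = LpProblem("sip_schedule", LpMaximize)
b = LpVariable.dicts("b", [(i, t) for i in blocks for t in periods], cat=LpBinary)
du = LpVariable.dicts("du", [(s, t) for s in sims for t in periods], lowBound=0)
dl = LpVariable.dicts("dl", [(s, t) for s in sims for t in periods], lowBound=0)

# Objective: discounted value minus penalised deviations from the ore target
m += lpSum(npv[i, t] * b[i, t] for i in blocks for t in periods) \
     - lpSum(c_dev * (du[s, t] + dl[s, t]) for s in sims for t in periods)

for i in blocks:                                               # mine each block at most once
    m += lpSum(b[i, t] for t in periods) <= 1
for s in sims:                                                 # deviations defined per simulation
    for t in periods:
        m += lpSum(ore[s, i] * b[i, t] for i in blocks) - ore_target <= du[s, t]
        m += ore_target - lpSum(ore[s, i] * b[i, t] for i in blocks) <= dl[s, t]

m.solve()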

Figure 5 Graphic representation of the way the second component of the objective function in Equation (3) minimises the deviations from production targets while optimising scheduling.


This leads to schedules in which the potential deviations from production targets are minimised, that is, schedules that seek to mine early not only high-grade mining blocks but also blocks with a high probability of being ore.
Note that the cost parameters in Equation (3) are discounted by time using the
geological risk discount factor developed in [4] . The Geological Risk Discount Rate (grd)
allows the management of risk to be distributed between periods. If a very high grd is
used, the lowest risk areas in terms of meeting production targets will be mined earlier
and the most risky parts will be left for later periods. If a very small grd or a grd of zero
is used, the risk will be distributed at a more balanced rate among production periods
depending on the distribution of uncertainty within the mineralised deposit. The c
variables in the objective function (Equation (3)) are used to define a risk profile for the
production, and the npv produced is the optimum for the defined risk profile. It is considered that if the expected deviations from the planned ore tonnage, grade and quality are high in actual mining operations, the npv of the planned schedule is unlikely to be achieved. Therefore, the sip model contains
the minimisation of the deviations together with the npv maximisation to generate
practical and feasible schedules and achievable cash flows. For details please see [5, 16] .
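One common way to implement this discounting, assuming c^0 denotes the undiscounted unit deviation cost, is c^t = c^0 / (1 + grd)^t; for example, with grd = 20% a unit cost of 10 applied to a deviation in period 1 becomes about 8.3, while the same deviation in period 5 costs only about 4.0, so risky material tends to be pushed towards later periods.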

The value of the stochastic framework


The example discussed herein shows long-range production scheduling with both the simulated annealing approach and the sip model presented above. The application is at a copper deposit comprising 14,480 mining blocks. The scheduling considers an ore capacity of 7.5 M tonnes per year and a maximum mining capacity of 28 M
tonnes. All results are compared to the industry's ‘best practice’ that is, conventional
schedule using a single estimated orebody model and Whittle's approach [17] .

Simulated annealing and production schedules


The results for simulated annealing and the method in Equation (2) are summarised in Figures ➏ to ➒. The risk profiles for npv, ore tonnages and waste production are shown in Figures ➏ to ➑, respectively. Figure ➒ compares the stochastic schedule with the
equivalent best conventional practice and reports a difference in the order of 25% in terms
of higher npv for the stochastic approach.

Figure 6 Risk based LOM production schedule (cumulative NPV risk profile).

Figure 7 Risk based LOM production schedule (ore risk profile).

Figure 8 Risk based LOM production schedule (waste risk profile).

Figure 9 NPV of conventional and stochastic (risk based) schedules and corresponding risk profiles.


SIP and production schedules


The application of the sip model in Equation (3) using pit limits derived from the
conventional optimisation approach forecasts an expected npv of about $238 M. When compared to the equivalent traditional approach and related forecast, the stochastic framework contributes about 25% additional npv to the
project. Note that unlike simulated annealing, the scheduler decides the optimal waste
removal strategy, which is the same as the one used in the conventional optimisation
with which we compare.

Figure 10 Cross-sectional views of the SIP (bottom) and traditional schedule (TS - top) for a copper deposit.

Figure 10 shows a cross-section of the two schedules from the copper deposit; one
obtained using the sip model (bottom) and the other generated by a traditional method
(top) using a single estimated orebody model. Both schedules shown are the raw outputs
and need to be smoothed to become practical. It is important to note that (i) the results
in the second case study are similar in % improvement compared to other stochastic
approaches such as simulated annealing and (ii) although the schedules compared in the
studies herein are not smoothed out, other existing sip applications show that the effect
of generating smooth and practical schedules has marginal impact on the forecasted
performance of the related schedules, thus the order of improvements in sip schedules
reported here remains.

Stochastically optimal pit limits


The previous comparisons were based on the same pit limits deemed optimal using best
industry practice [17] . This section focuses on the value of the proposed approaches with
respect to stochastically optimal pit limits. Both methods described above consider larger
pit limits and stop when discounted cash flows are no longer positive. Figures 11 and 12
show some of the results. The stochastically generated optimal pit limits contain about 15% additional tonnage when compared to the traditional (deterministic) ‘optimal’ pit limits, add about 10% to the npv reported above from stochastic production scheduling within the conventionally optimal pit limits, and extend the life of mine. These are substantial differences for a mine of a relatively small size and short life of mine. Further work shows further improvements on all aspects when a stochastic framework is used.

Figure 11 LOM cumulative cash flows for the conventional approach, simulated annealing
and SIP; and comparison to results from conventionally derived optimal pit limits.

Figure 12 Stochastic pit limits are larger than the conventional ones and physical
scheduling differences are expected when bigger pits are generated.

conclusions
Starting from the limits of the current orebody modelling and life-of-mine planning
optimisation paradigm, an integrated risk-based framework has been presented. This
framework extends the common approaches in order to integrate both stochastic
modelling of orebodies and stochastic optimisation in a complementary manner. The
main drawback of estimation techniques and traditional approaches to planning is that
they are unable to account for the in-situ spatial variability of the deposit grades; in fact,
conventional optimisers assume perfect knowledge of the orebody being considered.
Ignoring this key source of risk and uncertainty can lead to unrealistic production
expectations as well as suboptimal mine designs.
The work presented herein shows that the stochastic framework adds value to production schedules in the order of 25%, independently of which of the two methods
presented is used. Furthermore, stochastic optimal pit limits are shown to be about 15%
larger in terms of total tonnage, compared to the traditional (deterministic) optimal
pit limits. This difference extends the life-of-mine and adds approximately 10% of Net
Present Value (npv) to the npv reported above from stochastic production scheduling
within the conventionally optimal pit limits.

acknowledgements
Thanks are in order to the International Association of Mathematical Geosciences
(iamg) for the opportunity to present this work as their distinguished lecturer. The
support of the cosmo Laboratory and its industry members AngloGold Ashanti, Barrick,


BHP Billiton, De Beers, Newmont, Vale and Vale Inco, as well as nserc, the Canada
Research Chairs Program and cfi is gratefully acknowledged. Thanks to R. Goodfellow
for editorial assistance.

references
Albor Consuegra, F. & Dimitrakopoulos, R. (2009) Stochastic mine design optimisation based on simulated
annealing: Pit limits, production schedules, multiple orebody scenarios and sensitivity analysis. imm
Transactions, Mining Technology, Vol. 118(2), pp. 80–91. [1]

David, M. (1988) Handbook of applied advanced geostatistical ore reserve estimation. Elsevier Science
Publishers, Amsterdam, p. 216. [2]

Dimitrakopoulos, R., Farrelly, C. T. & Godoy, M. (2002) Moving for ward from traditional optimisation:
grade uncertainty and risk effects in open-pit design. Trans Inst Min Metall (Section A), 111:A82–88. [3]

Dimitrakopoulos, R. & Ramazan, S. (2004) Uncertainty based production scheduling in open pit mining.
sme Transactions, Vol. 316, pp. 106–112. [4]
Dimitrakopoulos, R. & Ramazan, S. (2008) Stochastic integer programming for optimising long term
production schedules of open pit mines: methods, application and value of stochastic solutions. Mining
Technology: Transactions of the Institute of Mining and Metallurgy, Section A, Vol. 117(4),
pp. 155–167. [5]

Geman, S. & Geman, D. (1984) Stochastic relaxation, Gibbs distribution, and the Bayesian restoration of
images. ieee Trans. on Pattern Analysis and Machine Intelligence, Vol. pami-6(6), pp. 721–741. [6]

Godoy, M. C. (2009) A Risk Analysis Based Framework for Strategic Mine Planning and Design – Method
and Application. Orebody Modelling and Strategic Mine Planning: Old and new dimensions in
a changing world, Proceedings, March 16–18, Perth, Western Australia, The AusIMM,
pp. 13–18. [7]

Godoy, M. C. & Dimitrakopoulos, R. (2004) Managing risk and waste mining in long-term production
scheduling. sme Transactions, Vol. 316, pp. 43–50. [8]

Kent, M., Peattie, R. & Chamberlain, V. (2007) Incorporating grade uncertainty in the decision to expand
the main pit at the Navachab gold mine, Namibia, through the use of stochastic simulation. Spectrum
Series 14, 2nd Edition, Ausimm, pp. 187–196. [9]

Leite, A. & Dimitrakopoulos, R. (2007) A stochastic optimisation model for open pit mine planning:
Application and risk analysis at a copper deposit. Transactions of the Institute of Mining and
Metallurgy, Mining Technology, Vol. 116(3), pp. 109–118. [10]

Leite, A. & Dimitrakopoulos, R. (2009) Production scheduling under metal uncertainty – Application
of stochastic mathematical programming at an open pit copper mine and comparison to conventional
scheduling. Orebody Modelling and Strategic Mine Planning: Old and new dimensions in a
changing world, Proceedings, March 16–18, Perth, Western Australia, The AusIMM. [11]

Lerchs, H. & Grossmann, I. F. (1965) Optimum design of open-pit mines. Trans. cim, lxvii, pp. 47–54. [12]

Menabde, M., Froyland, G., Stone, P. & Yeates, G. (2007) Mining schedule optimisation for conditionally
simulated orebodies. Orebody Modelling and Strategic Mine Planning: Uncertainty and risk
management models, AusIMM Spectrum Series 14, 2nd Ed., pp. 379–384. [13]

Mustapha, H. & Dimitrakopoulos, R. (2010) Simulation of Geologically Complex Mineral Deposits: New
High-Order Models through Spatial Cumulants. minin 2010 – (this volume). [14]

Ramazan, S. & Dimitrakopoulos, R. (2007) Stochastic optimisation of long-term production scheduling


for open pit mines with a new integer programming formulation. Orebody Modelling and Strategic
Mine Planning: Uncertainty and risk management models, AusIMM Spectrum Series 14, 2nd
Ed., pp. 385–392. [15]

Ramazan, S. & Dimitrakopoulos, R. (2008) Production scheduling with uncertain supply – A new solution
to the open pit mining. cosmo Research Report No. 2, pp. 257–294. [16]

Whittle, J. (1999) A decade of open pit mine planning and optimisation – the craft of turning algorithms into
packages. Proceedings apcom '99, pp. 15–24. [17]
Mine Design under Geologic and
Market Uncertainties

abstract
Sabry Abdel Sabour
Roussos Dimitrakopoulos
McGill University, Canada

Strategic decisions in the mining industry are taken under multiple geological and market uncertainties. There is a growing awareness among mining practitioners that, given these different
sources of uncertainties, conventional decision-support tools based
on deterministic, perfect knowledge assumptions are not well
structured to handle real situations. As such, efforts over the last
three decades have resulted in the development of uncertainty-
based methods used to model and deal with geological and market
risk in mine planning and decision-making such as conditional
simulation, stochastic mine planning and real options valuation.
This paper provides a review of applying uncertainty-based
techniques to the case of mine design selection. Two case studies of
a gold mine and a copper mine are provided. Both the conventional,
deterministic-based technique and the uncertainty-based approach
are applied to select the best performing design for open-pit mining
among various technically-feasible designs. The results show
significant differences in decision-making when incorporating
uncertainties into analysis.

introduction
The process of open pit mine planning starts with modelling the orebody based on the
borehole data and geological information. Then, the mining field is divided into blocks
of regular size. Based on a deterministic metal price, each block is assigned a value
equal to the gross value of its metal content minus the applicable production, processing
and refining costs. The optimum production plan is determined by applying different
optimisation algorithms [1–5] . There may be alternative, technically feasible mine
plans that meet operational and technical constraints. Selection among those plans is
then based on economic reasons. This is carried out by evaluating each of the possible
plans and comparing their economic attractiveness so as to select the most economically
appealing one. The basic assumption in most previous work is that metal prices and/
or metal contents are known with certainty. Nevertheless, the reality in the mining
industry is more complex than this simple assumption may suggest. In practice, mine
planners cannot know with certainty the quantity and quality of ore in the ground. In
addition, both future metal prices and exchange rates cannot be known with certainty.
An example of geological uncertainty is illustrated in Figure   ➊, showing 20
simulations for the possible metal content of an ore block at a copper mine. It is obvious
that the possible copper content is highly uncertain. While the conventional average
indicates that the block contains 34 tonnes of copper, the simulations show that the
copper content is uncertain and could be as low as eight tonnes or as high as 114
tonnes. Ignoring such uncertainty could result in substantial losses, especially when
considering the capital-intensive nature of mining investments. As reported by Vallee [6] ,
shortcomings in geological modelling and financial analysis were among the factors
that caused a loss of $1.4 billion to the Canadian mining industry in the late 1980s.
The second major source of risk is related to the uncertainty about metal prices and
exchange rates. Figure   ➋ shows the average monthly prices of copper and gold over the
period 1996-2009 in current US dollars. It is obvious that these prices are highly volatile
and do not keep a constant trend, which makes it speculative to define deterministic
forecasts for future metal prices. Exchange rate is also another contributor to project risk.
Figure   ➌ illustrates how the average monthly exchange rates of the Canadian/USA and
the Australian/USA currencies are uncertain. Therefore, it is difficult for mine planners
to have precise forecasts for these key exogenous market variables over the mine life.

Figure 1 Example of geological uncertainty of a copper ore block.



Figure 2 Historical average monthly copper and gold prices, January 1996 to October 2009.

Figure 3 Historical CAN$/US$ and AUS$/US$ exchange rates January 1999-October 2009.

Under uncertain geological and market conditions, the process of selecting a mine plan
among different alternatives is not simple. Conventional static financial evaluation
methods based on deterministic geological and market variables are not well suited to
handle the multiple sources of risk and the dynamic, proactive nature of management
decisions. Therefore, there is a need for a more efficient system for mine plan selection
under multiple uncertainties that minimises the need for subjective judgements.
Integrating geological uncertainty into open-pit mine planning was first introduced by
Dimitrakopoulos et al. [7] . Subsequently, efforts have been devoted to develop a risk-based
optimisation approach for long-term mine planning considering geological uncertainty
[3, 8–12] . The other significant source of uncertainty that has a significant impact
on the mine planning process is market uncertainty. In this respect, the Real Options
Valuation (rov) can provide a promising tool for better evaluating alternative production
schedules under uncertainty. For more details about rov, see [13–17] .
This paper builds on the work of Dimitrakopoulos et al. [10] by quantifying and
integrating market uncertainty related to metal prices and exchange rates into mine
planning. In this respect, the article aims to develop a system for mine plan selection
based on multiple value statistics and cash flow characteristics by incorporating the
value of management flexibility to react to the new information. The focus in this work
will be on improving the selection process when all designs are based on the same cutoff
grade. Dynamic optimisation of cutoff grade based on the new information is outside the
scope of this article. In the next sections, the proposed selection system will be briefly
outlined. Then, it will be applied to select an optimum production schedule for a copper
mine and a gold mine. Finally, an investigation of the usefulness of the developed system
will be provided.


a system for mine design selection under uncertainty
First, it is worth noting that the different alternative mine designs could result from
different available production scenarios, different lom sequences, and so on. In this study,
the procedure described in Dimitrakopoulos et al. [10] for generating different mine
designs based on simulating multiple orebody realisations is used. The ranking system
proposed in this work takes into account multiple sources of uncertainty simultaneously
and integrates the operating flexibility to revise the ultimate pit limits based on the
new information. The proposed system consists of three main steps: uncertainty
quantification, design valuation and design ranking.

Uncertainty quantification
Both the geological and market uncertainties are quantified. First, the geological
uncertainty is explored by simulating multiple orebody realisations based on the borehole
data using conditional simulation [18, 19] . Market uncertainty about metal prices and
foreign exchange rates is quantified using stochastic models such as Geometric Brownian
Motion (gbm) and the mean-reversion model, see [20] , among others.
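As an illustration of the market-uncertainty step, the sketch below draws monthly Geometric Brownian Motion price paths with purely illustrative drift and volatility values; a mean-reverting model would replace the log-increment accordingly.

import numpy as np

def gbm_paths(s0, mu, sigma, years, steps_per_year=12, n_paths=1000, seed=0):
    # Simulate Geometric Brownian Motion price paths (monthly steps by default).
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps_per_year
    n_steps = int(years * steps_per_year)
    z = rng.standard_normal((n_paths, n_steps))
    increments = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.cumsum(increments, axis=1)
    return s0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

copper = gbm_paths(s0=3.0, mu=0.02, sigma=0.25, years=8)   # illustrative inputs only
print(copper.shape)                                        # (1000, 97)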

Valuation of mine designs


The design valuation model is based on rov using the least squares Monte Carlo method.
This method was originally developed for valuing American style securities [21]. It was
then extended to valuing capital investments under multiple market uncertainties [22]
and extended further to valuing mining investments under multiple market and
geological uncertainties [23–25]. The decision whether to keep or revise the pit limits is
taken based on the expected Continuation Value, cv. If the cv at time t and sample path
n is positive, the optimum decision is to keep the pre-defined pit limits until the next
decision time. Otherwise, if the cv is negative, the original plan should be revised and the
current pit limit at time t should be the final pit limit. Estimating the cv requires knowing
the function that relates the present value of future mining operations beyond time t
to the states prevailing at time t. This function could be in different forms such as the
simple power series, Laguerre polynomials or linear combinations of different forms [21].
Parameters of this function can be estimated at each time using least squares regression.
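A minimal sketch of the regression behind this decision rule is shown below, assuming a simple power-series basis in the market state variables; it illustrates the least squares Monte Carlo idea rather than reproducing the authors' implementation, and all names are placeholders.

```python
import numpy as np

def continuation_values(state_t, future_value):
    """Regress realised future value on simple power-series basis functions of
    the state at time t and return the fitted continuation values per path.

    state_t      : (n_paths, n_states) array, e.g. metal price and FX rate at t
    future_value : (n_paths,) realised present value of mining beyond time t
    """
    # basis: constant term plus x and x^2 for each state variable
    basis = [np.ones(len(future_value))]
    for j in range(state_t.shape[1]):
        basis += [state_t[:, j], state_t[:, j] ** 2]
    X = np.column_stack(basis)
    coef, *_ = np.linalg.lstsq(X, future_value, rcond=None)
    return X @ coef  # expected continuation value CV(t, n) for each sample path

# Decision rule per path: keep the pre-defined pit limits if CV > 0,
# otherwise fix the current pit at time t as the final pit.
```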

Indicators for ranking of mine designs


The ranking procedure proposed in this study aims to gather multiple value and risk
analysis indicators into one quantitative measure while integrating real industry
complexities such as uncertainty and operating flexibility to revise pre-defined pit limits.
The proposed design ranking system for selecting the best open pit design based on the
information available at the initial planning time takes into account the following aspects:

• Upside potential that measures the ability of designs to capture possibly more profits
than those expected if outcomes were favourable.

• Downside risk that reflects the difference between designs in minimising negative
cash flow risk throughout mine life.

• Probability of completion, which is the probability that the mine will be run to completion.

• Statistics of the estimated values which includes the average, lower and upper limits
at a certain confidence level.

After estimating the above described four indicators for each mine design, the Total
Ranking Indicator (tri) is the summation of these four indicators. The designs are then
ranked according to the total indicator and the design with the highest indicator can be
identified. In this work, the average performance of all feasible mine designs will be used
to compare and rank the designs. However, one can replace these averages depending
on the goals and objectives specific to a given mining project. In addition, the above
mentioned ranking criteria are given equal weight in this study. Different weights can
be given to those criteria if considered appropriate.
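The exact construction of the four indicators is not spelled out in full above, so the sketch below shows one plausible way to assemble a total ranking indicator from simulated npv samples: each indicator is oriented so that larger is better, measured as a deviation from the average over all feasible designs, and summed with equal weights. All variable names are illustrative.

```python
import numpy as np

def total_ranking_indicator(npv_samples, completion_prob, confidence=0.90):
    """One possible TRI: upside potential, downside risk, probability of
    completion and a value statistic, each benchmarked against the average
    over all designs and summed with equal weights (higher is better)."""
    lo, hi = (1 - confidence) / 2, 1 - (1 - confidence) / 2
    raw = {}
    for d, s in npv_samples.items():
        s = np.asarray(s, dtype=float)
        raw[d] = {
            "upside": np.mean(np.maximum(s - s.mean(), 0.0)),
            "downside": np.mean(np.minimum(s - s.mean(), 0.0)),  # negative; less negative is better
            "completion": completion_prob[d],
            "value": np.mean([s.mean(), np.quantile(s, lo), np.quantile(s, hi)]),
        }
    # average performance over all feasible designs, used as the benchmark
    avg = {k: np.mean([r[k] for r in raw.values()]) for k in next(iter(raw.values()))}
    return {d: sum(r[k] - avg[k] for k in r) for d, r in raw.items()}
```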

case studies: selecting designs for a copper mine and a gold mine
In this section two case studies of real life mines are provided. The first one is a copper
mine in Canada and the second one is a gold mine in Australia. In both cases, it is
assumed that there are alternative feasible mine designs available to decision-makers.
These designs have to be ranked so as to select the best one based on the information
available at the planning time. To investigate the efficiencies of the valuation techniques
and the proposed ranking system in ranking alternatives under uncertainty, the mine
designs will be ranked based on the following four ranking measures:

• The expected value estimated by the conventional npv valuation method.

• The expected value estimated by the real options valuation.

• The npv-based indicator explained above, except the probability of completion indicator
since the npv of the mine calculated at the planning time before starting production
does not consider the flexibility to revise pit limits in the future.

• The Real Options Valuation (rov) based indicator outlined in the previous section.
No capital costs are considered in both cases since it is assumed that production is carried
out by contractors.

Designs ranking at a disseminated copper mine


This copper mine is assumed to have been planned in 1992, started production in 1993
and closed in 2000. There were 10 mine designs estimated by feeding the 10 simulated
orebody models into Whittle Software that uses the nested Lerchs-Grossman algorithm
to generate the optimum mine design [26] . Table 1 lists the average tonnage, operating
cost and grade for the ten designs. The economic data prevailing at the planning time
(1992) are listed in Table 2 . Both the copper price and the exchange rate are modelled
with the mean reverting process.


Table 1 Average annual tonnage, operating cost and grade for the technically feasible copper mine designs

                      Year 1    Year 2    Year 3    Year 4    Year 5    Year 6    Year 7    Year 8

Design 1
  Tonnage, tonne      7400672   7499491   5929296   7500000   7499390   7455251   7500000   4888820
  Cost, $CAN/tonne    14.62     14.72     15.52     13.85     14.53     14.88     13.38     12.42
  Copper grade, %     0.702     0.668     0.566     0.592     0.679     0.715     0.534     0.647

Design 2
  Tonnage, tonne      7499371   7499140   7353125   7497851   7499534   6209452   7500000   3751526
  Cost, $CAN/tonne    14.70     14.65     14.62     13.96     14.84     15.32     12.83     12.42
  Copper grade, %     0.701     0.652     0.607     0.613     0.713     0.583     0.584     0.697

Design 3
  Tonnage, tonne      7499705   7499267   7469712   6451536   7500000   7082233   7073037   5445192
  Cost, $CAN/tonne    14.42     14.64     14.62     15.03     13.26     15.28     14.29     12.44
  Copper grade, %     0.700     0.657     0.630     0.562     0.634     0.777     0.509     0.632

Design 4
  Tonnage, tonne      7500000   7497633   7470655   5043188   7500000   7500000   7288115   5885209
  Cost, $CAN/tonne    14.33     14.57     14.72     16.10     13.81     13.12     14.83     12.59
  Copper grade, %     0.682     0.661     0.666     0.508     0.579     0.676     0.712     0.571

Design 5
  Tonnage, tonne      7498724   6987502   7477194   7500000   7498579   5988935   7500000   4855867
  Cost, $CAN/tonne    14.95     14.85     14.30     13.31     14.76     15.66     13.36     12.54
  Copper grade, %     0.744     0.616     0.549     0.629     0.694     0.654     0.558     0.695

Design 6
  Tonnage, tonne      7500000   7499026   6094489   7500000   7500000   7332918   7500000   2948286
  Cost, $CAN/tonne    14.12     14.73     15.34     13.66     14.22     14.85     13.09     12.50
  Copper grade, %     0.702     0.671     0.537     0.597     0.680     0.678     0.577     0.698

Design 7
  Tonnage, tonne      7500000   7500000   7499507   6004538   7500000   7259350   7500000   4074685
  Cost, $CAN/tonne    14.42     13.51     14.63     15.55     13.68     15.08     13.93     12.43
  Copper grade, %     0.674     0.645     0.665     0.593     0.605     0.767     0.521     0.654

Design 8
  Tonnage, tonne      7497693   6831718   5197588   7500000   7500000   7500000   7499960   6130842
  Cost, $CAN/tonne    14.90     15.09     15.88     14.01     12.93     14.29     14.57     12.46
  Copper grade, %     0.729     0.667     0.500     0.577     0.642     0.716     0.618     0.623

Design 9
  Tonnage, tonne      7498487   7498127   6924139   7498990   7500000   6739109   7411945   4535165
  Cost, $CAN/tonne    14.76     14.65     14.89     14.18     12.76     15.48     14.29     12.47
  Copper grade, %     0.693     0.644     0.615     0.557     0.669     0.777     0.524     0.645

Design 10
  Tonnage, tonne      7457763   7500000   7498862   6746107   7500000   7439566   7500000   3074822
  Cost, $CAN/tonne    14.71     13.58     14.60     15.00     13.89     14.58     14.12     12.40
  Copper grade, %     0.680     0.653     0.669     0.605     0.597     0.748     0.529     0.653

Table 2 Economic parameters for the copper mine

Item Description
Risk-free interest rate, % 9.20
Inflation, % 6.70
Income taxes, % 40.00
Initial copper price, $US/lb 1.07
Volatility, %, copper price 20.00
Reversion speed of copper price 0.19
Long-term copper price, $US/lb 1.00
Initial $US/$CAN rate 1.20
Volatility, %, $US/$CAN 4.00
Reversion speed/year, $US/$CAN 0.08
Long-term, $US/$CAN 1.25

The 10 designs have been evaluated using the npv and rov. The simulation-based
valuation model applied in this work provides multiple cash flow statistics. These
statistics are used to construct a ranking system as outlined above. The 10 mine designs
have been ranked based on four different measures. As shown in Table 3 , based on the
two npv ranking measures, Design 6 is the best alternative. In contrast, the two
rov ranking measures suggest that Design 10 is the best one. For the npv, the design ranking
based on the expected value is almost identical to that of the npv-based indicator. The rov
expected value and the rov-based indicator give the same ranking for five of the mine designs and
different rankings for the remaining five.

Table 3 Designs ranking for the copper mine

Rank NPV expected value NPV-based TRI ROV expected value ROV-based TRI
1 Design 6 Design 6 Design 10 Design 10
2 Design 10 Design 10 Design 7 Design 2
3 Design 7 Design 7 Design 2 Design 7
4 Design 5 Design 5 Design 6 Design 3
5 Design 2 Design 2 Design 3 Design 4
6 Design 1 Design 1 Design 5 Design 5
7 Design 9 Design 9 Design 1 Design 1
8 Design 3 Design 3 Design 4 Design 6
9 Design 4 Design 8 Design 9 Design 9
10 Design 8 Design 4 Design 8 Design 8

Designs ranking at an Australian gold mine


The three-year gold mine is assumed to have been planned in 2003, started production at
the beginning of 2004 and closed at the end of 2006. Table 4 shows the average tonnage,
production cost and ore grade. The economic parameters used in design evaluations
are given in Table 5 . The gold price evolution has been modelled using the geometric
Brownian motion while the exchange rate evolution has been represented by the mean
reverting model.

Table 4 Average tonnage, operating cost and grade for the possible gold mine designs

Mine      Ore tonnage, tonne                Operating cost, $AU/tonne      Grade, g/tonne
design    Year-1     Year-2     Year-3      Year-1   Year-2   Year-3       Year-1   Year-2   Year-3
1 925360 860451 903591 24.41 25.16 27.36 2.42 1.80 1.50
2 1021141 876780 60599.1 24.74 25.28 23.80 2.47 1.74 1.49
3 955141 852045 182569 24.65 24.90 24.57 2.46 1.75 1.62
4 1042777 780285 517351 25.29 25.49 26.64 2.46 1.75 1.62
5 1015688 779731 292940 24.17 25.52 24.69 2.43 1.76 1.52
6 917128 856977 442722 24.31 26.16 26.93 2.48 1.83 1.68
7 1019285 782184 190467 24.17 24.49 30.45 2.42 1.72 1.79
8 971795 920220 770199 24.51 26.34 25.19 2.45 1.80 1.50
9 997262 846177 628824 24.39 26.02 33.01 2.47 1.78 1.54
10 974649 899295 485592 24.84 25.30 25.79 2.50 1.73 1.56
11 1206364 698324 936122 25.14 26.41 25.92 2.36 1.77 1.48
12 937852 782868 954335 24.92 24.48 26.37 2.47 1.78 1.50


Table 5 Economic data and model parameters for the gold mine

Item Description
Risk-free interest rate, % 6.50
Inflation, % 2.63
Income taxes, % 30.00
Initial gold price, $US/oz 417.25
Volatility, %, gold price 13.00
Real trend, gold price 0.00
Initial $US/$AU rate 1.30
Volatility, %, $US/$AU 9.18
Reversion speed/year, $US/$AU 0.24
Long-term, $US/$AU 1.50

Both the npv and rov have been used to value the 12 designs. Also, both the npv-based
and the rov-based indicators are calculated for the 12 designs. The designs are ranked
according to the npv expected value, the npv-based indicator, the rov expected value and
the rov-based indicator, as listed in Table 6 . Based on both the expected npv and the
npv-based indicator, Design 2 is the best one. The two ranking measures based on the
rov generated different results. The expected value estimated by the rov indicates that
Design 10 is the best one, while the rov-based indicator suggests that Design 8 is the
best design for the gold mine. Comparing the four ranking results for the 12 designs, it
is clear that the npv expected value and the npv-based indicator produce the same
ranking, which differs from the two rov rankings. The rov expected value and the
rov-based indicator, in turn, assign different ranks to all but one of the 12 designs.

Table 6 Designs ranking for the gold mine

Rank NPV expected value NPV-based TRI ROV expected value ROV-based TRI
1 Design 2 Design 2 Design 10 Design 8
2 Design 3 Design 3 Design 8 Design 10
3 Design 7 Design 7 Design 5 Design 12
4 Design 5 Design 5 Design 6 Design 6
5 Design 6 Design 6 Design 12 Design 11
6 Design 10 Design 10 Design 7 Design 4
7 Design 4 Design 4 Design 4 Design 5
8 Design 8 Design 8 Design 2 Design 1
9 Design 12 Design 12 Design 9 Design 7
10 Design 11 Design 11 Design 11 Design 2
11 Design 1 Design 1 Design 3 Design 9
12 Design 9 Design 9 Design 1 Design 3

Comparison of methods using market data


This section compares the performance of the four measures in ranking mine designs
under uncertainty against the ranking that is based on actual market data. It is worth
noting here that the results of this comparison cannot be generalised at this stage to
prefer one valuation or ranking measure over the others, since that would require carrying
out a large number of experiments. However, the comparison results can be regarded as
an initial indicator for the potential of the proposed procedure to improve the decision
making process.
It is assumed that the copper mine was started at the beginning of 1993 and closed
in 2000. Based on the actual copper prices and exchange rates from 1993 to 2000 and
the 10 simulated orebodies, the average value for the 10 designs has been calculated and
the designs have been ranked according to that calculated average. Now, it is possible to
compare the efficiency of the different ranking measures, as shown in Table 7. The term
“mis-ranking” in Table 7 represents the absolute difference between the rank calculated
by the specified ranking method and the actual rank for each design. A high mis-ranking
indicates that the method can generate misleading decisions regarding design selection,
while a low “mis-ranking” indicates that the method is more efficient. As shown in
Table 7, the average “mis-ranking” of the npv expected value is higher than that of
the rov expected value. The average mis-ranking of the npv-based indicator is slightly
higher than that of the npv expected value, which means that the selection process based on the
conventional npv analysis has not been improved even when using multiple risk analysis
criteria. This is not a surprising result given the static nature of the conventional npv
method. Since the flexibility to revise pit limits in the future has not been integrated
into the npv valuations, no significant differences can be achieved by risk analysis due
to the symmetry of the resultant distribution around the average. On the contrary, the
average “mis-ranking” of the rov-based indicator that integrates the flexibility to adjust
pit limits is lower than the rov expected value, which indicates that the selection process,
and consequently overall project economics, could be improved by incorporating multiple
risk and cash flow analysis.

Table 7 Comparison of the ranking methods using the copper mine data

Design   Actual    NPV expected value     NPV-based TRI          ROV expected value     ROV-based TRI
         rank      Rank   Mis-ranking     Rank   Mis-ranking     Rank   Mis-ranking     Rank   Mis-ranking
1 9 6 3 6 3 7 2 7 2
2 1 5 4 5 4 3 2 2 1
3 6 8 2 8 2 5 1 4 2
4 4 9 5 10 6 8 4 5 1
5 5 4 1 4 1 6 1 6 1
6 7 1 6 1 6 4 3 8 1
7 3 3 0 3 0 2 1 3 0
8 10 10 0 9 1 10 0 10 0
9 8 7 1 7 1 9 1 9 1
10 2 2 0 2 0 1 1 1 1
Average mis-ranking   –      2.2                   2.4                    1.6                    1.0
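For reference, the mis-ranking statistic used in Tables 7 and 8 amounts to the following few lines; the rank dictionaries are illustrative placeholders, not the case-study values.

```python
def average_misranking(actual_rank, method_rank):
    """Mean absolute difference between a method's rank and the actual rank."""
    return (sum(abs(method_rank[d] - actual_rank[d]) for d in actual_rank)
            / len(actual_rank))

# Illustrative ranks for three designs
actual = {"D1": 2, "D2": 1, "D3": 3}
method = {"D1": 1, "D2": 3, "D3": 2}
print(average_misranking(actual, method))   # (1 + 2 + 1) / 3 = 1.33...
```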

For the gold mine example, the mis-ranking for all designs using the four ranking
methods along with the average mis-ranking are reported in Table 8 . As in the copper
mine case, using an npv-based indicator did not improve the design selection process.
Ranking designs based on the rov expected value has reduced the average “mis-ranking”
from five in the npv analysis to 3.7. This average mis-ranking has been reduced
further to 1.8 when using the rov-based indicator.


Table 8 Comparison of the ranking methods using the gold mine data

Design   Actual    NPV expected value     NPV-based TRI          ROV expected value     ROV-based TRI
         rank      Rank   Mis-ranking     Rank   Mis-ranking     Rank   Mis-ranking     Rank   Mis-ranking
1 4 11 7 11 7 12 8 8 4
2 11 1 10 1 10 8 3 10 1
3 10 2 8 2 8 11 1 12 2
4 5 7 2 7 2 7 2 6 1
5 8 4 4 4 4 3 5 7 1
6 6 5 1 5 1 4 2 4 2
7 9 3 6 3 6 6 3 9 0
8 2 8 6 8 6 2 0 1 1
9 12 12 0 12 0 9 3 11 1
10 7 6 1 6 1 1 6 2 5
11 3 10 7 10 7 10 7 5 2
12 1 9 8 9 8 5 4 3 2
Average mis-ranking   –      5                     5                      3.7                    1.8

The results above show how important it is to integrate uncertainty into the mine design
selection process. However, given the multiple sources of uncertainty associated with
mining investments, conventional risk analysis could be useless if the management
flexibility to react to new information is not considered. As shown in Tables 7
and 8, the npv-based indicator using multiple risk and cash flow analyses generates
almost the same decision as the npv expected value and has almost the same average mis-
ranking. In contrast, when the flexibility to revise the original decisions concerning
the ultimate pit limits is integrated, the efficiency of the selection process
improves significantly. As shown in Table 7, for the copper mine, a 37% reduction
in the average mis-ranking resulted from using a ranking system based on multiple
criteria rather than the expected value generated from the real options valuation. For the
gold mine, this reduction increases to 51%. However, under multiple market and geological
uncertainties it is impossible to have a system that always generates the best solution;
in other words, it is impossible to have a ranking system with a 0% mis-ranking.
Nevertheless, efforts have to be devoted to further improving the process of
mine planning under uncertainty.

conclusions
This paper has outlined a ranking system for selection among alternative open pit mine
designs using multiple risk analysis measures while integrating market and geological
uncertainty and the operating flexibility to revise pit limits. The proposed system has
been applied along with three other ranking methods, based on the npv expected value,
the rov expected value and the npv-based indicator, to rank possible designs at a copper
mine and a gold mine. For the copper mine, the average mis-ranking of the proposed
ranking system was 55–58% less than the npv analysis and 37% less than the rov
expected value. For the gold mine, the average mis-ranking of the proposed system was
64% and 51% less than those of the npv analysis and the rov expected value respectively.
Results of the two examples showed that the design selection process may improve when
incorporating uncertainty and operating flexibility. Future extensions could include
integrating more risk analysis measures and other types of management flexibilities
such as the flexibility to revise the cut-off grade with time so as to improve the design
selection process even further.

acknowledgements
The work in this paper was funded by NSERC CDR Grant 335696 and BHP Billiton,
as well as by NSERC Discovery Grant 239019, McGill's COSMO Lab and its industry members
AngloGold Ashanti, Barrick, BHP Billiton, De Beers, Newmont, Vale and Vale Inco.

references
Whittle, J. (1988) Beyond optimization in open pit design. In Canadian Conference on Computer
Applications in the Mineral Industries, Balkema, Rotterdam. [1]

Ramazan, S. (2007) The new fundamental tree algorithm for production scheduling of open pit mines.
European Journal of Operations Research, Vol. 177, pp. 1153–1166. [2]

Dimitrakopoulos, R. & Ramazan, S. (2008) Stochastic integer programming for optimizing long-term
production schedules of open pit mines: Methods, application and value of stochastic solutions. IMM
Transactions, Mining Technology, 117(4), pp.155–160. [3]

Asad, M. W. A. (2008) Multi-period quarry production planning through sequencing techniques and
sequencing algorithm. Journal of Mining Science, 44(2), pp. 206–217. [4]

Zuckerberg, M., Van der Riet, J., Malajczuk, W. & Stone, P. (2010) Optimization of life-of-mine production
scheduling at a Bauxite mine. Journal of Mining Science, in press. [5]

Vallee, M. (2000) Mineral resource + engineering, economic and legal feasibility = ore reserve. CIM Bulletin,
93 (1038), pp. 53–61. [6]

Dimitrakopoulos, R., Farrelly, C.T. & Godoy, M. (2002) Moving forward from traditional optimization: grade
uncertainty and risk effects in open-pit design. Trans. Instn. Min. Metall. (Sec. A: Min. Technol.),
Vol. 111, pp. A82–A88. [7]

Godoy, M. & Dimitrakopoulos, R. (2004) Managing risk and waste mining in long-term production
scheduling. SME Transactions, Vol. 316, pp. 43–50. [8]

Leite, A. & Dimitrakopoulos, R. (2007) A stochastic optimization model for open pit mine planning:
Application and risk analysis at a copper deposit. Mining Technology (Trans. Inst. Min. Metall. A)
116, pp. 109–118. [9]

Dimitrakopoulos, R., Martinez, L. S. & Ramazan, S. (2007) A maximum upside/minimum downside


approach to the traditional optimization of open pit mine design. Journal of Mining Science, Vol. 43,
pp. 73–82. [10]

Dimitrakopoulos, R. & Grieco, N. (2009) Stope design and geological uncertainty: Quantification of risk in
conventional designs and a probabilistic alternative. Journal of Mining Science, 45(2), pp. 152–163. [11]

Godoy, M. & Dimitrakopoulos, R. (2010) A risk analysis based framework for strategic mine planning and
design method and application. Journal of Mining Science, in press. [12]

Monkhouse, P. H. L. & Yeates, G. (2005) Beyond naïve optimization. In: Orebody Modelling and Strategic
Mine Planning, The Australian Institute of Mining and Metallurgy, Spectrum Series No. 14,
pp. 3–8. [13]

Miller, L.T. & Park, C.S. (2002) Decision making under uncertainty - real options to the rescue? The
Engineering Economist, 47(2), pp. 105–150. [14]

Moel, A. & Tufano, P. (2002) When are real options exercised? An empirical study of mine closings. The
Review of Financial Studies, 15(1), pp. 35–64. [15]

Samis, M., Davis, G.A., Laughton, D. & Poulin, R. (2006) Valuing uncertain asset cash flows when there
are no options: A real options approach. Resources Policy, Vol. 30, pp. 285–298. [16]


Slade, M. E. (2001) Valuing managerial flexibility: An application of real-option theory to mining investments.
Journal of Environmental Economics and Management, Vol. 41, pp. 193–233. [17]

Boucher, A. & Dimitrakopoulos, R. (2009) Block-support simulation of multiple correlated variables.


Mathematical Geosciences, 41(2), pp. 215–237. [18]

Scheidt, C. & Caers, J. (2009) Representing spatial uncertainty using distances and kernels. Mathematical
Geosciences, 41 (4), pp. 397–419. [19]

Schwartz, E. S. (1997) The stochastic behaviour of commodity prices: implications for valuation and hedging.
Journal of Finance, 52(3), pp. 923–973. [20]

Longstaff, F.A. & Schwartz, E.S. (2001) Valuing American options by simulation: A simple least-squares
approach. The Review of Financial Studies, 14(1), pp. 113–147. [21]

Abdel Sabour, S.A. & Poulin, R. (2006) Valuing real capital investments using the least-squares Monte Carlo
method. The Engineering Economist, 51(2), pp. 141–160. [22]

Dimitrakopoulos, R., & Abdel–Sabour, S.A. (2007) Evaluating mine plans under uncertainty: Can the real
options make a difference? Resources Policy, 32(3), pp. 116–125. [23]

Abdel Sabour, S.A., Dimitrakopoulos, R. & Kumral, M. (2008) Mine plan selection under uncertainty.
Mining Technology: IMM Transactions Section A, 117(2), pp. 53–64. [24]

Meagher, C., Abdel Sabour, S. A. & Dimitrakopoulos, R. (2009) Pushback design of open pit mines under
geological and market uncertainties. In Proceedings, Orebody Modelling and Strategic Mine
Planning 2009, AusIMM, pp. 297–304. [25]

Whittle, J. (1999) A decade of open pit mine planning and optimization — The craft of turning algorithms into
packages. In apcom'99 Computer Applications in the Minerals Industries 28th International
Symposium, Colorado School of Mines, Golden. [26]
A Two-Phase Heuristic Method for
Constrained Pushback Design

abstract
Snehamoy Chatterjee
Amina Lamghari
Roussos Dimitrakopoulos
McGill University, Canada

Pushback design is a well-known problem that consists in
determining production schedules over the life of an open pit mine
that maximise cash flows and satisfy slope constraints. This
problem can be formulated as identifying the maximal closure
of an associated oriented graph and can be solved efficiently.
However, in the presence of additional constraints, the problem
becomes NP-hard, and it is unlikely that optimal solutions can be
generated within a reasonable computing time for large instances
which are of practical interest. In this paper, a two-phase heuristic
method is proposed to address the pushback design problem
including constraints limiting the ore production. In the first
phase, the production constraints are ignored, and the resulting
problem is solved using a minimum cut algorithm. Then, a greedy
destructive heuristic is used to deal with production constraints
violations, if any, by removing some blocks from the pushback
generated. This two-phase procedure is repeated to generate a
series of nested pushbacks until the total ore production reaches
a specified value.  Computational results on a disseminated copper
deposit show that the proposed approach performs very well both
in terms of CPU time and size of solved instances. Moreover, it
produces high quality schedules.

introduction
The main objective in any commercial mining operation is to maximise the profit gained
from the exploitation of the orebody. The common way to achieve this objective is to
solve two interrelated problems. The first problem addresses the question of which blocks
will be extracted from the ground over the life of the mine so as to maximise the
mine's net present value while satisfying the slope constraints. These blocks define the
so-called ultimate pit. The second problem, referred to as pushback design, can be seen as
disaggregating the ultimate pit into smaller and manageable volumes of material called
pushbacks or mining phases allowing the manager to develop schedules based on series of
small problems.
Both problems have been widely studied in the literature since the seminal work of
Lerchs and Grossman [1] introducing an exact method for determining the ultimate pit.
This method, referred to as LG algorithm, is based on formulating the problem in terms
of graph theory. The authors showed that the problem can be viewed as identifying the
maximal closure of an associated oriented graph, where the nodes are the blocks and the
arcs represent slope constraints. Associated with each node is a weight representing its
economic value. Picard [2] proposed a more efficient algorithm based on the construction
of an auxiliary graph such that a minimum cut algorithm can be used to find the
maximal closure of the original graph, and thus the ultimate pit. Hochbaum and Chen [3]
improved the LG algorithm by including scaling techniques. A widely used approach to
tackle the pushback design problem is to use an algorithm for determining the ultimate
pit and create a series of nested pits by gradually decreasing the economic value of the
blocks. The larger the decreasing factor, the smaller the corresponding pit. When this
factor is equal to 0, the pit produced corresponds to the ultimate pit. This approach has
been investigated by [3] , among others.
Accounting only for slope constraints to identify the pushbacks certainly induces a
problem that can be solved efficiently, yet it yields impractical pushbacks. For instance,
successive pushbacks may have inconsistent sizes or the amount of ore mined may exceed
the mill capacity available. Other specific constraints may also be violated. The manager
can deal with these issues by adjusting the pushbacks manually. However, this is often a
difficult and time-consuming task and may lead to poor solutions very far from the optimal
one. A possible remedy to these drawbacks is to include a priori specified constraints
in the optimisation process to ensure that the pushbacks produced have the desired
features. The resulting problem is referred to as the constrained pushback design problem.
Several papers appeared in the literature reporting the study of the constrained pushback
design problem. The additional constraints taken into account differ with the specific
contexts. For instance, some constraints are related to the amount of rock mined [4] ,
to the pit size [5] , or to a set of resource constraints [6] . Accounting for additional
constraints induces difficult combinatorial optimisation problems usually solved with
heuristic methods based on Lagrangian relaxation.
In this paper, we consider a slightly different approach to design pushbacks accounting
for ore production constraints. These constraints impose a specified upper bound on
the amount of ore produced. The solution method is an iterative procedure where at
each iteration, we first use a parametric minimum cut algorithm to solve the problem
ignoring the production constraints. Then, we use, whenever it is necessary, a ‘repair’
heuristic to make the obtained pushback feasible for the original problem by removing
some blocks from it. We proceed in a greedy manner, selecting for removal blocks that reduce
the surplus of ore as much as possible while decreasing the pushback's net present value
as little as possible. The proposed heuristic is quite similar to that proposed by [6] in the
more general context of maximal closure on a graph with resource constraints. However,
it differs in that, rather than selecting a single block for deletion at each iteration, all
blocks of the candidate set are deleted at once if their total ore tonnage is less than the
surplus over the ore production constraint. This helps to reduce the computational
time of the heuristic for large-scale deposits. The process is repeated until the total ore
produced reaches a specified value. Even if the solution approach is introduced to account
for production constraints, it can nevertheless be easily adapted to deal with slightly
different specific constraints.

methodology
Identifying the ultimate pit using a minimum cut algorithm
As mentioned earlier, identifying the ultimate pit can be viewed as a minimum cut
problem in a directed graph representing the orebody model. Before describing the
construction of the graph, let us define the minimum cut problem formally.  Consider a
directed graph G = (V, A) with s and t being the source node and the sink node respectively,
with capacity C(a) of each arc a in A, and a maximum flow from s to t. In graph theory,
a cut is any set of directed arcs containing at least one arc in every path from s to t. In
other terms, if one removes the arcs in the cut, then the flow from s to t is cut off. The
cut value is the sum of the flow capacities from s to t over all the arcs belonging to the
cut. The minimum cut problem consists in finding the cut that minimises the cut value
over all possible cuts in the graph.
Now in the context of an orebody model, the graph is constructed as follows. The nodes
are the blocks (in addition to the source node s and the sink node t). There are three types
of arcs: those connecting each block with its predecessors (i.e., blocks that have to be
removed before that block can be mined), those connecting the source node s with ore
blocks (i.e., with blocks having a positive economic value), and those connecting waste
blocks (i.e., those having a negative economic value) with the sink node t.  Note that the
economic value of a block is a function of its estimated grade, the metal selling price,
and the mining and processing costs. The capacity of each arc of the first type is assigned
an infinite value. The capacity of each arc of the second type is equal to the economic
value of the corresponding ore block. Finally, the capacity of each arc of the third type
is equal to the absolute value of the economic value of the corresponding waste block. It
can be shown that the capacity of the minimum cut corresponds to the maximum flow.
Thus, the minimum cut shows the set of blocks that should be considered in order to
maximise the mine's net present value.  Note that assigning an infinite capacity to the
arcs of the first type ensures that such arcs will never be in the minimum cut, and thus
allows discarding blocks violating the slope constraints.
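A compact sketch of this graph construction is given below, using the networkx library as an assumed stand-in for a dedicated maximum-flow implementation; block values and precedence lists are placeholders.

```python
import networkx as nx

def ultimate_pit(block_value, predecessors):
    """block_value: {block_id: economic value}.  predecessors: {block_id: list of
    overlying blocks that must be removed first}.  Returns the set of blocks in
    the maximum-value closure, i.e. the ultimate pit."""
    G = nx.DiGraph()
    G.add_nodes_from(["s", "t"])
    for b, v in block_value.items():
        if v > 0:
            G.add_edge("s", b, capacity=v)      # ore block: source -> block
        elif v < 0:
            G.add_edge(b, "t", capacity=-v)     # waste block: block -> sink
        for p in predecessors.get(b, []):
            G.add_edge(b, p)                    # slope arc; no capacity attribute
                                                # means infinite capacity in networkx
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    return (source_side - {"s"}) & set(block_value)

# Tiny example: one ore block that requires two waste blocks to be removed first
values = {"a": -1.0, "b": -1.0, "c": 5.0}
preds = {"c": ["a", "b"]}
print(ultimate_pit(values, preds))              # expected: {'a', 'b', 'c'}
```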
Figure  ➊ depicts a small example of a vertical section of an orebody model. Each
➋ illustrates
entry corresponds to the economic value of the corresponding block. Figure  
the ultimate pit that can be identified with the minimum cut algorithm for the problem
in Figure  ➊. The bold black line corresponds to the minimum cut.

Figure 1 Section of a block model with block economic values.


Figure 2 Minimum directed graph cut for pit limit calculation.

Phase 1: Solving the pushback design problem


To produce pushbacks, we use a parameterisation of the minimum cut algorithm
described above. This approach is widely used in the literature as well as in commercial
software. As mentioned earlier, it relies on the idea of scaling down the economic block
values to get a series of nested pits. In our implementation this is done by multiplying
the economic values of ore blocks by a parameter λ in the interval [0,1]. It is clear that
the smaller the value of λ , the smaller the corresponding pit is. Hence, we proceed as
follows: λ is initially set equal to a small value greater than 0. It is adjusted dynamically
at each iteration of the solution procedure by incrementing its value with a constant
∆λ . The solution procedure terminates whenever the total ore production reaches a
specified value.
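The outer loop of this parameterisation can be sketched as follows, reusing the ultimate_pit helper from the earlier sketch; the starting value of λ, the increment and the ore target are illustrative rather than values from the case study.

```python
def nested_pit(block_value, predecessors, ore_tonnes, ore_target,
               lam=0.01, d_lam=0.005):
    """Phase 1 sketch: scale ore-block values by lambda and increase lambda
    until the resulting pit contains at least the target ore tonnage."""
    pit = set()
    while lam <= 1.0:
        scaled = {b: (v * lam if v > 0 else v) for b, v in block_value.items()}
        pit = ultimate_pit(scaled, predecessors)
        if sum(ore_tonnes.get(b, 0.0) for b in pit) >= ore_target:
            break
        lam += d_lam
    return pit, lam
```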
Since we ignore the production constraints when producing pushbacks, these
constraints may be violated. When this occurs, we use a greedy destructive heuristic to
transform the generated pushback into a feasible pushback. This heuristic is described
in the next section.

Phase 2: Accounting for production constraints


Since the production constraints considered in this paper stipulate that the amount of
ore produced should not exceed a given upper bound, one way of obtaining a feasible
pushback is to remove some ore blocks from the pushback generated in Phase 1. In order
to maintain slope constraints, it may be required to remove some waste blocks as well.
For this purpose, we use a sequential heuristic procedure where at each iteration a block
is selected to be removed from the current pushback. Let us analyse a typical iteration.
Consider the set of ore blocks having no successor in the current pushback. If this set
is empty, then eliminate the requirement that the blocks considered should be ore blocks.
Once a set of candidate blocks is identified with this approach, the candidate blocks
are ranked in decreasing order of their tonnage and the top ranking block is selected.
Ties are broken by choosing the block with the lowest grade.  Note that the logic behind
this hierarchy of selection criteria aims primarily at reducing the surplus of the ore
produced to the most extent possible and secondarily at decreasing as less as possible the
pushbackWs net present value.  Now suppose that the set of block candidates contains
ore blocks. If the total tonnage of these blocks is lesser or equal than the surplus in the
amount of ore, then all candidate blocks are removed simultaneously and a new iteration
is performed. This allows reducing the computational time, and thus improving the
efficiency of the heuristic.
The process is repeated until a pushback satisfying the production constraints is
obtained.  Note that this heuristic can easily be adapted to deal with multiple constraints.
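A minimal version of this repair step might look as follows; it assumes per-block tonnage, grade and successor lists (blocks that can only be mined after the given block) and mirrors the candidate-set logic described above without claiming to reproduce every detail of the implementation.

```python
def repair_pushback(pushback, ore_tonnes, grade, successors, ore_cap):
    """Greedily remove blocks until the pushback's ore tonnage fits the cap.
    Candidates are blocks with no successor left inside the pushback; ore
    blocks are preferred, ranked by high tonnage then low grade."""
    pushback = set(pushback)
    surplus = sum(ore_tonnes.get(b, 0.0) for b in pushback) - ore_cap
    while surplus > 0:
        frontier = [b for b in pushback
                    if not (set(successors.get(b, [])) & pushback)]
        if not frontier:
            break
        candidates = [b for b in frontier if ore_tonnes.get(b, 0.0) > 0] or frontier
        cand_ore = sum(ore_tonnes.get(b, 0.0) for b in candidates)
        if 0 < cand_ore <= surplus:
            pushback -= set(candidates)        # drop the whole candidate set at once
            surplus -= cand_ore
        else:
            # remove the single highest-tonnage candidate (ties: lowest grade)
            b = max(candidates, key=lambda x: (ore_tonnes.get(x, 0.0),
                                               -grade.get(x, 0.0)))
            pushback.discard(b)
            surplus -= ore_tonnes.get(b, 0.0)
    return pushback
```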

case study
Description of the deposit
The proposed approach for ultimate pit limit and pushback design was applied at a
copper-gold deposit. The deposit is located in an archean greenstone belt. The region
consists predominantly of mafic lavas with lesser amounts of intermediate to felsic
volcaniclastics. The geological database consists of 185 drillholes with 10 m down-the-hole
composites in a pseudo-regular grid of 50 m x 50 m, covering an approximately rectangular
area of 1600 x 900 m2. Using the geological information available, one mineralisation
domain is defined and modelled through a geostatistical study. The ordinary kriging
method is used to generate the orebody model of the deposit [7]. The mineralised zone
contains 2130000 blocks with a block size of 20 m x 20 m x 10 m.
To calculate the ultimate pit limit and pushback design, the economic values of
the individual blocks were calculated using the following equations:

(1)

where,

(2)

where MC is the mining cost, PC is the processing cost, Ti is the block tonnage, Gi is the
block grade and REC is the recovery. To calculate the block economic values, the economic parameters
from Table 1 were used.

Table 1 Economic parameters

Copper price (US$/lb) 2.0

Selling cost (US$/lb) 0.3

Mining cost ($/tonne) 1.0

Processing cost ($/tonne) 9.0

Processing recovery 1.0
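Equations (1) and (2) are not reproduced in this text, so the sketch below uses a generic block economic value formula consistent with the parameters named above and in Table 1; treat it as an illustrative assumption, not the exact expressions used in the study.

```python
def block_economic_value(tonnage, grade_pct,
                         price=2.0, selling_cost=0.3,           # US$/lb (Table 1)
                         mining_cost=1.0, processing_cost=9.0,  # $/tonne (Table 1)
                         recovery=1.0, lb_per_tonne=2204.62):
    """Generic copper block value: net metal revenue minus mining and
    processing costs if treated as ore, otherwise mining cost only."""
    metal_lb = tonnage * (grade_pct / 100.0) * recovery * lb_per_tonne
    value_as_ore = metal_lb * (price - selling_cost) \
        - tonnage * (mining_cost + processing_cost)
    value_as_waste = -tonnage * mining_cost
    return max(value_as_ore, value_as_waste)
```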

Pushback design
To generate the constrained pushbacks, the directed graph is constructed using the
block economic values of the orebody block model. The block model was generated by the
ordinary kriging algorithm using the available drillhole data. The estimation of
the block model was performed within the mineralised zone. Some air blocks of zero
grade are added to the mineralised zone to form a regular 3-d orebody model. A
directed graph is constructed, where ore blocks are connected with the source node and
waste blocks are connected to the sink node, as described in the methodology. To maintain
the slope constraints, infinite-capacity arcs are formed from underlying blocks to overlying
blocks. The slope angle of the study mine is 45°; thus, infinite-capacity arcs are directed
from each underlying block to its nine overlying neighbour blocks. A high positive number is
chosen to represent the infinite arc capacities. To generate pushbacks, the initial λ and
∆λ are chosen as 0.01 and 0.005, respectively. The push-relabel minimum cut algorithm is
applied in this paper for its computational efficiency [8] . The constrained pushbacks were
then generated using the proposed algorithm. The ore production target constraint
is considered in this paper for pushback design, with an ore production target of
7500000 tonnes. The algorithm stops when the total ore production exceeds this
value, and the pushback generated is considered as the first pushback. After
generating the first pushback, all the blocks falling inside it are deleted from
the 3-d orebody model and the same procedure is followed to generate the
next pushback. The pushback generation algorithm terminates when no more blocks
fall inside the pit. In our case study, eight pushbacks are generated while obeying
the ore production constraint. Figure  ➌ shows two sections of the pushbacks designed
using the described method. Figure  ➍ shows the ore production from the different pushbacks. It
is observed from the figure that production is constant over the pushbacks, which reveals
that our algorithm can successfully constrain the production target while generating the
pushbacks. It is also observed from the figure that in the last pushback the production
drops drastically. The reason for this sudden decrease in production in the 8th pushback
is that there are no blocks left in the remaining 3-d block model which
can profitably be extracted from the mine. Therefore, the union of the blocks
assigned to any one of the pushbacks forms the ultimate pit of the mine. The ultimate
pit for the same sections as in Figure  ➌ is presented in Figure ➎. The number of blocks
inside the ultimate pit is 16087. The numbers of ore blocks and waste blocks inside the
ultimate pit and the stripping ratio, along with the undiscounted cash flow, are presented in Table 2.

Figure 3 Two sections (Section 39 and Section 41) of the designed pushbacks.



Figure 4 Ore production from different pushbacks.

Figure 5 Two sections (Section 39 and Section 41) of the ultimate pit.

Table 2 Ore, waste and cash flow of ultimate pit

Number of ore blocks 5001

Number of waste blocks 11086

Stripping ratio 2.217


Undiscounted cash flow M$ 349

Production scheduling
For the economic analysis of the mine, the Net Present Value (npv) needs to
be calculated. For that purpose, a year-wise mine production schedule has to be
assessed subject to constraints such as yearly ore production, waste production and total
tonnage handled per year. In this paper, we have only considered the ore
production constraint for the year-wise production scheduling. Since we have generated
our pushbacks with an ore production constraint, we can assume that each pushback
will be extracted in a single year. Therefore, the life of the mine will be eight
years. The discount rate considered in this paper is 10%. The cumulative metal quantities,
cumulative ore quantities and cumulative npv are presented in Figure   ➏ to Figure ➑.
We have already seen from the pushback design that our schedule meets the
production target exactly throughout the life of the mine. It is also observed from
these three figures that the ore and metal production and the npv generated in the 8th
year are negligibly small. A strategic decision has to be made by management as to
whether the 8th pushback should be extracted. It can be concluded that our
constrained pushback design can successfully be used for production scheduling if
the ore production target is the only scheduling constraint. However, other constraints,
such as a waste production target or a total tonnage extraction target, can also be
implemented with our approach for pushback design as well as production scheduling with
slight modifications of the proposed approach, which is the topic of ongoing research at the cosmo
Mine Planning Laboratory.
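With each pushback treated as one production year, the discounting described above reduces to the usual NPV sum, as in the short sketch below; the cash flows are placeholders rather than the case-study values.

```python
def schedule_npv(yearly_cash_flows, discount_rate=0.10):
    """Discount yearly cash flows (pushback k extracted in year k) at 10%."""
    return sum(cf / (1.0 + discount_rate) ** year
               for year, cf in enumerate(yearly_cash_flows, start=1))

# Hypothetical cash flows in M$ for eight pushbacks
print(round(schedule_npv([55, 52, 50, 48, 45, 42, 40, 2]), 1))
```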

Figure 6 Cumulative ore productions from the case study mine.

Figure 7 Cumulative metal productions from case study mine.



Figure 8 Cumulative NPV generated from case study mine.

To test the performance of our algorithm, a comparative study was performed against
the pushbacks generated by conventional practice [9] . The npv generated by the
conventional approach is compared with that of our proposed method. The npv from the
conventional approach is 216 M$ [10] , whereas the proposed approach generates an
npv of 257 M$. The results reveal that the proposed approach can generate 16%
more npv than conventional practice for the case study mine.

conclusions and future work
In this paper, ultimate pit limit and pushback design were performed by combining
a minimum cut graph algorithm with a heuristic algorithm. The proposed method shows that
constrained pushback design can be performed while exactly obeying the constraints. The
proposed approach uses the push-relabel graph cut algorithm, which is computationally
faster than the L-G algorithm, along with an efficient heuristic. The algorithm was tested
for production scheduling on a copper deposit with more than 0.2 million blocks. The production
scheduling problem of such a large deposit, which is impossible to solve by an integer programming
formulation using any commercial solver, was solved approximately within an hour with
our algorithm. Therefore, the constrained optimisation problem can be solved efficiently
by the proposed algorithm for large orebody models with little CPU time. Although
our algorithm is not optimal over the entire deposit, the pushback design and
production scheduling are optimal for each individual pushback generation problem.
The example presented herein is based only on ore production constraints; the same
method can easily be implemented to incorporate other constraints as well. In addition,
multiple orebody models can also be incorporated to quantify the uncertainty
of the pushback design and production scheduling, which is ongoing research
in our laboratory. A detailed analysis of CPU time and of the optimality of the solution,
compared with integer programming results, will also be carried out.


references
Lerchs, H. & Grossmann, I. F. (1965) Optimum Design of Open Pit Mines. Canad. Inst. Mining Bull., Vol.
58, pp. 47–54. [1]

Picard, J. C. (1976) Maximal Closure of a Graph and Applications to Combinatorial Problems. Management
Science, Vol. 22, pp. 1268–1272. [2]

Hochbaum, D. S. & Chen, A. (2000) Performance Analysis and Best Implementations of Old and New
Algorithms for the Open-Pit Mining Problem. Operations Research, Vol. 48, pp. 894–914. [3]

Dagdelen, K. & Johnson, T.B. (1986) Optimum Open Pit Mine Production Scheduling by Lagrangian
Parametrization. In Proceedings of the 19 th apcom Symposium, ed., R.V. Ramani, Publ. Soc. Of
Mining Eng., pp.127–141. [4]

Seymour, F. (1995) Pit Limit Parametrization from Modified Three-dimensional Lerchs-Grossmann
Algorithm. sme Transactions. Vol. 298, pp. 1–11. [5]

Tachefine, B. & Soumis, F. (1997) Maximum Closure on a Graph With Resource Constraints. Computers
Ops. Res. Vol. 24(10), pp. 981–990. [6]

Goovaerts, P. (1997) Geostatistics for Natural Resources Evaluation (Applied Geostatistics Series). Oxford
University Press, New York, pp. 483. [7]

Goldberg, A. (1985) A new Max-Flow algorithm, Technical report mit/lcs/tm-291, Laboratory of
Computer Science, mit, USA. [8]

Four-X Strategic Planning Software for Open Pit Mines (1998) Reference Manual. Whittle Software,
Melbourne, p. 385. [9]

Albor, F. R. C. & Dimitrakopoulos, R. (2009) Stochastic Mine Design Optimization based on Simulated
Annealing: Pit limits, production schedules, multiple orebody scenarios and sensitivity analysis. imm
Transaction, Mining Technology, Vol.118 (2), pp. 79–90. [10]
Robust Mine Scheduling with
Parametric Regret Minimisation:
Method and Example

abstract
Huan Xu
University of Texas, USA

Roussos Dimitrakopoulos
Putra Manggala
McGill University, Canada

Decision making under uncertainty in mine design and
production scheduling is a common critical issue for mining
ventures. Minimising the so-called parametric regret is an
approach examined here to support decision making. Parametric
regret is defined as the performance gap between the “obtained
solution” and the “optimal solution” and can be used to formulate
a minimal parametric regret mine production schedule. This
framework is computationally intensive, even if uncertainty
is defined through a finite number of representations (orebody
models); however, it is attractive as it leads to a schedule which has
a robust maximum npv in the presence of grade uncertainty. The
computational complexity as well as key aspects of this framework
is discussed in this paper through an example using data from a
copper deposit.

introduction
Long-term mine scheduling consists of finding an optimal production plan while satisfying
certain constraints imposed by physical and geological conditions, policies, flexibilities
to respond to market demand changes and the operational mining approach. Here, the
‘optimality’ is defined by management objectives that typically include maximising the
monetary value of the mining project, meeting customer expectations and guaranteeing
a safe operation. The grade or metal content of mining blocks is critical in formulating
the corresponding optimisation problem. In practice, however, exact grade values are
not available; instead, they are inferred from a finite number of drilling samples and
hence may differ from the true values. This is often called grade uncertainty and in situ
variability.
Traditional mine planning optimisation approaches are based on a single estimated
model of the orebody, which is unable to account for in situ variability and uncertainty
associated with the description of the orebody [2] , because of the non-linearity introduced
in the optimisation. In contrast to this deterministic optimisation approach, a different
set of techniques under the term conditional simulation [7] provides tools that directly
address the inherent uncertainty of the mine scheduling problem. Based on drill hole
data and their statistical properties (e.g. expected value and joint variance), conditional
simulations generate several models (representations) of a deposit, each reproducing
available data and information, statistics and spatial continuity, that is, the in situ
variability of the data.
It has been well documented that mine planning which ignores grade variability and
uncertainty can lead to unsatisfactory performance [3, 4, 10] . For example, it has been
shown that the npv of the conventionally-generated schedule is often over-estimated [3] ,
specifically, using simulated representations of the orebody to assess this npv shows that
the most likely npv to be materialised may be 25% lower than the forecasted one. It is
also well known that optimising the mine scheduling based on multiple representations
significantly increases the performance. It is shown that a multiple-representation based
production scheduling approach using simulated annealing applied to a gold mine results
in a 28% increase in npv as compared to the conventional approach [6] . A similar order of
improvement is also demonstrated using this approach at a copper deposit [8] .
Stochastic integer programming or sip [1] is often used in a simulation-based analysis.
In sip, it is implicitly assumed that the uncertain parameters follow an underlying
probability distribution which is roughly approximated by the empirical distribution of the
simulated representations. In this paper, we introduce a new framework which explicitly considers
the in situ variability and relaxes the probability distribution assumptions of sip. In
particular, we extend the representation-based mine scheduling approach by minimising
the performance gap (hereafter referred to as ‘parametric regret’) between the candidate
solution and the ‘almighty’ solution. Here, almighty solution stands for an adaptable
solution which observes the complete and perfect grade information and can be regarded
as the ultimate goal of a mine scheduling problem. Because perfect information is not
available in practice, the almighty solution is not admissible, and thus we
must be satisfied with the best approximation we can muster. However, an admissible
solution minimising the performance gap from this approximate almighty solution will
perform reasonably well under uncertainty for the purposes of forecasting and decision-
making in mine planning.
This paper next presents the parametric regret minimisation method in the context
of mine scheduling. Then a proof of concept is presented using a real data set and lastly,
possible topics for future work are outlined.

methodology
We start by considering the traditional integer programming formulation [9] in
Formulation 1. Without loss of generality, suppose that the ultimate pit contains
blocks indexed by {1, ..., N} and is to be mined within periods indexed by {1, ..., T}. The
objective function and constraints are fully described in Formulation 1 and their detailed
explanations are as follows:

• (1) is the objective function, which is the Net Present Value (npv) of the schedule. This
is decided by the X n,t 's, where X n,t is a 0–1 decision variable equal to 1 if in period t, the
n- th block is scheduled to be mined or equal to 0 otherwise. Similarly, C n,t is the npv
generated by the n- th block if it is mined in period t.

• (2) and (3) are grade blending constraints, which say that the average grade of the
material sent to the mill has to be less than or equal to some fixed grade, G max and
more than or equal to some fixed grade G min for every period.

• (4) are reserve constraints, which say that any block cannot be mined more than once
across different periods.

• (5) are processing capacity constraints, which say that tonnage of ore processed is at
most the maximum processing capacity PC max and at least some fixed PC min for every
period.

• (6) are mining capacity constraints, which say that the total amount of material (both
waste and ore) to be mined is at most the equipment capacity MC max and at least some
fixed MC min to ensure a balanced extraction between ore and waste for every period.

• (7) are slope constraints, which say that for an n- th block to be mined in period t, the
set of its overlying blocks indexed by {1, ..., Y} must be mined at any period between
the first and t-th.

Formulation 1:

Maximise:

(1)

Subject to:

(2)

(3)

(4)

(5)

(6)

(7)
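Because the typeset equations are not reproduced in this text, a standard form consistent with the bullet descriptions above is sketched here; the ore tonnage o_n, total tonnage w_n and the index y over overlying blocks are notation introduced for the sketch, and the grade blending constraints (2)-(3), which bound the average mill grade between G_min and G_max in every period, are omitted for brevity.

```latex
% Sketch of Formulation 1, reconstructed from the bullet descriptions above.
\begin{align*}
\max \;& \sum_{t=1}^{T}\sum_{n=1}^{N} C_{n,t}\,X_{n,t}
   && \text{(1) NPV objective}\\
\text{s.t. }
 & \sum_{t=1}^{T} X_{n,t} \le 1 \quad \forall n
   && \text{(4) reserve}\\
 & PC_{\min} \le \sum_{n=1}^{N} o_n X_{n,t} \le PC_{\max} \quad \forall t
   && \text{(5) processing capacity}\\
 & MC_{\min} \le \sum_{n=1}^{N} w_n X_{n,t} \le MC_{\max} \quad \forall t
   && \text{(6) mining capacity}\\
 & X_{n,t} \le \sum_{\tau=1}^{t} X_{y,\tau} \quad \forall t,\ \forall y \text{ overlying } n
   && \text{(7) slope}\\
 & X_{n,t} \in \{0,1\}.
\end{align*}
```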


The traditional formulation (Formulation 1) and its variants with respect to constraints are
quite capable of tackling open-pit mine scheduling when there is only one representation of
the orebody, i.e. the set of constraints is a certificate that the schedule generated is valid
(usable in practice) and the objective function is a certificate that the schedule is optimal
with respect to the Net Present Value. However these properties are only achieved if in
reality this representation is indeed realised. As we have discussed in the introduction,
it is reasonable to assume that we have a set of plausible representations of the orebody,
denoted by S = {S1, ..., SK}. For each of the orebody representations, we can use Formulation 1
to generate an optimal schedule. We then need to make use of these schedules and in
addition, concoct a variant of Formulation 1 which produces a schedule that is robust with
respect to the multiple representations of the orebody, i.e. the schedule needs to be valid
if any of these representations is realised while still achieving the largest npv possible.
Intuitively, we can think of this robust schedule as learning: as it witnesses more and
more representations and their different optimal schedules, it evolves to be as similar
as possible to each of these schedules with respect to being both valid and optimal. Of
course, the robust schedule cannot be exactly the same as all of the optimal schedules as
they are different from each other, and it “regrets” more as it is further away from each
of the optimal schedules. Thus the robust schedule must minimise this regret.
The constraints of Formulation 1 may be parametrised according to the different
representations and thus create another integer programming formulation. Indeed,
this is natural since we may parametrise the variables according to the representations.
The objective function and constraints of the regret minimisation are fully described
in Formulation 2 and their detailed explanations are as follows:

• (1) is the objective function, which is the regret that we would like to minimise, and
is defined by the constraint in (2) . Upon solving for the optimal schedule for
each orebody representation S k ∈ S, we obtain K optimal schedules, one for each S k.
Here, c n,t is also indexed by the representation it comes from. As in Formulation 1,
X n,t is the 0–1 decision variable which will give the regret minimisation
schedule. Thus this constraint pertains to how close the robust schedule npv is to the
optimal schedule npvs.

• (5) and (8) are exactly the same as those in Formulation 1.

• The variables in (3), (4), (6), and (7) are parametrised by the representations, as they are
different for each representation, and thus these constraints must hold for all the representations.

Formulation 2:

Minimise:

(1)

Subject to:

(2)

(3)

(4)

(5)

(6)

(7)

(8)
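One common way to write such a regret objective, consistent with the bullet descriptions above, is the minimax form sketched below; here Z*_k denotes the optimal npv obtained for representation S_k in the first step, and this particular form is an assumption about the intent of (1)-(2) rather than a reproduction of the original equations.

```latex
% Sketch of a minimax parametric-regret formulation (an assumed reading).
\begin{align*}
\min_{X,\,r}\;& r\\
\text{s.t. }& r \;\ge\; Z_k^{*} - \sum_{t=1}^{T}\sum_{n=1}^{N} c_{n,t}^{k}\,X_{n,t}
   \qquad \forall\, k = 1,\dots,K,\\
 & X \ \text{feasible for every representation } S_k \ \text{(constraints (3)--(8))}.
\end{align*}
```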

Here is the whole procedure:

Algorithm 1 (Parametric Regret Scheduling Algorithm):


• Generate a set of multiple representations S={S 1, ..., S K}.

• Generate an optimal schedule for each representation S i ∈ S by solving the traditional mip (Formulation 1).

• Solve the regret minimisation mip (Formulation 2).

results and discussion


Case study: Parametric regret scheduling of a low–grade
disseminated copper deposit
The deposit is located in a typical archean greenstone belt. The region consists
predominantly of mafic lavas with lesser amounts of intermediate to felsic
volcaniclastics. The geological database consists of 185 drill holes with 10 m copper
composites in a pseudo-regular grid of 50 m x 50 m covering an approximately rectangular
area of 1600 x 900 m 2. Using the geological information available, one mineralisation
domain is defined and modelled through a geostatistical study.
Algorithm 1 used to generate the mine production schedule first requires the definition
of the ultimate pit and pushbacks. The nested pit implementation of the Lerchs-Grossman
algorithm for pit optimisation is used here to define a final pit and pushbacks. This
algorithm requires as an input a single representation of the orebody, which is created
using ordinary kriging on a 20 x 20 x 10 m3 block support. The associated economic and
technical parameters are given in Table 1 .

Table 1 Economic parameters

Copper price (US$/lb) 1.9
Selling cost (US$/lb) 0.4
Mining cost ($/tonne) 1.0
Processing cost ($/tonne) 9.0
Processing recovery 0.9

Using the parameters in Table 1 and the conventionally estimated orebody model, a
set of nested pits is generated. Pit 17 is selected as the ultimate pit limit as it corresponds
to the maximum net present value. There are 15818 blocks inside the pit limits. In this


study, an ore processing capacity of 7.5M tonnes per year is used. The yearly maximum
mining capacity is set to 28M tonnes although there is no constraint to guarantee a
constant material movement over the lom. Twenty realisations of the deposit are then
generated with the direct block simulation method [5]. To obtain a simulated model,
the orebody is divided into blocks of 20 x 20 x 10 m³ within the mineralised domain.
Each block is then represented by 10 x 10 x 1 nodes. This number of nodes, 100 per block,
is large enough to ensure that the actual block scale variability is reproduced by the
simulated representations. These stochastic orebody representations are used as an input
to Algorithm 1.
As per the previous section, 20 optimal schedules are computed using Formulation 1
and then the regret schedule is computed by Formulation 2. A risk analysis
is performed for this regret schedule against the 20 representations, as depicted in
Figures ➊ to ➌. The following statistics are plotted: minimum, maximum, median
and mean. The risk profiles show that the regret schedule is quite robust against the
variations represented by the twenty realisations. In the profiles we see that deviations
from the mean are minimal, as the medians are mostly superimposed on the mean.

Discussion
In light of robustness, it is interesting to find out whether it is empirically more costly
to consider many orebody representations. It turns out that the empirical running time
appears not to depend on the number of representations used, at least at the
order of magnitude considered here, as shown in Figure ➍. This is a very promising result,
since it means we can consider as many orebody representations as we are permitted
when computing the regret solution.

Figure 1 Waste risk profile.



Figure 2 Cumulative ore risk profile.

Figure 3 Cumulative NPV risk profile.

Figure 4 Empirical runtime for regret minimisation.


conclusions
It has been shown that the regret minimisation approach for mine scheduling is robust
and is a promising alternative to other methods which consider multiple orebody
representations. Moreover, empirical simulations show that the running time of regret
minimisation is likely to be a constant function of the number of orebody representations.
It will be interesting to compare this approach to conventional scheduling and
to other methods which consider multiple orebody representations. In future studies we also
hope to assess the conservativeness of the solution given by this algorithm, the number of
representations needed to achieve a stable regret solution, and
a more efficient mip formulation or algorithm to compute the regret solution.

references
Birge, J. R. & Louveaux, F. (1997) Introduction to Stochastic Programming. p. 447, Springer: Berlin. [1]

David, M. (1977) Geostatistical Ore Reserve Estimation. Elsevier: Amsterdam. [2]

Dimitrakopoulos, R., Farrely, C. & Godoy, M. (2002) Moving forward from traditional optimisation:
Grade uncertainty and risk effects in open pit design. Transactions of the Institutions of Mining
and Metallurgy, Mining Technology, 111: A82–A87. [3]

Dowd, P. A. (1997) Risk in minerals projects: Analysis, perception and management. Transactions of the
Institutions of Mining and Metallurgy, Mining Technology, 106: A9–A18. [4]

Godoy, M. (2003) The Effective Management of Geological Risk in Long-term Production Scheduling of Open
Pit Mines, PhD thesis, The University of Queensland, Brisbane, p. 256. [5]

Godoy, M. & Dimitrakopoulos, R. (2004) Managing risk and waste mining in long-term production
scheduling. Society for Mining, Metallurgy, and Exploration Transactions, 316: pp. 43–50. [6]

Goovaerts, P. (1997) Geostatistics for Natural Resources Evaluation. Oxford University Press: New
York. [7]

Leite, A. & Dimitrakopoulos, R. (2007) A stochastic optimisation model for open pit mine planning,
application and risk analysis at a copper deposit. Transactions of the Institutions of Mining and
Metallurgy, Mining Technology, 116(3): A109–A118. [8]

Ramazan, S. & Dimitrakopoulos, R. (2004) Recent applications of operations research in open pit mining.
Society for Mining, Metallurgy and Exploration Transactions, 316: pp. 73–78. [9]

Ravenscroft, P. J. (1992) Risk analysis for mine scheduling by conditional simulation. Transactions of the
Institutions of Mining and Metallurgy, Mining Technology, 101: A104–A108. [10]
Meeting Mill Capacity Using
Dynamic Cut-Off Grade: Application
at Escondida Copper Mine, Chile

abstract
Víctor vidal Open pit mine design and production scheduling is a complex
Roussos dimitrakopoulos operation aiming to generate the optimum mining sequence in
McGill University, Canada terms of total economic value over the life-of-mine. Conventional
methods used to find the sequence of extraction, or pushbacks,
and optimal pit limits often fail to meet mill capacity constraints.
The delay in meeting production targets represents additional
costs for the mining project, including idle capacity costs, costs
for partial repayment of loans and penalties imposed by clients.
Related to mill capacity is the selection of an optimal set of cut–
off grades over the life-of-mine. Common optimisation methods
consider static, predefined cut-off grades. This static selection
can lead to a sub-optimal solution, given that the cut-off grade is an
economic-based indicator. This paper discusses a method that
generates the maximum expected profit and dynamically defines
the optimal cut-off grade for each mining period or pushback
over the life-of-mine, thus deciding whether a block is ore or
waste during the optimisation process. The practical aspects of
the method are demonstrated in an application at the Escondida
copper mine, Chile.

introduction
The main difficulty in mine production optimisation and scheduling is deciding which
areas are extracted in a given period, with the goal of maximising total profits over the
planning horizon of N time periods (years, months, weeks, etc.). A schedule is deemed
feasible if it satisfies a number of specific constraints such as orderly extraction, mining
equipment capacity, milling capacity, refining or marketing capacity, grades of mill feed
and concentrates, and stability of the pit slopes, among other physical, operating, legal
and policy limitations [4] . Hence, pit design can be defined as an operation based on the
determination of the most profitable mining sequence, where the primary objective is to
generate a mine's optimal pit limits and the large-scale extraction sequences (pushbacks,
cutbacks or phases) by optimising the project's Net Present Value (npv).
Several methods exist aiming to produce an optimal design [8, 11] . The most common
and commercially used algorithm is the Lerchs-Grossmann or l-g algorithm [7] based on
graph theory, which guarantees finding the optimum pit size in three dimensions and is
commercially implemented as the nested pit approach [11]. This approach essentially
scales the economic values of the blocks representing the deposit. These smaller pits can
be grouped into possible pushbacks by selecting the group of pits that meets a given set
of constraints [11]. All existing optimisation methods can be used to calculate pushbacks, but can
generate ‘gap’ problems [11]. A gap, in this paper, refers to a large-scale difference in the
amount of material sent to the mill in consecutive pushbacks, which is impractical for
the mining process. This miscalculation will have a negative effect on the plant; given
that the mill capacity is not satisfied, the processing cost will increase. As the amount
of material sent to the mill is less than the amount originally planned, the associated
mining cost increases as parts of the mining fleet remain idle (drills, shovels, trucks,
etc). On the other hand, if the calculated tonnage of ore sent to the mill is larger than
expected, the mining equipment will not be able to move the material, thus delaying
the extraction of ore in the following period. This also has a severe repercussion on the
mill, as it will not be able to process that amount of material in the projected schedule,
which leads to an increase of costs by delaying the extraction and the processing of ore in
the next period. One solution is the construction of a stockpile; however, this may not be
feasible in all cases (i.e. not enough space, environmental problems, stability problems,
etc). Delays are not admissible due to the discounted time value of money. The maximum
profit is discounted in time, therefore if a certain amount of material, which represents
profit in a specific period, is not processed according to the schedule, then the operation
will yield a suboptimal solution [10]. A possible solution to this problem is for the mine
planner to rework the pushback design by removing some blocks from the potentially
bigger pushback or adding ore blocks to a pushback that fell short in tonnage. However,
this decision of subtracting or adding blocks leads to a suboptimal global solution in
terms of npv.
Another crucial decision is the selection of a cut-off grade schedule or policy [5] . The
cut-off concept is an economic-based criterion to discriminate between waste and ore
in a mineral deposit. The classical heuristic approach to determine optimum cut-off
grades aims to maximise the project's cumulative cash flows under mining, processing
and refining constraints. The idea of this method is to use higher cut-off grades in the
initial periods of the mining operation, creating higher cash flows in the early years.
This formulation needs to include the fixed costs associated with not getting the future
cash flows quicker, due to the cut-off grade decision taken in the present. This implies
that the cut-off grade will be higher in the initial periods of the project [5] . Traditional

methods of pushback design and scheduling use a static cut-off grade that is essentially
determined independently of the mining sequence, capabilities of the mining system, and
other operational constraints. The problem is that under the constraints of the mining
system, these factors are influenced by the choice of the cut-off, and hence there is an
interaction between these elements. The selection of the cut-off grade should be dynamic,
considering this parameter as a function of the state of the mining system at the time
a decision is to be made as well as the future effects of this decision. The profits of a
mining operation can be substantially affected by the choice of a cut-off grade and hence
this is an important consideration [4] .
This paper presents a case study based on Escondida Mine data in which a new method
for the pushback design problem is used. The formulation of
the problem as an ip was presented in [9], where a rounding method known
as pipage rounding was implemented to produce a dynamic cut-off grade.

cut-off grade theory


It is well known that the quality of ore can vary across the face of a deposit. It is necessary
to choose which ore is rich enough in metal to warrant extraction and which material
is left behind as waste rock. The problem of finding this limit is known as the cut-off
grade problem. Price of the metal, costs of extraction and processing, and technological
limitations in the operation are some of the factors that have direct influence in the
cut-off grade, hence this concept is an economically-based criterion of discrimination.
Intuitively, if the price of metal increases, the cut-off grade should do the opposite; in this
case it is possible to mine more material given that the reserves of the deposit increase.
If the costs associated with mining increase, the cut-off grade will increase and there
will be less material to mine. However, no matter how high the price of metal is or how
low the costs of the mining process are, there will be always a lower bound for the cut-off
grade given by the technology [5] . For every processing stage there exists an associated
cut-off grade (for example extraction and mill, concentration plant and refinery, etc).
There are several methods to determine the cut-off grade. Most processes use a break–even
cut-off grade criterion, defining ore as material that will pay mining and processing
costs; this analysis is usually made without regard to the state of the mining system.
The cut-off grade is calculated independent of the mining sequence, capabilities of the
mining system, and other operational constraints [4]. The general characteristics of
traditional cut-off grades are that they aim to maximise the undiscounted profit; they
are constant unless prices and costs change during the scheduling (static); and
they do not consider the grade distribution of the deposit.
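A simple numerical illustration of such a break-even criterion is sketched below; the prices and costs are of the same form as those in Table 1 of the case study further on, while the recovery value is an assumption introduced purely for this illustration.

# Break-even cut-off grade for a copper block (illustrative numbers only).
LB_PER_TONNE = 2204.62   # pounds of contained metal per tonne of metal

def breakeven_cutoff(price, selling_cost, processing_cost, mining_cost, recovery):
    # Grade (as a fraction) at which recovered metal value just pays the
    # mining and processing costs; price and selling_cost in US$/lb,
    # processing_cost and mining_cost in US$/tonne of ore, recovery a fraction.
    return (processing_cost + mining_cost) / (
        (price - selling_cost) * recovery * LB_PER_TONNE)

# e.g. 2.0 US$/lb price, 0.3 US$/lb selling cost, 4.3 US$/t processing,
# 1.8 US$/t mining and an assumed 85% recovery:
print(breakeven_cutoff(2.0, 0.3, 4.3, 1.8, 0.85))   # ~0.0019, i.e. about 0.19% Cu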
The conventional heuristic approach to determine optimum cut-off grades aims to
maximise the project's cumulative cash flows under mining (or npv), processing and
refining constraints. The idea is to use higher cut-off grades than proposed by the break-
even calculation in the earlier periods of the mining operation, creating higher cash
flows in the early years. This formulation needs to include the fixed costs associated with
not getting the future cash flows quicker due to the cut-off grade decision taken in the
present. This implies that the cut-off grade will be higher in the initial periods of the
project [5]. The opportunity cost introduced in the calculation of the cut-off grade is based
on the assumption that every deposit has a given npv associated with it at a given point in
time and that every tonne of material processed during a given period should pay for the
cost of receiving the future cash flows one period later. Another way to consider


this is that the opportunity cost should be considered as taking the low grade now when
higher grades are still available [6] . However, this approach assumes that the lifespan
of the mine is known, and it optimises the cut-off grade for a predetermined sequence.
Hence, to find a cut-off grade close to optimal, trial-and-error algorithms must be applied.
It is possible to extend the definition of cut-off grade not only to discriminate between ore
and waste, but to answer what to do with the material after removing it, such that the
profits of the project are maximised, subject to the constraints of the mining system. This
is influenced greatly by the state of the mining system and its parameters; the extraction
of one block should be influenced by the interaction of other blocks that surround it.
Therefore, if the objective is to maximise the profit given the extraction sequence, any
classification decision should consider the economics, capabilities of the system, assay
values of the material and other operational constraints. All these factors interact in
complex ways during the lifespan of the mine, so any variation they might present will
affect the cut-off grade. Hence there is a direct relation between the extraction sequence
and the cut-off grade. To maximise the cumulative discounted cash flows of the project,
the use of dynamic cut-off grades is widely accepted as an improvement of the break-even
cut-off grade approach. The ip formulation presented in this paper provides an alternative
to include the decision of mining a block and then decide whether or not to send it to
the mill. Hence the optimisation algorithm determines the dynamic cut-off grade. This
type of cut-off is a function of the state of the mining system at the time a decision is to
be made as well as the future effects of this decision, yielding a better mining
plan than the one possibly obtained using a static cut-off grade approach.

Scheduling mine planning: IP formulation


In an ip formulation of the open pit scheduling problem, every block is a variable.
Solving the problem directly can take a considerable amount of time. One approach
to this limitation is to solve, in a first stage, the lp relaxation and then use a second
stage method to convert the fractional result into an integer solution. In the approach
described below, the procedure relies on having an objective function with non-negative
coefficients. Maximising the profit obtained from sending ore to the mill minus the
cost of removing waste blocks is equivalent to maximising the profit from ore sent to
the mill plus the mining cost not spent for blocks that remain in the ground. Hence, it
is possible to reformulate the objective function to have non-negative coefficients. The
decision of whether or not to send a block to the mill is made during the optimisation,
in other words, a way to classify blocks as ore or waste is added to the model in a dynamic
way. The cut-off grade is not calculated explicitly in the method; however, it can be
obtained after the pushback is designed, from the blocks that were sent to the mill. In the
analytical description this is done using two types of variables: one indicating whether a
block remains in the ground (x i) and one indicating whether it is sent to the mill (y i).
The ip formulation for the design of pushbacks can be written analytically as follows.

Maximise

Σ i∈N p i y i + Σ i∈N c i x i

Subject to

Σ i∈N w i y i ≤ b (1)

x i ≤ x j for all (i, j) ∈ A(G) (2)

y i ≤ 1 − x i for all i ∈ N (3)

x i , y i ∈ {0, 1} for all i ∈ N (4)

Where x i is a decision variable that is equal to 1 if block i is left in the ground and to
0 otherwise. Similarly, y i equal to 1 implies that block i is removed from the ground and
sent to the mill, and if it is equal to 0, block i is not sent to the mill (it either remains in
the ground or goes to the waste disposal). The parameter c i represents the positive cost
of removing block i from the ground. The value p i is the profit obtained by sending block
i to the mill. Every block also has an associated ore tonnage, w i. The first constraint (1)
represents the mill capacity: the total tonnage removed and sent to the mill
cannot exceed b. The slope constraints (2) are defined by a directed graph G(V,A). An arc
(i, j) is in the set A(G) if block i must be removed before removing block j. The set N
represents the blocks in the orebody model. The third constraint ensures that blocks sent
to the mill are actually removed from the ground. Finally, constraint (4) defines x i and y i as binary variables.
The methodology used is to solve the lp relaxation of this problem and then round the
solution into an integer solution. The lp relaxation consists of changing the
integrality constraints (4) to 0 ≤ x i ≤ 1 and 0 ≤ y j ≤ 1 for all i, j ∈ N.
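A sketch of the relaxed model on a toy block model is given below. It is written against scipy purely as an illustration of the relaxation just described (the authors' implementation, as noted later in the paper, was coded in C#), and the three-block instance is hypothetical.

# LP relaxation of the pushback model, on a hypothetical 3-block example.
import numpy as np
from scipy.optimize import linprog

p = np.array([5.0, 0.0, 8.0])     # profit if block is sent to the mill
c = np.array([1.0, 1.5, 1.0])     # mining cost saved if block stays in the ground
w = np.array([10.0, 10.0, 10.0])  # ore tonnage per block
b = 15.0                          # mill capacity
arcs = [(0, 2), (1, 2)]           # (i, j): block i must be removed before block j

n = len(p)
# decision vector z = [x_1..x_n, y_1..y_n]; linprog minimises, so negate the objective
obj = -np.concatenate([c, p])
A_ub, b_ub = [], []
A_ub.append(np.concatenate([np.zeros(n), w])); b_ub.append(b)   # (1) mill capacity
for i, j in arcs:                                               # (2) slopes: x_i <= x_j
    row = np.zeros(2 * n); row[i], row[j] = 1.0, -1.0
    A_ub.append(row); b_ub.append(0.0)
for i in range(n):                                              # (3) y_i <= 1 - x_i
    row = np.zeros(2 * n); row[i], row[n + i] = 1.0, 1.0
    A_ub.append(row); b_ub.append(1.0)

res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0.0, 1.0)] * (2 * n), method="highs")    # relaxed (4)
x, y = res.x[:n], res.x[n:]
print("stay-in-ground x*:", x.round(3), " send-to-mill y*:", y.round(3))

In this toy instance the mill capacity is binding and the relaxation returns a fractional y, which is exactly the situation the rounding stage described next is designed to handle.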

Improving the solution to integer: Pipage rounding


Pipage rounding is a method that rounds an lp fractional solution to an integral solution
while preserving a specific ip constraint – in this case the mill capacity. The general
framework of pipage rounding relies on finding a non-linear function, F(y), that is equal
to the objective function of the ip, f(y), for integral points. After solving the lp relaxation
the solution y* is used to evaluate F(y*). The function F(y) is such that y* can be rounded
into integrality while preserving feasibility and strictly increasing the value of F(y).
If F(y*) approximates f(y*) suitably well, the rounded solution will be close to optimal.
Details of the method are described in [9] .
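A generic sketch of the rounding mechanics on a single knapsack constraint is given below. The auxiliary function F used in [9] is not reproduced here, so F is passed in as a parameter, and the code assumes (as the method requires) that along each rounding direction F attains its maximum at an endpoint; this is only an illustration of the technique, not the authors' implementation.

# Generic pipage rounding for one knapsack constraint sum_i w_i * y_i <= b.
def pipage_round(y, w, F, eps=1e-9):
    y = list(y)

    def fractional():
        return [i for i, v in enumerate(y) if eps < v < 1.0 - eps]

    frac = fractional()
    while len(frac) >= 2:
        i, j = frac[0], frac[1]
        # Moving by (+d/w_i, -d/w_j) keeps w_i*y_i + w_j*y_j (hence the knapsack
        # constraint) unchanged; go as far as possible in each direction so that
        # one coordinate hits 0 or 1, then keep the endpoint with the larger F.
        d_up = min(w[i] * (1.0 - y[i]), w[j] * y[j])
        d_down = min(w[i] * y[i], w[j] * (1.0 - y[j]))
        cand_up, cand_down = list(y), list(y)
        cand_up[i] += d_up / w[i];     cand_up[j] -= d_up / w[j]
        cand_down[i] -= d_down / w[i]; cand_down[j] += d_down / w[j]
        y = cand_up if F(cand_up) >= F(cand_down) else cand_down
        frac = fractional()

    # at most one fractional coordinate remains; rounding it down keeps feasibility
    for i in frac:
        y[i] = 0.0
    return [int(round(v)) for v in y]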

case study: application at escondida copper mine


The long term scheduling in this paper was applied to La Escondida Mine. The orebody is
the biggest porphyry copper deposit in the world, and it is located 170 km south-east of
Antofagasta, Chile. The deposit is formed by two major stages of sulphide and one stage
of oxide mineralisation. The supergene enrichment blanket of the deposit is defined
by chalcocite and minor covellite. The primary sulphide mineralisation is represented
by chalcopyrite, bornite and pyrite. Escondida is a classic open pit mining operation
processing sulphide and oxide ores. The mining fleet consists of nine loading shovels
fitted with 73 yd³ buckets and four shovels fitted with 50 yd³ buckets; 62 trucks have a capacity
of 240 tonnes and 21 with a capacity of 400 tonnes. The operation has 14 electric drills
plus one diesel drill. Bulk ammonium nitrate fuel-oil explosive is used for blasting [12] .


The processing stage is formed by two concentrator plants with capacities of 110,000 and
120,000 tonnes per day, respectively, an electro-winning plant to produce cathodes from oxide
and sulphide ore, and two pipelines that transport copper concentrate from the mine to
the filter plant.
The objective of the case study is to generate consecutive pushbacks using the ip
formulation presented previously. Given the orebody model and the economic values,
the first step is to find the solution of the lp relaxation. After computing the lp solution
the pipage rounding method is applied to find a feasible solution for the original
problem (i.e. an integral solution that meets the mill capacity). Each pushback is
generated subsequently one after another by subtracting the blocks that belong to the
current pushback from the orebody model. The resulting model is the new input for
the algorithm. This methodology is inherently greedy; each pushback is designed by
optimising the discounted cash flow value of the current pushback. It is possible that a
less greedy design increases the value of subsequent pushbacks by removing overburden.
Along with this, several other assumptions were made in this case study in order to
simplify the implementation of the method: only one processing plant was considered,
the ability to stockpile was ignored and the ore is represented by just one type of mineral.
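A compact sketch of this sequential construction is shown below; solve_lp_relaxation, round_to_pushback and remove_blocks are hypothetical placeholders for the two stages just described and for the removal of the selected blocks from the model.

# Greedy, sequential pushback generation as described in the text.
def design_pushbacks(block_model, mill_capacity, n_pushbacks,
                     solve_lp_relaxation, round_to_pushback, remove_blocks):
    pushbacks = []
    for _ in range(n_pushbacks):
        y_frac = solve_lp_relaxation(block_model, mill_capacity)       # fractional lp solution
        blocks = round_to_pushback(block_model, y_frac, mill_capacity) # pipage-rounded pushback
        pushbacks.append(blocks)
        # blocks of the current pushback become unavailable to later ones
        block_model = remove_blocks(block_model, blocks)
    return pushbacks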
It is important to mention that the orebody model used in this case study represents a
section of the Escondida porphyry deposit and it contains 33,840 blocks with dimensions
of 25 × 25 × 15 m (Figure ➊). The economic values of every block in the model
are calculated according to the parameters shown in Table 1.

Table 1 Economic and parameter values for the case study

Number of blocks 33,840
Block dimensions 25 x 25 x 15 m
Selling price 2.0 US$/lb
Mining cost 1.8 US$/tonne
Processing cost 4.3 US$/tonne
Selling cost 0.3 US$/lb
Density 2.7 tonne/m³
Mill capacity 16,000,000 tonnes
Slope angle 45°
Discount rate 10%

Figure 1 Easting section (67) of Escondida orebody model.

Using this information the algorithm was applied to the orebody and 13 cutbacks were
generated (Figures  ➎ and ➏). Each cutback contains the desired amount of ore to
feed the processing plant. The discrepancies (if any) between the ore in the
cutbacks and the requirements of the mill do not exceed the average tonnage of one ore
block, which is approximately 26,000 tonnes (Figures ➌ and ➍). This result leads to the
conclusion that the constraint in the ip formulation is honoured and the mill capacity is
met perfectly.

Figure 2 Projected ore tonnage scheduling.

Figure 3 Cumulative ore tonnage.

Figure 4 Easting section (up) and northing section (down) of the projected scheduling.

Figure 5 Projected scheduling for Escondida mine.

This scheduling configuration generates a cumulative profit in time of US$1.148 billion


(Figure ➏). However, there exist large amounts of waste that have to be removed in
the first periods. Later pushbacks have a significant decrease in the amount of waste
(Figure ➐). This is problematic for the estimated schedule in the first periods; it is
not possible to handle that amount of material in the projected time. One solution to
this operational problem is to design development phases prior to mining the ore. This


proposal can be easily implemented given the profits generated by this schedule; it is
feasible to assign additional initial starting investment, thus delaying the generation
of positive cash flows. An important issue to consider is that the main objective of this
exercise is to generate a sequence of pushbacks with the respective cut-off grade for each
one of them; operational restrictions such as mining width control
or mineability of the pushbacks were not considered. The limitations of the technique are given mostly by
the rounding method used to solve the ip problem. Pipage rounding works on knapsack
constraints representing a single mining process and it is not suitable for lower bound
constraints, as the ones necessary to control the waste extraction.

Figure 6 Cumulative cash flows for the generated schedule.

Figure 7 Projected waste removal scheduling.

In Figure   ➑, the cut-off grade generated by the optimisation procedure is presented.


The grade, as expected, decreases while the cutbacks are extracted. It is observed that
for the initial periods, the cut–off is considerably high. This might be the result of the
greedy way of designing the pushbacks, but the decreasing cut-off grade is the result of
the dynamic implementation. It is possible to take advantage of this result and propose a
technique to improve the actual solution. As it is assumed that there are no stockpiles and
just one processing plant for one type of mineral, the results can be improved by considering
these variables in the mining system. The stockpile will allow saving material in the
early periods to be processed in the later years, reducing the amount of waste and giving
options of processing material that can report profit in future years. If the option of
using a stockpile is considered, an extra variable should be included in the objective
function. In this case, along with considering the decision of mining a certain block, the
decision of sending that block to the mill or the stockpile has to be made. As the objective
function has to be modified a new non-linear function to apply pipage rounding needs
to be constructed and a knapsack constraint for the capacity of the stockpile has to be

considered in the problem. The new non-linear function has to have certain properties
specified in [9] so this becomes an interesting problem for further research. Another
problem that can be considered as future work is the implementation of the dynamic
cut–off grade approach in scheduling in a stochastic fashion.

Figure 8 Evolution of the cut-off grade during the life-span of the mine.

In terms of computational cost, the average time it took to solve each pushback was
less than eight minutes, including the solution of the lp relaxation and the improvement to
integrality using pipage rounding. The design of the last pushbacks took more time than
the first ones. A 3.9 GHz pc with 2.0 GB of ram was used; the algorithm was coded in
C# using gmp as the multiprecision library for evaluating the non-linear objective value.

conclusions
In this paper, a new approach for designing pushbacks was applied. This method handles
specific constraints, such as mill capacity, which in most optimisation problems are
considered extremely restrictive. The method also has the advantage of computing a
dynamic cut-off grade within the optimisation, thus improving the value of the mining project
by increasing the cumulative cash flows.
A schedule for a section of Escondida mine was generated in the case study using a
dynamic cut-off grade approach. The cumulative cash flows obtained in the proposed
schedule are significant, however the implementation of this design implies the removal
of great amounts of waste in the first periods. Development phases could be
used to alleviate this problem; however, this delays the extraction of ore and adds extra
initial investments. The high levels of cash flows obtained make this decision feasible.
The use of stockpiles can also improve the solution presented in the case study. If it is
desired to include a stockpile in the operation, an extra variable must be used, and the
modification of the objective function should be considered along with the definition of
a new non-linear function. This is an interesting problem for future research.
The greedy fashion of the proposed method to generate cutbacks is one of its major
drawbacks; hence, investigating the simultaneous generation of pushback
sequences may be beneficial in future work. Another problem that might be
interesting for future work is the consideration of stockpiles in the scheduling problem
using the dynamic approach to choose cut-off grades. Finally a stochastic fashion of
implementing the dynamic cut-off grade in open pit mining scheduling is under study.


acknowledgements
The work in this paper was funded by nserc cdr Grant 335696 and BHP Billiton, as
well as nserc Discovery Grant 239019. Thanks are in order to Brian Baird, Peter Stone, and
Gavin Yates, as well as Darren Dyck, for their support, collaboration, data, and technical
comments.

references
Cai, W. L. (2001) Design of open-pit phases with consideration of schedule constraints, Computer Applications
in the Minerals Industries, Xie, Wang and Jiang, pp. 217–222. [1]

Dagdelen, K. & Kawahata, K. (2008) Value creation through strategic mine planning and cut-off grade
optimization. Mining Engineering Vol. 60, pp. 39–45. [2]

Hochbaum, D. S. & Chen, A. (1999) Performance analysis and best implementations of old and new
algorithms for the open-pit mining problem. Operations Research, Vol. 48, No. 6, pp. 894–914. [3]

Johnson, T. B. (1969) Optimum open-pit mine production scheduling. A Decade of Digital Computing in
the Mineral Industry sme aime, Weiss A, pp. 539–562. [4]

Lane, K. (1964) Choosing the optimum cut-off grade. Colorado School of Mines Quarterly, Vol. 59,
pp. 811–824. [5]

Lane, K. (1988) The economic definition of ore: Cut-off grades in theory and practice. Mining Journal Books
Limited, London. [6]

Lerchs, H. & Grossman, I. F. (1965) Optimum design of open pit mines, Joint cors and orsa Conference,
Montreal, in Transactions cim, pp. 17–24. [7]

Meagher, C., Dimitrakopoulos, R. & Avis, D. (2007) Optimized open pit mine design, pushbacks and the
gap problem. cosmo Technical Report 1, pp. 25–51. [8]

Meagher, C., Dimitrakopoulos, R. & Avis, D. (2008) A new approach to constrained open pit pushback
design using dynamic cut-off grades. cosmo Technical Report 2, pp. 215–230. [9]

Whittle, J. (1990) Open pit optimization. Surface Mining, 2nd edition. Kennedy B A, Society for Mining,
Metallurgy and Exploration, pp. 470–475. [10]

Whittle, J. (1999) A decade of open pit mine planning and optimization-The craft of turning algorithms into
packages. 28th apcom Symposium, sme-aime, Golden, Colorado, pp. 15–24. [11]

Mining equipment survey 2007–2009, Chile-Peru-Argentina. Minería Chilena Magazine (2009). [12]
Understanding Real Options in
Mine Project Valuation: A Simple
Perspective

abstract
Luis martínez The recent downturn has shown that operating flexibility and
Joseph mckibben strategic adaptability are critical to the long-term success of
Xstract Mining Consultants, many resource companies. The early planning stage of a mining
Australia project (i.e., before completion of the feasibility study) typically
provides the greatest scope to explore alternatives, assess risk and
implement changes in order to minimise overall project costs
while maximising the project upside potential. Once ground has
been broken, the alternatives available to engineers and operators
diminish exponentially.
The objective of this paper is to provide an alternative technique
for project evaluation, which takes into account uncertainty
and risk, as well as managerial flexibility to respond to these
uncertainties. To achieve this outcome, this paper introduces real
options in a general context, and, demonstrates the viability of
the method as an alternative technique for strategic mine project
evaluation. A small disseminated gold deposit is used as case study
and evaluated under gold price uncertainty. The results indicate
that although a bit more complex to implement, real options
give a better overview of the mine project performance, giving
the mine analyst the flexibility to make decisions that minimise
the downside risk while maximising the upside potential.

introduction
The recent downturn has shown that operating flexibility and strategic adaptability are
critical to the long-term success of many resource companies. The early planning stage
of a mining project (i.e., before completion of the feasibility study) typically provides
the greatest scope to explore alternatives, assess risk and implement changes in order
to minimise overall project costs while maximising the project upside potential. Once
ground has been broken, the alternatives available to engineers and operators diminish
exponentially.
Discounted Cash Flows (dcf) and the associated Net Present Value (npv) techniques
have traditionally provided the major tools for project evaluation. However, these
techniques are somewhat limited in that they provide a static view of the project based
on averages or expected values and, from a valuation perspective, largely disregard cash
flows beyond a certain period (as little as five or six years). This is formally referred to as
the Flaw of Averages in Mine Project Evaluation. [1]
It is all very well and good to calculate dcfs and npvs, but can traditional techniques
really put a fair price on long-life, strategic or complex mining assets? Are there other
alternatives? Certainly the simplicity and ease of use of traditional dcf and npv analysis
has impeded the development of new techniques as standard open pit mine evaluation
tools. This situation is now beginning to change as mine planners, mine managers and
investors have started to ask questions about the performance of their mine projects
in the face of uncertainty. It is only a matter of time until a new evaluation technique
emerges as the standard tool for mining project evaluation. This is precisely the objective
of this paper. That is, to provide an alternative technique for project evaluation that takes
into account uncertainty and risk, as well as managerial flexibility to respond to these
uncertainties. To achieve this outcome, this paper introduces real options in a general
context, and, demonstrates the viability of the method as an alternative technique for
strategic mine evaluation.

discounted cash flow analysis for mine project valuation
Resource managers are often required to make capital investment decisions which involve
the commitment of large amounts of capital to a particular course of action that is not
always easy to reverse. Such a decision can be very expensive should it subsequently be
shown to be wrong.
Managers are typically advised to use the dcf method and apply the npv rule in
order to determine whether or not to proceed with a particular course of action. Cash
flows evaluated by the dcf method comprise revenue, operating and capital cost streams
whose interaction influences project value. More formally, the npv technique consists
of subtracting the initial capital investment, CapInv, incurred at the beginning of the
mining project (assumed to be period t0) from the present value of all expected net cash
flows (CF t) generated over the entire mine life, t=1,2,..,T. These cash flows are discounted
at a rate (1+R) t , where R is the hurdle rate. This may be expressed as:

npv = −CapInv + Σ t=1..T CF t / (1 + R)^t (1)

In practice, the expected cash flows generated at each production period of a mine project
can be defined, in general terms, as

CF t = S t × grade t − ProdCost t (2)

where S t, ProdCost t and grade t are the expected metal price, total production cost and
metal quantity, respectively, at each production period t=1,2,..,T. As observed
in Equation (2) , the dcf method requires managers to predict the input variables
profile and an appropriate rate of return for the project over its entire economic life. The
resulting cash flows (including the initial capital investment) are then summed and if the
final amount is positive, the npv rule recommends the project should be undertaken. The
npv rule is widely used in the mining industry as it recognises the time value of money
and accounts for risk via an adjusted interest rate, R (Equation (1)). This provides the
analyst with a powerful yet simplistic tool for making financial investment and dividend
decisions. [2]
One important characteristic of the dcf-npv technique is that a single adjusted
interest rate, R, is usually applied to all future cash flows (Equation (1)) to account for
all sources of risk. This may include both economic and technical risks, such as metal
prices, the shareholder's expectation of returns, ore tonnes, and metal quantities, among
others. Normally, this adjusted interest rate, R, is estimated as the company's Weighted
Average Cost of Capital (WACC, refer to Equation (3)). If the mining company has both
equity and debt, the WACC interest rate, RWACC , is determined as the weighted average
cost of both debt and equity, that is,

R WACC = (E / (D + E)) r e + (D / (D + E)) r d (1 − T c) (3)

where the market value of the mining company is the sum of the firm's interest-bearing
debt, D, and the market value of the equity, E; Tc is the corporate tax rate, rd is the pre-
tax yield on the company’s debt, and re is the company's expected return on equity as
determined by the Capital Asset Pricing Model (capm) [3] . In this case, re is defined as

r e = r f + (r m − r f) ϐ (4)

where rf is the risk-free rate (usually considered as the interest provided by government
bonds), and r premium = (r m − r f) ϐ is the risk premium, in which r m is the expected return on the
market portfolio and ϐ is a measure of the volatility of the company's stock when
compared to the entire market [5]. Observe that if the company does not hold any interest-
bearing debt, as is normally assumed in most mining project evaluations [6]¹, then D = 0
in Equation (3). Under these conditions, the risk-adjusted interest rate, R, is equal to the
company's expected return on equity, that is, R = r f + r premium. Expressed another way, the risk-
adjusted rate, R, is equivalent to the prevailing bond rate plus a premium which reflects
the company's internal hurdle rate of acceptable returns in order to invest.
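The chain from capm to wacc to npv can be written down directly; the short sketch below simply restates Equations (1), (3) and (4), and all the numbers in the example are purely illustrative assumptions rather than values from any project discussed in this paper.

# capm -> wacc -> npv, restating Equations (1), (3) and (4); example figures
# at the bottom are illustrative assumptions only.
def capm_return_on_equity(rf, rm, beta):
    # Equation (4): r_e = r_f + (r_m - r_f) * beta
    return rf + (rm - rf) * beta

def wacc(E, D, re, rd, tc):
    # Equation (3): weighted average of equity cost and after-tax debt cost
    V = E + D
    return (E / V) * re + (D / V) * rd * (1.0 - tc)

def npv(cash_flows, R, cap_inv):
    # Equation (1): present value of the cash flows less the initial investment
    return sum(cf / (1.0 + R) ** t
               for t, cf in enumerate(cash_flows, start=1)) - cap_inv

re = capm_return_on_equity(rf=0.04, rm=0.10, beta=1.2)
R = wacc(E=800.0, D=200.0, re=re, rd=0.07, tc=0.30)
print(round(R, 4), round(npv([30.0, 40.0, 50.0], R, 90.0), 2))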

Limitations of the Discounted Cash Flow technique


Due to its simplicity, the traditional dcf analysis is the most widely accepted way of
making practical investment decisions. Certainly for resource projects with healthy
npvs and stable cash flows, dcf analysis will remain the dominant investment

1 Modigliani and Miller (1958) have demonstrated that the value of a firm or project is unaffected by its financing decisions.
That is, the risk-adjusted return, R, expected by both shareholders and lenders gradually adjusts upwards as the level of debt
increases, and that, for this reason, in the final analysis the actual level of debt used to fund a project should not matter.


decision-making tool for the mining industry. However, the validity of this approach is
undermined where there is a high degree of uncertainty in future cash flows and where
management has the flexibility to respond to these uncertainties. While conceptually
very simple, the dcf method has some major shortcomings, including:

• How it deals with uncertainty


• The extent of flexibility that management has to respond to uncertain events over
the life of the project.

Typically these shortcomings result in the dcf method consistently underestimating the
value of a project. These limitations are discussed further below.

Dealing with uncertainty


In dcf analysis, a manager is required to predict a cash flow outcome for some period
in the future. Under this scenario, any variance from the expected outcome (either
positive or negative) is a source of both risk and potential upside for the project. However
managers typically focus on the possibility of making a loss rather than making a gain,
and hence a disproportionate amount of time and effort is spent assessing the likely
impact of a loss and how to mitigate this outcome.
A further limitation of the dcf model involves the use of a single risk adjusted discount
rate to represent all sources of uncertainty in the project. This has the effect of masking
the true sources of uncertainty. As we all know there are many factors which can
influence the value of a mining project including technical, geopolitical, legal, financing,
commodity price and market elements. One problem arising from using the WACC risk
adjusted rate of return, RWACC , for a mining company is that it does not differentiate
one project from another (i.e., the mining company uses this WACC for all its projects
and associated scenario testing) and is a global indicator of risk. As such, it may lead to
an incorrect perception of risk when applied to projects that are significantly different
from the company as a whole. This may be the case for many mining projects, as the
geological uncertainty associated with each orebody differs between mines (i.e., metal
grade distribution and other geological, geotechnical and geometallurgical properties
are different).
Furthermore, the riskiness of the project may change over time depending on how
uncertainties unfold and how management reacts to these uncertainties [7]. However,
in a practical sense, the task of estimating an appropriate single dynamic WACC risk-
adjusted rate of return is also very difficult to achieve.

Dealing with f lexibility


A dcf model is based on the assumption that the deposit will be mined at some pre-
destined rate or profile. This plan is based on initial assumptions regarding tonnage,
grade, continuity and quality, as well as the likely cost of extraction and recovery rates.
While alternative scenarios can be modelled, from a decision making point of view these
are effectively mutually exclusive. In practice mine managers have some flexibility
regarding a mining project. If external or internal factors change, mine managers can
change their operating strategy. Elements able to be changed include:

• The project design, which incorporates the applied technology for equipment, mining
rate, scale of the operation and development or mining phases, among others.

• The timing of project phases and key decision points. For example, the mine manager may
elect to delay the project, undertake further exploration or development, expand or
contract the scale of the operation, accelerate or decelerate the mining rate, develop
mine satellite deposits or temporarily close or abandon the operation altogether.

Such flexibility allows management to either limit downside losses or magnify upside
returns, and consequently the expectations about project return, over the project's
economic life.

real options analysis on mining projects: the basics
We now look towards a better way of handling project uncertainty and assessing
the impact of management's response to these uncertainties: the real options approach.
Real options have recently come to the fore following advances in the fields of finance and
decision analysis. Importantly, an option conveys the right, but not the obligation, to
perform an act. As such, options are able to add value as they provide opportunities to
take advantage of an uncertain situation.

Financial options analysis: Using options as a hedging strategy in the mining industry
An option on a stock is a contract that provides the holder with the right to buy or to sell
one share of the stock on or before a particular date at a predetermined price. A call/put
option gives the holder the right to buy/sell a share of stock, respectively. There are two
common types of options:

• American (call/put) options are contracts that provide its holder with the right, but
not the obligation, to buy/sell one unit of an underlying asset for a predetermined
strike price, K , ‘at any time before the expiration date’, T.

• European financial (call/put) options give its holder the right, but not the obligation,
to (buy/sell) one unit of an underlying asset for a predetermined strike price K ‘at the
option's expiration date’, T.

Consequently, the difference between American and European options is that the former
can be exercised at any time, before and including the expiration date, while the latter
can only be exercised at the expiration date.
To understand how an option can be used to minimise future risks, assume that we
want to buy y tonnes of copper for delivery in one year's time (in this case the expiration
date is T = 1 year) at a specific copper (strike) price of K. There are two ways of doing this
transaction:

• Speculate about the future copper price (1 year in the future) and write a contract for
the total value, that is, $yK. If at the end of the year the copper price S T > K, making a
profit of $y(S T − K) > 0, we made a good purchase; but if the copper price S T < K, generating
losses of $y(S T − K) < 0, then we made a bad buy.

• Purchase an American or European call option, paying $f A or $f E , respectively at


time t=0, to have the right of exercising the contract at any time during the year
(if an American option) or at the end of the year (if a European option). In this case,
the profits that we could generate are $y(S t<T -K) (if an American option), or $y(ST -K)


(if a European option), or zero if we decide not to exercise the option, losing just the
purchase price of the option, that is, either $f A or $f E . The problem is to define the
current fair value of the option contract (i.e., either $f A or $f E ).

One important characteristic of the option value is that it can never be negative. For
more properties about the value of an option, the reader is directed to books on option
pricing theories. [8–10]
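The non-negativity follows directly from the payoff at expiry: the holder exercises only when it is profitable to do so, so for the European call in the copper example above the payoff (a standard result, before subtracting the premium paid up front) is

$\text{payoff at } T = y\,\max(S_T - K,\, 0) \ge 0$,

and the worst possible net outcome is the loss of the premium $f E itself.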

Real options
Real options analysis is a valuation and strategic decision making tool that applies
financial option theory to real assets [11] . Real options differ from financial options
in that they deal with risky tangible assets, such as mining and petroleum projects,
rather than financial products. But the concepts underlying their usefulness as a tool
for dealing with uncertainty are the same. In the mining context, the similarities and
differences between a financial (American) call option and a (real option on a) mining project
are presented in Table 1.
In terms of the overall process to complete a real option analysis on a mine project, it is
important to note that the method is based on a traditional dcf which provides the base
case npv as the primary input into a real options analysis. The next step is to identify the
key sources of uncertainty and establish the relative impact of these on project value. In
order to do this it is necessary to determine appropriate parameters for these uncertain
variables. One good way to do this is to analyse the variance of historical values (i.e.
metal prices) and use this to predict how that variable changes over time. However, historical
data may not be available for all variables, and as such one may have to estimate the future value of
a variable and apply confidence limits in order to calculate the expected variance.
These uncertainties are then combined to estimate the expected volatility in the
project value. Management should then identify opportunities to respond to these key
uncertainties (e.g. can they realistically expand, contract, abandon, etc) and determine
the impact of such measures.
These uncertainties and opportunities are then combined in a valuation framework
(typically for a mining project this is a binomial lattice, although Black and Scholes
models and Monte Carlo simulation may also be used) which is then solved to determine
the expected value of the real option. The real option value is then added to the base case
model to give the Expanded Net Present Value (enpv) (Figure  ➊) of the project. In short,
the process behind real options analysis can be summarised as indicated in Figure  ➊.
Table 1 Comparison between an American call option on a stock and a real
option on the acquisition of a mining project

American Call Option Real Option On Mining Project


Current value of stock (Gross) PV of expected cash flows
Exercise price Investment cost
Time to expiration Time until opportunity disappears
Stock value uncertainty Project value uncertainty
Riskless interest rate Riskless interest rate

Figure 1 Summary of the process of applying real options when valuing a mine project.

an illustrative example: valuing a small disseminated gold deposit
In this example, a mining company must decide whether or not to invest in a gold mine.
The decision to develop the mine is irreversible, in that after development management
cannot disinvest and recover the expenditure. The details of the project are given in
Table 2. The company has the following options: (i) close the mine in the face of an adverse
economic environment; (ii) start mining at period t=0; and (iii) start mining at period t=1.
As outlined in Table 2 and Figure  ➋, the gold mining project is expected to have
a Life-Of-Mine (LOM) of two years, producing 4,000 and 10,000 ounces of gold during
the first and second year of operation, respectively. The expected value of the gold
price is US$914.90/Oz (May-2009) and the total expected operating cost is US$600/Oz.
Furthermore, management has advised that the adjusted rate of return to be used in
the project cash flow analysis is 8.33% and the risk-free rate is assumed to be 4%. The
initial capital expenditure is US$4.5 million and the lease cost is US$2.0 million per year.
Following the process shown in Figure  ➊, the first step in valuing the gold mine is
to estimate project's current value using the dcf technique. The results of this step are
shown in Figure ➋. Using the initial assumptions, Figure ➋ presents the expected
cash flow of US$3.85M and a negative npv of US$−0.65M for the gold mining project.
Based on dcf analysis and using the npv rule, the decision should be not to invest in
the project.

Table 2 Gold mine project data (all economic values given in US dollars)

Mine life 2.00 years


Lease life 2.00 years
Operating costs -600.00 $/Oz
Current market price for gold 914.90 $/Oz
Adjusted Interest rate 8.33% yearly
Risk-free rate 4.00% yearly
Gold price volatility 19.66% yearly
Initial Project Investment -4.50 $M
Closure Costs 0.00 $M
Lease Cost* -2.00 $M
* The lease cost also includes other costs incurred when delaying the
mine production.


Figure 2 DCF analysis of the gold mine project. Because of negative NPV the final decision is not to invest in the project.

However, the previous analysis did not consider the uncertainty associated with future
metal prices and assumed that the gold price would remain constant at US$914.9/Oz. Analysis of
historical gold prices, as observed in Figure  ➌, shows that gold's volatility (defined
as the standard deviation of the return of gold price) is around 20% per annum. This
indicates that the gold price may vary by as much as 20% in years one and two.

Figure 3 Historical gold price and gold price return analysis.

Figure  ➍ presents the project's cash flow analysis of the gold mine project where the
gold price uncertainty has been taken into account through the use of a binomial lattice
(see appendix A for details of the binomial lattice).
As observed in Figure  ➍, the gold price binomial lattice indicates that with a 20%
volatility in the initial price (at year 0) of US$914.9/Oz, the gold price is expected to either
increase to US$1,113.7/Oz or decrease to US$751.6/Oz during the first year.
In the second year gold price is expected to have values of US$1,355.64, US$914.9, and
US$617.45 per ounce. The cash flow binomial lattice indicates that different cash flows will
be generated depending on the gold price resulting in a total current (i.e., at year t = 0 ) cash
flow of US$8.7M and an expected project npv of US$4.2M.
Once the gold price uncertainty is taken into account, the decision should be to invest
in the project due to the positive expected npv. In this case, the inclusion of the gold price
uncertainty increased the expected value of the gold project by about US$1.1M (i.e., from
negative US$0.65M to positive US$0.453M).
Observe in Figure  ➍ that the average of the maximum and minimum expected gold
price during periods one and two is equivalent to the current gold price of US$914.9/Oz,
which is the value used in the base-case evaluation (Figure  ➋). This is an important
point to highlight since it indicates that the binomial lattice relies on the expected

gold price used in our initial base-case npv analysis. However, the binomial lattice also
accounts for the variability of the gold price over time which is not considered when
performing the dcf analysis.
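The lattice itself is straightforward to reproduce. The sketch below uses the Cox-Ross-Rubinstein choice u = e^σ, d = 1/u together with simple compounding of the risk-free rate for the risk-neutral probability; this is one common convention that recovers the node prices and the p ≈ 0.552 quoted in the figures, and is not necessarily identical to the construction in the paper's appendix A.

# One-step-per-year binomial lattice for the gold price (CRR parameterisation).
import math

S0, sigma, rf = 914.90, 0.1966, 0.04     # Table 2 inputs

u = math.exp(sigma)                      # up factor, ~1.217
d = 1.0 / u                              # down factor, ~0.822
p = ((1.0 + rf) - d) / (u - d)           # risk-neutral probability, ~0.552

year1 = (S0 * u, S0 * d)                 # ~1,113.7 and ~751.6 US$/Oz
year2 = (S0 * u * u, S0, S0 * d * d)     # ~1,355.6, 914.9 and ~617.4 US$/Oz
print(round(p, 3), [round(s, 1) for s in year1], [round(s, 1) for s in year2])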

Figure 4 Cash flow analysis of the gold mine project accounting for gold price
uncertainty, with no flexibility, where the NPV = $4.95M − $4.5M = $0.45M.

But, what if we delay the project and start in year 1?


Let us now consider the project value if it is delayed until year one. As observed in the gold
price binomial tree in Figure  ➍, in year one the gold price could go up to US$1113.68/
Oz or down to US$751.60/Oz. So to answer this question, we will repeat the process
outlined in Figure  ➍, considering both gold prices (i.e., US$1,113.68/Oz or US$751.60/
Oz), respectively.
Figure  ➎ presents the cash flow analysis if the gold price rises to US$1,113.68/Oz in
year 1. The risk analysis indicates that if in year 1 the gold price is US$1,113.68/Oz, the
best strategy would be to invest in the project, as it is likely to generate an npv=US$1.24M.
Similarly, Figure  ➏ indicates that if in year 1 the gold price falls to $751.60/Oz, the best
strategy is not to invest in the project, as this would generate a negative npv=$-3.6M.
These results indicate the value of the gold project as at year 1. However, in order to make
a final decision it is necessary to discount these values back to the current period (i.e., at


year 0). In doing so, it is important to note that an additional cost needs to be considered,
the leasing cost of $2.0M (which is already included in the npvs for year 1).
Figure  ➐ outlines the results of the delay option where the value of the gold project
at year 1 is compared with its corresponding value at year 0. As observed in the figure, at
year 1, the gold project has a positive value if the gold price increases to US$1,113.68/Oz and a
zero value (do not invest) if the gold price decreases to US$751.6/Oz.
The results indicate that the current value of the gold project, i.e., the enpv value
(Figure ➊) at year 1 brought back to year 0, is US$0.6M, which is greater than the
value of US$0.453M determined were the project to commence in year 0. Consequently,
based on this result the best strategy is to start mining at year 1 rather than year 0. The
results indicate that the delay will add US$1.31M to the value of the mining project.
Observe that, in this case, the delay flexibility value is US$1.31M (US$0.656M + US$0.65M,
refer to Figure  ➊ for details).
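This delay decision is a single backward-induction step on the lattice above. Taking the year-1 npvs from Figures 5 and 6 as given, the sketch below discounts the risk-neutral expectation back to year 0 and compares it with the invest-now value; it reproduces, to rounding, the delay value of about US$0.66M and the flexibility value of about US$1.31M discussed in the text.

# Value of the option to delay: one backward-induction step on the lattice.
p, rf = 0.552, 0.04                    # risk-neutral probability and risk-free rate

npv_up, npv_down = 1.24, -3.6          # US$M at year 1, from Figures 5 and 6
value_if_delayed = (p * max(npv_up, 0.0)
                    + (1 - p) * max(npv_down, 0.0)) / (1 + rf)

npv_invest_now = 0.45                  # US$M, lattice with no flexibility (Figure 4)
base_case_npv = -0.65                  # US$M, static dcf (Figure 2)

decision = "delay, start at year 1" if value_if_delayed > npv_invest_now else "start at year 0"
flexibility_value = value_if_delayed - base_case_npv
print(round(value_if_delayed, 2), decision, round(flexibility_value, 2))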

Figure 5 Cash flow analysis of the gold mine project assuming that the project
starts in year one with an initial gold price of $1113.68/Oz.

Since the analysis indicates that, in this case, the gold project will generate an
npv =$1.23M, the best strategy to follow is to invest in the project.

Figure 6 Cash flow analysis of the gold mine project assuming that the project
starts in year one with an initial gold price of $751.60/Oz.

Similarly to the results obtained in Figure ➏, since the analysis indicates that the
gold project will generate a negative npv = $−3.6M, the best strategy to follow is not
to invest in the project.

[Figure content: binomial lattice for the gold price with risk-neutral probability p = 0.552; project values under the base case (-$653,919.81, do not invest), the real-options case with no flexibility (invest) and the delay option (invest); final decision: delay the investment and start at year 1.]
Figure 7 Diagram showing the decision making between investing in the gold project at year zero or year one.

The final results indicate that the gold project should start at year 1 and that, compared with the base-case result (negative npv), the value of delaying the project investment is FlexVal = $1.31M.

conclusions and comments


Throughout this paper we have discussed the importance of including uncertainty and flexibility in mine project evaluation. As seen in Figure ➐, the value of a mining project can increase significantly (although not necessarily) when uncertainty and risk are managed explicitly. The inclusion of gold price uncertainty increased the gold mine value when compared with the base-case scenario obtained using a traditional dcf approach. Furthermore, the flexibility of delaying project commencement added value to the project regardless of the leasing cost (i.e., under both high and low lease costs). Normally the decision to delay the start of a project will depend on a multitude of other costs (such as the leasing cost and the cost of obtaining extra information), which can also be parameterised and evaluated as part of a real options analysis. Note that an accurate real options analysis should include all sources of uncertainty in the mine project; however, this turns the evaluation into a more complex, multi-dimensional risk analysis.

Robust Open-Pit Planning under
Geological Uncertainty

abstract
Nelson morales Mine planning is done so that the most relevant, long-term
Enrique rubio decisions are made first; the main sequence, the budget and the
Universidad de Chile production goals are therefore determined in long- and medium-term
planning and constitute an input for short-term planning.
Traditionally, this process involves the
use of a block model usually constructed by kriging techniques.
Unfortunately, kriging models do not represent the spatial
variability of ore grades and geometallurgical attributes. This
means that the actual attributes of any block are uncertain and
therefore plans elaborated on this block model have a high risk of
not achieving the production goals, failing to follow the long-term
sequence, or not complying with the budget.
One way to tackle the limitations of kriging is to use conditional
simulations, which means having not one but many block models, each of which represents the variability better than kriging. The question then arises: how can the simulations be used to produce a single, robust plan?
In this paper we present two answers to this question, both
based on binary integer programming with the aim of maximising
the probability (measured over the conditional simulations) that
the production goals are met. We also present preliminary results
of these two approaches.

introduction
The main source of data for mine planning is contained in the block model, and the
whole process of mine planning is performed under the assumption that the data in the
block model is accurate. In truth, block models are constructed from samples by means
of geostatistical methods and therefore, the actual properties of a block (ore grades and
geometallurgical attributes) are uncertain.
The most common method to construct a block model is kriging. This method has very
nice properties (like being unbiased), but also important limitations. A very relevant
drawback of kriging is that it does not represent the spatial variability of ore grades and
geological attributes of the blocks, but tends to produce block models that are smooth.
While the above may not be so relevant for long-term plans (we believe it still is), where the decisions are made at a very aggregate level, it is crucial in the short term, where production goals, budget and the overall mining sequence are inputs, and where relevant constraints such as blending and geometry (depending on the available equipment) must also be met. As an example, the higher the copper grade of a block, the greater its variability; therefore, any production goal based on a kriging model has a high risk of not being achieved. In fact, most of the time the short-term planner has to pick at most two of the following: achieving the production goal for the period, sticking to the long-term mining sequence, or respecting the budget.
One way to overcome the limitations of kriging is to consider the use of conditional
simulations (just simulations in what follows). Simulations provide not one, but many
block models that are equally likely, but also represent the spatial variability of the
geological attributes better than the kriging model.

Using conditional simulations for constructing reliable plans


As we mentioned above, a plan based on a block model constructed with kriging is naturally at risk of being unreliable, that is, of not achieving the promised production goal once executed. Therefore, using simulations to address this issue seems an interesting
approach. Indeed, using simulations in order to take uncertainty into account is not a
new idea. We review three approaches here: using simulations to evaluate variability,
using simulations to construct several plans, and integrating simulations in the plan
optimisation process.

Using conditional simulations to evaluate variability

First of all, simulations can be used to study the variability of a plan for any geological attribute (and particularly the ore production). Indeed, once a plan is constructed, it is possible to evaluate it against each of the simulations and then to construct a graph (for example, of the production plan) for each of the simulations, so that the variability of the attribute can be graphed. Figure ➊ shows a sketch of the concept.
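A minimal sketch of this evaluation step, with hypothetical data and function names (not the authors' code), could look as follows: a fixed plan is evaluated against every simulation and the per-period spread of ore production is recorded.

```python
# Sketch of the idea in Figure 1 (illustrative code, not from the paper): evaluate a
# fixed plan against each conditional simulation to see the spread of ore production.
# `plan` maps each period to the list of blocks mined in it; `simulations` is a list
# of dicts giving the simulated ore content of every block.  All names are hypothetical.

def production_per_period(plan, ore_model):
    """Ore produced in each period when the plan is evaluated on one block model."""
    return {t: sum(ore_model[b] for b in blocks) for t, blocks in plan.items()}

def variability_band(plan, simulations):
    """Minimum and maximum ore production per period over all simulations."""
    runs = [production_per_period(plan, sim) for sim in simulations]
    return {t: (min(r[t] for r in runs), max(r[t] for r in runs)) for t in plan}

# Example with two tiny simulations of a three-block, two-period plan.
plan = {1: ["b1", "b2"], 2: ["b3"]}
simulations = [{"b1": 10.0, "b2": 5.0, "b3": 8.0},
               {"b1": 12.0, "b2": 3.0, "b3": 6.0}]
print(variability_band(plan, simulations))   # {1: (15.0, 15.0), 2: (6.0, 8.0)}
```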

Figure 1 Evaluating attribute variability against conditional simulations.

On the one hand, this approach is very nice, because it integrates easily into the
traditional planning process of any company: the plan is constructed as usual and,
provided that the simulations are performed, constructing one or many variability graphs
is easy. On the other hand, the approach has the drawback that it only shows how to
evaluate the variability of a plan, but it does not provide a way to incorporate it into the
planning process.
One way to confront this limitation is to correct the plan if the evaluations show that it is too unreliable. For example, it is possible to iterate as follows: if the variability is too large at a given time-period (for example, if the ore production falls too much), then the plan is corrected or mitigating actions are taken until the problem is resolved. This
has been done, for example, to tackle operational reliability [1] . While this approach
eventually leads to a feasible solution, it requires the planner to do the corrections,
which can be time-prohibitive. An alternative is to use some computer algorithms to fix
the plan [2] . In this case the correction is automated, but there is no guarantee that the
final solution will be feasible.

Using conditional simulations to construct many plans

Another approach is to use the conditional simulations to construct one plan for each block model (i.e., simulation) and then to combine them into a single final plan. See Figure ➋
for a sketch of the concept.

Figure 2 Constructing one plan per simulation.


In this case, several questions arise. First of all: provided that it is possible to construct one plan per simulation, how should they be combined into a unique plan? Then there is the practical question of how to construct one plan per simulation (recall that each plan is constructed by the planner), that is: what transfer function should be used?
The approach of constructing one plan per simulation is not new either. For example, Dimitrakopoulos et al. and Godoy et al. [2, 4] have studied how the shape and value of the ultimate pit change over conditional simulations. Also, Gawthorpe [5] proposes a method that penalises the value of blocks according to the frequency with which they appear in the ultimate pit, in order to produce a final pit that takes the variability in the block model into account.

Integrating conditional simulations in plan optimisation

The approaches described above focus on controlling or taking into account the geological uncertainty in order to construct a plan, but they do not integrate plan optimisation into the planning process itself. We believe that this is due to the lack of software tools that sufficiently automate the construction of a plan, or that provide a simple, automated way to integrate conditional simulations into the planning process.
One way to incorporate geological uncertainty is to use mathematical models and, in particular, binary integer programs to construct optimised plans. Binary integer programs have the advantage that, when available, they can be solved using standard optimisation algorithms and software, they provide solutions whose quality is measurable, and they guarantee that all imposed constraints are satisfied. On the negative side, they are often difficult to write or the solution time can be long, hence additional research is required to speed up the optimisation process.
Another advantage of having a binary linear program is the following: if a binary linear program that outputs feasible (or feasible enough) plans is at hand, then it can easily be adapted to tackle uncertainty. For example, Vielma et al. [6] show how to add the constraint of having a final pit with value at most z with probability delta (delta being a parameter, the confidence level), and their results compare favourably with those of Dimitrakopoulos et al. and Godoy et al. [2, 4].

methodology
In this paper, we present two different approaches to consider uncertainty in short-term
mine planning. Both approaches are based on a binary linear program that calculates,
very quickly, a mining sequence for an open-pit mine, considering geometric and blending
constraints, as well as capacities.
The first approach is very close to that of Gawthorpe [5] and consists of using the binary linear program to construct one sequence per conditional simulation and then using the frequency of a block being mined at each time-period to penalise the value of the block in a final run of the model.
The second approach is similar to that of Vielma et al. [6]. We adapted the linear problem in order to maximise the number of simulations in which a certain production goal (overall and per period) is achieved.
In order to illustrate the potential of the tool, we run the model on a small dataset corresponding to a unique bench-phase consisting of 40×60 blocks, for which we generated several simulations.

Binary linear program for mine scheduling

We briefly describe here the binary linear program that we use as a core to construct our
plans, limiting the presentation to the elements that are relevant for the implementation
and results of this paper. For a more detailed description of the model, please refer to
Morales et al. and Vargas et al. [7, 8] .
The model has been validated on real data for BHP Billiton's Spence mine, located in the north of Chile, where it has been successfully used to construct short-term plans (a quarter) under the very strong blending constraints of this mine.
The model has been developed considering the main constraints of a short-term open-
pit mine, including capacity constraints, geometric constraints (slope and horizontal
precedences), stocks, blending and campaigns. The model takes as input a (unique) block
model containing the following information: (a) a set of blocks B, and for each block
b∈B an ore content ore(b), tonnage ton(b) and a destination dest(b)∈{PL ANT, WASTE}, and
(b) a set of periods t=1, 2, 3, …, T and for each of them a plant capacity P(t) and a mine
capacity M(t).
The model therefore decides, for each block b, at which time-period it will be mined (if at all) and, if so, at which time-period it will be sent to the plant (processed). These decisions are subject to the following constraints:

• Finite mass: Blocks can be mined and processed at most once.

• Slope precedence: In order to gain access to a certain block b, some other blocks must be extracted in advance, as they are located above block b.

• Horizontal precedence: Bench-phases are accessed through ramps located at their borders; exploitation within the bench-phase starts at these ramps and progresses from there to the inner blocks. This is modelled with two parameters: a first-step radius R0, determining the distance from the access ramp within which the blocks mined in the first period must lie, and an incremental-step radius R, indicating the metres of progress from the ramp per week.

• Mine capacity: The tonnage of blocks mined at time-period t is bounded by the mine capacity of the time-period, M(t).

• Plant capacity: The tonnage of blocks processed at time-period t is bounded by the plant capacity of the time-period, P(t). (Only blocks with destination plant can be processed and therefore consume plant capacity.)

The goal of the model is to maximise the overall ore production within the time-horizon T; the model can schedule about 30,000–40,000 blocks into 12–14 periods within a few minutes on a regular notebook pc.
It is worth noting that the output of the model is not a feasible plan, but it can be used as a guide to construct a feasible one. The results (and particularly the ore production) obtained from the rough solutions of the model are a very good approximation of those obtained in the final plan constructed by using the model output as a guide.
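For illustration only, a heavily simplified version of such a scheduling model (fixed destinations, tiny hypothetical data, and without blending, stocks or horizontal precedence) can be written with the open-source PuLP library as follows; this is a sketch of the model class described above, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation) of the core scheduling model,
# using PuLP.  Precedence sets, capacities and the ore/tonnage data are hypothetical.
import pulp

blocks = ["b1", "b2", "b3"]
periods = [1, 2]
ore = {"b1": 10.0, "b2": 0.0, "b3": 8.0}       # ore content; only PLANT blocks carry ore
ton = {"b1": 100.0, "b2": 120.0, "b3": 90.0}   # tonnage
pred = {"b3": ["b1", "b2"]}                     # b1 and b2 must be mined before b3
M = {1: 220.0, 2: 220.0}                        # mine capacity per period
P = {1: 200.0, 2: 200.0}                        # plant capacity per period

m = pulp.LpVariable.dicts("mine", (blocks, periods), cat="Binary")

model = pulp.LpProblem("short_term_schedule", pulp.LpMaximize)
# Objective: maximise overall ore production within the horizon.
model += pulp.lpSum(ore[b] * m[b][t] for b in blocks for t in periods)
# Finite mass: each block mined at most once.
for b in blocks:
    model += pulp.lpSum(m[b][t] for t in periods) <= 1
# Capacities per period (plant capacity sketched here on ore-bearing tonnage only).
for t in periods:
    model += pulp.lpSum(ton[b] * m[b][t] for b in blocks) <= M[t]
    model += pulp.lpSum(ton[b] * m[b][t] for b in blocks if ore[b] > 0) <= P[t]
# Precedence: a block can be mined in period t only if its predecessors are mined by t.
for b, preds in pred.items():
    for t in periods:
        for k in preds:
            model += m[b][t] <= pulp.lpSum(m[k][u] for u in periods if u <= t)

model.solve()
print({(b, t): m[b][t].value() for b in blocks for t in periods})
```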

Running the binary linear program on each simulation to construct one plan
As we mentioned before, our first approach is similar to that of [5], as it runs the model on each simulation in order to adjust the block ore content according to the frequency with which the block is mined over the simulations.


Let i = 1, 2, …, N enumerate the simulations and let m_i(b,t) = 1 if block b is mined at time-period t in the schedule produced by the model for simulation i, and m_i(b,t) = 0 otherwise. We consider the probability that block b is mined at time-period t and estimate it as

$$p(b,t) = \frac{1}{N}\sum_{i=1}^{N} m_i(b,t) \qquad (1)$$

We then consider a weighting parameter between 0 and 1 and adjust the ore content of each block as a function of ore(b) and p(b,t) (Equation 2), where ore(b) is the ore content of the block in the kriging model. We run the model with the new set of ore contents to obtain our final sequence, which we call the HYBRID solution.
We observe that a parameter value of 1.0 implies that HYBRID is simply the sequence for the kriging model, ignoring the results of the sequences for each simulation. Conversely, the smaller the value of the parameter, the more the solution is diverted towards the average solution over the simulations.
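Since the exact adjustment formula and parameter symbol are not reproduced in this copy, the following Python sketch shows one plausible instantiation that matches the behaviour described in the text (a weight of 1.0 reproduces the kriging ore content, smaller weights pull it towards the frequency-penalised value); all names and the interpolation form are assumptions.

```python
# Hypothetical instantiation of the HYBRID adjustment (the exact formula and the
# parameter symbol are not recoverable from this copy): the ore content of each block
# is interpolated between the kriging value and a value weighted by the frequency with
# which the block is mined at each period over the N per-simulation schedules.

def mining_frequency(schedules, block, period):
    """Estimate p(b,t): fraction of the per-simulation schedules mining b at t."""
    hits = sum(1 for s in schedules if s.get((block, period), 0) == 1)
    return hits / len(schedules)

def hybrid_ore(ore_kriging, freq, weight):
    """weight = 1 reproduces the kriging model; smaller weights pull the value
    towards the frequency-penalised content, as described in the text."""
    return (weight + (1.0 - weight) * freq) * ore_kriging

# Example: a block mined at period 2 in 60 of 100 simulation schedules.
schedules = [{("b7", 2): 1}] * 60 + [{}] * 40
p = mining_frequency(schedules, "b7", 2)          # 0.6
print(hybrid_ore(10.0, p, weight=0.7))            # ~8.8 with weight 0.7
```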

Adapting the binary linear program to consider uncertainty

Our second approach builds on the binary linear program described before. Let m(b,t) be the binary variable such that m(b,t) = 1 if and only if block b is exploited at time-period t. We consider two extensions of the model: (a) we add a parameter G, representing an overall production goal, and for each simulation i a binary variable s_i together with a constraint that forces s_i to zero unless the total ore production, evaluated with the ore contents of simulation i, reaches the goal G; and (b) we add, for each time-period t, a parameter G_t representing the production goal for that period, and for each simulation i a binary variable s_i together with analogous constraints for every period.
In the first case (a), variable s_i can be interpreted as taking the value 1 if and only if the overall production goal G is achieved in simulation i; similarly, in the second case (b), variable s_i takes the value 1 only if the period-by-period production goals are achieved in simulation i for every period. We then estimate the probability of achieving the production goals as the number of simulations in which they are actually achieved divided by the number of simulations N. Therefore, maximising this probability is equivalent to maximising the sum of the s_i variables.
We run this model and call RELIABLE(G) (or RELIABLE(G_t)) the solution obtained for the corresponding production goal. We observe that if the model finds a solution with objective value K, the production goal can be achieved with probability K/N; conversely, if no solution with value K exists, an ore production goal of G (or higher) has a probability smaller than K/N.
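A compact sketch of the RELIABLE(G) extension, again with hypothetical data and using PuLP, is shown below; the indicator constraints follow the description above, but the exact notation of the original formulation is not reproduced here.

```python
# Sketch (hypothetical data and names) of the RELIABLE(G) extension: one binary
# indicator s_i per simulation, forced to 0 whenever the plan does not reach the
# overall production goal G under that simulation's ore contents; the objective
# maximises the number of simulations in which the goal is met.
import pulp

blocks, periods = ["b1", "b2"], [1, 2]
sims = [{"b1": 10.0, "b2": 2.0}, {"b1": 6.0, "b2": 7.0}]   # ore content per simulation
G = 11.0                                                    # overall production goal

m = pulp.LpVariable.dicts("mine", (blocks, periods), cat="Binary")
s = pulp.LpVariable.dicts("goal_met", range(len(sims)), cat="Binary")

model = pulp.LpProblem("reliable_G", pulp.LpMaximize)
model += pulp.lpSum(s[i] for i in range(len(sims)))   # maximise K = number of sims meeting G
for b in blocks:                                       # each block mined at most once
    model += pulp.lpSum(m[b][t] for t in periods) <= 1
for i, ore_i in enumerate(sims):                       # goal constraint per simulation
    model += pulp.lpSum(ore_i[b] * m[b][t] for b in blocks for t in periods) >= G * s[i]

model.solve()
print("Estimated reliability:", pulp.value(model.objective) / len(sims))
```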
Figure ➌ shows how this scheme works and also how it enables us to construct a reliability curve by solving the model for each production goal G. Indeed, small values of G should be trivially achieved by any plan, but as the production goal increases, finding a sequence that achieves it over all simulations becomes harder and harder. This allows the planner to find a compromise between the promised ore production and the required level of reliability, or to establish that no such compromise is possible.

Figure 3 Constructing a plan with known reliability and choosing a reliability level.

Data description and computational setting


We run our tests using a standard block model consisting of a rectangular bench-phase with 40×60 blocks of 10×10×10 m³ each. For this model, we estimated the block ore grade using ordinary kriging, and we generated N = 100 simulations with the turning bands conditional simulation method. The main statistics of the model are shown in Table 1.

Table 1 Main statistics of the block model

Count Min Max Mean Variance

Copper Grade 2,365 0.12 7.24 1.06 0.42

We assumed a mine capacity of 220,000 tonnes per week and a plant capacity of 200,000 tonnes per week (the dataset was one with particularly high ore grades). We constructed plans consisting of five periods of three weeks each.
We executed all runs of the model on an Intel Xeon machine with 4 GB of ram running at 1.8 GHz, and used cplex 10.2 to solve the binary integer programs.
After some preliminary calibration runs, we set the access ramp at the south-west corner of the model, the first-step radius at R0 = 75 metres and the incremental-step radius at R = 30 metres.
Figure ➍ shows several views of the bench-phase, including the samples, the block
model obtained by using kriging and two simulations.

Figure 4 From left to right, the dataset as: Samples, kriging model, and two conditional simulations.


We constructed a reference plan using the kriging model by calculating a sequence that maximises the ore production over all periods. In order to measure the reliability of the plans, we evaluated each plan over the kriging model to obtain a kriging production, compared it with the production of the same plan in each of the simulations, and counted how many times the kriging production was equal to or larger than the ore produced in the simulation.

results and discussion


All our results are summarised in Table 2, which presents different plans: KRIGING (maximising ore production using only the kriging model), several outputs of the HYBRID scheme (for parameter values from 0.0 to 1.0, indicated in the first column), and several outputs of the RELIABLE(G) scheme (we also ran RELIABLE(G_t), but do not report it for lack of space), with the goal G in the first column.
For each of the plans reported, we indicate the ore production for each period (from
one to five) measured in tonnes, and the overall ore production (also in tonnes). All these
tonnages are evaluated in the kriging model.
Finally, we indicate the reliability of the plan, corresponding to the fraction of the
simulations in which the column "Total" was larger than the same plan evaluated against
each individual simulation.
It is also important to mention that, for the plans obtained using RELIABLE(G), we only used six simulations, equally spaced when the simulations are sorted by overall ore content. This is due to the fact that the resolution time increased very quickly with the number of simulations considered.

Table 2 Summary of production plans for different strategies

PLAN 1 2 3 4 5 Total Reliability


KRIGING 5,992 9,690 7,434 7,364 9,127 39,607 57%
PLAN 1 2 3 4 5 Total Reliability
HYBRID

0.0–0.3 5,992 9,690 7,664 7,240 8,966 39,552 54%


0.4 5,992 9,690 7,649 7,258 8,977 39,567 54%
0.5–0.6 5,992 9,690 7,649 7,259 8,983 39,574 56%
0.7 5,992 9,690 7,649 7,263 8,985 39,579 58%
0.8 5,992 9,690 7,649 7,264 8,988 39,583 57%
0.9 5,992 9,690 7,641 7,276 8,999 39,598 58%
1.0 5,992 9,690 7,434 7,364 9,127 39,607 57%
RELIABLE(G)

44,000–46,000 6,544 10,235 7,510 7,006 8,916 40,210 55%

42,000–43,000 6,534 10,160 7,418 7,021 8,581 39,715 53%

41,000 6,533 10,249 7,378 6,947 8,716 39,823 55%


40,000 6,534 10,191 7,434 6,870 8,774 39,804 51%
39,000 6,534 10,190 7,542 6,843 8,745 39,853 57%
31,000–38,000 6,081 10,092 7,427 6,840 8,715 39,155 56%

We observe that, in general, there is not much variability in the ore production offered by the plans or in their reliability. We believe this is due to the small and particular dataset we used for our tests.

The best reliability we were able to obtain is 58%, by means of the hybrid method with parameter values of 0.7 or 0.9, that is, by keeping close to the kriging model but partially using the information obtained from the probabilities measured on the plans adapted to each simulation. Nevertheless, we also observe that this is only slightly better than the 57% obtained by the traditional method, with a very similar ore production. Along the same line, the plan obtained with the binary linear program maximising reliability reaches a reliability of 55%, which seems a nice compromise as its ore production offer is the largest among all the plans constructed.
It is also worth mentioning that all the optimisations (that is, running the models once the data is ready) can be performed in just a few minutes, which is very promising considering that the models must still be tested on larger block models and more varied data.

conclusions
We have presented two methods for addressing geological uncertainty in open-pit mine
planning, based on conditional simulations and binary integer programming. The first
method calculates one adapted optimal solution for each simulation and then uses the
frequency of the mining of every block at each time-period to penalise the block value
at that time-period in order to produce a unique sequence considering uncertainty. The
second method extends a scheduling model and maximises the probability of achieving
a certain production goal, measured over the simulations.
We tested our methods on realistic data and presented some preliminary results that look very promising. Indeed, the tools developed can construct schedules that aim to maximise ore production very quickly, and can be used to study the trade-off between ore production and plan reliability.
The model and techniques presented are general and can be extended to other situations, such as long-term planning of open pits and underground mine planning. They rely only on an automatic tool that constructs good feasible plans (or feasible enough plans). Still, further testing is required in order to validate the methods on a larger scale. Also, improving the resolution time is a necessary and challenging issue that remains.

references
Cornejo, M. & Muñoz, G. (2007) Modelo de Confiabilidad Operacional Mina Chuquicamata, División Codelco Chile, in Proceedings MinePlanning 2009, Santiago, November 2009. [1]

Dimitrakopoulos, R., Martinez, L. & Godoy, M. (2002) Moving forward from traditional optimisation: Grade uncertainty and risk effects in open pit mine design, Transactions of the imm, Section A: Mining Industry, 106, pp. A9–18. [2]

Whittle, J. & Bozorgebrahimi, A. (2007) Hybrid pits – linking conditional simulation and Lerchs-Grossmann through set theory. Orebody Modeling and Strategic Mine Planning, The Australian Institute of Mining and Metallurgy, Spectrum Series 14, pp. 323–328. [3]

Godoy, M. & Dimitrakopoulos, R. (2004) Managing risk and waste mining in long-term production scheduling for open-pit mines. sme Transactions, 316, pp. 43–50. [4]

Gawthorpe, R. (2009) Evaluación de Riesgos Geológicos Aplicada a Planificación Minera Estratégica, in Proceedings MinePlanning 2009, Santiago, Chile, November 2009. [5]

Vielma, J., Espinoza, D. & Moreno, E. (2009) Risk control in ultimate pits using conditional simulation, in Proceedings of apcom 2009, Vancouver, Canada, October 2009. [6]

Morales, C. & Rubio, E. (2009) Development of a Mathematical Programming Model to Support the Planning of Short-Term Mining, in Proceedings of apcom 2009, Vancouver, Canada, October 2009. [7]


Vargas, M., Morales, N. & Mora, P. (2009) Modelo de Secuenciamiento de Extracción de Reservas incorporando variables operacionales y geometalúrgicas, in Proceedings MinePlanning 2009, Santiago, Chile, November 2009. [8]
Long-Term Extraction and Backfill
Scheduling in a Complex
Underground Mine

abstract
Dónal o'sullivan We use an integer programming model to determine a production
Alexandra Newman schedule for a complex underground mining operation that
Colorado School requires backfilling to support mining activities. The goal of the
of Mines, USA mining operation is to maximise metal through the mill subject
to constraints on maximum monthly mining and backfilling
quantities, maximum monthly grade, and sequencing between
mining and backfilling operations. Our initial solution shows the
potential for improvement over the existing manually generated
schedule and we expect to realise larger improvement gains as we
add alternative mining and backfilling scenarios to our model.

introduction
In an ideal world, scheduling the extraction of ore at an underground mining operation
would take place before a shovel first enters the ground. The long-term schedule would
be based primarily on the shape of the ore deposit and the decisions regarding mining
methods that the project engineers had made. The mine would then be put into
production and miners would excavate the ore over the life of the mine according to this
schedule. However, uncertainty often distorts our best laid plans to such an extent that
we are forced to pause, reflect on our goals, and sometimes change our strategy. Mining
is no different. Factors such as unexpected mineral price levels or costs, operational
interruptions, improved technology or production capacities, and new environmental
policies, can render a previously optimal mining schedule obsolete.
Significant changes in primary value drivers at a mining operation might well prompt
management to change its business strategy resulting in new monthly production goals,
a change in classification of waste versus ore, or perhaps a different target range for
production grade, all factors that could require a redefining of the life of the mine. In
some mining operations, the change in strategy can be addressed by simply changing
the rate of production in the short term, as one might turn the nozzle on a hose to
control water flow. However, complex mining operations exist for which simply ramping
production up or down would not satisfy the new operational strategy. In such cases,
the decisions on when and where to mine need to be reevaluated and a new mining
schedule that is synchronised with the company's business strategy must be developed.
In this paper, we examine an operational underground mine and we formulate a mixed
integer programming model that incorporates the strategy outlined by the company
management while optimising the mining operation's long-term production schedule.
We examine a complex mining operation that extracts a base metal from a hard rock
mine. The strategy chosen by management is one that maximises production levels of refined
ore concentrate. The mine planners must try to meet these targets while being careful not
to sterilise areas and inadvertently shorten the life of the mine. The extraction sequencing
decisions are also complicated by the variation in ground quality within the mine, which
necessitates multiple mining methods as well as backfilling of mined out areas.
Integer programming has previously been employed to produce schedules for
underground mining operations. Carlyle and Eaves [1] present a model that maximises
revenue for a sublevel stoping platinum and palladium mine. Scenario-based solutions
for a ten-quarter time horizon incorporate ore production and mine design decisions.
Sarin and West-Hansen [2] develop a weekly production schedule that maximises npv
for an underground coal mine. Material is extracted using longwall, room-and-pillar,
and retreat mining methods. More recently, Newman and Kuchta [3] produce a mining
schedule that minimises deviation between production and contract requirements for
an iron ore mine. In this case, the ore is extracted using sublevel caving.

mining methods
Our mine employs a mixture of mining methods to excavate the ore: room and pillar,
long hole stoping, and drift and fill. The ground quality together with the thickness and
angle of the orebody determines the mining technique that is used in an area.
Where the ground quality is good and the orebody is not steeply angled, room and
pillar mining is most efficient. With this method, some pillars of ore are left in place to
support the hanging wall as the rest of the ore is excavated. When only pillars remain,
they too are excavated and backfilled or a retreat mining method is used that allows
the mine to cave in when pillars are removed.

Figure 1 Room and pillar mining.

Areas where the ground quality is good and the orebody is thicker are more suited to the long hole stoping method. This is a large-scale and economically efficient way to mine.
A sublevel drift is developed below the ore to be excavated. The ore is then drilled and
blasted. The broken ore falls to the sublevel where Load Haul Dump vehicles scoop and
load it into trucks for transport to a crusher. The cavity may be backfilled to support
additional mining or as a means to dispose of tailings.
Finally, where the mine has poor host rock strength, drift and fill mining is most
applicable. With this approach, a slice of ore is cut out by developing a drift through the
orebody. The excavated cavity is then backfilled using a mixture of tailings and cement,
thus ensuring structural integrity before an adjoining drift is cut from the ore.

Backfilling
The poor ground condition in many areas of the mine requires that backfilling of some
areas must take place before a neighbouring area can be mined. These backfilling
activities must be incorporated into the mining schedule. In addition, backfilling also
provides a way to dispose of tailings from the milling process.

Figure 2 Long hole stoping.


Figure 3 Drift and fill mining.

model
We develop an optimisation model to schedule the extraction of ore so that the amount
of metal produced by the mill is maximised over the time horizon. The mine is divided
into areas of ore that vary in shape, grade and volume. The primary elements of the
model are as follows:
We use a single binary variable to control when an area is mined and backfilled.

Ore is milled into concentrate at the mine site; this means that production at the mine
is constrained by milling capacity. Thus, the rate at which the mill operates places an
upper bound on the production of ore within each time period.
We use the following objective function to maximise the volume of ore extracted over the time horizon:

(1)

where the coefficient in (1) is the volume of ore obtained in a given period if mining of area a starts at time t.
We include the next constraint to ensure that an area can be mined and backfilled at most once:

(2)

We enforce maximum levels of monthly ore production with the following constraint:

(3)

Excavation decisions are also driven by the goal to remove as much ore from the mine
as possible over the life of the mine. If too much high-grade material is removed too
early, the mine might find it difficult to meet grade requirements later in the mine
life. Therefore, management requires the average grade of ore extracted in a month to
remain below a certain maximum grade level. We ensure that the average monthly grade
requirements are satisfied as follows:

(4)

where g_a is the grade percentage of area a and the right-hand side of (4) is the upper bound for the average grade percentage across all areas extracted in a period.
We set a limit on the amount of paste that can be applied to backfill areas in a month with constraint (5), where the left-hand side accumulates the paste applied in each period, given the time t' at which backfilling of area a starts, and the right-hand side is the maximum amount of paste available for backfilling in a period.

(5)

Whether and when an area is mined depends not only on the grade and volume of the ore in the area, but also on the physical state of the mine in the area's zone. An area cannot be mined or backfilled unless the areas that the mining engineers judged to be mining precedents of that area have been mined in advance. The following is a typical mining precedence constraint.

(6)

This constraint requires that if we mine area a at time t, then each precedent area k must be mined at some time u, where u ≤ t - t_k^m and t_k^m is the time required to mine area k. In addition to this mining-before-mining precedence constraint, we also require precedence constraints that enforce backfilling before mining, mining before backfilling, and backfilling before backfilling.
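Because the typeset equations are not reproduced in this copy, the following is a hedged sketch, in hypothetical notation, of the form that constraints (2) and (6) take according to the descriptions above; the symbols y_{a,t} and t_k^m are introduced here for illustration only.

```latex
% Hypothetical notation (the original symbols are not recoverable from this copy):
% let y_{a,t} = 1 if mining of area a starts at period t, and 0 otherwise, and let
% t^{m}_{k} be the time required to mine area k.  Plausible forms of constraints
% (2) and (6), as described in the text, are then:
\sum_{t} y_{a,t} \le 1 \qquad \forall a
   % each area is mined (and backfilled) at most once, cf. (2)
\sum_{u \le t - t^{m}_{k}} y_{k,u} \;\ge\; y_{a,t} \qquad \forall t,\ \forall k \in \mathrm{pred}(a)
   % mining-before-mining precedence, cf. (6)
```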

Precedence sets

The mining method often dictates a precedence set in a zone of a mine, but sometimes
there is more than one way in which a zone can be mined; thus, it is possible to have a
set of mining precedences for an area, only one of which can hold. The following simple
top-down view of a hypothetical mine illustrates this. Segments α, β, γ, and δ, all contain
ore of various grades and quantities. There are existing drifts between α and β and also
between γ and δ.

Figure 4 Precedence Example.


The mining engineer determines the following mining rules:

• δ must be taken last.

• Two of α, β, γ may be mined at the same time.

• If α is mined before β, then α must be filled before β may be mined. Alternatively, if β is mined before α, then β must be filled before α may be mined.

If we were to ignore backfilling requirements, an optimal scheduling solution may require that we extract all of this ore, or alternatively, may require us to mine only one
or two drifts. However, what we can extract depends on how we have defined the mining
and backfilling sequences in an area.
If we decide a priori to extract all of the ore, then the precedence constraints must
include backfilling in the following sequence, e.g., mine α and γ, backfill α, mine β, mine δ.
This sequence would therefore adhere to prescribed precedences and is currently used as
the basis for a precedence set.
However, if we wish to mine β and γ before mining α, we would find that the previous
precedence set would not require that we backfill β before mining α and this would violate
the mining engineer's rules (given above). Clearly, this different mining scenario would
require another precedence set. Therefore, we need to account for a variety of mining
and backfilling scenarios, and corresponding precedence sets; in turn, this expands our
solution space.

preliminary results
The precedence data provided by the management at our mine was overly constrained: it was a schedule rather than a set of mining rules. However, within this schedule there were still some degrees of freedom that allowed us to obtain a long-term extraction sequence that we could compare with the current schedule.
Our model was tested against a schedule that was developed manually by the mine scheduling team. We ran our model over 24 monthly periods. The resulting mixed integer program contained 126,338 binary variables and 27,177 constraints. Our model produced a solution that was within 1.5% of optimality. The corresponding value of production over the time horizon was 2.84 million tonnes. The manual schedule produced 2.76 million tonnes over the same period. This translates to an improvement in production of about 3% for this two-year period.

conclusions and extensions


We expect that our model will produce better results as the degrees of freedom within the
precedence set are increased. This will involve the creation of alternative precedence sets that
will allow a choice between different mining and backfilling scenarios within a mining zone.
Once the optimised long-term schedule has been found, we plan to use it as a basis
for a short-term scheduling model that will allow certain aspects of the operation, such
as area-specific mining rates, to be considered in more detail.

references
Carlyle, W. & Eaves, B. (2001) Underground Planning at Stillwater Mining Company. Interfaces, 31(4):
pp. 50–60. [1]

Sarin, S. C. & West-Hansen, J. (2005) The Long-term Mine Production Scheduling Problem. iie Transactions,
37(2): pp. 109–121. [2]

Newman, A. & Kuchta, M. (2007) Using Aggregation to Optimize Long-Term Production Planning at an
Underground Mine. European Journal of Operational Research, 176(2): pp. 1205–1218. [3]
Optimising Open Pit Block
Sequencing Using Graph
Theoretic Ideas

abstract
Christopher cullenbine Maximising net present value for an open pit mine requires
Colorado School of Mines, USA optimal sequencing of block extraction. Common formulations
of the “open pit block sequencing model” maximise net present
Alexandra newman value subject to precedence (“sequencing”) rules between blocks,
Kevin wood
and to lower and upper bounds on resources such as production
Naval Postgraduate School, USA
and processing capacity. We assume a fixed cutoff grade, that is,
a block's destination, mill or waste dump, is determined a priori.
We demonstrate how graph-theoretic techniques can help solve
to optimality this variant of the open pit block sequencing model.
We also suggest that the techniques can be extended to allow for
a variable cutoff grade.

introduction
Extracting blocks from an open pit mine in a specific order can maximise net present value. Determining such an order is a challenging problem to solve, especially for large mine models with millions of blocks. Many techniques have been applied to improve the tractability of the open pit block-sequencing problem [1], but these techniques usually reduce solution fidelity: for instance, details such as a variable cutoff grade and/or lower bounds on resource consumption are omitted, or blocks are aggregated that should not be in an optimal solution.
The Ultimate Pit Limit problem (upl) is a similar problem whose solution specifies
which blocks to extract to maximise some measure of profit, but not the order of
extraction. Lerchs and Grossmann [2] developed an efficient algorithm to solve upl by
exploiting the underlying network structure of its integer-programming formulation.
The algorithm solves the upl as a maximum-weight closure in a directed acyclic graph
G = (N,E) where N is the set of nodes (blocks) and E is the set of directed edges (see
Picard [3] for a treatment of maximum-weight closures). The edges represent precedence relationships: an edge runs from block i to block j if and only if block j touches block i and block j must be extracted before block i. (We usually say that block j must be extracted immediately before block i in this case.) In particular, the Linear-Programming (lp) relaxation can be solved using efficient network-flow techniques that yield optimal integer solutions, in which the variable associated with block b equals 1 if the block is extracted in an optimal solution and 0 otherwise.
When we add side constraints to upl to enforce, say, upper and lower bounds on the total weight extracted during a time period (maximum or minimum production per time period), the network structure is compromised. In this case, we must expand the upl by time period (each block variable is replaced by one copy per time period t) and we must apply more sophisticated Integer-Programming (ip) techniques to solve the resulting open pit block-sequencing problem. Dagdelen and Johnson [4] and later Akaike and Dagdelen [5] use Lerchs and Grossmann's upl algorithm to solve a Lagrangian Relaxation (lr) of the open pit block-sequencing problem. Unfortunately, their approach guarantees a solution that satisfies the precedence constraints, but not the side constraints. The current paper uses their approach of Lagrangian Relaxation, but describes graph-theoretic techniques that always enforce precedence constraints, can move from a solution that does not satisfy the side constraints to one that does, and proves that we obtain an optimal solution for a variant of the open pit block-sequencing problem.
The remainder of this paper is organised as follows: Section two specifies a standard
open pit block-sequencing model and a simplified version of its Lagrangian Relaxation;
Section three proposes a graph-theoretic technique for solving the open pit block-
sequencing problem and provides recommendations for future research; and Section
four provides conclusions.

block sequencing: formulation and lagrangian relaxation
The following specifies a standard "by" formulation of the Open Pit Block-Sequencing (opbs) model [6] (which is adapted from the air traffic management model of Bertsimas and Patterson [7]). "By" indicates that the key variables have this interpretation: a variable equals 1 if block b is extracted by time period t, and 0 otherwise.

Indices, index sets, and parameters


set of all blocks b
set of periods within the horizon
set of blocks that must be excavated immediately before block b
value associated with the extraction of block b in period t
consumption of resource associated with the extraction of block b (tonnes)
maximum production level in time period t (tonnes)
minimum production level in time period t (tonnes)

Variable
1 if block b is extracted by time period t, 0 otherwise

Formulation (P1)

(1)

s.t. (2)

(3)

(4)

(5)

(6)

The objective function (1) maximises the net present value of extracting blocks from the
mine. Constraints (2) enforce temporal precedences and limit extraction of each block
to a single instance. Constraints (3) limit the maximum amount of production in any
time period, while constraints (4) require a minimum level of production in any time
period. Constraints (5) prevent extraction of a block prior to extraction of its spatial
predecessors. All variables are binary by constraints (6) .
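The typeset formulation is not reproduced in this copy; the following sketch restates (P1) in hypothetical notation consistent with the constraint descriptions above (y_{bt} for the "by" variables, c_{bt} for the discounted block value, q_b for the resource consumption of block b, and upper/lower production bounds per period).

```latex
% Hypothetical reconstruction of (P1); the symbols are illustrative, not the original
% notation.  y_{bt} = 1 if block b has been extracted by period t (with y_{b,0} = 0),
% c_{bt} is the discounted value of extracting b in period t, q_b its tonnage, and
% \bar{R}_t, \underline{R}_t the maximum and minimum production levels.
\max \sum_{b \in B} \sum_{t \in T} c_{bt}\,\bigl(y_{bt} - y_{b,t-1}\bigr)              % (1)
\text{s.t. } y_{b,t-1} \le y_{bt} \quad \forall b, t                                    % (2)
\sum_{b \in B} q_b \bigl(y_{bt} - y_{b,t-1}\bigr) \le \bar{R}_t \quad \forall t         % (3)
\sum_{b \in B} q_b \bigl(y_{bt} - y_{b,t-1}\bigr) \ge \underline{R}_t \quad \forall t   % (4)
y_{bt} \le y_{b't} \quad \forall b,\ b' \in \mathcal{B}_b,\ t                           % (5)
y_{bt} \in \{0,1\} \quad \forall b, t                                                   % (6)
```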
Constraints (3, 4) destroy the network structure of the model, which implies the
model's lp relaxation may yield fractional solutions that do not satisfy constraints (6).
(The network structure consists of the linear-programming dual of a network-flow
problem.) Our solution approach dualizes constraints (3, 4) , i.e., removes each from the
constraint set and places it in the objective function with (roughly speaking) a penalty for
its violation [4, 5]. The relaxed model exhibits only network structure, i.e., a maximum-
weight closure, so an integer solution is always available by solving its lp relaxation. It
is well known that the correct Lagrangian multipliers lead to a solution whose objective
value equals that of the lp relaxation, and this is useful information. Unfortunately, such
a solution is likely to violate several of the side constraints. After giving the Lagrangian formulation, we develop methods that can move from an infeasible solution to a feasible one and prove optimality.

Formulation (L1)

(7)

s.t. (8)

(9)

(10)

The Lagrangian multipliers in expression (7) are nonnegative constants that may be used (roughly speaking) to penalise a solution if it does not satisfy the dualized constraints. For simplicity, we assume that we compute an optimal solution to the lp relaxation of (P1) and extract optimal dual variables from (P1) for constraints (3) and (4). We use those as Lagrangian multipliers for relaxing the respective constraints. We can then rewrite the Lagrangianised objective function to obtain (11), which facilitates its
transformation into a simpler problem.

(11)

Constraints (8, 9) are identical to those in (P1), though constraints (9) are not typically
part of upl because it has no dimension of time. We relax the integrality requirements (6)
by replacing them with lower and upper variable bounds (10) . Since formulation (L1) is a
maximum- weight closure in a directed graph G, the underlying network problem yields
integer solutions when solving the lp relaxation.
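In the same hypothetical notation introduced above, the dualised objective referred to in (7) and (11) can be sketched as follows; the multipliers and the grouping of terms are standard for this type of relaxation and are shown only as an illustration.

```latex
% Hypothetical sketch: dualising (3) and (4) with multipliers \mu_t \ge 0 and
% \nu_t \ge 0 and grouping terms gives a Lagrangian objective of the form
\max \sum_{b \in B}\sum_{t \in T} \bigl[c_{bt} - (\mu_t - \nu_t)\,q_b\bigr]\bigl(y_{bt} - y_{b,t-1}\bigr)
      \;+\; \sum_{t \in T} \bigl(\mu_t \bar{R}_t - \nu_t \underline{R}_t\bigr)
% The bracketed per-block coefficients are the "Lagrangian-objective-function values"
% referred to below.
```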
Johnson [8] transforms the maximum-weight closure with spatial precedences like the
upl to a maximum-flow problem in a network. However, opbs has two types of precedence
constraints, spatial (9) and temporal (8). We account for both types of precedences in the
construction of the maximum-flow network as follows: First, create a source and sink node
and a node for every block in the mine. Then, connect the source node to every block node with positive objective-function value using a directed edge whose capacity equals that value. Next, connect every block node with negative objective-function value to the sink node using a directed edge whose capacity equals the absolute value of that value. Finally, connect every block node to its list of spatial and temporal predecessors using directed edges with infinite capacity, and solve the resulting maximum-flow problem.
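A small sketch of this construction using the networkx library (hypothetical block values and predecessor lists; not the authors' code) is shown below; edges added without a capacity attribute are treated by networkx as having infinite capacity.

```python
# Illustrative sketch with hypothetical data: build the maximum-flow network just
# described and extract a minimum s-t cut with networkx.  Nodes are (block, period)
# pairs; `value` holds the (Lagrangian) objective-function value of each block copy.
import networkx as nx

value = {("b1", 1): 5.0, ("b1", 2): 4.0, ("b2", 1): -3.0}
preds = {("b1", 1): [], ("b1", 2): [("b1", 1)], ("b2", 1): []}   # spatial/temporal predecessors

G = nx.DiGraph()
for node, v in value.items():
    if v > 0:                       # positive value: edge from the source, capacity v
        G.add_edge("s", node, capacity=v)
    elif v < 0:                     # negative value: edge to the sink, capacity |v|
        G.add_edge(node, "t", capacity=-v)
    for p in preds[node]:           # precedence edges: no capacity attribute,
        G.add_edge(node, p)         # which networkx treats as infinite capacity

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
blocks_to_mine = source_side - {"s"}    # source side of the cut = block copies to extract
print(cut_value, blocks_to_mine)
```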

Figure 1 Maximum-flow networks with Lagrangian objective-function values for OPBS.

The objective-function coefficients on the variables may change sign in going from (P1) to (L1). That is, a block with a positive objective-function value in (P1) may have a negative Lagrangian-objective-function value in (L1), and vice versa, depending on the multipliers. If the objective-function value is positive and the Lagrangian-objective-function value for a block is negative, we simply delete the edge from the source and add a directed edge from the block node to the sink with capacity equal to the absolute value of the Lagrangian coefficient. If the objective-function value is negative and the Lagrangian-objective-function value is positive, we delete the edge to the sink and add a directed edge from the source to the block node with capacity equal to the Lagrangian coefficient. Figure ➊ shows a three-block, two-time-period maximum-flow network before and after modifying the objective-function values with the Lagrangian penalty terms. In this example, block 3's objective-function value is positive while its Lagrangian-objective-function value is negative.

minimum-cut enumeration algorithm


The network structure of (L1) enables the use of efficient maximum-flow or minimum-cut
algorithms for its solution. Ford and Fulkerson [9] show that the maximum s-t flow in
a network equals the weight of the minimum s-t cut. An s-t cut is a partitioning of the
nodes of a graph into two disjoint sets, one containing node s and the other containing
node t. For example, in the maximum-flow network in Figure ➊a, the node sets S = {(s), (1, 1), (1, 2), (2, 2)} and S̄ = {(t), (2, 1), (3, 1), (3, 2)} form an s-t cut. The weight of the cutset is the total weight of all edges with tail node in S and head node in S̄. An s-t cut in this example is easy to understand from a mining standpoint: nodes in set S represent blocks to mine and nodes in S̄ represent blocks to leave in the ground. For example, the s-t cut described above corresponds to mining block one in time period one, block two in time period two and leaving block three in the ground.


Erken [10] solves a Network Diversion Problem (ndp) using a branch-and-bound algorithm
that enumerates near-minimum-weight s-t cuts by recursively fixing edges into and out
of the solution. While ndp and opbs share a similar network structure, their formulations
are different and thus their Lagrangian Relaxations are different. An optimal solution
to the ndp requires a minimum weight, minimal s-t cut that includes a specific edge,
while an optimal solution to opbs corresponds to a minimum-weight, minimal s-t cut
that satisfies all dualized constraints. Therefore, we solve opbs as follows: First, we set
up the opbs lr and convert it to a maximum-flow problem. Then, we run the branch and
bound algorithm described in Erken [10] except that feasibility checks involve evaluating
constraints (3) and (4) for the given restricted solution, rather than just checking for
minimality of a cut.
Though this method finds an optimal integer solution to opbs, there are formulation-
strengthening and variable-elimination ideas that may reduce solution times. For
example, Gaupp [11] improves solution speeds for opbs by eliminating variables based
on early and late start times, i.e., the earliest or latest a given block could be mined given
maximum and minimum production capacities per time period. We could utilise such
an approach to reduce the overall size of the network or eliminate, a priori, nonoptimal
solutions. Additionally, it may be possible to employ updates of the Lagrangian multipliers
to expedite solutions, in a manner similar to Lambert et al. [12] .
We can use this minimum-cut enumeration technique to solve opbs with a variety
of resource-constraint types (i.e., not just one or two, and not just with both lower and
upper bounds), incorporate a variable cutoff grade and track inventory costs associated
with a slack resource constraint in any given period t. Some modifications, such as a
variable cutoff grade, may require additional variables and extra bookkeeping, but the
approach and methodology we describe in this paper will extend as long as, after relaxing
the necessary constraints: (a) the network structure of the precedence constraints is
decoupled from other remaining constraints and variables, (b) any decoupled optimisation
problems involving the new variables are easy to solve. By easy to solve, we mean that
the solution involves a deterministic calculation of variable values, or the solution of a
simple mathematical program. For instance, imagine a simple extension of opbs that
allows inventorying of extracted blocks for later processing. Given a fixed production plan
from the Lagrangian solution, and a simple inventory-cost model, it may be possible to
compute inventories, and their total cost, using a simple first-in/first-out rule. (Penalised
negative inventories would be allowed.) A more complicated but manageable inventory
model might require the solution of a minimum-cost flow problem to compute implied
inventory values.

conclusion
This paper first defines a standard Lagrangian Relaxation of the open pit mine block
sequencing problem, and then describes a new branch-and-bound solution method
based on that relaxation. An easy-to-compute minimum-weight s-t cut in a network
defines the Lagrangian solution; the branch-and-bound algorithm enumerates near-
minimum-weight cuts. This technique can certainly be improved by incorporating
existing formulation-strengthening and variable-reduction methods, and perhaps by
updates of the Lagrangian multipliers. We also indicate how additional side constraints,
or inventory, or a variable cutoff grade could be successfully modelled and solved within
our framework.

references
Newman, A. M., Rubio, E., Caro, R., Weintraub, A. & Eurek, K. (to appear) A review of operations research
in mine planning. Interfaces. [1]

Lerchs, H. & Grossmann, I. (1965) Optimum design of open-pit mines. Canadian Mining Metallurgical
Bull, lxviii: pp. 17–24. [2]

Picard, J. C. (1976) Maximal closure of a graph and applications to combinatorial problems. Management
Science, 22(11): pp. 1268–1272. [3]

Dagdelen, K. & Johnson, T. (1986) Optimum open pit mine production scheduling by Lagrangian parameterization. Proceedings of the 19th apcom Symposium of the Society of Mining Engineers (aime), pp. 127–142. [4]

Akaike, A. & Dagdelen, K. (1999) A strategic production scheduling method for an open pit mine. Proceedings of the 28th International Application of Computers and Operations Research in the Mineral Industry (apcom), pp. 729–738. [5]

Caccetta, L. & Hill, S. P. (2003) An application of branch and cut to open pit mine scheduling. J. of Global
Optimization, 27(2-3): pp. 349–365. [6]

Bertsimas, D. & Patterson, S. S. (1998) The air traffic flow management problem with enroute capacities.
Oper. Res., 46(3): pp. 406–422. [7]

Johnson, T. B. (1968) Optimum open pit mine production scheduling. Ph.D. thesis, University of California,
Berkeley, California. [8]

Ford, L. & Fulkerson, D. (1956) Maximal flow through a network. Canadian Journal of Mathematics, pp. 399–404. [9]

Erken, O. (2002) A Branch and Bound Algorithm for the Network Diversion Problem. Master's thesis, Naval
Postgraduate School, Monterey, California. [10]

Gaupp, M. (2008) Methods for improving the tractability of the block sequencing problem for an open pit
mine. Ph.D. thesis, Division of Economics and Business, Colorado School of Mines, Golden,
Colorado. [11]

Lambert, W., Gaupp, M., Newman, A. M. & Wood, R. (2010) Open pit block sequencing using Lagrangian Relaxation. Working Paper. [12]

Creating Competitive Advantage in
Mining: An Illustrative Comparison
with the Oil Industry

abstract
Jose garcia This paper discusses several management practices from the oil
Juan camus industry to support the proposition that financial performance
Peter knights
in natural resource-based businesses relates more to upstream,
University of Queensland, resource-related activities than to downstream, industrial-type
Australia
activities concerned with production management. Outcomes of two
studies conducted at the University of Queensland for nine oil and
gas companies and 14 mining firms corroborated that those that
excelled in increasing reserves were in turn those that delivered
greater value to shareholders. The oil industry, historically more
prosperous than mining, relies on management practices that
focus more on the upstream segments of the business, in contrast
with the traditional downstream focus of mining. This paper appraises
several ideas from these two approaches to propose that a new
organisational framework may be what mining needs
to improve its competitive advantage.

introduction
In order to understand how value is created in mining, research underway at the uq
Mining Engineering Program [1] has modelled the mining business using the value
chain framework proposed by Harvard University professor Michael Porter [2]. The model considers
the primary activities that create value directly, which are overarched by support
activities that provide other common resources to the business, as depicted in the
following figure:

Figure 1 Mining value chain.

Upstream are the resource-related activities that embody the holistic function of mineral
resource management. Its aim is to discover new resources and transform them into
economic mineable reserves. Its output is a business plan that defines how and at which
pace a deposit will be exploited.
Downstream activities are accountable for the execution of this plan. These industrial-
type activities begin with the project management task, the area responsible for the
engineering and construction section of the plan. Next comes the operations
management unit, which is accountable for the production section of the plan. At the
end is the marketing function responsible for revenue capture.
In the mining industry, there is a deep-rooted belief that value creation primarily
rests on the downstream, industrial-type activities. These focus on production and costs,
which in turn determine earnings. Instead, research underway at uq proposes that value
in mining is mainly the result of managing effectively the upstream, resource-related
activities, which focus on reserves growth.
This point has also been raised not long ago by Standard & Poor's, one of the world's largest
providers of investment ratings and financial research data. In a white paper [3], it
commented:
Analysing a mining company is a bit different than analysing most companies… Mining
companies are valued not according to earnings so much as assets and so factors such as material
reserves and production must be taken into account.
The above proposition is supported by comparing variations in the company share price
plus dividends over time with variations in company mineral reserves plus production. In
business parlance, the former variable is commonly known as Total Shareholder Return
(tsr) whereas the latter has been defined in the research and referred to as Total Reserves
Increment (tri).
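For concreteness, the two indicators could be computed along the following lines. The exact normalisation used in the uq study is not reproduced in this paper, so the formulas below (price variation plus dividends over the opening price for tsr; reserve variation plus cumulative production over opening reserves for tri) and all figures are illustrative assumptions only:

```python
def total_shareholder_return(p_start, p_end, dividends):
    """TSR: share-price variation plus dividends, relative to the opening price (assumed form)."""
    return (p_end - p_start + dividends) / p_start

def total_reserves_increment(r_start, r_end, production):
    """TRI: reserve variation plus cumulative production, relative to opening reserves (assumed form)."""
    return (r_end - r_start + production) / r_start

# Hypothetical company over a review period; the numbers are made up.
print(total_shareholder_return(10.0, 32.0, 4.5))      # 2.65, i.e. 265%
print(total_reserves_increment(800.0, 950.0, 400.0))  # ~0.69, i.e. 69%
```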
Results obtained from a group of 14 mining companies over the period 2000–2008
are shown in Figure 2. These seem to confirm the hypothesis that leading companies
that surpass the group's average tsr in the period also exceed the group's average tri.

Figure 2 Financial and technical performance of mining companies [1] .

The sample adequately represents the worldwide mining industry as 8 out of the 14
companies surveyed belong to the world's top ten market capitalisation list recently
released by PricewaterhouseCoopers [4] , a global accounting firm.
The previous model demonstrates that the disciplined growth of mineral resources
and their effective conversion into mineral reserves underpin the creation of value in
the mining business.
This research also suggests that the structures, processes, and systems used by mining
companies to manage their mineral resources (the upper part of the value chain) play
a pivotal role in their effectiveness. This issue is not always addressed appropriately in
the mining industry so there seems to be ample room for innovation and developments
in these areas.
The next section presents a comparative analysis of the mining and oil and gas sectors.
Their realities will be portrayed as introduction to the subsequent sub-section, which will
address the upstream/downstream value chain widely used in the oil and gas industry.
A third section aims to tackle the organisational divergence between both sectors, which
gives way to the concluding remarks towards the end of the paper.

a different reality
Mining is one of the earliest industrial activities. Despite being called an industry of the
‘old economy’, it plays a leading role in today's world economy. However, in spite of the
latest resource super cycle, mining appears to lag behind sister industries such as
oil and gas. The persistence of mining companies in pursuing growth through acquisitions
and technical developments has not been enough to surpass the oil and gas industry in
creating shareholder value.
The oil industry seems to have been more innovative in the way it organises its
business, as became apparent after the several crises that struck the sector over the last
three decades. Comparisons between mining and oil equity price indices over the last
decades indicate that shareholders of the former have not been rewarded equally, as
Philip Crowson [5] pointed out in 2001. In truth, the performance of the mining sector
was well below its oil counterpart for most of the last decades, as shown in Figure 3.
This trend reversed somewhat during the last five years, but apparently this was caused by
the skyrocketing metal prices that favoured mining industry profitability.

Figure 3 MSCI metals & mining and oil production indices.

The oil shocks of 1974 and 1979/80 transformed the business environment of the oil
industry from one of stability to one of turbulence. As a result, the international oil
majors were forced to reformulate their strategies and redesign their organisations. In
effect, the major oil companies had to reconfigure their structures and management
systems to reconcile flexibility and responsiveness with the integration required to exploit
the resource advantages of giant corporations [6] .
Accordingly, a new business understanding blossomed across the oil and gas sector. The
use of the value chain in the business became ordinary over the years with a particular
focus on the aforementioned upstream/downstream decoupling that will be analysed
in the next section.

Upstream/Downstream in the oil and gas business


Similar to mining, Figure 4 presents a value chain to show how activities add value
to the overall oil and gas business. Oil and gas producers divide their business into two
large blocks of value; upstream activities accountable for exploration and production and
downstream activities responsible for the crude transformation, petrochemical business
and marketing.

Figure 4 Oil industry value chain.

What is striking is that this model is not only extensively used but omnipresent
among oil producers. This practice, shaped in the 1980s, was aimed at balancing
each activity's influence and weight when facing one of the most dramatic periods in
the history of the oil industry.
Consequently, the transaction costs of intermediate markets fell, while the costs
of internal transfer rose. Royal Dutch Shell was the first company to free its refineries
from the requirement to purchase oil from within the group. Between 1982 and 1988, all
the sample members granted operational autonomy to their upstream and downstream
divisions, placing internal transactions on an arm's-length basis. Upstream divisions
were encouraged to sell oil to whichever customers offered the best prices, while
downstream divisions were encouraged to buy oil from the lowest cost sources.
Since then, all major oil players completed a steady evolution from the fully integrated
scheme to an Upstream-focused scenario, as depicted in Figure 5, which shows
the evolution of the major oil companies of the time from totally balanced Upstream/
Downstream earnings to an Upstream-centred setting.
Nowadays, a glance at some major oil companies' annual reports reveals that
the ratio of Upstream to total earnings is normally around 75% for integrated
companies such as bp, Shell, and Exxon, and close to 100% for the almost exclusive
Upstream-focused companies, such as Saudi Aramco and Apache Oil.

Figure 5 Upstream/Downstream earnings ratio evolution [6] .

Conversely, mining companies don't split earnings along the value chain, although these
can be tracked across business divisions, operations, and geographic areas as shown
in their annual reports. In short, this means that no value is allocated to each of the
business's primary activities.
The econometric model presented in the publication Upstream-Downstream - specialization
by integrated firms in a partially integrated industry [7] depicts the asymmetry between the
Upstream and Downstream activities in the oil industry, as well as its implications for
strategic considerations and interaction with the non-integrated sector of the industry.
It concludes: There is, however, no ambiguity in the effect of Upstream cost asymmetries: the
integrated firm with the lower Upstream cost will produce more both Upstream and Downstream
than the one with the higher Upstream cost, but its Downstream production will be less important
relative to its Upstream production.
Such an econometric model sheds light on what may appear obvious to oil companies'
management but not to other extractive industries. It shows, in
short, that the most value-accretive activities in the resources industry take place in the
Upstream area of the business. Therefore, it is easy to conclude that all those companies
that excelled in managing the Upstream—in other words, increasing the reserves base
of the company—should be in turn the most successful ones.
In order to prove this hypothesis, a new study was carried out at the uq Mining
Engineering Division. This time, nine of the most prominent oil companies that trade
in the New York Stock Exchange (nyse) were examined under similar parameters to
calculate their respective tsr and tri.
Results depicted in Figure 6 show a marked correlation between both parameters,
indicating that those companies that excelled in incrementing their reserves base, had
higher returns to their shareholders.

Figure 6 Oil companies' performance.

The model presents a few companies that deviate somewhat from the general trend,
however. In detail, it is envisaged that some reserves reporting issues may be affecting
oil market behaviour. All companies that trade on the nyse are under the regulations of
the U.S. Securities and Exchange Commission (sec). In the case of oil and gas companies,
the sec's disclosure rules, set in 1978, only allow the reporting of proved reserves. But this
is just one category of the overall pool of oil and gas resources controlled by companies in
the industry. The impediment to reporting less reliable reserves, which is aimed at protecting
shareholder integrity, deters the market from operating more openly. This issue has already been
raised by financial and accounting firms over the last years. For instance, Deloitte [7]
claimed in 2005: Regulators globally should co-operate to seize the opportunity to embrace the
comprehensive and current reserves definition and categorization structure already endorsed by
petroleum engineering professionals worldwide.
Nevertheless, there are other non-visible mechanisms of resource reporting. The
refining giant ExxonMobil, for instance, is acknowledged as a company that handles
reserves superbly through a very effective reserve replacement activity. The company
reports in its annual report not only its proven reserves but also its resource base. This
includes quantities of oil and gas that are not yet classified as proved reserves but which
ExxonMobil believes will likely be moved into the proved reserves category and produced
in the future. Other channels used to inform investors are foreign stock exchanges, the
Canadian ones for instance, which allow reserves reporting in the categories of proven, probable
and possible.

Figure 6 also shows bp in a lagging position, which could be the consequence of
an aggressive and untimely acquisition policy. By contrast, Repsol ypf achieved better
results through an extraordinary cash position after the nationalisation of some of its
assets as well as some remarkable discoveries not yet reported.
Outcomes from both studies clearly indicate that resource growth is the engine of
value creation in the non-renewable resource industry. However, the model representing
the mining and oil industry profitability provides no details as to why some companies
perform this activity better than others or how this could be executed more efficiently.
Some ideas to advance on this open question will be the focus of the following sections.

an organisation for managing the upstream


After establishing the fundamentals of strategy in the non-renewable resource business,
the centre of attention focuses on getting the right organisation to accomplish that
strategy. This includes the design of administrative structures, systems, and procedures
along with people, which apparently makes the difference in the oil and gas industry.
The decentralisation that the oil industry undertook in times of greatest difficulty
explains what has characterised the oil companies' model for decades: a splendid approach
to strategy and entrepreneurship. The understanding of two main blocks of primary
activities was crucial for the success of this effort, as the generation of competitive
advantage requires a company to understand the entire value creation system, not just
the portion of the value chain in which it takes part [8] .
The fact that all companies adopted such changes so quickly was most likely due to an
abrupt administrative change. A more complex question could be answered by the theory
of business isomorphism [9], which aims to explain the several reasons why companies
generally tend to resemble each other. According to this theory there are several
mechanisms of corporate homogenisation; one of them, mimetic isomorphism,
occurs when companies produce a similar standard response to a certain degree
of uncertainty. Also, according to Hawley [10], coercive isomorphism is a process that
forces one unit in a population to resemble others facing the same set of environmental
conditions; this seems to have been one of the main causes after the great crisis.
The previous study shows that some of the best performers are exclusively focused on
the Upstream area of the business (e.g.: Apache and Devon). This reality not only confirms
the hypothesis that exploration and production activities are the cornerstone of the
business, but suggests that growth is a much tougher assignment for those larger and
less flexible companies. According to the theory of transaction costs, the size of a firm,
or its degree of integration, is determined by the transfer costs in the open markets. A
company that enjoys lower external transaction costs will likely dwindle; otherwise it
will increase its size [11]. The boundary of a company depends on the comparative costs
of those margins [12] .
Besides, when smaller companies are not competitive in a certain business area, it
is advisable that they abandon it and focus on other areas more akin to their productive
knowledge. To some extent, this harks back to Jim Collins's [13] hedgehog concept, which
is employed by ‘great’ companies that know one big thing and stick to it. This could well
be the concept that explains why some oil and gas companies focus on the Upstream
segment and specialise on the oil resources management.
The manner in which people are grouped in the oil industry plainly reflects the
importance it gives to Upstream management. The relevance of the exploration
and production role is crucial, as it has a great deal of authority within the oil and gas
firm. This position could be pragmatically redefined as the resource executive. In the
mining world, this role does not exist, although in many instances there exists an
exploration role with barely any executive empowerment, whose
function is far from the holistic concept of managing the mineral resource assets as a whole.
Mining companies not only lack the mineral resource management function but, more
fatally, they usually relegate the resource-related personnel to low-ranked areas of the
business, normally reporting to inadequate managers, working under wrong directions
and reporting expenses instead of value added. This misalignment of the organisational
design with the course of the business normally leads to project failures, economic losses and
eventually, lower shareholder returns.
Other aspects of organisation design, not the focus of this paper, are the systems,
the procedures and the people. A comprehensive analysis of those adopted in the oil and
gas industry could lead to further organisational design improvements for the mining
industry and will be the subject of additional research.
To recap, it is envisaged that the oil and gas industry counts on more appropriate practices
to manage the Upstream segment of the business, which is core to its strategy.
Replicating this success to manage the mineral resources in the mining business may imply
adopting some of the managerial practices recommended in the following section.

conclusions
This study highlights the proposition that the main drivers of value creation in mining
are in the Upstream activities. This function is not clearly defined in the mining industry,
although lately it has become known as mineral resource management. At the corporate
level, it serves the mission of generating new economic reserves for the company in
addition to replacing those consumed. At the business unit level, it aims to plan the resource
extraction so that value is maximised.
The mineral resource that a mining company has access to represents the majority
of the value that exists within the company, as well as the competitive advantage that
the company has over its peers. Profiting from this fact would require a fundamental
re-appraisal of the way mining companies plan and execute their businesses; this
means focusing more attention on real value-adding activities [14]. This seems to be
a widely-accepted facet of the mining business. However, this does not translate into a
formal strategy, much less into systematic behaviour, so it appears that only a few mining
companies have a full understanding of this matter and act accordingly. Moreover, those
who do something apparently use a great deal of intuition because the organisational
setting prevailing in mining presents plenty of room for improvement.
Despite the crude reality, some awareness has appeared in recent years when a number
of service firms raised this issue. For instance, Standard & Poor's states in its White
Paper Mining 2008 [3]: Higher reserves for a mine translates to a greater longevity and higher
worth ... Mining companies are valued using market cap per ounce of reserve and market cap per
ounce of production. In addition, in its homonymous 2008 Oil and Gas White
Paper [15], Standard & Poor's states in relation to the Exploration and Production (E &
P) segment of the business: An E & P company's reserve base indicates the ability, when
produced, to generate positive cash flow… An E & P company's resources can also be thought of
in terms of acreage as a statement for exploration strategy enhancement.
The importance of oil reserves applies not only to private companies that trade in
open markets but also to state-owned corporations. This is reflected in the fact that the
13 largest oil companies in terms of reserves are totally or partially state-owned [16] .

A managerial separation of the mining value chain similar to the one that occurred
in the oil and gas business seems far away, as mining is a very mature industry that
has been habitually focused on production and cost. To enhance resource management
practices, mining companies may interpret the oil industry business model and readapt
it to mining. In doing so, the primary activities of the value chain should be assessed
incrementally, assigning an economic value added to each step of the value chain. By
using a market-based transfer price for inter-company sales, an integrated company could
assess each segment of the value chain as an independent profit centre [11] .
Successful value chain models need accepted methods to determine costs, margins, and
investments [17] . Logically, in a successful company, everyone in the value chain uses
the same numbers, speaks the same language, and aims towards the same set of goals.
To this end, this study proposes tools like eva® [18] as a standard value metric. This tool
should help mining companies increase their focus on changing the organisational
setting in mining and operationalising the notion of increasing shareholder value.
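For reference, eva® is conventionally computed as net operating profit after tax less a charge for the capital employed. A minimal sketch with hypothetical figures, offered only to make the metric concrete:

```python
def economic_value_added(nopat, invested_capital, wacc):
    """EVA = NOPAT minus a capital charge (invested capital times the weighted average cost of capital)."""
    return nopat - invested_capital * wacc

# Hypothetical mine-level profit centre, figures in mUS$ (made up for illustration).
print(economic_value_added(nopat=120.0, invested_capital=900.0, wacc=0.10))  # 30.0 mUS$ of value added
```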
The creation of a new organisational setting across the mining firm would help address
the issues raised in this paper. In fact, according to Bartlett et al. [19] —proponents
of a new managerial theory of the firm—in the emerging organisational model, the
elaborate planning, coordination and control systems are to be drastically redesigned ...
as management attention would shift towards the creation and management of processes
that more directly add value.
To succeed in this task, mining leaders will have to realise that mining is a peculiar
business that requires an extra managerial function – the management of the mineral
resource. The market will certainly reward those companies that not only work harder,
but above all, that mine smarter [20] .
Paradoxically, some of the major oil companies entered the mining
business in the 70s, only to quit it during the 90s. The entry may have been born of
a drive for diversification after the oil crisis. The exit was possibly a consequence
of the rigidity of a mature mining industry that could not compete in profitability with
its oil counterpart.

acknowledgements
The authors are thankful to the University of Queensland's Mining Engineering Division
for its support to this research work in the field of mineral resource management. In
addition, the authors wish to express their gratitude to Rio Tinto for its financial support
to the uq Mining Engineering Program that makes possible these types of initiatives.

references
Camus, J., Knights, P. & Tapia, S. (2009) Value Generation in Mining: A New Model, 2009 Australian
Mining Technology Conference, Brisbane, 27–28 Oct 2009. [1]

Porter, Michael (1985) Competitive Advantage: Creating and Sustaining Superior Performance, Simon &
Schuster, New York. [2]

Standard & Poor's (2008) Data Navigator White Paper: Mining Industry-Specific Data. [3]

PricewaterhouseCoopers International Limited (2009) Mine Review of Global Trends in the Mining
Industry. [4]

Crowson, P. (2001) Mining Industry Profitability? Centre for Energy, Petroleum and Mineral Law and
Policy, University of Dundee, Nethergate, Dundee, dd1 4hd, Scotland, UK. [5]

Grant, R. and Cibin, R. (1996) Strategy, Structure and Market Turbulence: The International Oil Majors,
1970–1991. Scandinavian Journal of Management. Vol. 12, No. 2, pp. 165–188. [6]


Deloitte Touche Tohmatsu (2005) Presenting the Full Picture: Oil and Gas Reserves Measuring and
Reporting in the 21st Century. [7]

Shank, J. K., Spiegel, E. A., & Escher, A. (1998) Strategic Value Analysis for Competitive Advantage: An
Illustration from the Petroleum Industry. Strategy and Business, 10. Booz Allen & Hamilton. [8]

DiMaggio, P. & Powell, W. (1983) The Iron Cage Revisited: Institutional Isomorphism and Collective
Rationality in Organizational Fields. American Sociological Review, Vol. 48, No. 2, pp. 147–160. [9]

Hawley, A. (1968) Human Ecology. International Encyclopaedia of Social Sciences. pp. 328–7. [10]

Williamson, O. & Winter, S. (1993) The Nature of the Firm: Origins, Evolution and Development. Oxford
University Press. 256 pages. [11]

Coase, R. (1937) The Nature of the Firm. Economica 4(16) 386–405. [12]

Collins, J. (2001). Good to Great: Why Some Companies Make a Leap and Others Don't. Harper Collins. [13]

Macfarlane, S. (2007) Leveraging the Bottom Line of Your Mining Business Through Effective Management
of the Mineral Resource. South African Institute of Mining and Metallurgy. [14]

Standard & Poor's (2008) Data Navigator White Paper: Oil and Gas Exploration and Production Industry-
Specific Data. [15]

The Economist (2009) The Rise of the Hybrid Company. 3rd Dec. [16]

Walters, D., Halliday, M. & Glaser, S. (2002) Added Value and Competitive Advantage. Macquarie Business
Research Papers. [17]

Stern, J. & Shiely, J. (2003) The eva Challenge. Wiley & Sons. [18]

Bartlett, C. & Ghoshal, S. (1993) Beyond the M-form: Toward a Managerial Theory of the Firm. Strategic
Management Journal, No. 14. [19]

Camus, J. (2002) Management of Mineral Resources: Creating Value in the Mining Business. Society for
Mining, Metallurgy and Exploration. [20]
Forecasting Energy Spot Prices

abstract
Viviana fernández
Pontificia Universidad Católica de Chile

In this article, we forecast crude oil and natural gas spot prices at a daily frequency based on two classification techniques: Artificial Neural Networks (ann) and Support Vector Machines (svm). As a benchmark, we utilise an Autoregressive Integrated Moving Average (arima) specification. We evaluate out-of-sample forecasts based on encompassing tests and Mean-Squared Prediction Error (mspe). We find that at short-term horizons (e.g., 2–4 days), arima tends to outperform both ann and svm. However, at long-term horizons (e.g., 10–20 days), we find that in general arima is encompassed by these two methods, and that linear combinations of ann and svm forecasts are more accurate than their corresponding individual forecasts. Based on mspe calculations, we reach similar conclusions: the two classification methods under consideration outperform arima at longer time horizons.

introduction
Forecasting economic activity has received considerable attention over the past 50 years.
An increasing number of statistical methods, which frequently differ in structure, have
been developed in order to predict the evolution of various macroeconomic time series,
such as consumption, production and investment [1, 2] . In the area of natural resources,
commodity prices have been the focus of various studies [3–6] . Two recent articles, Dooley
and Lenihan [7] , and Lanza, Manera and Giovannini [8] , deal with base metals and
crude oil, respectively. Dooley and Lenihan consider a forward-lagged price model and
an Autoregressive Integrated Moving Average (arima) specification to assess the cash
price forecasting power. They conclude that arima modelling provides marginally
better forecasting results. Lanza, Manera and Giovannini in turn utilise cointegration
and an Error Correction Model (ecm) to predict crude oil prices. The authors conclude that
an ecm outperforms a naïve model that does not involve any cointegrating relationships.
In recent years, the forecasting literature has shown that the combination of multiple
individual forecasts from different econometric specifications can be used as a vehicle to
increase forecast accuracy [9] . In particular, Fang [10] illustrates that, for the case of the
U.K. consumption expenditure, forecast encompassing tests are a useful tool to determine
whether a composite forecast can be superior to individual forecasts. In addition, Fang
argues that forecast encompassing tests are potentially useful in model specification,
as forecast combination implicitly assumes the possibility of model misspecification.
Our study focuses on forecasting spot prices of crude oil and natural gas at a daily
frequency for the sample period 1994–2005. The contribution of our work is twofold. First,
we utilise one novel, non-linear forecasting technique, which is based on Support Vector
Machines (svm). svm is a relatively new data classification technique which has arisen
as a more user-friendly tool than artificial neural networks [11, 12] . Applications of
svm to forecasting are fairly recent and have dealt primarily with financial and energy
issues [13–17] .
The second contribution of this article is to perform encompassing tests for various time
horizons by resorting to three statistical techniques: arima, Artificial Neural Networks
(ann) and svm. Our computations show that the time horizon is a key element to decide
which model or combination of models can be preferable in terms of forecast accuracy.
This article is organised as follows: The Methodology section briefly discusses the svm
technique, which is relatively recent in the forecasting literature, and it presents forecast
accuracy and encompassing tests. The Results and Discussion section describes our data
set and discusses our estimation results. Finally, the Conclusions section summarises
our main findings.

methodology
svm represent a novel neural network technique, which has gained ground in
classification, forecasting and regression analysis [18–20] . One of its key properties is
that training svm is equivalent to solving a linearly constrained quadratic programming
problem, whose solution turns out to be always unique and globally optimal. Therefore,
unlike other networks' training techniques, svm circumvent the problem of getting
stuck at local minima. Another advantage of svm is that the solution to the optimisation
problem depends only on a subset of the training data points, which are referred to as
the support vectors.
Let us consider a set of data points (x1, y1), (x2, y2), ..., (xm, ym), which are independently
and randomly generated from an unknown function. Specifically, xi is a vector of
attributes, yi is a scalar, which represents the dependent variable, and m denotes the
number of data points in the training set. svm approximates such unknown function by
mapping x into a higher dimensional space through a function φ, and by determining
a linear maximum-margin hyperplane.1 In particular the smallest distance to such
a hyperplane is called the margin of separation. The hyperplane will be an optimal
separating one if the margin is maximised. The data points that are located exactly the
margin distance away from the hyperplane are denominated the support vectors.2
Mathematically, svm utilise a classifying hyperplane of the form f(x) = ω'φ(x) + b = 0,
where the coefficients ω and b are estimated by minimising a regularised risk function:

R(ω, b) = (1/2)∥ω∥² + C Σi Lε(yi, f(xi))    (1)

where (1/2)∥ω∥² is denoted as the regularised term, Σi Lε(yi, f(xi)) is the empirical error, and C>0 is
an arbitrary penalty parameter called the regularisation constant. Basically, svm penalise
f(xi) when it departs from yi by means of an ε-insensitive loss function:

Lε(yi, f(xi)) = |yi − f(xi)| − ε if |yi − f(xi)| ≥ ε, and 0 otherwise    (2)

so that the predicted values within the ε-tube have a zero loss, with ε arbitrary. In turn,
the minimisation of the regularised term implies maximising the margin of separation
to the hyperplane. The minimisation of Expression (1) is implemented by introducing
the slack variables ξi and ξi*. Specifically, the ε-Support Vector Regression (ε-svr) solves
the following quadratic programming problem [12]:

min over ω, b, ξ, ξ*:  (1/2)∥ω∥² + C Σi (ξi + ξi*)    (3)

subject to  yi − ω'φ(xi) − b ≤ ε + ξi,   ω'φ(xi) + b − yi ≤ ε + ξi*,   ξi, ξi* ≥ 0,   i = 1, …, m.

The solution to this minimisation problem is of the form

f(x) = Σi (αi − αi*) K(xi, x) + b    (4)

where αi and αi* are the Lagrange multipliers associated with the first and second set of constraints,
respectively. The function K(xi, xj) = φ(xi)'φ(xj)
represents a kernel, which is the inner product of the two vectors φ(xi)
and φ(xj) in the higher dimensional space.

1 A maximum-margin hyperplane separates two clouds of points, and it is at equal distance from the two.

2 The distance of a vector x to the hyperplane is given by |ω'φ(x)+b|/||ω||2. The margin distance is given by 2/||ω||.


Figure 1 Graphical representation of the SVM technique for a linear kernel.

Figure 1 provides the graphical representation of the svm optimisation problem for a
linear kernel. As illustrated, we seek to minimise ξi when yi is above f(x), and to minimise
ξi* when yi is below f(x).
Well-known kernel functions are K(xi, xj) = xi'xj (linear), K(xi, xj) = (γ xi'xj + r)^d, γ>0
(polynomial), K(xi, xj) = exp(−γ∥xi − xj∥²), γ>0 (radial basis function), and
K(xi, xj) = tanh(γ xi'xj + r) (sigmoid). The radial kernel is a popular choice in the svm literature.
Therefore our computations are based on such a kernel.
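The computations in this paper were carried out in S-Plus with the libsvm library. Purely as an illustrative sketch of an ε-svr fit with a radial kernel, an equivalent exercise in Python with scikit-learn could look as follows; the data below are synthetic stand-ins, not the study's series:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))                                      # stand-in for the two predictor return series
y = 0.4 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.normal(size=300)    # stand-in target returns

# epsilon-SVR with a radial (RBF) kernel: C is the regularisation constant,
# epsilon the width of the zero-loss tube and gamma the kernel parameter.
model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=1.0, epsilon=0.01, gamma="scale"))
model.fit(X[:-20], y[:-20])
forecast = model.predict(X[-20:])   # out-of-sample predictions for the held-back observations
```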
Granger and Newbold proposed the following statistic [20], which assumes that under
the null hypothesis models one and two have the same Mean-Squared Prediction Error
(mspe), i.e., E(e1t²) = E(e2t²):

GN = rxz / √[(1 − rxz²)/(H − 1)],  distributed as Student-t with H − 1 degrees of freedom    (5)

where rxz is the sample correlation coefficient between xt = e1t + e2t and zt = e1t − e2t, and H
is the length of the forecast error series. If rxz is positive and statistically different from
zero, model one has a larger mspe than model two. Conversely, if rxz is negative and
statistically different from zero, model two has a larger mspe.3
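A minimal sketch of the statistic in Expression (5), computed from two series of forecast errors, is given below; it is a direct transcription of the formula, not the author's S-Plus routine:

```python
import numpy as np

def granger_newbold(e1, e2):
    """Granger-Newbold statistic: correlation between the sum and the difference of
    the two forecast-error series, scaled so that under equal MSPE it follows a
    Student-t distribution with H-1 degrees of freedom."""
    e1, e2 = np.asarray(e1), np.asarray(e2)
    H = len(e1)
    r = np.corrcoef(e1 + e2, e1 - e2)[0, 1]
    return r / np.sqrt((1.0 - r ** 2) / (H - 1))

# A significantly positive value signals that model one has the larger MSPE;
# a significantly negative value signals that model two does.
```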
We also resort to a forecasting evaluation technique in Fang [10], denominated
forecast encompassing. In particular, one of the specifications utilised by Fang is
the following:

Δh yt+h = b1 Δh ŷ(1)t+h|t + b2 Δh ŷ(2)t+h|t + ut+h    (6)

where ŷ(i)t+h|t is the forecast of yt+h from model i based on information available at time t, and
Δh yt+h = yt+h − yt. (The difference operator is used due to non-stationarity of the time
series).4 When b1=0 and b2≠0, the second model forecast encompasses the first. Conversely,
if b1≠0 and b2=0, the first model forecast encompasses the second. In the case that both
forecasts contain independent information for h-period ahead forecasting of yt, both b1
and b2 should be different from zero. It is worth noticing that no constraint is imposed
on the sum (b1+b2).

3 We also utilised Diebold and Mariano's test [21] but, except for very short-term forecast horizons,
our results were inconclusive as to the performance of one model relative to another.

4 Given that we utilise the natural logarithm of the time series, Δh yt+h
represents the return on y between times t and (t+h).

Equation (6) can be estimated in principle by ordinary least squares, utilising
standard errors robust to the presence of both heteroskedasticity and serial correlation.
Nevertheless, if the two forecasts are highly collinear, Fang advises resorting to ridge
regression.
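A sketch of how such an estimation could be carried out with the Python statsmodels package is given below. Variable names are placeholders, no intercept is included (following the reconstruction of Expression (6) above), and this is not the author's S-Plus code:

```python
import numpy as np
import statsmodels.api as sm

def encompassing_regression(dy, f1, f2, h):
    """Regress the h-period change in y on the two competing h-step forecast changes,
    using Newey-West (HAC) standard errors to allow for heteroskedasticity and the
    serial correlation induced by overlapping h-step-ahead forecast errors."""
    X = np.column_stack([f1, f2])           # columns: forecast from model 1, forecast from model 2
    return sm.OLS(np.asarray(dy), X).fit(cov_type="HAC", cov_kwds={"maxlags": h})

# Hypothetical usage: res = encompassing_regression(dy, f_arima, f_ann, h=10); res.params gives b1 and b2.
```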

results and discussion


The estimation results reported in this section were carried out with routines written
by the author in S-Plus 7.0. In addition, the libsvm and nnet S-Plus libraries were utilised
for implementing the svm and the ann techniques, respectively5.

Our data set comprises daily observations of oil and natural gas spot prices (Crude Oil-Arab
Gulf Dubai fob U$/bbl and Henry Hub $/mmbtu, respectively), and of the Dow Jones aig
commodity index (djaig) and amex oil and gas index for the sample period 1994–2005. The
data source is DataStream. Descriptive statistics of daily returns are shown in Table 1.
Natural gas experienced sharp fluctuations over the sample period, and all four
series show an increasing trend from 2002 onwards.

Table 1 Statistics of daily returns: January 1994-December 2005

Statistic Natural gas DJAIG AMEX oil & gas Crude oil
Minimum -1.273 -0.043 -0.061 -0.129
1st Qu. -0.018 -0.005 -0.006 -0.012
Median 0.000 0.000 0.000 0.001
Mean 0.001 0.000 0.000 0.000
3rd Qu. 0.018 0.005 0.008 0.013
Maximum 0.876 0.048 0.069 0.147
Std. deviation 0.062 0.008 0.012 0.021
Skewness -1.422 0.028 -0.129 -0.224
Excess Kurtosis 103.29 1.78 1.96 3.27
Observations 3,077 3,077 3,077 3,077

The Autocorrelation Functions (acf) of crude oil and natural gas decay very slowly,
suggesting the presence of a unit root. Indeed, the Elliott-Rothenberg-Stock, Augmented
Dickey-Fuller (adf), and modified Phillips-Perron tests do not reject the presence of a unit
root in either series. Therefore, an arima specification is considered as a benchmark to
assess the forecast performance of ann and svm. Specifically, an arima(2, 1, 0) appeared
satisfactory for both price series. In order to fit the ann and svm specifications, we use
as predictors the djaig and amex oil & gas indices. The ann model comprises one hidden
layer and two units in the hidden layer. The svm specification in turn is based on a radial
kernel.
Our estimation strategy consists of leaving approximately five months of data for
forecast evaluation. Specifically, we take a rolling window of about 2,900 observations,
which allows us to obtain a series of 150 forecast errors for a time horizon that ranges
between one and twenty days ahead.
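The rolling-window design for the arima benchmark can be sketched as follows. The window size, model order and number of forecasts mirror the description above, but the code is an illustrative Python reconstruction rather than the S-Plus routines actually used:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def rolling_arima_errors(log_price, window=2900, horizon=1, n_forecasts=150):
    """Roll a fixed-size window forward one observation at a time, refit ARIMA(2,1,0),
    and collect h-step-ahead forecast errors on the log-price series."""
    errors = []
    for i in range(n_forecasts):
        train = log_price[i:i + window]
        fit = ARIMA(train, order=(2, 1, 0)).fit()
        point_forecast = fit.forecast(steps=horizon)[-1]     # h-step-ahead point forecast
        actual = log_price[i + window + horizon - 1]
        errors.append(actual - point_forecast)
    return np.asarray(errors)

# The resulting error series can then be fed to the Granger-Newbold and encompassing sketches above.
```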
Tables 2 and 3 provide information on how the forecast performance of the three
model specifications evolves over time. Specifically, Table 2 reports the Granger-Newbold
statistic and its corresponding p-value for all of the three possible paired combinations of

5 Examples on the use of the libsvm library are given in the textbook by [18]. Documentation on the
SVM technique can be found at Chih-Jen Lin's website, www.csie.ntu.edu.tw/~cjlin/papers/.


models. For oil, arima has a smaller mspe than ann within 10 days ahead. However, for a
longer time horizon, ann outperforms arima. svm is the specification with the poorest
mspe performance, as both arima and ann have consistently smaller mspe. For natural
gas, our findings slightly differ. arima always outperforms ann, and it outperforms svm
for forecast horizons between one and fifteen days ahead. In contrast, for this commodity
svm always has a better performance than ann in terms of mspe.

Table 2 Granger-Newbold test for out-of-sample forecast evaluation

Crude oil
ARIMA-ANN ARIMA-SVM SVM-ANN
horizon (days) statistic p-value statistic p-value statistic p-value
5 -8.80 0.00 -11.82 0.00 1.97 0.03
10 -2.76 0.00 -7.27 0.00 3.93 0.00
12 -0.91 0.18 -6.72 0.00 5.32 0.00
15 1.49 0.07 -6.44 0.00 7.60 0.00
18 3.41 0.00 -6.25 0.00 9.84 0.00
20 4.66 0.00 -5.96 0.00 11.02 0.00
Natural gas
ARIMA-ANN ARIMA-SVM SVM-ANN
horizon (days) statistic p-value statistic p-value statistic p-value
5 -13.45 0.00 -10.81 0.00 -4.04 0.00
10 -7.87 0.00 -6.14 0.00 -3.21 0.00
12 -6.69 0.00 -4.73 0.00 -3.52 0.00
15 -5.51 0.00 -3.27 0.00 -3.53 0.00
18 -3.93 0.00 -1.94 0.03 -3.08 0.00
20 -2.89 0.00 -1.34 0.09 -2.17 0.02

Note: The ARIMA-ANN pair notation implies that ARIMA is model one and ANN is model two, etcetera.

Table 3 Forecast encompassing

Oil Natural gas


h=2 h=2
ARIMA ANN SVM ARIMA ANN SVM
slope prob slope prob slope prob slope prob slope prob slope prob
0.47 0.00 0.05 0.02 – – 0.45 0.00 0.02 0.23 – –
0.50 0.00 – – -0.01 0.63 0.46 0.00 – – 0.01 0.60
– – 0.07 0.01 -0.02 0.55 – – 0.03 0.25 0.00 0.97
h=4 h=4
ARIMA ANN SVM ARIMA ANN SVM
slope prob slope prob slope prob slope prob slope prob slope prob
0.43 0.00 0.13 0.00 – – 0.41 0.00 0.05 0.08 – –
0.48 0.00 – – 0.01 0.77 0.41 0.00 – – 0.05 0.15
– – 0.15 0.00 -0.02 0.53 – – 0.05 0.28 0.02 0.76
h=10 h=10
ARIMA ANN SVM ARIMA ANN SVM
slope prob slope prob slope prob slope prob slope prob slope prob
0.35 0.02 0.37 0.00 – – 0.34 0.14 0.14 0.00 – –
0.45 0.01 – – 0.10 0.06 0.34 0.14 – – 0.18 0.00
– – 0.40 0.00 -0.05 0.29 – – 0.03 0.78 0.16 0.12

h=15 h=15
ARIMA ANN SVM ARIMA ANN SVM
slope prob slope prob slope prob slope prob slope prob slope prob
0.19 0.19 0.56 0.00 – – 0.35 0.15 0.22 0.00 – –
0.46 0.03 – – 0.07 0.27 0.31 0.18 – – 0.30 0.00
– – 0.62 0.00 -0.14 0.00 – – 0.06 0.48 0.25 0.01
h=20 h=20
ARIMA ANN SVM ARIMA ANN SVM
slope prob slope prob slope prob slope prob slope prob slope prob
0.06 0.68 0.70 0.00 – – 0.29 0.29 0.33 0.00 – –
0.50 0.05 – – 0.03 0.66 0.18 0.50 – – 0.40 0.00
– – 0.78 0.00 -0.23 0.00 – – 0.20 0.01 0.23 0.02

Note: Parameter estimates are obtained from expression (6). The slopes correspond with b1 and b2, whereas “prob” denotes the
p-value of the t-statistic of each parameter estimate.

Table 3 in turn reports forecast encompassing tests. As we see, at short-term horizons
(e.g., 2–4 days), arima tends to outperform both ann and svm. However, at long-term
horizons (e.g., 10–20 days), we conclude that arima is in general encompassed by the other
two methods, and that linear combinations of ann and svm forecasts are more accurate
than their corresponding individual forecasts in most cases. These findings corroborate
what we concluded from Table 2 , in that arima is best for short-term horizons.
In sum, arima in general provides more accurate step-ahead forecasts than svm
and ann at short-term horizons. However, its performance gets poorer relative to these
two classification methods as we move further away in time.

conclusions
In this article, we have utilised two classification techniques to forecast future spot prices
of two commodities: Artificial Neural Networks (ann) and Support Vector Machines (svm).
Whereas the former is already well-known in the forecasting literature, the latter has
gained ground in economic and financial applications very recently.
The forecast performance of the two above techniques is contrasted with that of a
standard one, namely, arima. Our computations, based on forecast encompassing and
mspe, show that arima can be preferable for forecasting spot prices at very short-term
horizons. However, at long-term horizons, ann and svm outperform it, and, in addition,
combined forecasts of these two techniques are more accurate than individual forecasts.

references
Diebold, F. (1998) The past, present, and future of macroeconomic forecasting. Journal of Economic
Perspectives 12, pp. 175–192. [1]

Clements, M. & Hendry, D. (1998) Forecasting Economic Time Series. Cambridge: Cambridge University
Press. [2]

Roche, J. (1995) Forecasting commodity markets. Probus Publishing Company, London. [3]

Labys, W. (1999) Modelling mineral and energy markets. Kluwer, USA. [4]

Morana, C. (2001) A semiparametric approach to short-term oil price forecasting. Energy Economics 23(3),
pp. 325–338. [5]

Radetzki, M. (2008) A Handbook of Primary Commodities in the Global Economy. Cambridge University
Press. [6]


Dooley, G. & Lenihan, H. (2005) An assessment of time series methods in metal price forecasting. Resources
Policy 30, pp. 208–217. [7]

Lanza, A., Manera, M. & Giovannini, M. (2005) Modeling and forecasting cointegrated relationships among
heavy oil and product prices. Energy Economics 27(6), pp. 831–848. [8]

Clemen, R. (1989) Combining forecasts: A review and annotated bibliography. International Journal of
Forecasting 5, pp. 559–583. [9]

Fang, Y. (2003) Forecasting combination and encompassing tests. International Journal of Forecasting
19(1), pp. 87–94. [10]

Burges, C. (1998) “A tutorial on support vector machines for pattern recognition.” Data Mining and
Knowledge Discovery 2(2), pp. 955–974. [11]

Cristianini, N. & Shawe-Taylor, J. (2000) An introduction to support vector machines and other kernel-based
learning methods. Cambridge University Press. [12]

Tay, F. & Cao, L. (2001) Application of support vector machines in financial time series forecasting. Omega
29(4), pp. 309–317. [13]

Kim, K. (2003) Financial time series forecasting using support vector machines. Neurocomputing 55 (1–2),
pp. 307–319. [14]

Dong, B., Cao, C. & Lee, S. E. (2005) Applying support vector machines to predict building energy consumption
in tropical region. Energy and Buildings 37(5), pp. 545–553. [15]

Huang, W., Nakamori, Y. & Wang, S. Y. (2005) Forecasting stock market movement direction with support
vector machine. Computers & Operations Research 32(10), pp. 2513–2522. [16]

Lu, W. Z. & Wang, W. J. (2005) Potential assessment of the “support vector machine” method in forecasting
ambient air pollutant trends. Chemosphere 59(5), pp. 693–701. [17]

Venables, W. & Ripley, B. (2002) Modern Applied Statistics with S. Fourth edition. Springer-Verlag New
York, Inc. [18]

Chang, C. C. & Lin, C. J. (2005) libsvm: a library for support vector machines. Retrieved from
http://www.csie.ntu.edu.tw/~cjlin. [19]

Enders, W. (2004) Applied Econometric Time Series. Second edition. Wiley Series in Probability and
Statistics. [20]

Diebold, F. & Mariano, R. (1995) Comparing Predictive Accuracy. Journal of Business and Economic
Statistics 13, pp. 253–263. [21]
Financing Minerals Exploration
in Chile: The Economic
Governance Mechanism in
Venture Capital Funds

abstract
Francesco bressi
Christian moscoso
Universidad de Chile

The economic governance mechanism in venture capital funds has been proven to be extremely efficient in allocating incentives, particularly in start-up companies. Based on both empirical evidence and the existing literature on agency problems and financial contracts, this paper finds that venture capital, and specifically its economic governance mechanism, is suitable for the Chilean minerals exploration industry, even for senior mining companies.
This multidisciplinary research gives rise to new opportunities for both investors and mining entrepreneurs through the development of a Chilean venture capital industry specialised in minerals exploration.
As a secondary result, it makes a contribution to the optimisation of resource allocation for senior mining companies, introducing a challenge by suggesting a partial change to the corporate governance mechanism under which explorations are carried out these days.
Finally, there are still some issues that could slow the development of a venture capital industry; however, addressing them requires constitutional modifications.
Nevertheless, the findings throw some light on the next challenges and further research topics that aim to finally develop a local market for minerals exploration.

introduction
Financing of minerals exploration depends on both exploration company size and
exploration stage. From the size perspective, it is worth pointing out that senior companies
do not have problems obtaining capital for exploration; they are mainly dedicated to
minerals exploitation and hence can afford to spend a certain amount of their income
on exploration. Instead, junior companies are small and typically have to raise venture
capital from public stock exchanges in countries such as Canada and Australia. On the
other hand, if we consider the exploration stage, the closer the prospect is to production,
the lower the risk and the higher the availability of capital.
Chile, despite the fact of being considered a mining country, does not have an
exploration financing industry because of strong information asymmetries; as a result,
the intersection between supply and demand leads to a non-optimal balance [1].
The solution to this problem, in countries where the financing industry for explorations
has been successfully developed, has been the information standardisation through
measures such as the creation of the jorc code 1 and the figure of the competent person 2.
Therefore, to solve the problem of the financing of explorations in Chile, the logical
solution would be at least to try to reproduce the conditions of the countries where there
is a functioning industry. Unfortunately, implementing such measures in Chile has taken
longer than expected and there is no certainty about when they will actually happen.
Nevertheless, in 2007 the legal framework of the capital market was modified
introducing incentives aiming to encourage the development of a venture capital industry
which opened a new possibility for financing minerals exploration. Our work is based
on the literature on financial contracts and consists of analysing the venture capital
mechanism and its adaptability to the Chilean legal framework for minerals exploration,
considering that it has to be able to overcome the information asymmetries of this
particular activity. As a side result, we also show that the mechanism can be easily
adapted to senior companies, resulting in stronger organisational incentives and hence
better outcomes.

Economic governance mechanism in venture capital funds


Although venture capital funds provide financing to several sectors, such as Healthcare,
Biotech/Medical, IT/Software, Telecom, Retail and others, normally, they are highly
specialised and focused only in one specific area [2] . Size and cost can vary from one to
another but all of them share a common structure which creates what has been named
the economic governance mechanism. A simple schematic illustrating the mechanism
and the main roles in venture capital funds is provided in Figure 1.

Figure 1 Economic governance mechanism in venture capital funds.

1 Code for reporting of mineral resources and ore reserves, from the Australasian Joint Ore Reserves Committee (JORC).

2 Person who prepares or directs, and signs, the documentation on which the
report of mineral resources and ore reserves is based.

External investors provide capital to the fund in return for a right over the pool raised,
proportional to the amount invested, which can only be made effective at the agreed time
of liquidation of the fund. Typically funds last 10 years, even though venture capitalists
have to report results periodically.
Venture capitalists (vc) are responsible for investing the capital raised from private
investors in the most promising projects, and their payment is mostly given by the
value they can create. It would seem they are mere intermediaries, but reality
shows otherwise. vc's do not provide just capital; they play a major role in the daily
administration of ventures, giving support, expertise and monitoring, and taking control
over the venture when needed.
Entrepreneurs present their projects to the vc's and, if selected for funding, they form a
very special partnership where, depending on the degree of information asymmetries, it
is possible to allocate different kinds of rights separately, regardless of the amount of capital
invested and dependent on performance (state contingency). In order to do so, the vast
majority of venture capital firms use preferred convertible shares [3].
Financing of the venture is made in stages: entrepreneurs receive just the necessary
amount to reach the next stage and get compensated for performance and value addition;
moreover, vc's always preserve the option to leave the venture if not satisfied, which
makes the entrepreneur do his best and in this way keep being financed. Table 1 shows
the effect of the information asymmetries faced by the vc's on the design of the contract
that regulates the partnership [4].

Table 1 Measures venture capitalists take to overcome information asymmetries

Venture capitalists' concerns (agency problems) and the solution by contract:
•• The entrepreneur will not work hard to maximise value after the investment is made. Solution: the VC will make the entrepreneur's compensation strongly dependent on performance.
•• The entrepreneur knows more about his or her quality/ability than the VC. Solution: the VC can design contracts with greater pay-for-performance that good entrepreneurs will be more willing to accept.
•• After the investment is made there will be circumstances when the VC disagrees with the entrepreneur and the VC will want the right to make decisions. Solution: control theories show that the solution is to give control to the VC in some states and to the entrepreneur in others.
•• The entrepreneur knows more about his or her quality/ability than the VC (hold-up problem). Solution: the VC can reduce the entrepreneur's incentive to leave by vesting the entrepreneur's shares.

As we can see in Table 1 , agency problems arise from the existence of information
asymmetries and interests over control of the venture, but both problems can be solved
by a contract addressing those issues. So far, venture capital has been very successful
in the United States and many countries have tried to replicate its success without the
expected results. The main explanation for this fact is that in order to develop a venture
capital industry, a capital market orientated to public stock exchanges is needed [5].
The reason behind this fact is that the possibility of an exit through an ipo creates an
implicit incentive for the entrepreneur, because once the venture is liquidated he will have
the majority of the shares/property and hence will control the company.
Theoretically the structure of venture capital funds could be used to finance any business,
but what is seen in reality is that businesses financed by venture capital share some unique
characteristics. Normally, they present very strong information asymmetries, being
strongly related to human capital, being very innovative in their business models or being
related to the development of technology; in some cases it might even be a combination of these.
In the next section it is discussed whether minerals exploration is a suitable business
to be financed by venture capital and what the main issues would be if adapted to the
Chilean reality.


Venture capital fund specialised in minerals exploration in Chile

Minerals exploration is a high-risk activity and involves several types of risk. If we take risk as a
combination of the probability of occurrence of an unwanted event and the amount of capital
that could be lost if the event actually happens, it is possible to see that the figures in
exploration are going to be quite high; in other words, there is a big possibility of
losing a great deal of capital.
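Read as an expected loss, the point can be made concrete with purely hypothetical figures:

```python
# Hypothetical exploration-stage figures, for illustration only.
prob_failure = 0.90            # probability the prospect turns out to be uneconomic
capital_at_stake = 5_000_000   # US$ committed to the exploration stage

expected_loss = prob_failure * capital_at_stake
print(expected_loss)           # 4,500,000 US$ of capital expected to be lost
```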
The literature on venture capital distinguishes several ways of addressing risk by
contract, but given the characteristics of minerals exploration we will consider the
approach of separating risks by their nature [4]: internal, external and execution risks.
It is worth highlighting that the economic governance mechanism will not
reduce every risk; it will just create the incentives to reduce and overcome internal
risks, which are mainly given by information asymmetries. External risks, considered as
those that are uncertain for both the entrepreneur and the vc, are part of the business and
the entrepreneur should not have his contract contingent on factors that are beyond his
reach. Instead, execution risks, considered as those that are still uncertain for both the
entrepreneur and the vc but at least partially under control of the entrepreneur, should
be assumed partially depending on who is in control. Table 2 shows examples of risks
and measures that could be taken depending on their nature.

Table 2 Measures venture capitalists take depending on nature of risks

Internal Risks
Risk:
•• The entrepreneur always pretends to make all the decisions.
•• The entrepreneur hides geological information with the aim of being financed.
•• The entrepreneur has other prospects and does not allocate funds appropriately.
Contract solution:
•• Control rights will be assigned according to the performance of the venture; the better the performance, the greater the control rights for the entrepreneur.
•• Design a contract with a high degree of payment upon performance, in such a way that the entrepreneur gets almost nothing if the prospect has low geological potential.
•• Very low commitments of capital, just enough to finance the venture to the next stage. The performance of the entrepreneur should be easily measurable.

External Risks
Risk:
•• Price of the underlying metal is below a reasonable threshold.
•• Country risk, changes in legislation not friendly to foreign investment.
Contract solution:
•• The fund's financial experience will be determinant to decide the right moments to liquidate the venture or to raise fresh capital.
•• For all the external risks, solutions should not be made by contract because these are risks that do not depend on the entrepreneur.

Execution Risks
Risk:
•• The entrepreneur has no experience in exploration and does not ask for the environmental permissions needed at the right time.
•• The entrepreneur uses non-conventional technology to explore, reducing significantly its cost.
Contract solution:
•• The fund's exploration experience will be determinant to monitor deadlines and requirements of the environmental laws; the fund will have to monitor intensively.
•• The fund will have to assess whether it is worth taking the risk of using non-conventional technologies, taking care to keep the necessary control rights to make the final decision.

Venture capital applied to minerals exploration should be seen as an intermediate stage
of financing until ventures are mature enough to be liquidated through an ipo. Typically,
from the beginning of exploration until an ore body is found several years pass, increasing
even more the risk perception of investors and resulting in institutional investors being the
main source of venture capital. In the USA the contribution of individuals and families
to venture capital accounts for only 10%.
In Chile until just recently, institutional investors were not allowed to invest privately in venture capital; they could only invest in big companies traded on stock exchanges, which explains why the national industry of investment funds was, and still is, very small. Table 3 shows the structure of the investment funds industry, where it is possible to notice that not only is it small, but the vast majority of the funds are invested in low-risk business [6].

Table 3 Chilean structure of investment funds

Type of fund                                                                                          Capital invested [mUS$]   Market share [%]
Debt funds, mostly invested in debt obligations such as treasury bonds, corporate bonds and others         4,970                    74.4
Real estate funds, mostly invested in shares of construction companies, mortgage debts and others          1,300                    19.5
Private funds, mostly invested in small companies with high return expectations and high risk                172                     2.6
Other funds                                                                                                   236                     3.5
TOTAL                                                                                                       6,678                   100.0

Finally, modifications introduced in June 2007 by law 20.190 allowed institutional investors and corfo3 to provide venture capital, making a big contribution towards the development of the industry. Table 4 shows the limits introduced by the new modifications and a rough estimate of what the maximum size of the market could be, considering only institutional investors.

Table 4 Limits of investment for institutional investors

Maximum capital availability                                                            Rough estimate [mUS$]
Pension funds: 1% of the C and D funds; 3% of the A and B funds                                2,000
Banks: 1% of the assets that a bank owns                                                       2,200
Insurance companies: 10% of the investments of insurance companies                              110
CORFO: 40% of the shares of a venture capital fund, with a limit of 2,000,000 UTM               150
TOTAL                                                                                        ~ 4,500

It is worth pointing out that, as seen in Table 4, the availability of resources to invest in venture capital for exploration should not be a problem, but in order to develop the industry some other factors are needed. One of them is the demand for capital, given by the number of exploration projects that need financing; calculating an estimate can be very hard and the result might not be representative, because there is little information available and there are few entry barriers. Table 5 shows the main difficulties in the development of a venture capital industry specialised in minerals exploration.

3 Corporación de Fomento de la Producción: National Agency for the Economic Development.


Table 5 Main difficulties and solutions for the development of a venture capital fund specialised in explorations

Demand for capital
Main difficulties: Estimation of the number of projects that could need financing: very low rotation of land rights; very poor regional geological information; intermediaries who own land rights but do not necessarily want to explore.
Solution found: This is probably the most complicated issue and its solution is not very clear. Measures should aim to give land right owners incentives to explore their lands. Public solutions could be to raise the cost in time of holding the rights, or to make the land rights contingent on a business plan. Private solutions could be to give the entrepreneur more shares vested in time.

Regulation
Main difficulties: The venture capital fund has to invest in ventures that have at least five years of existence as an equity partnership but, due to tax benefits, most of the ventures are organised as a mining contractual partnership, a special organisational form existing only in the mining sector.
Solution found: Chilean legislation establishes that when a partnership is transformed from one kind to another, for all purposes the new partnership is the continuer of the old one. In case the SVS4 does not accept this solution, the venture capital fund could buy a shell company older than the required date.

IPO availability
Main difficulties: The Chilean stock exchange might not be suitable for liquidating ventures because it is not big enough.
Solution found: Once the venture is big enough there is no need to liquidate it on the local stock exchange; it is possible to make an IPO on any other stock exchange, such as the AIM, ASX or TSX.

Supply of capital
Main difficulties: Even though the estimated supply of capital seems to be enough to start, institutional investors do not know the exploration business and therefore might not be attracted.
Solution found: Investors are not attracted because there are strong information asymmetries, which make them hesitate about something they do not know for sure. Standardising the information on the mineral deposits, building a good reputation for the fund administration and creating a system to standardise the risk of a prospect will make investors more confident about investing in minerals exploration.

As we can see in Table 5, many of the difficulties can be solved, but there is still considerable room for further investigation on how to estimate the demand and how to make venture capital funds an attractive source of financing for minerals exploration.

Adaptability of the venture capital mechanism to senior companies

Even though senior companies do not have problems allocating resources for minerals exploration, the economic governance mechanism of venture capital funds could easily be adapted with the aim of replicating its success. A simple schematic illustrating how this could be done is provided in Figure ➋.

Figure 2 Adaptation of the economic governance mechanism to Senior companies.

4 Superintendencia de Valores y Seguros: supervises the Chilean stock exchange and investment funds

The structure is very similar; nevertheless, some key issues have to be taken into account to ensure that the incentives steer the behaviour of the vc and the entrepreneurs towards the best possible results.

• Boards of directors need to be independent from each other.

• The board of directors of the senior company should act as a principal and therefore should have the right to demand distribution of the rewards of investing, for example by imposing a finite life for the subsidiary.

• Teams leading the exploration of the prospects should be compensated according to the value created; it has to be said, though, that if no mineral deposit is found this does not necessarily mean the exploration team is bad: geological risk is simply part of the business.

A final point should also be made about the natural resistance these changes could face. Changes in economic incentives are hardly ever accepted, even though they might lead to greater pay for better-performing employees. So, in the event of deciding to implement such an idea, the main concern should probably be how to accomplish these organisational changes.

results and discussion


Venture capital funds specialised in minerals exploration seem to be a plausible alternative for financing exploration; a few things, though, should be said about risk.
These days exploration is financed by raising capital through public stock exchanges, with flow-through shares being the most common mechanism. Investors are varied, in the sense that they are able to commit large or very modest amounts of money depending on their risk profile, which results in many investors investing modest amounts of money. Venture capital funds, in contrast, are not public; hence there are fewer investors, who therefore have to invest more and assume more risk.
This characteristic could jeopardise the success of a venture capital fund specialised in exploration, because it could be argued that the market is not big enough to attract the few big investors needed. But considering that institutional investors are normally the main providers of venture capital, this should not be a problem unless the underlying business is totally unknown to them. Traditionally, exploration and finance have not been very close, but we believe that the reputation built by the administrators of the fund, based on their knowledge, and the strong incentives of the economic governance mechanism will steer the industry properly. Besides, there are currently very strong economic incentives given by the availability of venture capital from the government through corfo that will help to start the industry.

conclusions
Financing of minerals exploration can be carried out using the mechanism provided by venture capital funds. Its most important aspect is the economic governance mechanism, which overcomes information asymmetries by allocating the rights over the venture between the administration of the fund and the entrepreneur, making them contingent on performance.
In addition, the entrepreneur knows that if the venture is successful the administrators of the fund will eventually liquidate it, resulting in an implicit contract whereby the entrepreneur regains his rights of ownership and control after liquidation.


Senior companies, even though they do not have financing problems for explorations,
could still adapt the economic governance mechanism of venture capital funds by
replicating its incentives. Finally, this research opens further lines of investigation
related to the estimation of the demand for capital (intended as the availability of
prospects to finance) and how to make venture capital funds more attractive than any
other source of early stage financing.

references
Moscoso, C., Méndez, M. & Contreras, E. (2005) Análisis del Marco Institucional del Financiamiento a la
Mediana Minería en Chile. Atacama Resource Capital, fondef d02i–1087. [1]

Sahlman, William A. (1990) The Structure and Governance of Venture Capital Organisations. Journal of
Financial Economics, Vol. 27, pp. 473–521. [2]

Kaplan, S. & Strömberg, P. (2003) Financial Contracting Meets the Real World: An Empirical Study of
Venture Capital Contracts. Review of Economic Studies, Vol. 70, pp. 281–315. [3]

Kaplan, S. & Strömberg, P. (2004) Characteristics, Contracts, and Actions: Evidence from Venture Capital
Analyses. Journal of Finance, Vol. 59, pp. 2177–2210. [4]

Black, B. S. & Gilson, R. J. (1997) Venture Capital and the Structure of Capital Markets: Banks versus Stock
Markets. Journal of Financial Economics, Vol. 47, pp. 243–277. [5]

ACAFI, Estadísticas Anuales 2007 y Propuestas 2008, retrieved 1 December 2009 from http://www.
acafi.com/4_documentos.html. [6]
The Impact of Macroeconomic
Variables in Mineral Commodity
Prices Cycle

abstract
Fernando Acosta
Superintendencia de Bancos e Instituciones Financieras, SBIF

Viviana Fernández
Pontificia Universidad Católica de Chile

Christian Moscoso
Universidad de Chile

This paper assesses the impact of some macroeconomic variables on the prices of five commodities: aluminium, copper, lead, tin and zinc. Three macroeconomic variables are chosen for seven important economies of the oecd plus China: the index of industrial production, exchange rates and interest rates. By means of the Kalman Filter, a common component was extracted for these series and related to the macroeconomic indicators. The results were sensitised by means of the Hodrick-Prescott and Band-Pass filters. It was found that there is a common factor influencing the prices of these metals; the main difference is for tin prices, which can be explained by specific aspects of its market.

introduction
In the past few years there has been an impressive cycle in commodity prices. For example, according to the imf, in nominal terms, aluminium and copper prices increased by about 357% and 95%, respectively, between 2002 and 2007.
It is important to note that different commodities have different cycles, in both their
duration and the mechanisms that provoke them. Industrial commodities involve a
more volatile production and the duration of booms and downfalls are longer in metals,
minerals and oil than in agricultural commodities because of the longer lag between
the investment decisions and the increase in production and the effect on the supply. On
the mechanisms side, industrial commodities respond mainly to demand shocks, and
quantities and prices move in tandem, while in the case of agricultural commodities the
situation is different since prices tend to move in opposite direction to supply shocks. It
is also interesting to note that there is certain pattern in the price evolution for some
commodities and there must be something that influences them as a whole to make
them behave this way.
Results from past studies relating commodity prices and the economy indicate that the economic cycle has a much more important effect on industrial commodities than on those in the agricultural market. In the case of commodities, excess co-movement occurs when the important macroeconomic variables that affect them cannot explain the co-movement observed among some unrelated commodities [1]. However, when tests are applied to different commodity price series in order to analyse whether there is a long-run relation between unrelated commodities, it is not clear that there is an actual excess co-movement, and with lower-frequency data the explanatory power of the macroeconomic variables is higher [2]. Regarding price models for metals, in the case of copper a random walk is best suited for forecasting prices in the short term and a first-order autoregressive model is best for medium-term forecasting [3]. This is in line with the difficulty these time series have in rejecting the null hypothesis of a random walk.
There is also an important topic related to the existence of super cycles in metal prices [4] .
By means of the Band-Pass filter they extract particular cyclical components to each of
the price series (aluminium, copper, tin, nickel, lead and zinc). In another work they
extend this study to include steel, pig iron and molybdenum [5] . There is considerable
evidence supporting the existence of super cycles for these metals.
In this study the existence of specific factors for each metal and of a factor common to all of them is analysed. The commodities are aluminium, copper, lead, tin and zinc. This study is an update of [6], where, by means of the Kalman Filter, the authors obtain a common factor which is able to explain between 13% and 71% of the variation of the metals under study. They consider macroeconomic variables (industrial production and exchange rates) for seven important economies of the oecd and, by a linear regression, they relate the common factor to the macroeconomic indicators. Industrial production is significant in explaining the common factor, which is not the case for the exchange rates.
There is an important observation that must be pointed out before the analysis. It is true that prices are affected by the state of the global economy; however, there is an operational side that also has repercussions on them. For example, the environmental
regulations now are much stricter than in the past and operational costs have had a
tendency to grow. Here the analysis is ceteris paribus (considering only macroeconomic
variables). It is important to have this in mind when concluding.
The work of Labys, W.C., Achouch, A. & Terraza, M. [6] is also updated considering the
impact of the Chinese economy and sensitising the results making use of monthly series
and other econometric tools for the analysis of cycles.

The paper is organised as follows: section two presents the data; section three presents the static and dynamic factor analysis using the Kalman Filter (kf) and the sensitisation with Band-Pass (bp) and Hodrick-Prescott (hp) filters; section four presents
the conclusion.

data
Prices
The metals used for this investigation are: aluminium, copper, lead, tin and zinc (monthly
series are used). The macroeconomic indicators are: the index of industrial production,
exchange rates and interest rates for Canada, China, France, Italy, Germany, Japan, the
United States and the United Kingdom. The period covered is from January 1971 to
November 2006. The data is obtained from the annual imf statistics for the world economy.
Figure  ➊ shows the monthly price series for the metals in this work.

Figure 1 Metal prices vs. time.

Markets
From Crowson, P. [7], in 2006 primary aluminium was produced in 42 countries, where China, Russia, Canada and the United States, in decreasing order, produced more than half of the world total. In the United States, the producer price (Alcoa) was abandoned in 1986. In 1979 prices started to be determined on the lme, which progressively displaced the producer price.
In the case of copper, the United States produces approximately 8% of the world production, behind Chile, which generates 36% of the total. China represents 21% of the global consumption of this metal. Producers from Chile and Central Africa established a producer price between 1961 and 1966, the year in which it collapsed.
Of the 20 countries in which tin was produced in 2006, the first five account for 93% of the world production. China was the principal producer (41%), followed by Indonesia (30%), Peru (13%), Bolivia (6%) and Brazil (4%). Most tin reserves are located in Asia and South America. The great majority of tin trade was controlled by the International Tin Council (itc) from 1964 to 1985. With the collapse of the itc, prices became entirely determined by the market.
In 2006, lead was extracted in 38 countries, where the top five generated about 79% of the total world production. China was the principal producer, with 35%, followed by Australia (20%), the United States (12%), Peru (9%) and Mexico (3%). The great majority of the lead industry uses the prices established by the lme as the basis for its trade.
China, the principal consumer of refined zinc, represented approximately 30% of the refined zinc consumption in 2006. The principal producers of concentrate, in descending order, were China, Australia and Peru. The majority of the global trade is based on lme prices. In the United States the producer price lasted until 1993; everywhere else this type of price was maintained between 1964 and 1988.

analysis
Static factor analysis
Before the analysis with the kf, a prior analysis using the factor analysis technique is
shown. It is known that this kind of analysis has some disadvantages, for example, the
weights for the different factors are not unique (but if the author has a prior idea of what
to expect this can become an advantage). In any case it is an interesting approach to get
a first impression about the difference between the series in study.
The model is the following:

p_i = β_i0 + β_i1 F_1 + β_i2 F_2 + ε_i   (1)

where p_i represents the price of metal i, the F_k are the common factors and the β_ik are the factor loadings. The ε_i terms indicate that the relation is not exact. Factors and errors are uncorrelated. The variance that is explained by the common factors is called the communality and the rest is the specific variance.
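As an illustrative companion to Equation (1), the following sketch fits a two-factor model to standardised synthetic price series and computes the communalities; it is only a minimal example (the synthetic data, the use of scikit-learn's maximum-likelihood factor analysis and all parameter values are assumptions, not the authors' implementation).

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# hypothetical monthly price series (Jan 1971 - Nov 2006) driven by a shared random walk
T = 431
common = np.cumsum(rng.standard_normal(T))
weights = (0.8, 0.9, 0.4, 0.85, 0.85)            # loose stand-ins for Al, Cu, Sn, Pb, Zn
prices = np.column_stack([w * common + (1 - w) * np.cumsum(rng.standard_normal(T))
                          for w in weights])

# standardise each series, then fit the two-factor model of Equation (1)
Z = (prices - prices.mean(axis=0)) / prices.std(axis=0)
fa = FactorAnalysis(n_components=2).fit(Z)
loadings = fa.components_.T                      # one row per metal, one column per factor
communality = (loadings ** 2).sum(axis=1)        # variance share explained by the common factors
print(np.round(loadings, 3))
print(np.round(communality, 3))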
The analysis is done on standardised data. When two common factors are considered, they explain a smaller proportion of the variance for tin than for the rest of the commodities (see Table 1). The common factors account for 97.7% of the copper variance and 43.9% of the tin variance, and they explain 76.3% of the total variation. At first sight, it seems that there is an unobservable common component for at least four of the five series being analysed. The dynamic factor model is presented next [8].

Table 1 Static factor analysis

Variable p_i (1)   Observed variance S_i² (2)   F_1 loading b_i1 (3)   F_2 loading b_i2 (4)   Communality b_i1² + b_i2² (5)   Percentage explained 100 x (5)/(2) (6)
Aluminium          1.000                        0.766                  0.080                  0.593                           59.316
Copper             1.000                        0.976                  -0.155                 0.977                           97.660
Tin                1.000                        0.188                  0.636                  0.440                           43.984
Lead               1.000                        0.861                  0.455                  0.948                           94.835
Zinc               1.000                        0.904                  -0.201                 0.858                           85.762
Total              5.000                        3.133*                 0.682*                 3.816                           76.311

+ Factor loadings are denoted by b_ij to differentiate them from the theoretical values. Observed variance is denoted by S_i².
* Sum of squared loadings

Dynamic factor analysis


Before the analysis, the behaviour of the price series is studied and, as expected, they are non-stationary (both the Dickey-Fuller and the Phillips-Perron tests fail to reject the null hypothesis of a unit root).
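As a minimal illustration of this unit-root check, the sketch below applies statsmodels' augmented Dickey-Fuller test to a synthetic random walk standing in for one of the monthly price series (the Phillips-Perron test, which is not part of statsmodels, is omitted).

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
price = 100.0 + np.cumsum(rng.standard_normal(431))   # synthetic stand-in for a monthly price series

stat, pvalue, *_ = adfuller(price)
# a large p-value means the null hypothesis of a unit root cannot be rejected
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")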
The dynamic factor model used to extract the unobservable common component with the kf is now explained. In this way, a specific factor for each series and a factor common to all of them are obtained. The specification is:

p_it = λ_i^CF CF_t + λ_i^SF SF_it + ε_it   (2)

CF_t = ρ_CF CF_t-1 + v_t   (3)

SF_it = ρ_i SF_i,t-1 + w_it   (4)

where y_t = (p_Al,t, p_Cu,t, p_Pb,t, p_Sn,t, p_Zn,t)' denotes the vector of observed prices and the innovations v_t, w_it and the measurement errors ε_it are assumed to be mutually uncorrelated.   (5)

Then, if η_t represents the difference between y_t and its best estimate using information up to (t-1), the following equation holds:

η_t = y_t - ŷ_t|t-1   (6)

The vector of unknown parameters ψ can be estimated by maximum likelihood, using the prediction-error decomposition of the log-likelihood:

L(ψ) = constant - ½ Σ_t ( log |F_t| + η_t' F_t^-1 η_t )   (7)

where F_t is the covariance matrix of the innovations η_t.

From Equation (2), p_it consists of two components: an idiosyncratic component SF_it and a common component CF_t. Both the common and the specific factors are modeled as first-order autoregressive AR(1) processes.
In state-space representation [10], the transition equation stacks the common and specific factors with a diagonal autoregressive matrix:

(CF_t, SF_Al,t, SF_Cu,t, SF_Pb,t, SF_Sn,t, SF_Zn,t)' = diag(ρ_CF, ρ_Al, ρ_Cu, ρ_Pb, ρ_Sn, ρ_Zn) (CF_t-1, SF_Al,t-1, SF_Cu,t-1, SF_Pb,t-1, SF_Sn,t-1, SF_Zn,t-1)' + (v_t, w_Al,t, w_Cu,t, w_Pb,t, w_Sn,t, w_Zn,t)'

and the measurement equation loads each price on the common factor and on its own specific factor:

p_Al,t = λ_Al^CF CF_t + λ_Al^SF SF_Al,t + ε_Al,t
p_Cu,t = λ_Cu^CF CF_t + λ_Cu^SF SF_Cu,t + ε_Cu,t
p_Pb,t = λ_Pb^CF CF_t + λ_Pb^SF SF_Pb,t + ε_Pb,t
p_Sn,t = λ_Sn^CF CF_t + λ_Sn^SF SF_Sn,t + ε_Sn,t
p_Zn,t = λ_Zn^CF CF_t + λ_Zn^SF SF_Zn,t + ε_Zn,t

so that the measurement matrix has a first column of loadings on the common factor and a diagonal block of loadings on the specific factors.

From Equation (2), co-movements in the metals studied are due solely to this component, which is common to all the series. With the kf, the common factor, the specific factors and the parameters of the AR(1) processes are extracted. From Table 2, and as expected from the exploratory analysis, the common factor has its largest loadings for copper and lead, and its smallest one for tin. The metals with the highest correlations among themselves have the largest loadings on the common factor and the highest correlation with it (Table 3). From Figure ➋, it is clear that there is a good fit between the evolution of the price series and the common factor obtained by the kf.
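The one-common-factor specification of Equations (2)-(4) can be estimated with standard state-space tooling. The sketch below simulates series with that structure and fits statsmodels' DynamicFactor model (one factor, AR(1) factor dynamics, AR(1) idiosyncratic errors); every numerical value is hypothetical and the way the smoothed factor is retrieved follows statsmodels' documented results interface rather than the authors' code.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
T, m = 431, 5

# simulate data with the structure of Equations (2)-(4): AR(1) common factor,
# AR(1) specific factors and measurement noise (all parameters are hypothetical)
cf = np.zeros(T)
sf = np.zeros((T, m))
for t in range(1, T):
    cf[t] = 0.97 * cf[t - 1] + rng.standard_normal()
    sf[t] = 0.95 * sf[t - 1] + 0.5 * rng.standard_normal(m)
lam = np.array([0.011, 0.023, 0.013, 0.018, 0.014])      # loadings of the order reported in Table 2
obs = cf[:, None] * lam + 0.02 * sf + 0.005 * rng.standard_normal((T, m))
data = pd.DataFrame(obs, columns=["Al", "Cu", "Sn", "Pb", "Zn"])

# one common factor, AR(1) factor dynamics, AR(1) idiosyncratic errors
model = sm.tsa.DynamicFactor(data, k_factors=1, factor_order=1, error_order=1)
res = model.fit(disp=False)
common_factor = res.factors.smoothed     # extracted common component (DynamicFactor results attribute)
print(res.summary())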

Table 2 Common and specific factors and autoregressive coefficients (t statistics in parentheses)

Factor loadings λ̂
Metal        Common factor      Specific factor
Aluminium    0.011 (300.565)    0.016 (370.949)
Copper       0.023 (653.844)    0.024 (379.393)
Tin          0.013 (946.009)    0.037 (3043.243)
Lead         0.018 (327.283)    0.016 (279.756)
Zinc         0.014 (466.470)    0.019 (534.193)

Autoregressive coefficients ρ̂
Common factor                0.979 (142.888)
Specific factor, Aluminium   0.978 (94.374)
Specific factor, Copper      0.976 (82.958)
Specific factor, Tin         0.974 (173.428)
Specific factor, Lead        0.982 (91.983)
Specific factor, Zinc        0.986 (102.454)

Table 3 Correlation between prices and the common and specific factors*

Correlation between each metal price and its specific factor:
Aluminium 0.904, Copper 0.777, Lead 0.891, Tin 0.955, Zinc 0.897

Correlation between each metal price and the common factor:
Sample                     Aluminium   Copper   Lead    Tin      Zinc
Entire sample              0.849       0.931    0.922   0.304    0.862
January 1971-January 1981  0.788       0.889    0.926   0.846    0.601
February 1981-February 1991 0.811      0.946    0.840   -0.425   0.856
March 1991-November 2006   0.944       0.961    0.947   0.810    0.877

Correlations between metal prices (entire sample; January 1971-January 1981; February 1981-February 1991; March 1991-November 2006):
Aluminium-Copper   0.736,  0.480,  0.744,  0.944
Aluminium-Lead     0.696,  0.866,  0.450,  0.897
Aluminium-Tin      0.194,  0.965,  -0.446, 0.755
Aluminium-Zinc     0.676,  0.081,  0.621,  0.898
Copper-Lead        0.770,  0.671,  0.709,  0.890
Copper-Tin         0.085,  0.559,  -0.459, 0.751
Copper-Zinc        0.914,  0.778,  0.872,  0.930
Lead-Tin           0.451,  0.895,  -0.261, 0.865
Lead-Zinc          0.688,  0.332,  0.640,  0.788
Tin-Zinc           0.042,  0.237,  -0.412, 0.634

* The values correspond to the correlation coefficients in the different samples: the entire sample, January 1971-January 1981, February 1981-February 1991 and March 1991-November 2006.

Next, a linear regression is estimated using the common factor as the dependent variable and the macroeconomic indicators as the exogenous variables. The first regression involves a joint industrial production index for all the countries, the US interest rate (the other macroeconomic variables had a poor impact on the regression) and the first lag of the common factor. Both the common factor and the indicators are used in first differences to avoid a spurious regression. The results in Table 4 show that none of the macroeconomic indicators used as explanatory variables is significant in the regressions that consider the interest rate, the joint industrial production and the lagged common factor.
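A regression of this form can be reproduced with ordinary least squares on first-differenced series; the sketch below uses synthetic stand-ins for the common factor, the joint industrial production index and the US interest rate (all data and variable names are assumptions for illustration only).

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
T = 431
df = pd.DataFrame({
    "CF": np.cumsum(rng.standard_normal(T)),          # stand-in for the extracted common factor
    "IP": np.cumsum(0.2 + rng.standard_normal(T)),    # stand-in for the joint industrial production index
    "IR": np.cumsum(0.1 * rng.standard_normal(T)),    # stand-in for the US interest rate
})

d = df.diff()                                          # first differences to avoid a spurious regression
d["CF_lag1"] = d["CF"].shift(1)
d = d.dropna()

X = sm.add_constant(d[["CF_lag1", "IP", "IR"]])
ols = sm.OLS(d["CF"], X).fit()
print(ols.summary())                                   # coefficients and p-values analogous to Table 4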

Table 4 Regression between the common factor and the macroeconomic indicators

Model (a): ΔCF_t = c + ΔCF_t-1 + ΔIP_OECD,t + ΔIR_USA,t + ε_t
Model (b): ΔCF_t = c + ΔCF_t-1 + ΔIP_OECD,t + ε_t

Variable      Model (a) coefficient   p-value   Model (b) coefficient   p-value
c             -0.141680               0.5492    0.338451                0.1223
ΔCF(-1)       0.157533                0.0076    0.188973                0.0001
ΔIP_OECD      0.206672                0.2570    0.261018                0.1715
ΔIR_USA       0.785816                0.1179    -                       -

Model (c), quarterly data: ΔCF_t = c + ΔGDP_i,t + ΔIMP_China,t + ε_t

Variable      Coefficient   p-value
c             202.1149      0.0020
ΔGDP_USA      0.059999      0.0006
ΔGDP_France   -0.340333     0.0005
ΔIMP_China    0.000292      0.0032


Figure 2 Metal prices and the common factor. Standardised data.

Sensitivity analysis
After the results obtained in both the static and the dynamic analysis, the Hodrick-Prescott and Band-Pass filters are used to get another perspective on the results. In this case, the aim is to obtain the tendency for each of the metals and for the joint industrial production index of the group of countries.

Hodrick-Prescott filter
For the Hodrick-Prescott filter we assume that each price series can be decomposed as:

y_t = τ_t + ς_t   (8)

where τ_t represents the tendency component and ς_t the cyclical component of the series (over a long horizon its expected mean is zero). To obtain them, hp solves:

min over τ of  Σ_t (y_t - τ_t)² + λ Σ_t [ (τ_t+1 - τ_t) - (τ_t - τ_t-1) ]²   (9)


where λ controls the smoothness of the adjusted tendency (when λ → 0 the tendency approximates the original series and when λ → ∞ it becomes linear). The term y_t - τ_t corresponds to the deviations from the long-run path.
In this case a standard value of λ (which controls the acceleration of the tendency component) is chosen: 12,400 for monthly series.
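statsmodels ships a direct implementation of this filter; the sketch below applies it to a synthetic monthly series using the smoothing value quoted above (the series itself is an assumption for illustration).

import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(4)
price = 100.0 + np.cumsum(rng.standard_normal(431))   # synthetic stand-in for a monthly price series

# cycle = deviations from the long-run path, trend = tendency component
cycle, trend = hpfilter(price, lamb=12400)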

Band-Pass filter

The idea of this tool is to separate a series into different components according to a specified frequency band. Thus a time series y_t can be written as the sum of a short-run cyclical component plus a long-term trend:

y_t = y_t^LR + y_t^SR   (10)

The Christiano-Fitzgerald bp filter is used to decompose the series. In this case the cyclical component corresponds to cycles with periods of less than 24 months. A detailed explanation of the bp filter is given in Everts [11].
From both the hp and bp filters similar results are obtained: four of the five commodities considered and the joint industrial production index show a similar long-run trend. In line with the prior results from the kf, only tin shows a major difference with respect to the rest of the metals. Figure ➌ presents the tendency components for each commodity and the joint industrial production index for both filters.
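The Christiano-Fitzgerald filter is likewise available in statsmodels; the sketch below keeps cycles with periods of up to 24 months in a synthetic monthly series (the 2-month lower bound is an assumption, the shortest period resolvable with monthly data).

import numpy as np
from statsmodels.tsa.filters.cf_filter import cffilter

rng = np.random.default_rng(5)
price = 100.0 + np.cumsum(rng.standard_normal(431))   # synthetic stand-in for a monthly price series

# cyclical component with periods between 2 and 24 months; the remainder is the long-run trend
cycle, trend = cffilter(price, low=2, high=24, drift=True)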

extensions
Study with subsamples
With the idea of getting another impression, the sample is divided into three subsamples and the dynamic factor analysis is repeated. The periods are: January 1971-January 1980; February 1980-February 1990; and March 1990-November 2006.
The results are different when the subsamples are considered because, in the new analysis, only in the mid sample does tin show a clear difference with respect to the other metals (in fact it has a negative correlation with the others). See Table 3 for details.
A possible explanation for the difference between tin and the other metals is the effect that the International Tin Council (itc) may have had in the eighties in maintaining high and stable prices.


Figure 3 Tendency component for Hodrick-Prescott and Band-Pass filters.

Different macroeconomic variables


The initial macroeconomic indicators were not significant in explaining the common factor obtained by the kf. At this point, other indicators are used for the economies: gdp and, in the case of China, the value of total imports. Quarterly data are used and the period selected runs from the first quarter of 1988 to the fourth quarter of 2006.
In this case the results (Table 4) show that the gdp of both the USA and France and the total imports of China are significant in explaining the common factor. This result is affected by the fact that the data are at a lower frequency than the monthly data used at first (an expected result considering the work of Palaskas, T.H. & Varangis, P.N. [2]).

conclusions
The main results in this investigation show that as a whole, the metal prices seem to
be affected by something common to all of them. There also exist differences between
them, especially when comparing tin with the rest of the metals. Particularly, copper
and lead seem to be highly influenced by the business cycle, while the price of tin is
affected mainly by aspects specific to its market (represented by the specific factor). All of this reflects the complementary and supplementary uses of this kind of commodity, intensively used in construction and industrial activities, while idiosyncratic components of each market account for the differences between these commodities. The latter is corroborated by the results obtained when analysing subsamples (differences between metals are minor), so the industrial organisation of each metal market plays a key role, especially the itc in the eighties, where tin shows an important difference with respect to the other industrial commodities considered in this study (which does not happen in the other two subsamples). It would be interesting to have a detailed work relating the market structure of industrial commodities to the findings of this work.
The results were sensitised with hp and bp filters using the complete original sample.
Only tin has a negative correlation with the tendency of the joint industrial production
index, which confirms the results obtained in the dynamic factor analysis.
The effects of the macroeconomic indicators on the common factor are variable. Interest rates have a negative (although not important) effect on the business cycle indicator, because agents modify their portfolios, hold money and release assets. The effect of the exchange rates is variable: for a particular country, if its exchange rate strengthens, the world demand for commodities from this country will fall (and prices will follow). These variables are not significant as exogenous variables to explain the common factor. The case of industrial production is somewhat disappointing, especially in the case of China. In theory, a higher level of this indicator is associated with higher demand and thus higher prices. The regressions confirm this idea, but the impact is weak, although when lower-frequency data are used the industrial indicator has a much more important explanatory power.

references
Pindyck, R. & Rotenberg, J. J. (1990) The excess Co-Movement of Commodity Prices. Economic Journal
100, pp. 1173-1189. [1]

Palaskas, T. H. & Varangis, P. N. (1991) Is there Excess Co-movement of Primary Commodity Prices? A Co-
integration Test. International Economic Department. The World Bank, Washington D.C., wps 758. [2]

Engel, E. & Valdés, R. (2001) Prediciendo el Precio del Cobre: ¿Más allá del Camino Aleatorio?. [3]

Jerret, D. & Cuddington, J. ( 2008) Super Cycles in Real Metal Prices?. imf Staff Papers, Vol. 55, No. 4. [4]

Jerret, D. & Cuddington, J. (2008) Broadening the Statistical Search for Metal Price Super Cycles to Steel
and related Metals. Resources Policy 33, pp. 188-195. [5]

Labys, W. C., Achouch, A. & Terraza, M. (1999) Metal prices and the Business Cycle. Resources Policy,
Vol. 25, No. 4, pp. 229-238. [6]

Crowson, P. (2006) Mineral Markets, Prices and the Recent Performance of the Minerals and Energy Sector.
Australian Mineral Economics, Monograph 24. Chapter 7. [7]

Stock, J. H. & Watson, M.W. (1988) A Probability Model of the Coincident Economic Indicators. National
Bureau of Economic Research, Working Paper No. 2772. [8]

Hamilton, J. (1994) Time Series Analysis. Princeton University Press. [9]

Harvey, A. (1989) Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University
Press. [10]

Everts, M. (2006) Band-Pass Filters. mpra paper 2049, University Library of Munich, Germany. [11]

Sampling Optimisation of a
Volcano-Sedimentary Deposit Using
Geostatistical Simulations

abstract
Jacques Deraisme
Javier Miranda
Geovariances, France

Orlando Rojas
Compañía Minera Doña Inés de Collahuasi, Chile

Exploration drillholes usually provide block estimates with large confidence intervals. Additional drilling is then required to achieve a more accurate resource classification. This paper presents a methodology based on geostatistical simulations to quantify the relationship between the additional drillhole density and the confidence in the resource classification.
The key principle is to generate, by means of simulations, several
realisations of the main parameters of the mineralisation, i.e.
geological features and grades. In a second stage the simulated
deposits are sampled by fictitious drillholes that may be added to
the existing ones. These drillholes are then used for estimating ore
tonnages and grades. The comparison with the simulated values
provides statistics on the estimation errors. This exercise can be
repeated with another set of planned drillholes. The optimum
between the number of drillholes and the confidence in the resource
estimation can then be obtained from a limited number of tests.
An application to the Rosario Oeste deposit of Collahuasi copper
mine is presented. The mineralisation is concentrated in a high
sulphidisation vein system slightly inclined and located below a
leached zone covering the orebody.
The resource classification is based on tonnage and grades
estimates obtained after a three step approach: (i) determination
of the volume of the leached zone by simulating the surface
boundary between this volume and the mineralised zone, (ii)
simulations of the geometry of the mineralised veins and (iii)
simulations of grades within the mineralised veins.
This paper emphasises the original approach, combining
different techniques chosen to best fit the characteristics of the
deposit. Discussion about the practical results will be presented.

introduction
The Rosario Oeste Cu-Ag (Au) deposit is an orebody that is undergoing several works of advanced exploration by Compañía Minera Doña Inés de Collahuasi. It is a huge, high-sulphidisation system of sub-vertical faults, veins and breccias located inside an area of 3.5 by 2.5 km towards the southern part of the Rosario Mine. The drillhole campaigns performed in this project to date total over 101,000 metres in 384 diamond drillholes. The sampling grid is approximately equivalent to 150 x 150 metres.
The preliminary resource estimation shows inferred resources of more than 800 million tonnes with an average grade of 0.8% total copper. The major part of the ore is related to secondary sulphides (chalcocite) and is mainly controlled by north-south regional structures (major faults). The upper part of the deposit is covered by an important leached and barren zone and the deep zone is not open yet. The mineralisation is present over 800 metres from the surface. A simplified geological model is shown in Figure ➊.
We could define Rosario Oeste as a high-sulphidisation, structurally controlled system with copper sulphides related to veins and breccias.
The uncertainty in the resource estimates from the actual drillholes has to be quantified
in order to achieve appropriate resource classification. Another issue is predicting how much
the uncertainty is reduced by additional drilling. To give appropriate answers to these two
questions, an approach based on geostatistical simulations has been carried out.

Figure 1 Schematic vertical section EW of the Rosario Oeste deposit showing the principal domains.

methodology
The geostatistical framework considers the observations such as rock type and grades
as realisations of stochastic processes. One can then generate by stochastic simulations
other realisations of the same processes that are as many possible realities. The spread of
these different realities may characterise the uncertainty in the parameters and can be
used to calculate confidence intervals of ore and metal tonnages on different supports.
Below, the main points for achieving the two objectives of the study are detailed.

Quantification of the uncertainty in the resources


We have split the sources of uncertainty in the resource into three main points: (i) the
limit between the leached and the mineralised zone, (ii) the volume of the sulphide ore
veins and (iii) the variability of ore grades, namely total copper and arsenic.
For each element described above, simulations have been carried out on a grid of blocks
5 m x 5 m x 5 m (Figure  ➋), using the most appropriate method according with each case:
• Simulation by the turning band method of the bottom of the leached zone (boundary
surface) using as input data the drillholes intercepts and the geological model.

• Simulations of the geological codes (sulphide ore and waste) in the mineralised zone
by means of truncated Gaussian simulations adapted to repeated sequences from West
to East of facies oriented nearly vertically. This part is the most delicate and resource-
consuming one [1, 2]

• Co-simulations of Cu and As ore grades using the turning bands method.

We have considered ordering simulations of each of the three elements by their rank.
As we decided to achieve only 25 simulations of grades, we can make statistics on 25
realisations of ore tonnage and metal.
In addition, the final simulated model is obtained by applying the cookie cutting
procedure in order to keep only the relevant part of it for each simulation. For example,
for each simulation the blocks in the leached zone are eliminated and the ore and metal
of blocks with the waste code are put to 0.

Figure 2 Workflow for simulating leached bottom surface, geological veins and Cu-As grades.


Improvement of the confidence in the resource estimates by adding more drillholes

Once the simulations are available, we sample them using fictitious drillholes designed according to a feasible layout. For a given simulation we get a new data set, made of the actual data merged with the fictitious data. These new data are then used to estimate ore tonnages and metal quantities that are compared to the original simulation, which represents the reality.
Repeating the estimation process on the 25 simulations, we obtain 25 errors, on which we calculate statistics that are compared with the errors obtained from the actual data only (Figure ➌).
The grade estimates are obtained by co-kriging, while the ore tonnage is estimated
by making 10 simulations with the truncated Gaussian method conditioned by the new
data set [3] .

Figure 3 Workflow for estimating ore tonnage and metal quantities by adding new drillholes.

data analysis and modelling


Simulations of the leached zone bottom surface
The bottom of the leached zone volume, modeled as a wireframe, is interpolated on a regular 2-d grid with a resolution of 5 m x 5 m. The uncertainty on that surface is simulated by adding residuals with the following characteristics (a minimal numerical sketch is given after the list):
• They are equal to zero at the drillhole intercepts

• They have a spatial correlation structure made of a spherical variogram with an 800m
range as indicated by the experimental variogram of the elevation values of the bottom
leached surface at the drillhole intercepts.

• The variability (sill value) is evaluated by taking the difference of the dispersion
variances between the modeled surface and the surface interpolated by kriging.
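The sketch below gives a minimal numerical illustration of such a residual simulation: one unconditional realisation of a Gaussian residual with a spherical covariance of 800 m range is drawn on a coarse 2-d grid via Cholesky factorisation. The grid size, spacing and sill are hypothetical, and the conditioning that forces the residuals to zero at the drillhole intercepts (e.g. by conditioning kriging) is not shown.

import numpy as np

def spherical_cov(h, sill, a):
    # spherical covariance model: sill*(1 - 1.5 h/a + 0.5 (h/a)^3) inside the range, 0 beyond
    return np.where(h < a, sill * (1.0 - 1.5 * h / a + 0.5 * (h / a) ** 3), 0.0)

rng = np.random.default_rng(0)
nx, ny, dx = 20, 20, 50.0                              # coarse 20 x 20 grid with 50 m spacing (hypothetical)
X, Y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx)
pts = np.column_stack([X.ravel(), Y.ravel()])
h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

C = spherical_cov(h, sill=100.0, a=800.0)              # 800 m range as in the text; sill is hypothetical
L = np.linalg.cholesky(C + 1e-6 * np.eye(len(pts)))    # small nugget for numerical stability
residual = L @ rng.standard_normal(len(pts))           # one unconditional realisation of the residual surface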

Figure 4 Modelling of the leached zone bottom from the leached zone wireframe and drillholes (panels: leached zone bottom wireframe; simulation of the leached zone bottom surface; variogram of the leached zone bottom elevation).

Ore veins modelling


A change of coordinates has been applied to transform the vertical reference plane,
changing the general orientation of the ore veins to horizontal. We then calculated the
vertical proportion curves by grouping the drillholes into eight groups (Figure   ➎). Hence,
they show the variations from West to East of the ore and waste proportions.


In the truncated Gaussian method, the lithotypes are obtained by the truncation of a
Gaussian random function. In the case of 2 lithotypes (ore and waste), one threshold based
on the proportions is sufficient. As there are no data for the Gaussian variable, we fit the
indicator variograms, giving the models for the Gaussian random function. It is possible
to proceed with that indirect fitting because the relationships between the variograms
of the Gaussian function and the variograms of the indicators are known. In Figure   ➏
we show the variograms calculated in the three main directions.
25 simulations have been achieved, then transformed to the real space after a rotation
around the OY axis in the opposite direction and truncated by the leached zone bottom
surface. In Figure   ➐ we can see how the vertical structures are reproduced.

Figure 5 East-West proportion curves in eight sectors.

Figure 6 Indicator variograms fitted for the truncated Gaussian method in the three main directions (the dotted
line is the experimental indicator variogram and the solid line is the modeled indicator variogram)

Figure 7 two simulations of the ore/waste codes on a vertical XOZ section.

Cu and As ore grades


The simulation of grades requires a transformation (anamorphosis) into Gaussian distributions. Because of the long-tailed distribution of the 2 m composite grades, it was decided to cap high grades by cutting off the grades above the 99% quantile. Declustering weights were then applied before performing the anamorphosis. An expansion into Hermite polynomials was used, providing a satisfactory reproduction of the grade distribution (Figure ➑).
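A rank-based (empirical) normal-score transform is a common stand-in for the Hermite-polynomial anamorphosis used here. The sketch below caps a synthetic skewed grade distribution at its 99% quantile, transforms it to Gaussian values and back-transforms by quantile matching; declustering weights are omitted and the data are purely hypothetical.

import numpy as np
from scipy.stats import norm, rankdata

def normal_score(values):
    # empirical anamorphosis: map ranks to standard Gaussian quantiles
    ranks = rankdata(values)
    return norm.ppf(ranks / (len(values) + 1.0))

def back_transform(gauss, reference):
    # map Gaussian values back to the empirical grade distribution by quantile matching
    return np.quantile(np.sort(reference), norm.cdf(gauss))

rng = np.random.default_rng(2)
cu = rng.lognormal(mean=-0.5, sigma=0.8, size=1000)       # hypothetical skewed Cu grades
cu_capped = np.minimum(cu, np.quantile(cu, 0.99))         # cap above the 99% quantile, as in the text
y = normal_score(cu_capped)                               # Gaussian-transformed grades
cu_back = back_transform(y, cu_capped)                    # reproduces the capped grade distribution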

Figure 8 Gaussian anamorphosis and histograms of Cu (experimental function in black and modeled function in green).

A bivariate variogram model is fitted in order to perform co-simulations (Figure   ➒).


Figure 9 Variograms in the three main directions of space of Gaussian transforms


of Cu and As (dotted line=experimental, solid line=model)

The cookie cutting procedure is applied to match one simulation of the bottom of the leached zone, one simulation of the geological code and one simulation of Cu-As grades (Figure 10).

Figure 10 Vertical cross section in the simulated model representing the Cu grades of the mineralised veins below the leached zone.

results
The simulations on the 5 m x 5 m x 5 m grid are regularised to calculate 25 simulations of ore and metal tonnages in blocks of 20 m x 20 m x 15 m. From these simulations the following statistics are computed: the dispersion variance and the confidence interval at the 80% risk level, calculated as the difference between the 90% and 10% quantiles. They show very large uncertainties (100% relative standard deviation or more, particularly for the As content), which is not surprising as the support is relatively small compared to the drillhole spacing. By making similar statistics on a larger support (a week of production or more) we get more reasonable figures, i.e. from about 30% for ore tonnage and Cu metal content to 50% for As metal content.
The simulated models of 5 m x 5 m x 5 m blocks have been sampled by drillholes dipping 60°, located on a regular pattern with a mesh of 100 m x 100 m. These new samples (Figure 11) are added to the actual data for estimating the real values represented by each simulation in turn.

Figure 11 Simulated model with Cu grades sampled using fictitious drillholes.

The reprocessing of the different simulations has been carried out in order to estimate
the 20 m x 20 m x 15 m blocks from the actual and new drillholes as follows:

• The ore tonnage is estimated as an average of 10 simulations using the truncated Gaussian method. This procedure, although resource demanding, has been preferred to a more straightforward indicator kriging approach because it better fits the characteristics of the vein-type mineralisation.

• The grades are estimated by ordinary block kriging.

The comparison on the support of a week of production quantifies the improvement (called gain in Table 1) in the standard deviation of the estimate, by means of the difference of the standard deviations with and without additional drillholes divided by the standard deviation with the actual drillholes.
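As a worked check of the first row of Table 1, the gain is (95.9 - 70.9) / 95.9 x 100 ≈ 26%, and similarly (1710.4 - 1406.2) / 1710.4 x 100 ≈ 17.8% for the Cu metal content.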


Table 1 Standard deviations of the estimation errors on ore tonnage and Cu-As metal tonnage

Actual drillholes With drillholes 100x100 Gain on stdev (%)

Stdev Ore Tonnage (kT) 95.9 70.9 26


Stdev Metal Cu (T) 1710.4 1406.2 17.8
Stdev Metal As (T) 49.4 44.2 10.4

conclusions
The application of geostatistical simulations has provided a quantification of the
uncertainty in the resource estimates, taking into account the variability of the lithology
as well as of the grades. This study shows an example of the application of modelling
techniques, initially dedicated to sedimentary oil reservoirs, to porphyry copper deposits
where the ore/waste make sequences of sub-vertical bodies. The simulated models can
then be processed to generate statistics on different supports and help classify resources.
The sampling of the simulated models by using fictitious inclined drillholes makes
it possible to characterise the distribution of estimation errors. The results can then be
used in the scope of optimising a budget for additional drilling.

acknowledgements
The authors would like to thank Compañía Minera Doña Inés de Collahuasi for their
funding and permission to publish this study.

references
Armstrong, M., Galli, A., Le Loc'h, G., Geffroy, F. & Eschard, R. (2003) Plurigaussian Simulations in
Geosciences, Springer. [1]

Carrasco, P., Carrasco, P., Ibarra, F., Rojas, R., Le Loc'h, G., Seguret, S. (2007) Application of the
Truncated Gaussian Simulation Method to a Porphyry Copper Deposit, Magri ed., apcom2007, 33rd
International Symposium on Applications of Computers and Operations Research in the Mineral
Industry, pp. 31–39. [2]

Deraisme, J., Farrow, D. (2004), Geostatistical Simulation Techniques Applied to Kimberlite Orebodies
and Risk Assessment of Sampling Strategies, Geostatistics Banff 2004, Springer, pp. 429–438. [3]
Conditional Co-Simulation of Copper
Grades and Lithofacies in the Río
Blanco – Los Bronces Copper Deposit

abstract
Alejandro Cáceres
Geoinnova Consultores, Chile

Xavier Emery
Universidad de Chile

Geostatistical simulation is widely used to generate realisations of the spatial distribution of mineral grades in ore deposits. At present, the most common approach is to divide the deposit into rock-type domains (lithofacies) and to simulate the grades within each domain separately. The rock-type model can be obtained by
a geological interpretation of the deposit and be deterministic, or
be simulated prior to grade simulation. Even though this ‘cascade’
approach allows establishing an uncertainty model for the mineral
resources in the deposit, it implicitly assumes a lack of stochastic
dependence between the grades across rock-type boundaries and
does not fully account for the spatial relationship between the
grades and the occurrence of given rock types.
This work presents a method that allows simultaneously
simulating mineral grades and rock types and taking into
account their spatial dependence by using a combination of the
multi-Gaussian model (for simulating grades) and truncated
Gaussian model (for simulating rock types). The method is able
to incorporate hard data (assays and geological logging from drill
hole or blast hole samples) as well as prior geological knowledge as
conditioning information for the realisations of both grades and
rock types. It is applied to the Río Blanco – Los Bronces porphyry
copper deposit to co-simulate copper grades and the occurrence of
tourmaline breccia, and it is compared to traditional approaches
against production data.

introduction
The uncertainty associated with the recoverable tonnages and grades in a mineral deposit
is a key factor in the decision-making process of a mining project. Currently, the most
common approach to model the uncertainty in the spatial distribution of mineral grades
is the following [1, 2] :

• Define the spatial extension of the rock type domains (lithofacies). This is usually done
by either of the two methods:

––Deterministic modelling, consisting of an interpretation of the relevant lithofacies


using the available information and geological knowledge of the deposit.
––Stochastic modelling, consisting of simulating the occurrence of each lithofacies.
Several methods can be used to this end, including truncated Gaussian
simulation [3, 4] , plurigaussian simulation[5, 6] or sequential indicator
simulation [7, 8] . The outputs are several realisations (alternative models) of the
spatial distribution of lithofacies.

• Simulate the mineral grades within each lithofacies conditionally to the data
belonging to this lithofacies only.

From the above steps, the only relationships between the grades and the occurrence
of a lithofacies are the membership of the grade data to the lithofacies and the spatial
domain where the simulation of grades takes place. Accordingly, the following aspects
can be identified:

• The previous approach assumes a stochastic independence between lithofacies


occurrences and grade values, which is often a simplification of reality. This can be
illustrated by the following trivial but evocative example. Given a sample of a highly
mineralised rock type, it is probable that the grade of the sample is high. Reciprocally,
a sample with a high grade value but no geological logging is likely to belong to the
highly mineralised rock type.

• The deterministic lithofacies modelling approach provides just one interpretation


of the geology of the deposit without offering any measure of the uncertainty in the
lithofacies boundaries.

• The incorporation of external or soft data with information on the lithofacies


occurrence has no effect on the grade realisations, except for a possible modification
of the spatial extension of the lithofacies.

• Since grades from different lithofacies are assumed independent, the grade realisations
exhibit discontinuities at the boundaries between lithofacies, a feature that is not
necessarily true in the available data.

To improve the decision-making process in the mining industry, it is of interest to account


for the spatial relationships between lithofacies and grades and to incorporate geological
knowledge that would exert an active role in the simulated grade values. To this end, this
paper presents a methodology that links two known geostatistical models: the multi-
Gaussian model for simulating mineral grades and the truncated Gaussian model for
simulating lithofacies. Several proposals [9-12] have already been made in this direction.
However, most of them consider the simulation of grades conditioned to the lithofacies,
but not the lithofacies conditioned to grades, or ignore the cross-correlation between
grades and lithofacies occurrences.

methodology
Simulation of grades
The proposed method uses the well-known multi-Gaussian framework to simulate the mineral grades. The workflow is as follows [1] (a minimal numerical sketch is given after the list):

• Normal score transformation of the grade data into standard Gaussian values
• Variogram analysis of the transformed data
• Multi-Gaussian simulation over the domain using the transformed values as
conditioning data [13]
• Back-transformation of the simulated Gaussian values into grade values.
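The sketch below is a minimal, one-dimensional illustration of this workflow (normal-score transform, conditional Gaussian simulation by conditioning kriging, back-transformation). The sample locations, grades, exponential covariance and all parameters are hypothetical, and conditioning by kriging is only one of several ways to implement the simulation step.

import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(3)

# hypothetical grade data along a 1-d string of samples (stand-in for drill hole composites)
x_data = np.sort(rng.uniform(0.0, 400.0, 60))
cu_data = rng.lognormal(-0.3, 0.7, 60)

# 1) normal-score transformation of the grade data
y_data = norm.ppf(rankdata(cu_data) / (len(cu_data) + 1.0))

# 2) covariance model assumed for the Gaussian variable (exponential, 100 m practical range)
def cov(h, a=100.0):
    return np.exp(-3.0 * h / a)

# 3) conditional simulation at target locations by conditioning kriging
x_targ = np.linspace(0.0, 400.0, 200)
x_all = np.concatenate([x_data, x_targ])
C = cov(np.abs(x_all[:, None] - x_all[None, :])) + 1e-8 * np.eye(len(x_all))
y_uncond = np.linalg.cholesky(C) @ rng.standard_normal(len(x_all))   # unconditional realisation
nd = len(x_data)
w = np.linalg.solve(C[:nd, :nd], C[:nd, nd:])                        # simple kriging weights
y_cond = w.T @ y_data + (y_uncond[nd:] - w.T @ y_uncond[:nd])        # condition to the real data

# 4) back-transformation of the simulated Gaussian values into grades (quantile matching)
cu_sim = np.quantile(np.sort(cu_data), norm.cdf(y_cond))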

Simulation of lithofacies
The second algorithm used in the proposed method is Truncated Gaussian simulation (tgs), in which the lithofacies (coded as a categorical variable) is obtained by thresholding an underlying Gaussian random field. The workflow is [5] (a sketch of the Gibbs sampler step is given after the list):

• Determine the lithofacies proportions and contact relationships. Summarise this


information in a truncation rule (thresholds to apply to the underlying Gaussian
random field)

• Model the covariance function of the Gaussian random field via the fitting of the
lithofacies indicator variograms

• Generate a set of Gaussian values at the data locations that are consistent with the
lithofacies coding and the modelled covariance function. This step is performed
with the Gibbs sampler algorithm [14] . Because the relationship between lithofacies
indicators and Gaussian values is not bijective, several realisations should be considered
for the next steps.

• Perform multi-Gaussian simulation using the Gaussian values of the previous step
as conditioning data

• Truncate the realisations according to the truncation rule.
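The sketch below illustrates, under strong simplifications, the Gibbs sampler step that generates Gaussian values at the data locations consistent with a single threshold and a chosen covariance model. The one-dimensional locations, the facies coding and the exponential covariance are hypothetical, and the subsequent conditional simulation and truncation steps of the workflow are not shown.

import numpy as np
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(4)

# hypothetical 1-d sample locations with a logged lithofacies code (1 = mineralised, 0 = barren)
x = np.sort(rng.uniform(0.0, 300.0, 40))
code = (np.sin(x / 40.0) + 0.3 * rng.standard_normal(40) > 0.0).astype(int)

threshold = norm.ppf(1.0 - code.mean())         # single threshold from the facies proportion
lo = np.where(code == 1, threshold, -np.inf)    # interval of the underlying Gaussian implied by each code
hi = np.where(code == 1, np.inf, threshold)

def cov(h, a=80.0):                             # covariance model of the Gaussian field (hypothetical range)
    return np.exp(-3.0 * h / a)

C = cov(np.abs(x[:, None] - x[None, :])) + 1e-8 * np.eye(len(x))

# initialise with independent truncated normals, then iterate the Gibbs sampler
y = truncnorm.rvs(lo, hi, loc=0.0, scale=1.0, random_state=rng)
idx = np.arange(len(x))
for sweep in range(200):
    for i in idx:
        others = np.delete(idx, i)
        sol = np.linalg.solve(C[np.ix_(others, others)], C[others, i])
        mu = sol @ y[others]                                  # conditional (simple kriging) mean
        sd = np.sqrt(max(C[i, i] - sol @ C[others, i], 1e-10))
        y[i] = truncnorm.rvs((lo[i] - mu) / sd, (hi[i] - mu) / sd, loc=mu, scale=sd, random_state=rng)
# y now honours both the covariance model and the observed lithofacies coding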

Joint simulation of grades and lithofacies


Keeping in mind the multi-Gaussian and truncated Gaussian simulation workflows, the
proposed method is supported by the following aspects:

• Multi-Gaussian simulation uses a Gaussian random field (Ygrd) to construct realisations


of grades

• Truncated Gaussian Simulation (tgs) also uses an auxiliary Gaussian random field
(Ylith) to construct realisations of lithofacies

• The procedure to co-simulate two or more Gaussian random fields is well established
in the multi-Gaussian framework [1]

• The normal score transformation and truncation rule can be understood as two quite
similar transformation procedures, from a non-Gaussian variable to a Gaussian one.
Therefore tgs converts a discrete problem (simulation of lithofacies) into a continuous
one by using the truncation rule and the Gibbs sampler algorithm.


The key idea of the proposed approach is to link the previous two methods by cross-
correlating the Gaussian random fields Ygrd and Ylith . The workflow to jointly simulate
grades and lithofacies is:

• Determine the lithofacies proportions and contact relationships. Summarise this


information in a truncation rule

• Transform the grade data into Gaussian values (Ygrd)

• Fit a coregionalisation model for Ygrd and Ylith . The covariance model of Ygrd is obtained
from the transformed grade data. The covariance model of Ylith is obtained by fitting the
lithofacies indicator variograms. The cross-covariance between Ygrd and Ylith is obtained
by fitting the cross-variograms between transformed grade data and lithofacies
indicators.

• Generate a set of simulated Gaussian values for Ylith at the data locations that are
consistent with the lithofacies coding and the coregionalisation model. This step is
performed using the Gibbs sampler, modified in order to consider Ygrd as a covariate.

• Perform multi-Gaussian co-simulation of Ygrd and Ylith using the Gaussian values
obtained at the previous steps as conditioning data

• Back-transform Ygrd to obtain the simulated grades

• Apply the truncation rule on Ylith to obtain the simulated lithofacies.

This way, non-independent realisations of grades and lithofacies are simultaneously
generated by taking advantage of the coregionalisation model.
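To make the linkage between the two Gaussian random fields more concrete, the following minimal sketch co-simulates Ygrd and Ylith on a small 1-D grid, truncates Ylith into two lithofacies and back-transforms Ygrd into a copper grade. It is an unconditional toy illustration only: it uses an intrinsic-correlation model built by Cholesky factorisation instead of the full coregionalisation model, and it omits the conditioning step, the Gibbs sampler and the turning bands algorithm; the range, cross-correlation, bxt proportion and lognormal back-transform are invented values.

import numpy as np
from scipy.stats import norm

def gaussian_field_1d(n, range_m, rng):
    """Unconditional simulation of a stationary Gaussian field on a regular 1-D grid
    by Cholesky factorisation of an exponential covariance matrix (practical range range_m)."""
    h = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    cov = np.exp(-3.0 * h / range_m) + 1e-10 * np.eye(n)   # small jitter for numerical stability
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

rng = np.random.default_rng(42)
n, rho, p_bxt = 200, -0.6, 0.4      # grid nodes, Ygrd-Ylith correlation, bxt proportion (invented)

# intrinsic-correlation construction: the two fields share a common factor w1,
# which yields a cross-correlation of rho at every lag
w1 = gaussian_field_1d(n, range_m=50.0, rng=rng)
w2 = gaussian_field_1d(n, range_m=50.0, rng=rng)
y_lith = w1
y_grd = rho * w1 + np.sqrt(1.0 - rho ** 2) * w2

# truncation rule: bxt wherever Ylith falls below the Gaussian threshold
threshold = norm.ppf(p_bxt)
lithofacies = np.where(y_lith < threshold, "bxt", "nobxt")

# back-transformation of Ygrd to grades (an arbitrary lognormal anamorphosis here);
# with rho < 0, low Ylith (bxt) tends to coincide with high simulated grades
cu_grade = np.exp(0.8 * y_grd - 0.5)

print(lithofacies[:8], np.round(cu_grade[:8], 2))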

Incorporation of extra geological knowledge


It is common to know (with a certain confidence level) about the occurrence of a lithofacies
even if no sample is available. This geological knowledge can be incorporated into the
proposed method using control points that register the lithofacies expected by experts in
the area. These control points have no information about grades, leading to a heterotopic
dataset (with more data on lithofacies than on grades). The workflow remains unchanged,
insofar as the Gibbs sampler and co-simulation can be adapted to heterotopic cases. The
additional information about lithofacies will exert control over the grade realisations.

application
Presentation of the data
The area under study is part of the Río Blanco – Los Bronces porphyry copper deposit, a
breccia complex located in the Chilean central Andes. A set of 2,376 diamond drill hole
samples, located in a volume of 400 m × 600 m × 130 m, are available with information
on rock types and total copper grades. The main lithofacies are [15] :

• Granodiorite (gdt) located in the eastern and southern parts of the area. It is one of the
host rocks of the breccia complex

• Tourmaline Breccias (bxt) located in the central part of the area. It consists of gdt clasts
surrounded by matrix cement dominated by tourmaline and sulphides (chalcopyrite,
pyrite, molybdenite and minor bornite). The rock emplacement is related to the main
alteration-mineralisation event of the breccia complex

• Other Breccias (obxt) outcropping in the western and southern parts of the sampled
area. This group comprises different types of breccias with textural and compositional
variations.

The lithofacies controls the copper grade distribution. bxt is the highly mineralised
lithofacies, whereas gdt and obxt have low copper contents. Maps of the copper
grade and lithofacies data are presented in Figure  ➊. Near the boundaries between
lithofacies, copper grades show gradational transitions, a feature that is usually found
in disseminated deposits or in diffusive processes.

Figure 1 Location map of copper grade (left) and lithofacies (right) data.

In order to simplify the start-up and inference, the lithofacies are grouped into just two
types: bxt and nobxt (gdt + obxt).

Simulation approaches
Six different methods will be compared:

• Copper simulation without geological control (M1): this method uses the available
copper grade data without considering the lithofacies information

• Joint simulation of copper grades and lithofacies (M2) using the proposed approach

• Joint simulation of copper grades and lithofacies incorporating additional geological
knowledge on the occurrence of bxt and nobxt. Two subcases are defined: incorporating
5% (M3) and 30% (M4) of additional data at control points that mainly belong to the
nobxt unit (in the borders of the area under study)

• Simulation of copper grades using a stochastic geological model (M5): the occurrences
of bxt and nobxt are obtained using the tgs approach, and then copper grades are
independently simulated within each simulated domain

• Simulation of copper grades using a deterministic geological model (M6): this
method uses an interpreted model of the bxt extension. Copper grades are simulated
independently within the bxt and nobxt domains.


The methods, features and colours used in the next figures are presented in Table 1 .

Table 1 Method names, features, colours and code names

Code Colour Short Name Lithofacies Grades Additional Knowledge

M1 Red No geological control - Simulation -


M2 Blue Joint simulation Co-simulation -
M3 Green Joint simulation Co-simulation 5%
M4 Pink Joint simulation Co-simulation 30%
M5 Black Independent simulation Simulation Simulation -
M6 Yellow Independent simulation Deterministic (interpreted) Simulation -

implementation
For each approach, 100 realisations are generated on a grid of 390 × 590 nodes with a
1 m × 1 m spacing, in a representative bench, using the turning bands algorithm with
1000 lines [1, 13]. The anisotropy directions for variogram analyses are the horizontal and
vertical directions for every model. Figure  ➋ indicates the truncation rule and threshold
for tgs, while Table 2 presents the parameters of the coregionalisation model between
the Gaussian random fields Ygrd and Ylith.

Table 2 Coregionalisation model for the two Gaussian random fields under consideration

Structure Range (m) Sill Contribution

Type Horizontal Vertical Ylith × Ylith Ygrd × Ygrd Ygrd × Ylith

Nugget 0 0 0 0.12 0
Cubic 60 120 0.1 0.05 -0.03
Cubic 280 4,500 0.8 0.148 -0.333
Cubic 138 350 0.1 0.507 -0.218
Spherical 22 40 0 0.238 0

Figure 2 Truncation rule.

results
The simulated copper grades are compared on the basis of several criteria, which are
presented next.

Basic statistics: Figure   ➌ presents the distribution of the average copper grade per
realisation for each method. The distributions of the joint simulation models (M2, M3,

M4) present lower values than the independent simulation models (M5, M6) and the
simulation without geological control (M1). This feature becomes more pronounced as samples are
added in the low-grade nobxt lithofacies (M4), showing the influence of the additional
geological knowledge on the copper grade realisations.

Figure 3 Distribution of average copper grade per realisation.

Expected values: Table 3 presents the tonnages and mean grades above two selected cut-
offs for the conditional expectation of copper grades (average of the 100 copper grade
realisations). Visual and statistical inspections indicate that there is no substantial
difference between the models in terms of expected values.

Table 3 Tonnages and mean grades above selected cut-offs for each model

Cut-off 0.5% Cu 1% Cu

Model % of total mean grade % Cu % of total mean grade % Cu

M1 83.67 1.14 40.2 1.55


M2 82.52 1.14 40.34 1.53
M3 82.54 1.13 39.47 1.53
M4 82.13 1.13 39.19 1.52
M5 87.71 1.11 43.07 1.45
M6 85.95 1.12 42.46 1.47

Local uncertainty measures: Table 4 gives the basic statistics on the conditional variance
of copper grades (variance of the 100 copper grade realisations). Figure   ➍ shows the
corresponding local coefficients of variation for the joint simulation (M4) and independent
simulation (M6) models. It is seen that the joint simulation model presents lower values
of the uncertainty measure than the other model.

Table 4 Basic statistics on the conditional variance for each model

Model Min Max Mean Std. Dev

M1 0 4.231 0.236 0.433


M2 0 4.005 0.208 0.375
M3 0 4.050 0.206 0.372
M4 0 4.008 0.198 0.368
M5 0 4.509 0.281 0.377
M6 0 4.501 0.264 0.380


Figure 4 Local coefficients of variation for joint simulation (M4, left) and
independent simulation (M6, right) models.

Performance evaluation
Given a set of 20,893 blast hole data in which the copper grades are known, a validation
model is created by ordinary kriging on 5 m × 5 m blocks in the bench where the copper
grade realisations were generated. Every realisation is then regularised to that block size
and compared against the validation model.
Error distribution: The percentual mean error (pme) is defined by:

$\mathrm{pme}_k = \dfrac{100}{n} \sum_{i=1}^{n} \dfrac{S_k(i) - R(i)}{R(i)} \qquad (1)$

where S_k(i) is the simulated grade at block i in realisation k, R(i) is the grade of the validation model and
n is the number of blocks where both models are defined. Figure   ➎ (left) presents the
pme distribution for each model. The joint simulation models (M2, M3, M4) have a pme
distribution closer to zero than the independent simulation models (M5, M6). Model M4
(with 30% of additional geological data) presents the best results.

Grade correlation distribution: The linear correlation coefficient between every realisation
and the validation model is calculated. As seen in Figure   ➎ (right), the joint simulation
models have higher correlations than the independent simulation models. Again, the
best performance is achieved by the joint simulation model that includes the largest
amount of additional geological information.
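As a complement to these validation criteria, the short sketch below computes the pme and the realisation-wise correlation coefficient from a matrix of regularised realisations. It assumes the relative-error form of the pme written in Equation (1), and the arrays in the usage example are random placeholders rather than the actual 5 m × 5 m block data.

import numpy as np

def validation_metrics(sim_blocks, validation, eps=1e-6):
    """Per-realisation validation statistics against a kriged validation model.

    sim_blocks : array (n_real, n_blocks) of regularised simulated grades
    validation : array (n_blocks,) of validation-model grades
    Returns (pme, corr), each of length n_real, assuming the pme form of Equation (1)."""
    rel_err = (sim_blocks - validation) / np.maximum(validation, eps)
    pme = 100.0 * rel_err.mean(axis=1)
    # linear correlation of every realisation with the validation model
    corr = np.array([np.corrcoef(s, validation)[0, 1] for s in sim_blocks])
    return pme, corr

# hypothetical usage with placeholder data
rng = np.random.default_rng(0)
validation = rng.lognormal(mean=0.0, sigma=0.5, size=500)
realisations = validation * rng.lognormal(0.0, 0.2, size=(100, 500))
pme, corr = validation_metrics(realisations, validation)
print(pme.mean(), corr.mean())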

Figure 5 PME distribution (left) and correlation coefficient by realisation (right).



Destination mismatch: The percentage of correctly classified blocks (mill/dump) for several
cut-off grades is calculated for each realisation. Figure   ➏ (left) presents the expected
percentages for each model and cut-off. The joint simulation models perform consistently
better than the other approaches.

Accuracy plot: Given an interval of probability p in the distribution of simulated block
grades, the effective percentage of blocks of the validation model that fall within the
interval is calculated. The closer this percentage is to p, the better the model is in terms
of accuracy and precision [16]. Figure ➏ (right) displays the accuracy plots, showing a
moderately better performance of the joint simulation models.
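A sketch of this accuracy-plot calculation is given below, following the symmetric probability-interval construction of Deutsch (1997) [16], with interval bounds taken as empirical quantiles of the realisations at each block. Array shapes and probability levels are illustrative assumptions.

import numpy as np

def accuracy_plot(realisations, truth, probs=np.linspace(0.1, 0.9, 9)):
    """For each probability p, the fraction of validation blocks whose true grade falls
    inside the symmetric p-probability interval of the local distribution of simulated values.

    realisations : array (n_real, n_blocks); truth : array (n_blocks,)
    Returns one effective coverage per value of p (ideally close to p)."""
    coverage = []
    for p in probs:
        lo = np.quantile(realisations, 0.5 - p / 2.0, axis=0)
        hi = np.quantile(realisations, 0.5 + p / 2.0, axis=0)
        coverage.append(np.mean((truth >= lo) & (truth <= hi)))
    return np.array(coverage)

# example: coverage = accuracy_plot(realisations, validation); a well-calibrated model gives coverage ≈ probs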

Figure 6 Percentage of correct classification (left); accuracy plot (right).

conclusion
A geostatistical approach to co-simulating mineral grades and lithofacies has been
presented. It is suitable when grades have gradational transitions across the lithofacies
boundaries (soft geological boundaries) and allows one to incorporate additional
geological information on lithofacies that is likely to influence the grade realisations.
Even if the proposed approach looks similar to other approaches in terms of expected
grades, differences have been observed when looking at uncertainty measures and when
validating the realisations against production data. These differences indicate that
the traditional approaches overstate the uncertainty associated with the
mineral resources. Stochastic mine planning approaches should therefore consider the
intrinsic relationship between lithofacies and mineral grades in order to allow better
decision-making for mine executives.

acknowledgements
The authors would like to acknowledge Fondecyt project N° 1090013, Geoinnova
Consultores and the Advanced Laboratory for Geostatistical Supercomputing (alges) at
the Universidad de Chile for supporting this research. Thanks are also due to Codelco
Chile for providing the dataset used in this work and to F. Ibarra and R. Riquelme for
their comments.


references
Chilès, J. P. & Delfiner, P. (1999) Geostatistics: Modeling Spatial Uncertainty. Wiley, New York, p. 695. [1]

Dubrule, O. (1993) Introducing More Geology in Stochastic Reservoir Modelling. In Soares, A., ed.,
Geostatistics Troia'92. Kluwer Academic, Dordrecht, pp. 351–369. [2]

Galli, A., Beucher, H., Le Loc'h, G., Doligez, B. & Heresim Group (1994) The Pros and Cons of the Truncated
Gaussian Method. In Armstrong, M., Dowd, P.A., eds., Geostatistical Simulations, Kluwer
Academic, Dordrecht, pp. 217–233. [3]

Matheron, G., Beucher, H., de Fouquet, C., Galli, A., Guérillot, D. & Ravenne, C. (1987) Conditional
Simulation of the Geometry of Fluvio-Deltaic Reservoirs. SPE 16753, pp. 123–131. [4]

Armstrong, M., Galli, A., Le Loc'h, G., Geffroy, F. & Eschard, R. (2003) Plurigaussian Simulations in
Geosciences. Springer, Berlin, p. 160. [5]

Betzhold, J. & Roth, C. (2000) Characterizing the Mineralogical Variability of a Chilean Copper Deposit
Using Plurigaussian Simulations. Journal of the South African Institute of Mining and Metallurgy
100(2), pp. 111–120. [6]

Alabert, F. (1987) Stochastic Imaging of Spatial Distributions Using Hard and Soft Information. Master's
Thesis, Stanford University, Department of Applied Earth Sciences. [7]

Journel, A. G. & Gómez-Hernández, J. J. (1993) Stochastic Imaging of the Wilmington Clastic
Sequence. SPE 19857, SPE Formation Evaluation. [8]

Bahar, A. (1997) Co-Simulation of Lithofacies and Petrophysical Properties. PhD Thesis, University of
Tulsa. [9]

Dowd, P. (1994) Geological Controls in the Geostatistical Simulation of Hydrocarbon Reservoirs. Arabian
Journal for Science and Engineering 19, pp. 237–247. [10]

Dowd, P. (1997) Structural Controls in the Geostatistical Simulation of Mineral Deposits. In Baafi, E.Y,
Schofield, N.A. (eds.) Geostatistics Wollongong ‘96. Kluwer Academic, Dordrecht, pp. 647–657. [11]

Freulon, X., de Fouquet, C. & Rivoirard, J. (1990) Simulation of the Geometry and Grades of a Uranium
Deposit Using a Geological Variable. In Proceedings of the XXII International Symposium
on Applications of Computers and Operations Research in the Mineral Industry, Berlin,
pp. 649–659. [12]

Lantuéjoul, C. (1994) Nonconditional Simulation of Stationary Isotropic Multigaussian Random Functions.
In: Armstrong, M., Dowd, P.A., eds., Geostatistical Simulation. Kluwer Academic, Dordrecht,
pp. 147–177. [13]

Geman, S. & Geman, D. (1984) Stochastic Relaxation, Gibbs Distribution and the Bayesian Restoration of
Images. I.E.E.E. Transactions on Pattern Analysis and Machine Intelligence 6, pp. 721–741. [14]

Serrano, L., Vargas, R., Stambuk, V., Aguilar, C., Galeb, M., Holmgren, C., Contreras, A., Godoy, S.,
Vela, I., Skewes, M. A., & Stern, C. R. (1996), The Late Miocene to Early Pliocene Río Blanco-Los Bronces
Copper Deposit, Central Chilean Andes. In: Camus, F., Sillitoe, R.H., and Petersen, R., eds., Andean
Copper Deposits: New Discoveries, Mineralizations, Styles and Metallogeny. Society of Economic
Geologists, Special Publication no. 5, Littleton, Colorado, pp. 119–130. [15]

Deutsch, C. V. (1997) Direct Assessment of Local Accuracy and Precision. In: Baafi, E.Y, Schofield, N.A.
(eds.) Geostatistics Wollongong ‘96. Kluwer Academic, Dordrecht, pp. 115–125. [16]
A Comparison of Three Geostatistical
Approaches for Co-Simulating
Mineral Grades

abstract
Patrick rivera
Xavier emery
Eduardo magri
Universidad de Chile

This work aims at comparing three methods for simulating mineral grades in polymetallic
deposits in order to reproduce their spatial variability as well as their dependence
relationships. These methods include: separate simulation of each grade variable,
co-simulation under a Linear Model of Coregionalisation (lmc)
and simulation of factors obtained by Principal Component
Analysis (pca). They are applied to two databases, one of which
is heterotopic (i.e., not all the grade variables are known at all
the sample locations), while the other one is isotopic. The first
database comes from a porphyry copper deposit in which eight
grades have been measured (related to Cu, Fe, Mn, Mo, Cl), whereas
the second database corresponds to a lateritic nickel deposit in
which six variables (Al 2O3, Cr, Fe, MgO, Ni, SiO2) are of interest.
The results call for the following conclusions. First, simulating
the grade variables separately does not allow for reproducing
the dependencies between these variables and is therefore not
recommended, unless the variables are mutually independent.
Second, the co-simulation under a Linear Model of Coregionalisation
takes into account the cross-correlations between variables
and provides better results in both the isotopic and heterotopic
cases. However, the method is complex and the Linear Model of
Coregionalisation may poorly fit the cross-variograms when
many variables are considered. Third, the co-simulation by pca
factorisation is difficult to apply in the heterotopic case, insofar
as variables with few data have to be removed from the analysis,
although it provides the best reproduction of the correlations
between variables. Fourth, none of the methods is able to reproduce
inequality relationships, such as that between total and soluble
copper grades. For these variables, ad-hoc simulation methods have
to be designed.

introduction
Kriging and simulation are popular geostatistical methods used to evaluate the amount
of mineral resources in ore deposits and to assess the uncertainty in this amount [1–3] .
In the multivariate context (e.g., in polymetallic deposits), these methods are known
as co-kriging and co-simulation. A crucial aspect of simulation is the reproduction of
the spatial variability of the variable under study. For co-simulation, one also aims
at reproducing the dependence relationships between variables, in particular their
cross-correlations. In this study, we are interested in comparing three approaches to
co-simulating mineral grades in polymetallic deposits, and in determining their ability
to reproduce not only the spatial variability of grades, but also their cross-correlation.

methodology
In this study, three approaches will be considered for simulating co-regionalised
variables (namely, the grades of several mineral species).

Simulation of each variable separately


This approach is the simplest one and consists of the following steps [2, 3] .
For each variable:

• Transform the original data into normal scores data (a minimal sketch of this transformation and its inverse is given after the list)


• Perform variogram analysis of the transformed data
• Simulate the transformed variable, conditionally to the data on this variable. In this
study, the turning bands algorithm [1, 3] is used
• Back-transform the simulated variable to the original scale (grade)
• Process the results.
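The following minimal sketch illustrates the normal scores transformation and its back-transformation, the two steps that bracket the Gaussian simulation itself. It is a simplified rank-based anamorphosis that ignores declustering weights, tied values and tail extrapolation, so it is only an illustration of the idea, not the exact transformation used in this study.

import numpy as np
from scipy.stats import norm

def normal_scores(values):
    """Transform data to standard normal scores by matching empirical ranks
    to Gaussian quantiles (no declustering weights or tie handling)."""
    n = len(values)
    ranks = np.argsort(np.argsort(values))        # ranks 0 .. n-1
    probs = (ranks + 0.5) / n                     # plotting positions
    return norm.ppf(probs)

def back_transform(y, reference_values):
    """Map simulated Gaussian values back to the grade scale by inverting the
    empirical transformation defined by the reference (conditioning) data."""
    sorted_ref = np.sort(reference_values)
    n = len(sorted_ref)
    probs = norm.cdf(y)
    idx = np.clip((probs * n).astype(int), 0, n - 1)   # piecewise-constant inverse
    return sorted_ref[idx]

# round-trip check: back_transform(normal_scores(z), z) recovers z (up to ties)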

Co-simulation under a Linear Model of Coregionalisation (LMC)


The Linear Model of Coregionalisation assumes that all the simple and cross variograms
are nested functions of the same basic structures, associated with different sill
contributions [2, 4] . There are three main aspects to consider when constructing this
model: the first one relates to the shape of the model (choice of the nested structures),
the second one with its parameters (sills and ranges), the third one with the positive
semi-definiteness constraint of the sill matrices. The model is quite flexible, but becomes
complex when the number of variables increases. In practice, with more than three
variables, semi-automatic algorithms have to be used to fit the sill matrices [5] .
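The key constraint mentioned above, namely that every matrix of sill contributions must be positive semi-definite, can be checked directly. The sketch below builds a hypothetical two-variable linear model of coregionalisation (nugget plus one spherical structure, with invented sill and range values) and verifies the constraint through the eigenvalues of each sill matrix.

import numpy as np

def spherical(h, a):
    """Normalised spherical variogram structure with range a."""
    h = np.minimum(h / a, 1.0)
    return 1.5 * h - 0.5 * h ** 3

# hypothetical two-variable LMC: each B holds the sill contributions of one structure
# (simple sills on the diagonal, cross sills off-diagonal) and must be positive semi-definite
B_nugget = np.array([[0.2, 0.05], [0.05, 0.3]])
B_sph = np.array([[0.8, -0.4], [-0.4, 0.7]])

for name, B in [("nugget", B_nugget), ("spherical", B_sph)]:
    eigvals = np.linalg.eigvalsh(B)
    print(name, "PSD" if eigvals.min() >= -1e-12 else "not PSD", eigvals)

def lmc_variogram(h, a=100.0):
    """Matrix-valued variogram Gamma(h) = B_nugget * 1_{h>0} + B_sph * sph(h; a)."""
    return B_nugget * (h > 0) + B_sph * spherical(h, a)

print(lmc_variogram(50.0))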
The steps for co-simulation are the following:

• Transform each variable into a normal variable


• Perform a joint variogram analysis of the set of transformed variables; fit a Linear
Model of Coregionalisation
• Co-simulate the variables, conditionally to the transformed data. Again, the turning
bands algorithm is used [6]
• Back-transform the simulated variables to the original scale
• Process the results.

Co-simulation via factorisation by Principal Component Analysis (PCA)
pca amounts to factorising a set of cross-correlated variables into a set of variables
(factors) that are not correlated at lag h = 0. The non-correlation holds for any lag if the
correlation structure of the variables is the same at all spatial scales (intrinsic correlation
model) [4, 7] . In other cases, the factors may not be spatially independent, although their
point-wise correlation is zero. The co-simulation via pca consists of the following steps:

• Calculate the factors from the original grade data


• Transform the factors into normal scores
• Perform variogram analysis of each transformed factor separately (assuming
independence of the factors, or negligible cross-correlations)
• Simulate each factor separately, conditionally to the data on this factor
• Back-transform the simulated factors into the original grade variables
• Correct for negative grade values, setting the negative simulated grades to zero
• Process the results.
Here, the cross-correlation between grade variables is exclusively reproduced by means
of the pca transformation and back-transformation (steps 2 and 5).
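A minimal sketch of the pca factorisation and of the back-rotation with truncation of negative grades is given below, using a spectral decomposition of the correlation matrix. It is only meant to show how the cross-correlations re-enter through the back-transformation; the normal scores transformation and the simulation of each factor are left out.

import numpy as np

def pca_factorise(grades):
    """Factorise standardised grade data into factors that are uncorrelated at lag 0,
    returning the factors and the objects needed to back-rotate simulated factors."""
    mean = grades.mean(axis=0)
    std = grades.std(axis=0)
    z = (grades - mean) / std
    corr = np.corrcoef(z, rowvar=False)
    eigval, eigvec = np.linalg.eigh(corr)     # spectral decomposition of the correlation matrix
    factors = z @ eigvec                      # principal-component factors
    return factors, (mean, std, eigvec)

def pca_back_transform(sim_factors, model):
    """Rotate simulated factors back to grade variables and set negative grades to zero."""
    mean, std, eigvec = model
    grades = sim_factors @ eigvec.T * std + mean
    return np.maximum(grades, 0.0)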

first case study: porphyry copper deposit


The first case study corresponds to a drill hole data subset from the oxide zone of the
Radomiro Tomic mine, located in Northern Chile and operated by Codelco Chile (Codelco
Norte Division). Eight variables are of interest: total and soluble copper grades (CuT, CuS3
and CuS4), iron grade (Fe), molybdenum grade (Mo), manganese grade (Mn), total and
soluble chlorine grades (ClT and ClS). Basic statistics on the data are indicated in Table 1 .

Table 1 Statistics on original grade data (Radomiro Tomic data set)

Variable Number of data Minimum Maximum Mean


CuT 2,709 0.01% 5.42% 0.44%

CuS3 1,244 0.01% 5.86% 0.35%

CuS4 640 0.00% 2.39% 0.25%

Fe 1,082 0.60% 9.81% 1.90%

Mo 975 0.00% 0.93% 0.03%

Mn 727 0.01% 0.67% 0.06%

ClT 816 0.01% 0.45% 0.05%

ClS 401 0.01% 0.06% 0.02%

An interesting feature of this data set is the heterotopic sampling, insofar as some
variables (ClS, CuS4, Mn, ClT) are under-sampled with respect to the others (CuT, CuS3,
Fe, Mo). The correlation matrix between variables is indicated in Table 2. In general,
one observes weak correlations, except between total (CuT) and soluble (CuS3 and CuS4)
copper grades.


Table 2 Correlation matrix between grade data (Radomiro Tomic data set)

CuT CuS3 CuS4 FeT Mo Mn ClT ClS


CuT 1 0.72 0.82 -0.19 0.12 0.05 0.48 0.06
CuS3 0.72 1 0.018 -0.27 0.09 0.02 0.57 0.01
CuS4 0.82 0.01 1 -0.01 -0.03 -0.01 0.51 0.05
FeT -0.19 -0.27 -0.01 1 -0.15 0.16 -0.17 0.01
Mo 0.12 0.09 -0.03 -0.15 1 -0.07 0.05 -0.09
Mn 0.05 0.02 -0.01 0.16 -0.07 1 -0.01 0.01
ClT 0.48 0.57 0.51 -0.17 0.05 -0.01 1 0.18
ClS 0.06 0.01 0.05 0.01 -0.09 0.01 0.18 1

When implementing the co-simulation approaches described in the previous section,
we were able to simulate all the grade variables only with the first two approaches
(i.e., separate simulation and co-simulation under a Linear Model of Coregionalisation).
Indeed, the pca transformation is restricted to equally-sampled variables, which is why
some of the under-sampled variables (CuS4, ClT and ClS) had to be removed (too few data
were available on these variables).

Because all three simulation approaches are supposed to reproduce both the univariate
distribution (histogram) and the spatial variability (variogram) of each variable, we
will focus the comparison on the reproduction of the dependence relationships between
variables, specifically on the CuT-CuS3 pair. pca co-simulation clearly outperforms
the other two approaches, although none of the approaches reproduces the inequality
between total and soluble copper grades (Figure  ➊). This can be explained because the
separate simulation approach ignores the dependence between the variables, and because
the Linear Model of Coregionalisation may poorly fit the cross-variogram between total
and soluble copper grades due to the high number of other variables considered in this
model. In contrast, the dependence between total and soluble copper grades seems to be
better modelled by the pca transformation and back-transformation.

Figure 1 Scatter diagrams between soluble (CuS3, abscissa) and total (CuT, ordinate) copper grades. Top left:
original data (Rho = 0.72), top right: separate simulation (Rho = 0.12), bottom left: co-simulation (LMC)
(Rho = 0.43), bottom right: co-simulation (PCA) (Rho = 0.63).

second case study: lateritic nickel deposit


The second data set corresponds to blast hole data in a lateritic nickel deposit (Cerro
Matoso), located in Northern Colombia. The deposit can be divided into three main
geological domains. In the following, we will focus on one of these domains, in which
the variables of interest (alumina, chrome, iron, magnesium oxide, nickel and silica) are
equally sampled and highly correlated (Tables 3 and 4). A vertical cross-section showing
the distribution of blast hole samples is displayed in Figure ➋.
Table 3 Statistics on original grade data (Cerro Matoso data set)

Variable Number of data Minimum Maximum Mean

Al2O3 2,596 0.20% 13.20% 3.65%

Cr 2,596 0.22% 4.27% 1.34%

Fe 2,596 5.00% 58.40% 23.84%

MgO 2,596 0.20% 39.17% 9.72%

Ni 2,596 0.21% 6.57% 1.64%

SiO2 2,596 4.20% 71.05% 35.89%

Table 4 Correlation matrix between grade data (Cerro Matoso data set)

Al2O3 Cr Fe MgO Ni SiO2

Al2O3 1.00 0.87 0.93 -0.70 -0.07 -0.91

Cr 0.87 1.00 0.89 -0.75 0.02 -0.84

Fe 0.93 0.89 1.00 -0.81 -0.04 -0.93

MgO -0.70 -0.75 -0.81 1.00 -0.11 0.58

Ni -0.07 0.02 -0.04 -0.11 1.00 0.07

SiO2 -0.91 -0.84 -0.93 0.58 0.07 1.00

Figure 2 Cross-section showing the location of blast hole data and the delimitation of the geological
domain under study (UG2).

As in the first case study, we shall compare the ability of the simulation approaches to
reproduce the dependence between variables. Specifically, we will consider the pair
Fe-Al2O3 (correlation of 0.93). Again, it is seen that pca co-simulation outperforms the
other two approaches (Figure  ➌).


Figure 3 Scatter diagrams between alumina (abscissa) and iron (ordinate) grades. Top left: original data
(Rho = 0.93), top right: separate simulation (Rho = 0.61), bottom left: co-simulation (LMC) (Rho = 0.88),
bottom right: co-simulation (PCA) (Rho = 0.92).

concluding remarks
The approaches under consideration comply with the principles of stochastic simulation
in terms of generating realisations that reproduce the spatial variability of each grade
variable separately. However, the dependence relationships between variables are critical
when these variables are cross-correlated (for instance, consider a main product like
copper strongly correlated with a contaminant like arsenic), so the reproduction of spatial
variability alone is not enough.
According to the presented case studies, in which the dependence relationships are
rather complex, pca co-simulation turns out to better reproduce these relationships,
which are implicitly modelled by the pca transformation and back-transformation. In
contrast, the separate simulation ignores the dependence between variables and has to
be avoided as soon as this dependence is not negligible. Co-simulation under a linear
model of coregionalisation takes the correlations between variables into account, but the
model may poorly fit some of the cross-variograms when dealing with many variables.
An advantage of this last approach is the ability to handle heterotopic data sets, whereas
pca only works with data sets in which the variables are equally sampled. A further finding
is that none of the proposed approaches can reproduce inequality relationships, such as
that between total and soluble copper grades. In such a case, ad hoc techniques have to
be developed [8, 9] .
Future work includes the design of more versatile coregionalisation models in order to
better fit complex dependence relationships, e.g., minimum/maximum autocorrelation
factors [10, 11] or bilinear coregionalisation models [4] .

acknowledgements
This research has been funded by the Chilean Commission for Scientific and Technological
Research (conicyt) through fondecyt Project nº1090013. The authors acknowledge
Codelco Chile and Cerro Matoso S.A. for providing the data sets used in this study and
the support of the Advanced Laboratory for Geostatistical Supercomputing (alges) and of
the Advanced Mining Technology Center (amtc) at the Universidad de Chile.

references
Journel, A. G. (1974) Geostatistics for Conditional Simulation of Orebodies. Economic Geology 69(5),
pp. 673–687. [1]

Goovaerts, P. (1997) Geostatistics for Natural Resources Evaluation. Oxford University Press, New York,
p. 480. [2]

Chilès, J. P., & Delfiner, P. (1999) Geostatistics: Modeling Spatial Uncertainty. Wiley, New York, p. 695. [3]

Wackernagel, H. (2003) Multivariate Geostatistics: an Introduction with Applications. Springer, Berlin,
p. 387. [4]

Goulard, M. & Voltz, M. (1992) Linear Coregionalization Model: Tools for Estimation and Choice of Cross-
variogram Matrix. Mathematical Geology 24(3), pp. 269–286. [5]

Emery, X. (2008) A Turning Bands Program for Conditional Co-simulation of Cross-correlated Gaussian
Random Fields. Computers & Geosciences 34(12), pp. 1850–1862. [6]

Goovaerts, P. (1993) Spatial Orthogonality of the Principal Components Computed from Coregionalized
Variables. Mathematical Geology 25(3), pp. 281–302. [7]

Leuangthong, O. & Deutsch, C. V. (2003) Stepwise Conditional Transformation for Simulation of Multiple
Variables. Mathematical Geology 35(2), pp. 155–173. [8]

Emery, X., Carrasco, P. & Ortiz, J. (2004) Geostatistical Modelling of Solubility Ratio in an Oxide Copper
Deposit. In: Magri, E.J., Ortiz, J.M., Knights, P., Vera, M., Henríquez, F., and Barahona, C., eds.,
First International Conference on Mining Innovation MININ 2004. Gecamin Ltda, Santiago,
pp. 226–236. [9]

Desbarats, A. J. & Dimitrakopoulos, R. (2000) Geostatistical Simulation of Regionalized Pore-size
Distributions Using Min/Max Autocorrelation Factors. Mathematical Geology 32(8), pp. 919–942. [10]

Vargas-Guzmán, J. A. & Dimitrakopoulos, R. (2003) Computational Properties of Min/Max Autocorrelation
Factors. Computers & Geosciences 29(6), pp. 715–723. [11]

Implicit 3-D Modelling – A New Era
in Geological Evaluation

abstract
Simon mortimer
Atticus Consulting, Peru

The necessity of generating 3-d geological models is now well
understood within the resource industry and is an integral part
of the resource evaluation process.
The traditional methodology applied in the creation of 3-d
models is based on basic 3-d cad technology which has not changed
since its integration in mining back in the early 80's. This classical
approach to defining geological models involves time consuming
manual digitisation of lines drawn as a series of 2-d parallel
sections and then connected in the third dimension via a complex
array of ‘tie’ lines. This ‘dot to dot’ line construction methodology is
an inefficient use of a professional geologist's time and can detract
from the more important task of interpreting geology.
The past few years have seen the emergence of implicit
modelling technologies that are rapidly becoming a practical
alternative to the traditional 3-d cad methodology. Implicit models
are defined by mathematical functions and geological rules that
take into account structural and stratigraphic relationships. Using
technologies that are currently available, implicit models can be
generated extremely quickly, with time savings of between 10 and
100 fold when compared with traditional modelling methods. This
ability to construct complex geological models in a very short time
frame can enable the geologist to evaluate multiple scenarios for
complex geological situations  — not just a single model case as
it is commonly realised. Another advantage with this method
is that the models can be almost instantly updated as new data
becomes available, removing the need to tediously rebuild
wireframe models, thus allowing the continual development of
geological models throughout the life of the project or a single
drill campaign. This paper discusses in more detail the practical
application of implicit modelling, using Norsemont Mining's
Constancia porphyry skarn project as a case study.

introduction
The creation of 3-d geological models to better understand geology and constrain resource
estimates is well known and is now routinely performed by exploration and mine
geologists. The representation of ore body boundaries and geological contacts are the basis
of resource and reserve models. Attaining the accurate, validated and geologically correct
models that are required by the industry is typically a time-consuming but essential task,
as variations in the model can dramatically influence downstream mining
processes and ultimately impact the mine economics. The need to generate complex
and accurate geological models often within restricted time frames is a key issue facing
resource geologists.
The traditional method of generating 3-d geological models employed by industry
standard general mining software packages relies heavily on manual digitisation and is
therefore very time consuming to create, update or modify. In this paper we look at how
the implicit modelling methodology can reduce the time to complete geological models,
dynamically update the model and improve the interpretive process.

traditional method of modelling 3-d geology


3-d geological models have traditionally been created using 3-d cad technology, where
the geometries of the triangles are definite, built from a series of points fixed in 3-d
space which are interconnected by straight lines, hence termed explicit modelling. The
explicit modelling method requires digitisation of points, strings and polygons on a
series of parallel 2-d sections; these individual sections are then tied together using
digitised tie lines and then triangulated in order to produce the 3-d solid geometry. Built
on technology that hasn't really changed since its inception in the mining industry
back in the 80's, this methodology is inefficient, inflexible and often complicated by
construction issues.
The interpretations are required to be built on parallel series of sections. The geologist
must first select the best orientation of the sections, or indeed the preferred orientation in
which the model will be built. Typically this should be perpendicular to the direction of
the mineralisation, however this is not always possible, as the best direction to interpret
can often vary throughout the deposit. Building the 2-d geological interpretation
should be carried out in the same plane as the drill string intercepts from which the
interpretation is developed, however this is not always possible as holes are often drilled
in varying directions out of the planes of the parallel sections.
The creation of the interpretation in each of the 2-d sections is a time consuming task
and one required to be completed by an experienced modelling geologist. The repetition of
drawing similar interpretation lines throughout the series of parallel sections, increasing
the frequency of sections in areas of structural complexity, is not the most effective use of
an experienced geologist's time. Adding to the inefficiencies of the manual digitisation of
the 2-d sections is the slow and often tedious process of adding all the tie lines necessary
to convert the 2-d polygons to a valid 3-d solid.
Due to the amount of time required to build and /or modify an existing 3-d geological
model, there is rarely an opportunity to model alternate interpretations and compare
resource evaluations based on the alternative models. Furthermore, it is uncommon
for an operating mine or advanced project to radically change their working model
as new drill hole data becomes available because of the time consuming nature of
the modelling process. When changes are made to the models, it is always after the
completion of a drill campaign, not as soon as the data from each drill hole becomes

available. In the traditional methodology, the time required to make the changes to
the model is greater than the time taken to drill and process the data extracted from
that hole.
Using the traditional 3-d cad-based methodology, geological shapes of any geometry
can, time permitting, be manually digitised. However, the limitations of this
methodology are:

• The resulting models are heavily dependent on the interpretations of the individual
geologist. Therefore the model cannot be easily replicated by other geologists, which
can add an unknown risk to downstream mining procedures.

• The manual digitisation required in the generation of the geological model makes the
traditional modelling process extremely inefficient.

• Modifications and additions to an existing model involve time consuming


manipulation of digitised lines and points, hence updates of the model are seldom
carried out on a continual basis as new data is available.

• The resulting model is basically a 2-d interpretation extrapolated into a 3-d model.

The 3-d cad technology on which the traditional modelling methodology is based is the
cause of all the limitations mentioned; clearly basing the modelling methodology on
an alternative technology would be desirable. In this paper we review the use of implicit
modelling techniques as a viable alternative to the traditional method.

Implicit modelling methods


An implicit model is defined by a mathematical function throughout 3-d space; the
surfaces are implied to exist with their definition being given by the function. Therefore
the surfaces that are modelled are not triangulated directly from points in space,
but rather are a mesh of defined resolution that approximates an isosurface within a
continuous volume function.
A simple example of a volume function that defines geometry, and hence implies
the existence of a surface/solid, is that of a unit-radius sphere. The equation
x² + y² + z² − 1 = 0, or written in the form f(x, y, z) = c where c is a constant, describes
the infinite number of (x, y, z) coordinates that lie on the surface of the sphere. To
determine the position of a point relative to the sphere, its (x, y, z) coordinates are entered
into the equation and the scalar value returned indicates whether the point is inside
(< 0), on (= 0) or outside (> 0) the sphere surface [1].
To implicitly model a geological contact, a volume function with an isosurface that
includes the contact points identified in drill strings must be created. This volume
function is defined by the function values at the selected points and interpolation
through the rest of space. The key ingredient in the spatial definition of the implicit
model is the interpolation method used to create the volume function. Here, we will
utilise a fast rbf (Radial Basis Function) as the base interpolation method to construct
the volume function.
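A toy sketch of this interpolation idea is given below, using scipy's RBFInterpolator in place of the fast rbf mentioned above (which is a different, commercial implementation). The 'contact' data are synthetic points labelled with a signed value relative to a unit sphere, so all coordinates and parameters are invented; the zero isosurface of the interpolated volume function is the implied contact.

import numpy as np
from scipy.interpolate import RBFInterpolator

# hypothetical contact data: points with a signed value (negative inside the unit,
# positive outside) defining the volume function; here the "contact" is a unit sphere
rng = np.random.default_rng(1)
pts = rng.uniform(-1.5, 1.5, size=(200, 3))
signed = np.sign(np.linalg.norm(pts, axis=1) - 1.0)

volume_function = RBFInterpolator(pts, signed, kernel="thin_plate_spline", smoothing=0.0)

# evaluate the interpolant on a grid; the implied surface is the zero isosurface
grid = np.stack(np.meshgrid(*([np.linspace(-1.5, 1.5, 40)] * 3), indexing="ij"), axis=-1)
values = volume_function(grid.reshape(-1, 3)).reshape(grid.shape[:3])
inside = values < 0.0   # classify grid nodes against the implied surface
print(inside.sum(), "of", inside.size, "grid nodes fall inside the modelled solid")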
Because the implicit modelling method is based on interpolation, the elements
that caused issues in the traditional modelling method no longer arise.

• The implicit model is built using full 3-d interpolation, not as an extrapolation of a
2-d interpretation into the third dimension.


• The implicit model can be constructed with little or no manual digitisation, which
means that the model can be built in a much shorter time frame. It also means that
several different variations on the model can be realised: the model can be updated
rapidly as new data becomes available.

• The final model is not so dependent on the personal interpretation of the modelling
geologist.

Practical applications of the implicit modelling method


Geological modelling of the Constancia Porphyry Skarn deposit, Cusco, Peru

The Constancia Porphyry Skarn deposit, owned by Norsemont Mining, is an exploration
project currently at feasibility level. As in all advancing projects, the geological modelling
process is required to be completed within a tight schedule. Under certain time constraints
and faced with generating lithology, alteration, mineralogy and grade models from a
100,000 m drilling database, all within a three week period, it was decided that an implicit
modelling method would be used to complete the task.

Lithology modelling

The geology of the Constancia deposit comprises a sequence of monzonitic porphyries
and porphyritic dykes intruded into a Cretaceous limestone and quartzite sequence. The
principal objectives of this particular phase of modelling the lithology were to gain a
greater understanding on the controls of mineralisation, to define the contacts of the
different porphyry intrusions, to define the skarn domain and to extract the late intrusive
dykes as barren domains.
The implicit model is based upon the volumes extracted from the contact between
two lithologies; therefore, understanding the stratigraphy or the relative ages of the
different rock types is key to building up the geological model. The key contacts to
define the model are those of the limestone—skarn—monzonite porphyry which are
very irregular in nature and very difficult to assess in sections. Figure  ➊ is a sectioned
3-d model of the skarn and limestone. It shows how the skarn solid has a very irregular
distribution controlled by a combination of the position of the limestone, the intrusion of
the mineralising porphyry and fluid flow along a series of structures.

Figure 1 Definition of interconnecting complex geometries without triangle errors.



The construction of the skarn solid takes into account the structural trends that control
its genesis, as it has been defined using an anisotropic interpolant, and the mesh definition
has generated its irregular shape with no triangle errors. The construction of the limestone
blocks has been limited to an ‘outside skarn’ domain, fitting exactly around the skarn solid,
and any changes to the skarn model are dynamically propagated to the limestone.
The modelling of the late stage dykes was carried out using a semi-automated dyke
(or vein) building process. This process uses the implicit surface generation to create the
hanging wall and footwall surfaces from interpolations between contact points, and then
connects the two surfaces to form a solid. In building these solids, all that is required is to
select the intervals that correspond to a particular dyke. Figure  ➋ is a sectioned 3-d view
looking down on the two separate late stage dykes, showing the selected dyke intervals
with their contact points for the hanging wall and footwall.

Figure 2 Definition of dyke solids from selected drillhole intervals.

Mineralogy modelling

Modelling the copper mineralogy within the porphyry is essential to determine the
copper domains required in the metallogenic evaluation of the deposit. The mineral
zones—lixiviated, oxide, supergene, mixed and hypogene—are based primarily upon a
mathematical formula using the analyses of the sequential copper assay results with
secondary input from geological logging. The objective of generating five mutually exclusive
copper domains was easily achieved using the implicit modelling method.
The model is built up of four different mineralogical boundaries, with each solid being
constructed from rules relating to these surfaces, e.g. the supergene solid is defined by
the volume below base of lixiviated, above base of supergene. The surfaces were all generated
directly from the drill string data with additional points being added manually in areas
of little information. The input of new drill data and minor changes to the interpretation
on any of the contact surfaces resulted in a dynamic update of the model, adapting the
resultant solids to fit to the new data. Figure  ➌ is a sectional 3-d view of the mineral
zone solids (lixiviated, oxide, supergene, mixed and hypogene) and the drill strings.


Figure 3 Copper mineralogy model.

Alteration modelling

Modelling the alteration suites within the porphyry plays an important part in
understanding the genesis of the deposit and assists in the evaluation of downstream
mining, milling and metallurgical processes. The construction of a series of alteration
models is traditionally an extremely time consuming and difficult process due to the
variability of the interpretation of the drill data and the different controlling factors that
create the alteration. The objective in this case was to determine the porphyry centres by
modelling the intensities of the propylitic and potassic alteration zones.
The data input into the model was an alteration intensity (e.g. potassic-2, phyllic-1)
interpreted from the altered minerals observed in the drill core. Each alteration suite was
modelled separately; argillic, potassic, phyllic and propylitic alterations were modelled
directly from the numeric intensity data for the logged drill holes. The values were then
contoured using an isotropic interpolant and limited to a domain to generate solids for
low, intermediate and highly altered rocks. This process of generating the first iteration
of the alteration models took only a few hours to complete and was sufficient
to define the porphyry centres at depth as well as to conclude an important exploration
hypothesis. Figure  ➍ shows the isotropic model for potassic alteration.

Figure 4 Potassic alteration intensity defining three porphyry centres.



conclusions
In recent years, advances in computing technology have driven the development of
implicit modelling software in the mining industry. Geological modelling tasks that were
only thought possible by slow hand digitisation can now be completed in a fraction of the
time and often with improved results using implicit modelling methods. The process of
generating models directly from drill data is now becoming a practicality as the examples
presented herein demonstrate.

references
Cowen, E. J., Beatson, R. K., Ross, H. J., Fright, W. R., McLennan, T. J., Evans, T. R., Carr, J. C., Lane,
R. G., Bright, D. V., Gillman, A. J., Oshurst, P. A. & Titley, M. (2003) Practical Implicit Geological
Modelling. 5th International Mining Geology Conference Proceedings, Bendigo, Victoria. [1]

Modelling Equivalent Grades in
Polymetallic Deposits

abstract
Carlos corral
Xavier emery
Universidad de Chile

Quantifying resources and reserves in ore deposits with more than
one mineral species of interest is an arduous task when complex
spatial dependency relationships between these species exist.
The multivariate modelling problem can be transformed into a
univariate problem by introducing an ‘equivalent grade’, defined
as a function of the mineral grades of interest. Directly calculating
the equivalent grade at the available data locations considerably
decreases the modelling effort and the cpu time to construct a
block model, but it lacks flexibility because the parameters that
define the equivalent grade cannot be modified any more.
Two case studies, one corresponding to a gold-silver deposit and
the other to a porphyry copper-silver deposit, are considered. In
the first case, the equivalent grade is a linear combination of gold
and silver grades, while in the second case it is a more complex
function of copper, silver, arsenic and antimony grades.
Three geostatistical approaches are compared for modelling
the equivalent grade in the deposit: kriging and simulation of
each grade variable separately, then calculation of the equivalent
grade; co-kriging and co-simulation of all the grade variables,
then calculation of the equivalent grade; and direct calculation
of the equivalent grade at the data locations, followed by kriging
and simulation. Although the three approaches yield similar
results in the first case study (gold-silver deposit), considerable
differences are observed in the second case study. Two reasons are
identified for such differences. First, some variables are under-
sampled. Second, the equivalent grade is not a linear combination
of the original grade variables, as it involves quadratic terms and
indicator functions. In this case, the approach based on a complete
multivariate modelling and co-simulation is deemed preferable
to the other ones.

introduction
The economic evaluation of a selective mining unit in a polymetallic deposit is a
complex problem because the presence of several elements of interest gives somewhat
heterogeneous information. A common technique to address this problem is the use of an
equivalent grade, defined as a combination (in general, a linear one) of the grades of the
elements of interest with parameters that depend on economic factors [1, 2] . This way,
the multivariate problem is transformed into a univariate one. The objective of this
work is to compare, through two case studies, several geostatistical techniques to model
equivalent grades in ore deposits.

methodology
Three main approaches are put to the test, which are explained next.

First approach: separate modelling of each grade variable


This approach consists in viewing the grade variables as if they were independent, and
performing ordinary block kriging or conditional simulation of each variable separately.
The equivalent grade is then calculated from the kriged or simulated grades. The steps
for simulation are [3, 4] :

• For each variable:


––Cell declustering of the original data
––Normal scores transformation
––Validation of bivariate normal distribution
––Variogram analysis of normal scores data
––Non-conditional simulation by turning bands (100 realisations)
––Conditioning to normal scores data by simple kriging
––Back-transformation to original grade variable
––Change of support, from the sample support to the selective mining units

• Calculation of block-support equivalent grade for each realisation


• Calculation of the expected equivalent grade by averaging the realisations.

Second approach: joint modelling of grade variables


This approach is similar to the first one, except that co-kriging or co-simulation is used
in order to model all the variables simultaneously. This requires a coregionalisation
model, i.e. a model for the simple and cross variograms of the grade data (for co-kriging)
or of their normal scores transforms (for co-simulation). Here, we used the linear model
of coregionalisation [5, 6] for its simplicity and versatility. Many other coregionalisation
models have been proposed in the literature and could have been used for co-kriging
or co-simulation, such as models based on spectral representations [7, 8], on square
roots of simple covariance functions [9] , or on the decomposition of the variables into
minimum/maximum autocorrelation factors [10, 11] .

Third approach: direct modelling of equivalent grade


This is a shortcut approach in which the equivalent grade is calculated at the data
locations. Then it is interpolated by block kriging or simulation.

first case study: gold-silver deposit


The first case study corresponds to an auriferous-argentiferous vein-type deposit. The
data set consists of exploration drill hole samples, with assays of the gold (main product)
and silver (by -product) grades, as well as the codification of the data (1 = inside the vein,
0 = outside the vein). The equivalent gold grade (in g/t) is defined as

$\mathrm{Au_{eq}} = \mathrm{Au} + \dfrac{\mathrm{Ag}}{69.5} \qquad (1)$

in which both the gold (Au) and silver (Ag) grades are expressed in g/t. The weighting
factor (1/69.5) corresponds to a simplification of the price relationship between gold and
silver during the first semester of 2009.
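As a hypothetical numerical check of the definition above (the sample values are invented), a sample grading 2 g/t Au and 139 g/t Ag would have an equivalent gold grade of 2 + 139/69.5 = 4 g/t.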
The spatial location of the samples is displayed in Figure  ➊, while basic statistics of
gold, silver and equivalent gold grades are indicated in Table 1. It is observed that gold and
silver grades are equally sampled (no missing data) and have a significant correlation (0.58).

Figure 1 Maps showing the sample locations (first case study).

Table 1 Basic statistics of the available drill hole data (first case study)

Variable Number of Data Minimum Maximum Mean Standard Deviation


Gold grade 3,712 0.00 g/t 1,190.0 g/t 77.61 g/t 45.01 g/t
Silver grade 3,712 0.00 g/t 8,788.0 g/t 159.9 g/t 597.3 g/t
Equivalent gold grade 3,712 0.00 g/t 1,252.9 g/t 10.03 g/t 51.50 g/t

All the sample variograms for (co-)kriging and for (co-)simulation have been calculated
along the horizontal and vertical directions and fitted with nugget effects and nested
spherical models with ranges of between 8 m and 75 m. An example of fitting is indicated
in Figure  ➋.


Figure 2 Simple variograms of normal scores data for gold (left) and silver (middle), and
­c ross variogram (right) along the main anisotropy directions (first case study).

second case study: copper-silver deposit


The second case study corresponds to a cupriferous deposit. The data set consists of copper
(main product), silver (by-product), arsenic and antimony (contaminants) grades assayed
on exploration drill holes. The copper grade is strongly correlated with the silver, arsenic
and antimony grades. Also, arsenic and antimony grades are significantly correlated,
and weakly correlated with silver grades (Table 2).

Table 2 Correlation matrix between grade variables (second case study)

Variable Copper Silver Arsenic Antimony


Copper 1 0.82 0.70 0.66
Silver 0.82 1 0.44 0.33
Arsenic 0.70 0.44 1 0.68
Antimony 0.66 0.33 0.68 1

The spatial location of the samples is shown in Figure  ➌.

Figure 3 Maps showing the sample locations (second case study).


CHAPTER V 341

The equivalent copper grade (in %) is defined by the following formula:

(2)

with (3)

In the previous formulae, the copper grade (Cu) is expressed in percentages, while the
silver (Ag), arsenic (As), antimony (Sb) and equivalent arsenic (Aseq) grades are expressed
in g/t. The parameters in the first formula are based on a price relationship between
copper and silver during the first semester of 2009 and on a process cost relationship for
arsenic. Basic statistics on the data grades are indicated in Table 3 . Note that antimony
grades are highly under-sampled with respect to the other three variables, a situation
known as heterotopic sampling in multivariate geostatistics [6] .
The sample variograms have been calculated along the East-West, North-South and
vertical directions and fitted with nugget effects and nested spherical and exponential
models with ranges of between 10 m and 75 m. An example of fitting is indicated in
Figures  ➍ and ➎.

Figure 4 Simple variograms of normal scores data along the main anisotropy directions: copper (top
left), silver (top right), arsenic (bottom left) and antimony (bottom right) (second case study).


Table 3 Basic statistics of the available drill hole data (second case study)

Variable Number of Data Minimum Maximum Mean Standard Deviation


Copper grade 5,754 0.01% 29.90% 1.66% 2.40%
Silver grade 5,594 0.40 g/t 855 g/t 36.87 g/t 58.16 g/t
Arsenic grade 5,726 5.00 g/t 126,100 g/t 2,000.4 g/t 4,501.3 g/t
Antimony grade 2,160 1.00 g/t 8,240 g/t 220.3 g/t 557.8 g/t
Equivalent copper grade 2,148 0.00% 19.61% 0.86% 1.28%

Figure 5 Cross variograms of normal scores data along the main anisotropy
directions. Top: ­copper -silver, copper -arsenic, silver -arsenic. Bottom:
copper -antimony, silver-antimony, arsenic -antimony (second case study).

results and discussion


• In the first case study, the three proposed approaches produce similar results for
both (co-)kriging and (co-)simulation: the correlation coefficients between estimates
are greater than 0.98 for kriging approaches, and greater than 0.88 for simulation
approaches (Tables 4 and 5). This can be explained by the following reasons:

• All the variables are equally sampled (isotopic sampling). Therefore the direct calculation
of the equivalent gold grade at the data locations (third approach) does not lose
information. Furthermore, the simple and cross variograms of gold and silver grades
have similar shapes (Figure  ➋) so that co-kriging and the co-simulation average are
not very different from separate kriging and from the separate simulation average
[6, 12] . Direct kriging and direct simulation of the equivalent grade are theoretically
less accurate [13] , but the loss appears to be small in this case.

• The equivalent grade is defined as a linear combination of the original grades, therefore
it is additive and change of support with kriging can be considered in each approach.

Larger differences are observed when comparing (co-)kriging and (co-)simulation.
The correlation coefficients between estimates are between 0.5 and 0.6 (Table 5). These
differences are mainly due to the long-tailed distributions of the gold and silver data
and to the presence of a few extremely high data values. Linear prediction by kriging is
sensitive to such extreme values, while simulation is more robust as it works on normal
scores transforms [14] . Although each realisation reproduces the spatial variability
of grade, their average yields a smoother model for the equivalent gold grade (smaller
maximum value and standard deviation), indicating that the influence of the extremely
high data values is attenuated with respect to kriging approaches (Table 4).

Table 4 First case study - basic statistics of estimated equivalent gold grades

Approach Minimum Maximum Mean Standard Deviation


Separate kriging 0.00 g/t 316.9 g/t 9.97 g/t 17.6 g/t
Co-kriging 0.00 g/t 316.7 g/t 9.46 g/t 17.9 g/t
Direct kriging 0.00 g/t 315.0 g/t 10.09 g/t 17.2 g/t
Separate simulation 0.05 g/t 186.5 g/t 10.44 g/t 8.31 g/t
Co-simulation 0.04 g/t 197.6 g/t 10.49 g/t 7.90 g/t
Direct simulation 0.03 g/t 193.8 g/t 10.61 g/t 8.28 g/t

The data in Table 4 refer to 6,617 selective mining units with size 5 m × 5 m × 5 m.

Table 5 Correlation between estimates of equivalent gold grades on selective mining units (first case study)

 Separate kriging  Co-kriging  Direct kriging  Separate simulation  Co-simulation  Direct simulation
Separate kriging 1.00 0.99 1.00 0.56 0.53 0.59
Co-kriging 0.99 1.00 0.98 0.56 0.54 0.59
Direct kriging 1.00 0.98 1.00 0.55 0.52 0.57
Separate simulation 0.56 0.56 0.55 1.00 0.89 0.89
Co-simulation 0.53 0.54 0.52 0.89 1.00 0.88
Direct simulation 0.59 0.59 0.57 0.89 0.88 1.00

In contrast, in the second case study, all the approaches produce quite different results
(Tables 6 and 7). The reasons are the following:

• Because of the heterotopic sampling, the equivalent copper grade cannot be calculated
at all the data locations. Accordingly, the third approach loses information with respect
to the other two (Table 3).

• The equivalent copper grade is not defined as a linear combination of the original variables but as a function involving squares and cut-offs (Equation (2)). In such a case, the calculation of equivalent grades after separate kriging (approach 1) or co-kriging (approach 2) is biased because of the smoothing property of kriging and co-kriging (the distribution of the estimated grades above a cut-off is not the same as the distribution of the true grades) [4, 5]. The interpolation by direct kriging or direct simulation (approach 3) is biased as well, but for a different reason: the equivalent grade is not an additive variable, therefore the change of support (from samples to selective mining units) does not amount to averaging the point-support equivalent grades.

• Separate simulation (approach 1) is also likely to produce biased results because it ignores the spatial dependences and cross-correlations between the grade variables.


• The only approach that theoretically produces unbiased estimates of the equivalent
grade is co-simulation (approach 2), which is the most demanding in terms of
modelling effort and cpu calculations, insofar as it requires fitting a coregionalisation
model and constructing multiple realisations of the grade variables of interest.

Table 6 Second case study - basic statistics of estimated equivalent copper grades

Approach Minimum Maximum Mean Standard Deviation
Separate kriging 0.00% 10.07% 0.91% 0.80%
Co-kriging 0.00% 10.45% 0.92% 0.80%
Direct kriging 0.00% 9.85% 0.83% 0.55%
Separate simulation 0.00% 5.27% 1.29% 0.58%
Co-simulation 0.00% 4.46% 1.05% 0.60%
Direct simulation 0.14% 4.56% 0.94% 0.25%

Table 6 refers to 24,936 selective mining units with size 10 m × 5 m × 5 m.

Table 7 Correlation between estimates of equivalent copper grades on selective mining units (second case study)

 Separate kriging  Co-kriging  Direct kriging  Separate simulation  Co-simulation  Direct simulation
Separate kriging 1.00 0.97 0.49 0.66 0.74 0.46
Co-kriging 0.97 1.00 0.49 0.67 0.74 0.46
Direct kriging 0.49 0.49 1.00 0.53 0.54 0.52
Separate simulation 0.66 0.67 0.53 1.00 0.90 0.52
Co-simulation 0.74 0.74 0.54 0.90 1.00 0.53
Direct simulation 0.46 0.46 0.52 0.52 0.53 1.00

conclusions
The equivalent grade is a useful tool when the economic factors are well defined for each
variable, as it allows evaluating a polymetallic deposit without analysing each one of the
variables of interest. The direct calculation and interpolation (by kriging or simulation)
of the equivalent grade is the simplest and fastest approach, but it is accurate only when
the sampling is isotopic and when the equivalent grade is a linear combination of the
original variables. Otherwise, it may produce biased results or lose information in the
case of heterotopic sampling. Also, this approach is not convenient when information of
all the elements is required (e.g., for mine planning and blending) or when the economic
factors defining the equivalent grade are not constant in time.
When the equivalent grade is not a linear function (for instance, when it involves
quadratic terms or cut-off grades), then only co-simulation of the grade variables produces
unbiased predictions. Even if this approach is more tedious in applications, it is likely
to add substantial value to the overall mining project. Separate simulation ignores
the dependence between variables. Separate kriging and co-kriging smooth the grade
prediction and should not be used to calculate a non-linear equivalent grade. Direct
kriging and simulation of the equivalent grade should not be used to calculate a block-
support equivalent grade as this variable is not additive.

acknowledgements
This research was funded by the Chilean Commission for Scientific and Technological
Research (conicyt) through fondecyt Project nº1090013. The authors acknowledge the
support of the Advanced Laboratory for Geostatistical Supercomputing (alges) and of the
Advanced Mining Technology Center at the Universidad de Chile.

references
David, M. (1988) Handbook of Applied Advanced Geostatistical Ore Reserve Estimation. Elsevier, Amsterdam, p. 232. [1]

Rendu, J. M. (2008) An Introduction to Cut-off Grade Estimation. Society for Mining, Metallurgy, and Exploration, Inc., Littleton, Colorado, pp. 37–42. [2]

Journel, A. G., & Huijbregts, C. J. (1978) Mining Geostatistics. Academic Press, London, p. 600. [3]

Journel, A. G. (1974) Geostatistics for Conditional Simulation of Orebodies. Economic Geology 69(5),
pp. 673–678. [4]

Chilès, J. P., & Delfiner, P. (1999) Geostatistics: Modeling Spatial Uncertainty. Wiley, New York, p. 695. [5]

Wackernagel, H. (2003) Multivariate Geostatistics: an Introduction with Applications, 3rd edn. Springer,
Berlin, p. 387. [6]

Pardo-Igúzquiza, E., & Chica-Olmo, M. (1994) Spectral Simulation of Multivariable Stationary Random Functions Using Covariance Fourier Transforms. Mathematical Geology 26(3), pp. 277–299. [7]

Gutjahr, A., Bullard, B., & Hatch, S. (1997) General Joint Conditional Simulation Using a Fast Fourier
Transform Method. Mathematical Geology 29(3), pp. 361–389. [8]

Oliver, D. S. (2003) Gaussian Cosimulation: Modelling of the Cross Covariance. Mathematical Geology
35(6), pp. 681–698. [9]

Desbarats, A., & Dimitrakopoulos, R. (2000) Geostatistical Simulation of Regionalized Pore-Size Distributions Using Min/Max Autocorrelation Factors. Mathematical Geology 32(8), pp. 919–942. [10]

Boucher, A., & Dimitrakopoulos, R. (2009) Block Simulation of Multiple Correlated Variables. Mathematical Geosciences 41(2), pp. 215–237. [11]

Subramanyam, A., & Pandalai, H. S. (2004) On the Equivalence of the Co-Kriging and Kriging Systems.
Mathematical Geology 36(4), pp. 507–523. [12]

Myers, D. (1983) Estimation of Linear Combinations and Co-Kriging. Mathematical Geology 15(5), pp. 633–637. [13]

Armstrong, M., & Boufassa, A. (1988) Comparing the Robustness of Ordinary Kriging and Lognormal Kriging: Outlier Resistance. Mathematical Geology 20(4), pp. 447–457. [14]

Evaluating Mineral Resources
in a Narrow Vein-Type Deposit

abstract
Rodrigo zúñiga
Xavier emery
Universidad de Chile

Currently, kriging and co-kriging are the most widespread geostatistical methods for quantifying mineral resources in ore deposits. Quite often, especially in Chile, these methods are applied to disseminated massive mineralisations, such as porphyry copper deposits, and so far there is little experience with vein-type deposits. This work aims at applying geostatistical estimation methods to a narrow gold-silver vein deposit. The available data consist of exploration drill hole samples in which the gold and silver grades have been assayed. Two approaches are tested: the first one consists of a direct modelling of the geometry of the vein (via indicator kriging) followed by a modelling of the gold and silver grade distributions within the vein. The second one is an indirect approach in which the variables of interest are the vein thickness and the metal accumulations measured along the drill holes. The difficulties of each approach are highlighted, in particular regarding the lack of statistical robustness caused by the skewed grade distributions and outlier data, the variogram analysis and geometrical issues. The applicability of the models and methods is discussed in order to provide the mining industry with guidelines for evaluating complex vein-type mineralisations.

introduction
In the exploration stage, the mineral resources in a vein-type deposit can be roughly
estimated by means of geometrical methods. Basically, these methods aim at projecting
the available drill hole data according to the vein orientation and at defining the volume of influence of each data point. The main limitation of such methods is the lack of description of the spatial distribution and continuity of mineral grades.
The use of geostatistical methods overcomes this limitation but faces several difficulties:

• The long-tailed grade distributions and the presence of extremely high grades are usually a serious issue for variogram analysis, since outlying data provoke a lack of robustness in sample variograms. A partial solution to this consists in capping high grades

• The grades are often poorly structured (short ranges of correlation, high nugget effect)
• The narrowness of the vein further complicates the calculation of variograms in the direction perpendicular to the vein.

In the context of exploration sampling of a vein-type deposit, it is generally accepted


that diamond drill holes are enough to characterise the geological continuity of the
vein, but that care should be taken when studying the spatial distribution of mineral
grades, because of the presence of a few extreme and very localised grades. To study the
distribution of grades, it is recommended to use data from underground excavations [1]
and to define the measured and indicated resources on the basis of such data and not
only of diamond drill hole data [2]. In addition, according to the jorc code, one should
have an adequate quality control of the sampling and estimation processes, so that
geological and grade continuities are characterised suitably, especially near the boundary
between indicated and inferred resources [3, 4]. Also, it is good practice to study the grade
distribution in relation with the geological controls, especially structures and associated
geological domains [5].

methodology
There are two main approaches to construct a resources model in a vein-type deposit:
the direct and the indirect approach.

Direct approach
This approach consists of two steps: first, a modelling of the vein geometry; second, a
modelling of the grade distribution within the vein. Regarding the first step, it is necessary
to accurately estimate the orientation and the dimensions of the vein, keeping an adequate
geological control to avoid misclassifications between vein and host rock [6]. Alternatively, the vein geometry can be defined by means of a cut-off criterion, e.g., by using indicator kriging or indicator simulation [7].
Some authors argue that the direct approach is preferable when there is a significant correlation between the vein thickness and the grades and when point-support grades have to be estimated [8], but that its advantages vanish when estimating block grades [9, 10].

Indirect approach
This approach has been widely applied to narrow vein-type deposits. It has been
developed as an alternative to the geometrical methods mentioned in the introduction. 

The objective is to model the resources of the mineralised vein in two dimensions, by
using the thickness and the accumulation as the variables of interest:

• The thickness is measured in the direction perpendicular to the plane that defines the
(local) orientation of the vein.

• The accumulation is the product between the thickness and the mean grade within
this thickness.

From a methodological point of view, it is more convenient to work with the accumulation
instead of the mean grade, as the former is an additive variable (i.e., it can be averaged
arithmetically on a block support), whereas the latter is not. Furthermore, the
accumulation has a smoother distribution than the mean grade, because it is defined
on a greater support.
The indirect method is suited to the evaluation of narrow vein-type deposits [11]. However, when the grade distribution is not too skewed and its variogram is moderately structured, the direct approach may lead to more accurate estimates than the indirect approach [12].
Some applications of the indirect approach use indicator kriging in order to separate the deposit into an extremely high-grade zone and the rest of the deposit [13]. Other applications [14, 15] introduce the thicknesses and accumulations associated with multiple cut-off grades, allowing the use of specific multivariate coregionalisation models between these variables. In any case, it is suggested to keep an adequate geological control and not to blindly trust mathematical or computational methods [16, 17].

case study
Presentation of the data
A dataset from an exploration diamond drill hole campaign is available, consisting of 24,729
samples with a length of 0.5 m. Each sample has been assayed for gold and silver grades
and is classified according to whether it belongs to the vein (code 1) or not (code 0); such
a classification has been made on the basis of geological criteria. Table 1 presents the
basic statistics of the overall dataset and of the subset of data belonging to the vein.

Table 1 Basic statistics of the available drill hole data

All the data Data within the vein


Gold grade Silver grade Gold grade Silver grade
N° data 24,729 24,729 1,747 1,747
Minimum [g/t] 0.00 0.00 0.17 2.80
Maximum [g/t] 1,819.70 11,663.10 1,819.70 11,663.10
Mean [g/t] 4.03 64.43 50.93 731.42
Variance [(g/t)²] 1,063.68 102,028.56 12,513.3 927,512.0
Correlation 0.70 - 0.64 -

One of the crucial aspects in the evaluation of the resources is to characterise the
geometry of the mineralised vein, in particular, its orientation. In a projection onto the
horizontal plane (Figure  ➊), one observes a break in the azimuth of this orientation,
which allows us to distinguish a northern sector and a southern sector. In each sector, the
vein can be approximated by a sub - vertical plane (Table 2).


Table 2 Principal plane of the vein

Sector Azimuth Plunge


Northern 13.65° 83.00°
Southern -4.00° 83.00°

Figure 1 Plan view of the data located within the vein, showing the two sectors under consideration.

Application of the direct approach


In this case, the data under study (gold and silver grades) present highly skewed distributions, such as those shown in Figure  ➋, corresponding to the northern sector.

Figure 2 Histograms of assayed gold and silver grades (in g/t) within the vein in the northern sector.

Owing to the outlying data and to the narrowness of the vein, sample variograms turn
out to be quite erratic. Instead of traditional sample variograms, we preferred to use
covariograms, which lead to more interpretable structures, especially along the direction
perpendicular to the plane of the vein (Figure  ➌).

Figure 3 Calculation of sample variograms along the principal plane of the vein, and perpendicularly to
this plane. Left: traditional sample variograms; right: variograms obtained via sample covariograms.

The steps of the direct approach are the following:


• Divide the deposit into two sectors (Northern and Southern).
• In each sector
––Create a grid following the orientation of the vein
––Perform variogram analysis of the vein indicator data
––Perform ordinary kriging of the vein indicator. For each block, the kriging estimate is interpreted as the probability P_int that the block is contained in the vein. At this step, it is important to carefully design the kriging neighbourhood, so that P_int is zero (or very close to zero) for the blocks located far from the vein data. To this end, the radius of the neighbourhood along the direction perpendicular to the vein has been set to 20 m, so as to avoid excessive extrapolation in this direction
––Separate the dataset into two subsets: data within the vein, and data outside the vein
––Perform a joint variogram analysis of the gold and silver grades within the vein, then perform co-kriging of these data to estimate each block as if it belonged to the vein. Denote by Grade_int the estimated grades so obtained
––Calculate the final grades as [18]:

Grade = Grade_int ∙ P_int (1)

––Calculate the tonnage and metal contents in the vein, assuming a rock density of 2.7 t/m³ (Table 3; a sketch of this calculation follows the table):

Tonnage [t] = P_int ∙ Block volume [m³] ∙ density [t/m³] (2)

Metal content [g] = Grade [g/t] ∙ Block volume [m³] ∙ density [t/m³] (3)

Table 3 Calculation of tonnages and metal contents (direct approach)

Sector Tonnage [kt] Silver metal [t] Gold metal [t]


Northern 768.48 519.29 33.99
Southern 672.30 310.01 20.86
Total 1,440.78 829.30 54.85
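
The following minimal numpy sketch illustrates Equations (1)–(3) block by block. It is not part of the original study: the array names, block volume and toy grade values are illustrative assumptions; only the 2.7 t/m³ density comes from the text.

import numpy as np

# Sketch of Equations (1)-(3): combine the kriged vein-indicator probability
# P_int with the co-kriged "inside-vein" grades Grade_int to obtain final
# grades, tonnages and metal contents per block.
def direct_approach_resources(p_int, grade_int_au, grade_int_ag,
                              block_volume_m3, density=2.7):
    """Return per-block tonnage [t] and gold/silver metal contents [g]."""
    p_int = np.clip(p_int, 0.0, 1.0)                 # kriged indicator read as a probability
    grade_au = grade_int_au * p_int                  # Equation (1)
    grade_ag = grade_int_ag * p_int
    tonnage = p_int * block_volume_m3 * density      # Equation (2)
    metal_au = grade_au * block_volume_m3 * density  # Equation (3)
    metal_ag = grade_ag * block_volume_m3 * density
    return tonnage, metal_au, metal_ag

# Toy usage with three blocks (hypothetical 2 m x 2 m x 2 m blocks)
p = np.array([0.90, 0.40, 0.05])
au = np.array([45.0, 60.0, 30.0])      # g/t, estimated as if inside the vein
ag = np.array([700.0, 950.0, 400.0])   # g/t
t, m_au, m_ag = direct_approach_resources(p, au, ag, block_volume_m3=8.0)
print(t.sum(), m_au.sum() / 1e6, m_ag.sum() / 1e6)   # tonnes of ore, tonnes of Au and Ag metal

Summing the per-block outputs over each sector would produce figures of the kind reported in Table 3.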


Application of the indirect approach


In this approach, we estimate the vein thickness, gold and silver accumulations. The
estimation is performed according to the following steps:

• For each sector, characterise the orientation of the principal plane of the vein. The plane is parameterised by an equation of the form ax + by + cz + d = 0

• For each drill hole:


––Select the first and last samples and determine the equation of the line supporting
the drill hole
––Find out the intersection point between the drill hole and the principal plane of the
vein. This point will be assigned thickness and accumulation data
––Determine the angle between the drill hole line and the direction perpendicular to
the principal plane
––Determine the measured thickness, gold and silver accumulations
––Determine the actual thickness and accumulations by projection onto the direction perpendicular to the principal plane of the vein. This amounts to multiplying the measured thickness and accumulations by the cosine of the angle determined above. The histograms and basic statistics of the accumulation and thickness data so defined are presented in Figure   ➍ and Table 4; a sketch of these per-hole calculations is given at the end of this subsection.

Figure 4 Histograms of accumulation and thickness data.

Table 4 Basic statistics for thickness and accumulation data

Variable N° data Minimum Maximum Mean Variance


Silver accumulation [g/t*m] 374 2.199 12,444.7 1,091.6 2,077,891
Gold accumulation [g/t*m] 374 0.074 1,543.3 72.3 17,652.5
Thickness [m] 374 0.21 5.92 1.46 0.998

• Rotate the principal plane of the vein so that the rotated plane has a constant
x coordinate. This allows transforming the original (x, y, z) coordinates of the data points
into coordinates (y, z) in a single vertical plane. This procedure is applied in both sectors,
taking care that the two rotated planes remain adjacent.

• Considering the correlation coefficient between gold and silver accumulations (0.70) and between thickness and accumulations (0.52 for gold, 0.59 for silver), perform co-kriging of the three variables under study. Because no specific direction is identified in the variogram analysis stage, omnidirectional sample variograms are calculated and fitted with a linear model of coregionalisation, using a nugget effect and two nested spherical models. Co-kriging is performed over blocks with size 2 m × 2 m, leading to the results indicated in Table 5.

Table 5 Statistics of accumulations and thickness estimated by co - kriging

Variable N° blocks Minimum Maximum Mean Variance


Silver accumulation [g/t*m] 70,415 22.32 8,685.8 940.77 347.68
Gold accumulation [g/t*m] 70,415 2.62 1,335.0 68.42 6,128.2
Thickness [m] 70,415 0.36 5.92 1.44 0.182

• Calculate the recoverable resources (Table 6):

Tonnage [t] = Block area [m²] ∙ thickness [m] ∙ density [t/m³] (4)

Metal content [g] = Accumulation [g/t*m] ∙ Block area [m²] ∙ density [t/m³] (5)

Table 6  Calculation of tonnages and metal contents (indirect approach)

Tonnage [kt] Silver metal [t] Gold metal [t]


1,095.19 715.44 52.03
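
As announced above, the per-drill-hole geometric steps of the indirect approach can be sketched as follows. This is an illustration only, not the original code: the plane coefficients, hole collar, hole direction and sample grades are placeholder inputs, and only the 0.5 m sample length comes from the text.

import numpy as np

# Sketch of the per-hole steps: intersect the drill-hole line with the principal
# plane a*x + b*y + c*z + d = 0, then project the measured intercept onto the
# plane normal to obtain the "actual" thickness and accumulations assigned to
# the intersection point.
def intercept_data(collar, direction, plane_normal, plane_d,
                   gold_grades, silver_grades, sample_length=0.5):
    collar = np.asarray(collar, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)

    # Intersection point between the drill-hole line and the principal plane
    t = -(n @ collar + plane_d) / (n @ direction)
    x_int = collar + t * direction

    # Cosine of the angle between the hole and the perpendicular to the plane
    cos_angle = abs(n @ direction)

    # Measured (down-hole) thickness and accumulations over the vein intercept
    measured_thickness = sample_length * len(gold_grades)
    acc_au = measured_thickness * np.mean(gold_grades)    # g/t * m
    acc_ag = measured_thickness * np.mean(silver_grades)

    # Projection onto the direction perpendicular to the principal plane
    return x_int, measured_thickness * cos_angle, acc_au * cos_angle, acc_ag * cos_angle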

discussion and conclusions


In this work, we performed the evaluation of mineral resources in a narrow vein - type
deposit by means of multivariate geostatistical methods, considering a direct and an
indirect approach. Despite their difficult implementation, both approaches are applicable
to model this kind of deposit. In the deposit under study, the direct approach gives 30% more ore tonnage and 15% more silver tonnage than the indirect approach, but almost the same gold tonnage. These differences are not so important if one considers the
implementation issues of each approach that are recalled hereunder.
The main drawbacks of the indirect approach correspond to geometric issues. Given
that this approach reduces the problem to two dimensions, the modelling becomes
complex when the vein cannot be identified with a plane, e.g., when it is highly curved
or when it opens out into several parallel branches. In addition, when a high proportion of drill holes are sub-vertical, the calculation of the actual thicknesses and accumulations is approximate, because the measured thicknesses and accumulations must be projected onto a sub-horizontal direction (the perpendicular to the principal plane of the vein).
Also, the application of the indirect approach (especially variogram analysis) requires
a large number of drill holes, as each drill hole is converted into a single thickness and accumulation datum.
In contrast, the direct approach suffers from the presence of extremely high data
values and from the narrowness of the vein. This leads to difficulties in the variogram
analysis stage, in particular to characterise the spatial variability of grades along the
direction perpendicular to the vein plane, and in the modelling of the vein, as one
must not extrapolate too far from the vein data. The stationarity or quasi-stationarity assumption underlying ordinary kriging becomes questionable in the presence of a single vein, insofar as it supposes that the vein has the same prior probability of occurrence
everywhere. The indirect approach attenuates these issues because it works in the
plane of the vein (avoiding the dimension of narrow thickness) and uses regularised
data (accumulations) instead of the original grade data. For these reasons, the indirect
approach is recommended in a preliminary evaluation of a vein-type deposit.
On the other hand, because it works in the three-dimensional space, the direct
approach allows one to characterise complex vein geometries. This is an advantage for


mine design purposes, where in some cases it could be convenient to mine different
branches of the vein, something that is not described by the indirect approach.
Future work includes the use of conditional co-simulation instead of co-kriging, for all the variables under study (indicator and grades in the direct approach; thickness and accumulations in the indirect approach). This will be useful to define probability intervals on the tonnages and metal contents and to determine whether or not the differences in the estimates obtained with co-kriging (Tables 3 and 6) are really significant.
Another avenue for future work is the recourse to transitive kriging [19, 20]. This approach is attractive for narrow vein-type deposits insofar as it allows a 3-d modelling
of the grades within the vein without requiring any stationarity assumption, and the
structural tool (the so-called transitive covariogram) is generally better structured and
easier to infer and to fit than the variogram, as it reflects not only the grade spatial
continuity, but also the geometry of the vein.

acknowledgements
This research has been funded by the Chilean Commission for Scientific and Technological
Research (conicyt), through fondecyt Project nº1090013. The authors acknowledge the
support of the Advanced Laboratory for Geostatistical Supercomputing (alges) and the
Advanced Mining Technology Center (amtc) at Universidad de Chile.

references
Dominy, S. C., Johansen, G. F., Cuffley, B. W., Platten, I. M. & Annels, A. E. (2000) Estimation and Reporting of Mineral Resources for Coarse Gold-Bearing Veins. Exploration and Mining Geology 9(1), pp. 13–42. [1]

Dominy, S. C., Stephenson, P. R. & Annels, A. E. (2001) Classification and Reporting of Mineral Resources for High-Nugget Effect Gold Vein Deposits. Exploration and Mining Geology 10(3), pp. 215–233. [2]

Vallée, M. (2002) Comments on Classification and Reporting of Mineral Resources for High-Nugget Effect Gold Vein Deposits by S.C. Dominy, P.R. Stephenson and A.E. Annels. Exploration and Mining Geology 11(1–4), pp. 113–117. [3]

Dominy, S. C. (2002) Author's Reply to Comments on Classification and Reporting of Mineral Resources for High-Nugget Effect Gold Vein Deposits by M. Vallée. Exploration and Mining Geology 11(1–4), pp. 119–124. [4]

Duke, J. H. & Hanna, P. J. (2001) Geological Interpretation for Resource Modelling and Estimation. In: Edwards, A.C., ed., Mineral Resource and Ore Reserve Estimation. The AusIMM Guide to Good Practice. The Australasian Institute of Mining and Metallurgy, Melbourne, pp. 147–156. [5]

Roth, C., & Armstrong, M. (1998) Estimating the Geometry of Conjugate Veins. Exploration and Mining Geology 7(4), pp. 333–339. [6]

Sulistyana, W. (2004) Gold Vein Modeling Using Two Stage Indicator Kriging. In: 13th International Symposium on Mine Planning and Equipment Selection. [7]

Marcotte, D. & Boucher, A. (2001) The Estimation of Mineralized Veins: a Comparative Study of Direct and Indirect Approaches. Exploration and Mining Geology 10(3), pp. 235–242. [8]

Dagbert, M. (2001) Comments on "The Estimation of Mineralized Veins: a Comparative Study of Direct and Indirect Approaches". Exploration and Mining Geology 10(3), pp. 243–244. [9]

Marcotte, D. & Boucher, A. (2001) Author's Reply to Comments on "The Estimation of Mineralized Veins: a Comparative Study of Direct and Indirect Approaches". Exploration and Mining Geology 10(3), pp. 245–247. [10]

Bertoli, O., Job, M., Vann, J. & Dunham, S. (2003) Two-Dimensional Geostatistical Methods. Theory, Practice and a Case Study from the 1a Shoot Nickel Deposit, Leinster, Western Australia. In: 5th International Mining Geology Conference, p. 8. [11]

Roy, W., Butt, S. D. & Frempong, P. K. (2004) Geostatistical Resource Estimation for the Poura Narrow-Vein Gold Deposit. cim Bulletin 97(1077), pp. 47–51. [12]

Deutsch, C. (1989) Mineral Inventory Estimation in Vein Type Gold Deposits: Case Study of the Eastman Deposit. cim Bulletin 930(82), pp. 62–67. [13]

Rivoirard, J. (1988) Quelques Modèles de Corégionalisation de Puissances et Accumulations (Some Coregionalisation Models for Thicknesses and Accumulations). Internal report n54/88/g, Centre de Géostatistique, Ecole des Mines de Paris, p. 22. [14]

Rivoirard, J. (1991) Modèles Factorisés de Puissances de Veines et Changement de Support (Factorised Models for Vein Thicknesses and Change of Support). Sciences de la Terre 30, pp. 173–188. [15]

Dowd, P. A. & Milton, D. W. (1987) Geostatistical Estimation of a Section of the Perseverance Nickel Deposit. In: Matheron, G., Armstrong, M., eds., Geostatistical Case Studies. Reidel, Dordrecht, pp. 39–67. [16]

Dominy, S. C., Annels, A. E., Camm, G. S., Cuffley, B. W. & Hidkinson, I. P. (1999) Resource Evaluation of Narrow Gold-Bearing Veins: Problems and Methods of Grade Estimation. Transactions of the Institution of Mining and Metallurgy (Section A: Mining Industry) 108, pp. a52–70. [17]

Pawlowsky, V., Olea, R. A. & Davis, J. C. (1993) Boundary Assessment Under Uncertainty: a Case Study. Mathematical Geology 25(2), pp. 125–144. [18]

Alfaro, M. & Miguez, F. (1976) Optimal Interpolation Using Transitive Methods. In: Guarascio, M., David, M., Huijbregts, C.J., eds., Advanced Geostatistics in the Mining Industry. Reidel, Dordrecht, pp. 91–99. [19]

Rivoirard, J. (2005) Concepts and Methods of Geostatistics. In: Bilodeau, M., Meyer, F., Schmitt, M., eds., Space, Structure and Randomness. Springer, Berlin, pp. 17–37. [20]

Speeding Up Conditional
Simulation: Using Sequential
Gaussian Simulation with
Residual Substitution

abstract
Alejandro cáceres
Geoinnova Consultores, Chile

Xavier emery
Universidad de Chile

Marcelo godoy
Golder Associates, Chile

Sequential Gaussian Simulation (sgsim) and Turning Bands Simulation (tbsim) are widely used to generate realisations of mineral grades and to evaluate mineral resources. sgsim relies on the recursive application of Bayes' theorem and is designed as a direct conditional method. tbsim is based on a stereological device that generates non-conditional realisations, which are subsequently conditioned through residual substitution kriging. sgsim uses a random path to visit the nodes targeted for simulation. In practice, for each realisation the path has to be changed in order to avoid artefacts or artificial correlation between realisations. This is clearly the case when conditional simulation is performed. However, for non-conditional simulation with a proper implementation, the resulting realisations do not exhibit such artificial behaviour.
In this context, the proposed algorithm (sgsim-rs) uses the sequential approach to generate non-conditional realisations, with a single controlled path. Then, a conditioning step is performed using residual substitution, as it is done in the tbsim approach. The advantage of using a single path is the possibility to generate many realisations at the same time and to condition them in a single kriging step. The cpu time reduction is considerable: for example, the time to create n realisations is equivalent to the time of one sequential simulation plus one conditioning kriging. The counterpart is the memory needed for storing the non-conditional realisations all together. However, this requirement is less demanding with current hardware and operating systems. The proposed algorithm is presented through a case study and its performance is compared to the traditional sgsim and tbsim approaches.

introduction
Running time and the management of large files involved in simulation studies can be discouraging factors for incorporating conditional simulation as a daily practice in the mining industry, and they become critical when large simulation grids are used [1].
The most widespread algorithms for Gaussian conditional simulation are sequential
Gaussian (sgsim) [2–6] and turning bands (tbsim) [7, 8] simulation, with available implementations in open-source projects and commercial software.
Both algorithms rest upon the multi-Gaussian model and the homoscedasticity property of the Gaussian distribution, together with the orthogonality of simple kriging. tbsim directly uses those properties by separating the problem into two steps: first simulating a non-conditional Gaussian random field Y(x), and then conditioning it to the data using the residual substitution (rs) approach [7]. rs is applicable to convert non-conditional simulations into conditional ones. In contrast, the sgsim algorithm [3], making use of screening effects, search strategy, node migration, visiting sequence and multiple grids, among other considerations, directly derives a conditional distribution at each target location x, from which a simulated value is drawn as follows:

Y_cs(x) = Y_sk(x) + σ_sk(x) ∙ U (1)

where Y_sk(x) is the simple kriging estimate of Y(x) calculated from the original data and previously simulated nodes, σ_sk(x) is the associated kriging standard deviation, and U is an independent standard Gaussian random variable.
An attractive feature of sgsim is its ability to directly provide conditional simulations, avoiding the two-step approach used in tbsim [3]. However, the cost of this remarkable feature is the requirement of a complete re-simulation if new data are added or removed, while tbsim can be updated by just adding or removing the data in the conditioning step.
A method that allows faster simulation and can easily manage the update of new
drilling campaigns or removing certain data would be beneficial to practitioners in the
mining industry. This paper presents such a method that uses the sequential Gaussian
algorithm for generating non-conditional simulations and the residual substitution
approach for conditioning to sample data.

sequential simulation algorithms


Traditional Sequential Gaussian Simulation (SGSIM)
Sequential Gaussian simulation uses a random visiting order for the nodes targeted for
simulation. This visiting sequence, often called random path, is changed from realisation
to realisation in order to avoid artificial correlation or similarity between realisations.
This is clearly valid when conditional simulation is performed, because for every node
the same conditioning data locations and original data values are used to determine
the local distribution of the value to simulate. The implementation of the sequential
Gaussian approach has therefore two sources of randomness: a theoretical one related
to the Gaussian value U used in Equation (1) and the random visiting sequence as an
implementation aspect.
If non-conditional simulation is performed with sgsim, the very first nodes (for which there are no conditioning data) are simulated from a Gaussian distribution without any covariance or spatial consideration. Then, simulated values are available and the procedure goes on using these first simulated values as conditioning data, i.e., non-conditional simulation in sgsim becomes a simulation conditional to these first nodes.

By construction, the non-conditional values simulated at the first nodes are independent from realisation to realisation, so the use of a changing visiting sequence for each realisation can be avoided. Therefore, several non-conditional realisations can be generated in a single execution of the sequential algorithm using a unique visiting sequence. This approach has already been suggested [9] and the use of deterministic or modified visiting sequences has been explored [10–13].

Proposed methodology (SGSIM-RS)


The global procedure to get conditional realisations using the sgsim-rs approach is:

• Normal score transformation of the raw data into Gaussian values Y(x i), i = 1… n
• Variogram analysis of the normal scores data, defining a covariance model C Y  (h)
(or, equivalently, a variogram model)

• Non-conditional Gaussian simulation using a single-path sequential approach. Get the residuals between the Gaussian data values and every realisation at the data locations:

R^(k)(x_i) = Y(x_i) − Y_nc^(k)(x_i), i = 1… n (2)

where Y_nc^(k) is the k-th non-conditional realisation of the Gaussian random field Y(x).

• Estimation of the residuals over the domain of interest by simple kriging (sk), using the covariance model C_Y(h). Because the weights are the same in all the realisations, the residual estimates are simultaneously obtained in a single kriging run.

• Addition of the residual estimates and the corresponding non-conditional realisation to generate the conditional realisation over the domain:

Y_cs^(k)(x) = Y_nc^(k)(x) + R_sk^(k)(x) (3)

where R_sk^(k)(x) is the simple kriging estimate of the residuals of the k-th realisation.

• Back-transformation of the conditional Gaussian realisations to the original values.
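
A minimal sketch of the conditioning step (Equations (2)–(3)) is given below. It is illustrative only, not the authors' implementation: it assumes a callable covariance model and a dense, global simple kriging solve, whereas a real application would use a moving neighbourhood.

import numpy as np

# Condition non-conditional realisations by residual substitution.
def condition_by_residual_substitution(data_xy, data_y, nodes_xy,
                                       nc_at_data, nc_at_nodes, cov):
    """
    data_xy     : (n, dim) data coordinates
    data_y      : (n,)     normal-scores data values Y(x_i)
    nodes_xy    : (m, dim) grid node coordinates
    nc_at_data  : (n, k)   non-conditional realisations at the data locations
    nc_at_nodes : (m, k)   non-conditional realisations at the grid nodes
    cov         : callable giving the covariance C_Y for an array of distances
    """
    # Equation (2): residuals between the Gaussian data and each realisation
    residuals = data_y[:, None] - nc_at_data                     # (n, k)

    # Simple kriging weights depend only on the data configuration, so a
    # single solve conditions every realisation at once.
    d_dd = np.linalg.norm(data_xy[:, None, :] - data_xy[None, :, :], axis=-1)
    d_nd = np.linalg.norm(nodes_xy[:, None, :] - data_xy[None, :, :], axis=-1)
    weights = np.linalg.solve(cov(d_dd), cov(d_nd).T)            # (n, m)

    # Equation (3): add the kriged residuals to the non-conditional realisations
    return nc_at_nodes + weights.T @ residuals                   # (m, k)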

The non-conditional Gaussian simulation step is detailed below:

• Define a single visiting sequence, which can be achieved using a low-discrepancy sequence [7], a regular sequence, or just a random sequence with a multiple-grid approach [13].

• With the LU algorithm, generate several (n) realisations for the first thousand nodes of
the visiting sequence. These simulated values, which will be used as conditioning data
for the subsequent nodes, account for the covariance model and are by construction
independent from one realisation to another.

• Continue with the sequential approach and generate the n non-conditional realisations Y_nc^(1)(x), …, Y_nc^(n)(x) at each visited node x:

Y_nc^(k)(x) = Y_sk^(k)(x) + σ_sk(x) ∙ U^(k), k = 1… n (4)

where Y_sk^(k)(x) stands for the simple kriging estimate for the k-th realisation given the previously simulated nodes, σ_sk(x) for the simple kriging standard deviation, and (U^(1), …, U^(n)) for an independent Gaussian random vector. This way, all the realisations can be generated simultaneously.
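
The per-node update of Equation (4) can be sketched as follows. This is again an illustrative fragment under the same assumptions as above: a callable covariance model, neighbour coordinates and previously simulated values passed in explicitly.

import numpy as np

# Draw one node of the single visiting sequence for all k realisations at once.
def simulate_node(node_xy, neigh_xy, neigh_values, cov, rng):
    """
    neigh_values : (p, k) conditioning values (previously simulated nodes)
                   for the k realisations sharing the single path.
    Returns the (k,) simulated values at the node.
    """
    d_nn = np.linalg.norm(neigh_xy[:, None, :] - neigh_xy[None, :, :], axis=-1)
    d_0n = np.linalg.norm(node_xy[None, :] - neigh_xy, axis=-1)
    lam = np.linalg.solve(cov(d_nn), cov(d_0n))        # SK weights, shared by all k
    y_sk = lam @ neigh_values                          # (k,) kriging estimates
    var_sk = cov(0.0) - lam @ cov(d_0n)                # SK variance (same for all k)
    u = rng.standard_normal(neigh_values.shape[1])     # independent Gaussian vector
    return y_sk + np.sqrt(max(var_sk, 0.0)) * u

# Typical usage would pass rng = np.random.default_rng() and loop over the nodes
# of the single visiting sequence.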


As an example, for a domain of 100 × 100 nodes, one hundred non-conditional realisations are generated using an isotropic spherical variogram of range 20. Figure ➊ (left) presents the probability intervals of the simulated values at each node as a function of the sequence order: remarkably, the intervals are almost the same for all the nodes. To take another look at this feature, the conditional variance is calculated at each node and plotted as a function of the sequence order (Figure ➊, right), without exhibiting any trend or artificial pattern.

Figure 1 Probability intervals (left) and conditional variances (right) as a function of the visiting order.

Application to a mining dataset


Presentation of the case study

The area under study is part of the Río Blanco – Los Bronces porphyry copper deposit [14], a breccia complex located in the central Chilean Andes. A set of 2,376 diamond drill hole samples, located in a 400 × 600 × 130 m³ volume, is available with information on total copper grades. Figure ➋ presents the available data coloured by copper content.

Figure 2 Location map of copper grade data.



Simulation approaches

Three different algorithms are compared by simulating the copper grades over a 2-d regular grid of 390 × 600 nodes with a 1 × 1 m spacing:

• Sequential Gaussian Simulation (sgsim): this approach is performed using the sgsim
program of the gslib package [3] , using multiple grids and migration of data to nodes.

• Sequential Gaussian Simulation with Residual Substitution (sgsim-rs): a single path is used with multiple grids and a uniformly random ordering of the nodes of each grid.

• Turning Bands Simulation (tbsim): this approach is performed in the isatis software,
using 1000 turning lines.

The same search radius (250 m) and number of conditioning data (16) are considered for each method. Simple kriging is used for the non-conditional simulation and the conditioning step in sgsim-rs, and in the conditioning steps of tbsim and sgsim. There is no loss of data in the migration to the nodes in sgsim and sgsim-rs, so the effective datasets are the same in each method. The same normal score transformation, back-transformation and isotropic variogram model (Table 1) are used in each method to avoid differences due to implementation.

Table 1 Isotropic variogram model for transformed copper grades

Structure Range (m) Sill contribution

Nugget – 0.12
Spherical 112 0.7
Exponential 416 0.18
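
For reference, a small sketch evaluating the fitted model of Table 1 is given below. It is not part of the original study, and the practical-range convention is assumed for the exponential structure.

import numpy as np

# Nested variogram of Table 1: nugget 0.12 + spherical(112 m, 0.7) + exponential(416 m, 0.18).
def spherical(h, a):
    h = np.minimum(np.asarray(h, dtype=float), a)
    return 1.5 * h / a - 0.5 * (h / a) ** 3

def exponential(h, a):
    return 1.0 - np.exp(-3.0 * np.asarray(h, dtype=float) / a)   # practical range a

def gamma_cu(h):
    h = np.asarray(h, dtype=float)
    nugget = np.where(h > 0, 0.12, 0.0)
    return nugget + 0.7 * spherical(h, 112.0) + 0.18 * exponential(h, 416.0)

lags = np.array([0.0, 10.0, 50.0, 112.0, 250.0, 500.0])
print(gamma_cu(lags))            # variogram values at a few lags
print(1.0 - gamma_cu(lags))      # corresponding covariance C_Y(h), since the total sill is 1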

Comparison between methods

In the following subsections, the resulting realisations are compared in several ways.
First of all, the visual inspection of several realisations of sgsim-rs does not indicate the presence of any artefact or strange pattern. Moreover, it is impossible to distinguish whether a realisation comes from sgsim, sgsim-rs or tbsim (Figure ➌).

Figure 3 Examples of realisations.


Basic statistics: Figure   ➍ (left) shows the distributions of the average copper grade per
realisation for each algorithm. These distributions are close to each other, although that
of sgsim-rs presents slightly higher values. The variance per realisation is also indicated
in Figure   ➍ (right): sgsim-rs and sgsim have similar distributions, while tbsim shows
a slightly wider range of variances.

Figure 4 Distribution of the average (left) and variance (right) of simulated copper grades per realisation.

Grade-tonnage curves: the tonnage and grades above a set of cutoff grades are calculated on each realisation. The expected curves are presented in Figure ➎ (left), showing virtually the same values for all the algorithms. In contrast, Figure ➎ (right) presents the widths of the 80% confidence intervals for the grades and tonnages by cutoff. It is seen
that the turning bands algorithm presents a higher variability in grades for almost
every cutoff, whereas sgsim-rs and sgsim exhibit a similar behaviour for this measure.

Figure 5 Expected grade-tonnage curves (left) and widths of 80% confidence intervals for grade-tonnage curves (right).

Local expectation and uncertainty measures: Figure  ➏ (left) shows the distributions of the
conditional expectation (mean of realisations) calculated at each node. The curves
are almost identical for the three algorithms. The conditional variance distributions (Figure ➏, right) are close for sgsim and sgsim-rs, whereas tbsim shows 5% of the nodes with higher values.
Figure ➐ displays the conditional coefficients of variation. The high and low zones are located in the same parts of the domain for all the algorithms. However, extreme values are noticeable in the outer part of the deposit for sgsim-rs and tbsim, which is caused by the lack of conditioning data in this sector.

Figure 6 Distributions of the conditional expectation (left) and conditional variance (right).

Figure 7 Conditional coefficients of variation.

Performance evaluation

Given a set of 20,893 blast hole data in which the true copper grades are known, a
validation model is created by ordinary kriging on 5 × 5 m blocks in the bench where the
copper grade realisations have been generated. Every realisation is regularised to this
block size and compared against the validation model.

Error distribution: The Percentual Mean Error (pme) is defined as:

pme_k = (100 / nb) ∙ Σ_i [S_k(i) − R(i)] / R(i) (5)

where S_k(i) is the simulated grade of realisation k at block i, R(i) is the grade of the validation model and nb is the number of blocks where both models are defined. Figure ➑ (left) presents the distributions for the three algorithms. The differences are marginal, although tbsim performs slightly better than the other two algorithms.

Grade correlation distribution: The linear correlation coefficient between every realisation
and the validation model is calculated. Figure   ➑ (right) shows a close correlation
distribution between the algorithms.
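
A minimal sketch of these two validation measures is given below. It is illustrative only: it assumes the regularised realisation and the validation model are aligned arrays over the common blocks, and it uses the signed relative form of Equation (5).

import numpy as np

# Percentual mean error of one realisation against the validation model [%].
def pme(simulated, validation):
    return 100.0 * np.mean((simulated - validation) / validation)

# Linear correlation coefficient between a realisation and the validation model.
def grade_correlation(simulated, validation):
    return np.corrcoef(simulated, validation)[0, 1]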


Figure 8 PME distribution (left) and correlation distribution (right).

Accuracy plot: Given an interval of probability p in the distribution of simulated block


grades, the effective percentage of blocks of the validation model that fall within the
interval is calculated. The closer this percentage is to p, the better the model is in terms of accuracy and precision [15]. Figure ➒ (left) displays the accuracy plots, showing a
close performance for the three algorithms.

Destination mismatch: The percentage of correctly classified blocks (mill/dump) for


several cutoff grades is calculated for each realisation. The expected percentages for
each algorithm and cutoff are indicated in Figure ➒ (right). sgsim-rs presents a slightly lower performance than the other two algorithms.

Figure 9 Accuracy plot (left) and destination match percentage (right).

cpu time by algorithm: each method has been executed on a 2.16 ghz Core2Duo computer
under the same conditions. The gain of cpu time of sgsim-rs against the other methods
is evident: the time for sgsim-rs to write out 100 conditional realisations (280 s) is 20%
that of sgsim (1,430 s) and 4.3% that of tbsim (6,500 s).
The time reduction of sgsim-rs increases with the number of realisations to generate. Obviously, this reduction is restricted by the amount of available ram memory. For example, the memory required to store a floating-point array of 8,000,000 elements, equivalent to a simulation grid of 200 × 200 × 200 nodes, is about 30 MB. Therefore, with 3 GB available, one hundred realisations can be done in a single execution of sgsim-rs. The higher elapsed
time of tbsim is probably related to the search strategy used in conditioning kriging (no
migration of data to grid nodes; no use of spiral search as in sequential algorithms).
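
As an illustration of the memory figures quoted above (assuming single-precision, 4-byte storage of the simulated values):

8,000,000 values × 4 bytes ≈ 32 MB per realisation, so one hundred realisations held simultaneously occupy on the order of 3 GB of ram.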

discussion
About the use of a single visiting sequence
Even for non-conditional simulation, the choice of the visiting sequence is not simple.
It is clear that the use of a regular ordering, for example by cycling along the grid axes, can create artefacts that are likely to increase with higher-dimensional domains. There is therefore an open research avenue in designing visiting sequences that consider aspects
such as the variogram model, data locations and geometry of the domain, in order to
ensure the best reproduction of model statistics.

Potential improvements of SGSIM-RS


In the non-conditional step, it is possible to use the LU algorithm instead of estimating the local distributions by kriging [16]. LU simulation can generate several realisations at each node, so the principles of the algorithm remain the same. Moreover, the LU simulation of groups of nodes [17] can considerably increase the speed of the already fast proposed approach.
Another improvement is the possibility to use ordinary kriging in the conditioning step, when the mean value of the Gaussian random field is deemed uncertain (cases of local stationarity, when the mean varies slowly in space), while still using simple kriging to construct the non-conditional realisations. In general, ordinary kriging is
not recommended in sgsim because of the design of the kriging neighbourhood (the
screening effect is often partial for nodes on dense simulation grids), yielding a poor
reproduction of the model statistics [3] . In the same vein, sgsim-rs allows for the separate
simulation of the nugget effect in the non-conditional step, which is of interest because
the nugget effect is also often responsible for a poor screening effect.

Advantages of SGSIM-RS
sgsim-rs inherits from the residual substitution and conditioning kriging the ability to
update the realisations to new data: there is no need to recalculate the non-conditional
simulations to add or to remove data. Also, sgsim-rs does not suffer from information losses due to the migration procedure of sgsim: if two or more data are associated with the same node, all the samples can be kept because they have different residuals, which can be used in the conditioning step.

conclusions
A simple sequential algorithm to obtain conditional Gaussian simulations has been
presented. It allows for generating realisations faster than the traditional sgsim and
tbsim algorithms. The comparisons and validation results indicate no significant
differences in terms of accuracy and reproduction of statistics between the proposed
method and the traditional ones. However cpu time reduction allows for generating
more realisations in the same amount of time, hence resulting in a better evaluation of
transfer functions and uncertainty measures.

acknowledgements
Thanks are due to R. Riquelme and S. Khosrowshahi for their comments on an earlier
version of this paper.


references
Godoy, M. (2003) The Effective Management of Geological Risk in Long - Term Production Scheduling of Open
Pit Mines. PhD thesis, The University of Queensland, p. 256. [1]

Alabert, F. (1987) Stochastic Imaging of Spatial Distributions Using Hard and Soft Information. Master's
thesis, Stanford University, Department of Applied Earth Sciences, p. 197. [2]

Deutsch, C. V. & Journel, A. G. (1998) gslib: Geostatistical Software Library and User's Guide. Oxford University Press, New York, p. 369. [3]

Gómez-Hernández, J. J. & Journel, A. G. (1993) Joint Sequential Simulation of Multigaussian Fields. In:
Soares A (ed.) Geostatistics Troia'92. Kluwer Academic, Dordrecht, pp. 85 – 94. [4]

Journel, A. G. (1994) Modeling Uncertainty: Some Conceptual Thoughts. In: Dimitrakopoulos, R. (ed.),
Geostatistics for the Next Century. Kluwer Academic, Dordrecht, pp. 30 – 43. [5]

Ripley, B. D. (1987) Stochastic Simulation. Wiley, New York, p. 237. [6]

Chilès, J. P. & Delfiner, P. (1999) Geostatistics: Modeling Spatial Uncertainty. Wiley, New York, p. 695. [7]

Matheron, G. (1973) The Intrinsic Random Functions and Their Applications. Advances in Applied Probability 5, pp. 439–468. [8]

Goovaerts, P. (1997) Geostatistics for Natural Resources Evaluation. Oxford University Press, New York,
p. 491. [9]

Isaaks, E. H. (1990) The Application of Monte Carlo Methods to the Analysis of Spatially Correlated Data.
PhD thesis, Stanford University, p. 213. [10]

Richmond, A. J. (1998) Multi-Scale Ore Texture Modelling for Mining Applications. Master's thesis, The
University of Queensland, p. 190. [11]

Richmond, A. J. & Dimitrakopoulos, R. (2001) Evolution of a Simulation: Implications for Implementation. In: Kleingeld, W.J., and Krige, D.G. (eds.) Geostats 2000 Cape Town. Geostatistical Association of Southern Africa, Johannesburg, pp. 134–144. [12]

Tran, T. T. (1994) Improving Variogram Reproduction on Dense Simulation Grids. Computers and Geosciences 20(7), pp. 1161–1168. [13]

Serrano, L., Vargas, R., Stambuk, V., Aguilar, C., Galeb, M., Holmgren, C., Contreras, A., Godoy, S., Vela, I., Skewes, M.A. & Stern, C.R. (1996) The Late Miocene to Early Pliocene Río Blanco – Los Bronces Copper Deposit, Central Chilean Andes. In: Camus, F., Sillitoe, R.H., and Petersen, R. (eds.) Andean Copper Deposits: New Discoveries, Mineralizations, Styles and Metallogeny. Society of Economic Geologists, Special Publication no. 5, Littleton, Colorado, pp. 119–130. [14]

Deutsch, C. V. (1997) Direct Assessment of Local Accuracy and Precision. In: Baafi, E.Y, Schofield, N.A.
(eds.) Geostatistics Wollongong'96. Kluwer Academic, Dordrecht, pp. 115 – 125. [15]

Davis, M. W. (1987) Production of Conditional Simulations Via the lu Triangular Decomposition of the
Covariance Matrix. Mathematical Geology 19(2), pp. 91 – 98. [16]

Dimitrakopoulos, R. & Luo, X. (2004) Generalized Sequential Gaussian Simulation on Group Size ν and Screen-Effect Approximations for Large Field Simulations. Mathematical Geology 36(5), pp. 567–591. [17]

Truncated Gaussian Kriging as an
Alternative to Indicator Kriging

abstract
Alejandro cáceres
Rodrigo riquelme
GeoInnova Consultores, Chile

Xavier emery
Universidad de Chile

Truncated Gaussian Simulation (tgs) and Plurigaussian Simulation (pgs) are widely accepted methods for generating realisations of geological domains (lithofacies) that reproduce contact relationships. The realisations can be used to evaluate transfer functions related to the lithofacies occurrence, the simplest ones of which are the probability of occurrence of each lithofacies and the most probable lithofacies at each location of the deposit.
In order to get the probability of occurrence of a lithofacies, the simulation approach can be time consuming. A shortcut method (Truncated Gaussian Kriging, or tgk) is proposed, based on the Truncated Gaussian Simulation model and the well-known multi-Gaussian kriging method. In this method, the variogram analysis stage and the definition of the truncation rule remain the same as in the traditional Truncated Gaussian Simulation approach.
The formulation of the method is halfway between spatial estimation and simulation. The key point is to apply the truncation rule to the local distribution of the underlying Gaussian random field used in the tgs approach. Because the relationship between the lithofacies indicators and this Gaussian random field is not one-to-one, the latter is simulated at the data locations conditionally to the available indicator data. The local distributions of the Gaussian random field at the target locations are then obtained by considering the simple kriging estimates and simple kriging variances, as it is done in the multi-Gaussian kriging approach.
tgk can be used as a step previous to simulating lithofacies, or as an alternative to indicator kriging, when the lithofacies exhibit a hierarchical spatial disposition or when such a disposition is a desirable feature. The proposed method is naturally extensible to plurigaussian simulation.

introduction
Currently, numerical models of the spatial distribution of geological domains (lithofacies)
can be generated by geostatistical methods. Two approaches are commonly used to achieve
this goal: stochastic simulation and local uncertainty models.
Stochastic simulation consists of creating multiple realisations of the lithofacies of
the deposit. The realisations can be used to evaluate transfer functions related to the
lithofacies occurrence, the simplest of which is the probability of occurrence of each
lithofacies. In contrast, local uncertainty models directly provide the probabilities of
occurrence of the lithofacies without generating several realisations. The main methods
associated with stochastic simulation are Truncated Gaussian (tgs), Plurigaussian (pgs) and Sequential Indicator (sis) Simulation, whereas the most common method for local uncertainty models is Indicator Kriging (ik). This paper presents a local-stochastic approach to obtain the probability of occurrence of each lithofacies, based on Truncated Gaussian Simulation and Multi-Gaussian Kriging (mgk).

overview of current methods


Indicator Kriging (IK)
Indicator kriging [1, 2] is a non-parametric technique to calculate the Conditional Cumulative Distribution Function (ccdf) of a set of indicators, which are a binary coding
of a categorical variable representing the lithofacies. It basically consists of estimating
the indicator values using a kriging or cokriging of indicator data. The estimated values
of each indicator are interpreted as the probability density function of the lithofacies,
generating a local model of the probabilities of occurrence of lithofacies.

Sequential Indicator Simulation (SIS)


Sequential Indicator Simulation [2] rests on the sequential estimation of the ccdf
associated with the lithofacies coded as indicators. The estimation is performed by
indicator kriging using the sample data and previously simulated nodes as conditioning
information. From the ccdf a lithofacies is drawn by Monte Carlo simulation at each
node. The main advantages of the method are: its auto-conditional nature, the simple
incorporation of soft data and the possibility to express spatially highly continuous
patterns. As a counterpart, ik and consequently sis suffer from order relation violations
in the ccdf, among other problems [3].

Truncated Gaussian Simulation (TGS)


The Truncated Gaussian Simulation method [4] relies on the truncation of a single
Gaussian Random Field (grf) in order to generate realisations of lithofacies. The main
feature is the reproduction of the indicator variograms associated with the lithofacies and
the hierarchical contact relationship among them. This method is adequate for deposits
where the lithofacies exhibit a hierarchical spatial distribution, such as depositional
environments or sedimentary formations.
The procedure to obtain lithofacies realisations using tgs is described as follows:

• Establish the lithofacies proportions and their contact relationships. Summarise this
information in a truncation rule (flag).

• Using the truncation rule, perform variography of the lithofacies indicators through
the determination of the covariance function of the underlying grf.

• Simulate the grf at the data locations conditionally to the lithofacies coding. This
step is performed using the Gibbs sampler algorithm [5] . As the relationship between
the lithofacies indicators and grf is not one-to-one, several realisations should be
considered for the next steps.

• Simulate the grf at the target locations using the values generated at the previous
step as conditioning data.

• Truncate the realisations according to the truncation rule.

Plurigaussian Simulation (PGS)


Plurigaussian Simulation [6, 7] is an extension of Truncated Gaussian Simulation that
incorporates two or more Gaussian Random Fields and a set of truncation rules. The use
of several grfs allows reproducing complex contact relationships between the lithofacies.
The workflow of pgs is similar to tgs.

Multi-Gaussian Kriging (MGK)


Multi-Gaussian Kriging [8] is a method to calculate the conditional distribution of a grf
at a point support. It has been used to establish the risk of exceeding or falling short of a
threshold for a continuous (not necessarily Gaussian) variable. It relies on the application
of the multi-Gaussian hypothesis and the property of orthogonality of simple kriging.
The key property of the multi-Gaussian model is that the multivariate distributions
of a grf are fully defined by its first- and second-order moments: mean and covariance
function. The orthogonality property is that the simple kriging estimator is not correlated
with any linear combination of the data. Therefore, it can be shown that the conditional
distribution of a grf is Gaussian, with mean equal to the simple kriging estimate and
variance equal to the simple kriging variance.
The workflow of the application of multi-Gaussian kriging to get the conditional
distribution of a continuous variable is described below:

• Transform the raw variable into a Gaussian variable. Store the transformation table.

• Perform simple kriging of the Gaussian variable. At each target location, the
conditional distribution is fully defined by the simple kriging estimate and simple
kriging variance.

• Perform numerical integration at each target location:


––Sample the conditional Gaussian distribution using Monte Carlo simulation.
––Back-transform every sampled Gaussian value according to the transformation
table.
––The distribution of back-transformed values is an approximation to the
distribution of the original variable conditional to the available data. From this
distribution, several measures can be derived, e.g. expected value (mean of the
distribution), conditional variance (variance of the distribution), probability to
exceed a given threshold, or confidence intervals.


Proposed approach: Truncated Gaussian Kriging (TGK)


The proposed method is based on the following aspects to get the probability of occurrence
of a lithofacies:

• To generate realisations of lithofacies, tgs uses Gaussian simulation, which relies on the multi-Gaussian hypothesis.

• The conditional distributions of the underlying grf used in tgs can be obtained by
the multi-Gaussian kriging approach.

• The truncation rule can be interpreted as a particular transformation from a categorical variable (lithofacies) to a continuous variable (grf). This transformation is similar to the one used to get the conditional distribution of a continuous variable in mgk, except that the truncation rule is not one-to-one.

Therefore, it is possible to calculate the probability of occurrence of each lithofacies without simulating the grf in the domain. Instead, the multi-Gaussian approach can be used to obtain the conditional distribution at each target location. As the truncation rule is not one-to-one, we will need several independent realisations of the grf at the data locations as conditioning data (see the tgs workflow). Therefore, we will have to truncate several conditional Gaussian distributions with different mean values but with the same kriging variance; recall that the simple kriging variance does not depend on the data values.
The workflow of the proposed method is presented only for the stationary case, i.e.,
when the proportions of the lithofacies remain constant over the domain under study.
Consider F_1, …, F_n as n contiguous lithofacies present in the deposit. The indicators associated with these lithofacies are defined as:

$$i_{F_k}(x) = \begin{cases} 1 & \text{if } x \in F_k \\ 0 & \text{otherwise} \end{cases} \qquad k = 1, \ldots, n \qquad (1)$$

Let Y(x) be a standard grf with covariance function C_Y(h). The n lithofacies are defined by n − 1 thresholds and every lithofacies can be expressed as the truncation of Y(x) as follows:

$$x \in F_i \iff l_i \le Y(x) < u_i \qquad (2)$$

where l_i and u_i stand for the lower and upper truncation thresholds for the i-th lithofacies, with l_i = u_{i−1} for all i = 2, …, n. For lithofacies F_1 and F_n the lower and upper thresholds are set to l_1 = −∞ and u_n = +∞, respectively. The proportion of the i-th lithofacies is defined by:

$$p_i = G(u_i) - G(l_i) \qquad (3)$$

with G the standard Gaussian cumulative distribution function. Keeping this notation in mind, the workflow of tgk is the following:

• Establish the lithofacies proportions and their contact relationships. Summarise this
information in a truncation rule.

• Using the truncation rule, perform variography of the lithofacies indicators through
the determination of the covariance C Y (h) of the underlying grf.

• Simulate the grf at the data locations conditionally to the lithofacies coding. Several
(nrealis) realisations or sets of Gaussian values are generated at this step.

• Perform simple kriging using the covariance of the grf and the previous realisations
as conditioning data. A single execution of simple kriging is needed to determine the
kriging weights and kriging variance.

• At this stage, we have several kriging estimates and a single kriging variance at each
target location. Using the multi-Gaussian hypothesis, the conditional probability for
the i-th lithofacies and j-th realisation can be expressed as follows:

$$P\big(x \in F_i \mid j\big) = G\!\left(\frac{u_i - y_j^{SK}(x)}{\sigma_{SK}(x)}\right) - G\!\left(\frac{l_i - y_j^{SK}(x)}{\sigma_{SK}(x)}\right) \qquad (4)$$

where $y_j^{SK}(x)$ is the simple kriging estimate calculated on realisation j and $\sigma_{SK}^2(x)$ is the simple kriging variance at location x.

• The final probability for each lithofacies at location x can be expressed as:

$$P\big(x \in F_i\big) = \frac{1}{n_{\mathrm{realis}}}\sum_{j=1}^{n_{\mathrm{realis}}} P\big(x \in F_i \mid j\big) \qquad (5)$$

Because of the use of multiple realisations of the grf at the data locations and of simple
kriging to obtain the conditional distribution of the grf at the target locations, the
proposed method is in-between simulation and kriging estimation.
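A minimal Python sketch of the truncation step described by Equations (4) and (5) is given below, assuming the simple kriging estimates for each Gibbs realisation and the (single) simple kriging variance are already available at the target locations; array names and shapes are illustrative.

import numpy as np
from scipy.stats import norm

def tgk_probabilities(sk_means, sk_var, thresholds):
    """
    sk_means   : array (n_realis, n_locations) of simple kriging estimates,
                 one row per Gibbs-sampler realisation
    sk_var     : array (n_locations,) of simple kriging variances
    thresholds : array (n_facies + 1,) of Gaussian thresholds, from -inf to +inf
    Returns an array (n_locations, n_facies) of lithofacies probabilities.
    """
    sd = np.sqrt(sk_var)[None, :, None]        # kriging standard deviation
    lower = thresholds[:-1][None, None, :]     # l_i
    upper = thresholds[1:][None, None, :]      # u_i
    y = sk_means[:, :, None]                   # kriging estimate per realisation
    # Equation (4): probability of each facies, for each realisation
    p_ij = norm.cdf((upper - y) / sd) - norm.cdf((lower - y) / sd)
    # Equation (5): average over the realisations
    return p_ij.mean(axis=0)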

Application of Truncated Gaussian Kriging

A synthetic case study is presented, which considers the estimation of the probability of
three lithofacies (coded as lith1, lith2 and lith3) that are embedded units. A set of 189
sample data is available to calculate the probabilities. The data are distributed over a
500 m × 500 m domain, as shown in Figure   ➊.

Figure 1 Data locations showing lithofacies coding.

Basic parameters. The contact relationships, declustered proportions and threshold values are summarised in Figure ➋. The truncation rule reflects the hierarchical disposition of the lithofacies.


Figure 2 Truncation rule, showing contact relationship, proportions and Gaussian thresholds associated with the lithofacies.

Variography. At this step the indicator variograms are fitted by defining the covariance C_Y(h) of the underlying grf (Table 1).

Table 1 Covariance model of the underlying GRF

Structure Sill contribution Major range (60°E) Minor range (30°W)

Gaussian 1 120 80

Gibbs sampler. Several sets of Gaussian values are generated at the data locations in order to honour the truncation rule and the covariance function C_Y(h). Two realisations are presented in Figure ➌.

Figure 3 Two realisations of the Gibbs sampler algorithm.

Modelling the local distributions. Simple kriging is performed, given the covariance C_Y(h) and the realisations at the data locations as conditioning data. Figure ➍ presents two simple kriging estimates, derived from the realisations shown in Figure ➌, and their kriging variance, which fully defines the conditional distributions of the grf.

Figure 4 Two simple kriging estimates from two Gibbs sampler realisations and kriging variance.

Calculation of the conditional probabilities of lithofacies. The truncation rule is applied to the local distributions of the underlying grf (Equation (5)). The thresholds are directly computed by using the global proportions of lithofacies, as per Equation (3). Figure ➎ presents a workflow of the procedure, where the upper Gaussian distribution represents the prior (non-conditional) model used, with the contact relationship and global proportions expressed as the truncation of the underlying grf by thresholds T1 and T2. The lower Gaussian distribution represents a conditional distribution of the grf at a given location obtained by mgk.

Figure 5 Local probability calculations.
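The threshold computation of Equation (3) can be sketched as follows; the lithofacies proportions below are illustrative values, not those of the case study.

import numpy as np
from scipy.stats import norm

proportions = np.array([0.35, 0.40, 0.25])                 # p_1, p_2, p_3 (assumed)
cum = np.cumsum(proportions)[:-1]                          # cumulative proportions
thresholds = np.concatenate(([-np.inf], norm.ppf(cum), [np.inf]))
print(thresholds)   # [-inf, T1, T2, +inf], used to truncate the local Gaussian distributions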

The resulting local probabilities and conditioning data are presented in Figure   ➏. The
most probable lithofacies is also calculated and presented in Figure   ➐ , where the contact
relationship imposed by the truncation rule is clearly expressed.
For comparison, an indicator kriging of the lithofacies was performed using the same data and search parameters as in tgk. In this case the most probable lithofacies show violations of the contact relationship in several instances (Figure ➑). This occurs when there are data of lith1 near lith3 without data of lith2 to restrict the estimation. At the same locations, the Truncated Gaussian Kriging approach (Figure ➐) generates the intermediate unit (lith2).


Figure 6 Maps of probabilities of occurrence of each lithofacies using tgk .

Figure 7 Most probable lithofacies using Truncated Gaussian Kriging (tgk).

Figure 8 Most probable lithofacies using Indicator Kriging (IK).

discussion
The proposed approach allows generating a probability model of each lithofacies that presents a hierarchical disposition, whereas for indicator kriging this feature is not guaranteed and the number of violations is likely to increase with the number of lithofacies.
tgk is naturally extensible to consider complex contact relationships between lithofacies, using the plurigaussian simulation framework instead of the truncated Gaussian. In this case the proposed method needs to determine the conditional multivariate distribution of two or more underlying grfs. To achieve this goal, it is necessary to perform multi-Gaussian kriging or cokriging, depending on whether or not the grfs are correlated. The authors are working on this extension, named plurigaussian kriging.
In geostatistics, there is a correspondence between some stochastic imaging methods and local uncertainty models, as described in Table 2. For sequential indicator simulation and Gaussian simulation, there already exists an associated model of local uncertainty. However, for Truncated Gaussian and Plurigaussian simulations, there was no associated model until the present work.

Table 2 Correspondence between stochastic imaging and local uncertainty models

Local uncertainty model | Stochastic imaging | Type of model | Type of variable
Indicator kriging | Sequential indicator simulation | non-parametric | Categorical / Continuous
Multi-Gaussian kriging | Gaussian simulation | multi-Gaussian | Continuous
Truncated Gaussian kriging | Truncated Gaussian simulation | multi-Gaussian | Categorical
Plurigaussian kriging | Plurigaussian simulation | multi-Gaussian | Categorical

The conditional distributions of the underlying grf in conjunction with the truncation
rule can be used as input to p-field simulation [9] in order to generate realisations of the
lithofacies that honour the contact relationships and lithofacies indicator variograms.

conclusions
A methodology to obtain the probabilities of occurrence of lithofacies and to calculate the
most probable lithofacies has been presented. It allows generating lithofacies maps in a
more geological way by considering and reproducing the contact relationships between
lithofacies. The proposed approach can be used prior to simulation or as an alternative
to the traditionally used indicator kriging. It is extensible to more complex contact
relationships by considering two or more grfs, as done in plurigaussian simulation.
The formulation of the method is robust from a theoretical point of view, since it
is based on two well accepted approaches (Truncated Gaussian Simulation and Multi-
Gaussian Kriging). There is no order relation violation or border effect. The non-stationary
case can be addressed by a procedure similar to the one used in Plurigaussian Simulation,
by incorporating local proportion curves.

acknowledgements
The authors would like to acknowledge GeoInnova Consultores and the Advanced
Laboratory for Geostatistical Supercomputing (alges) at the Universidad de Chile for
supporting this research, and F. Ibarra and G. Fuster for their useful comments.

references
Journel, A. G. (1983) Nonparametric Estimation of Spatial Distributions. Mathematical Geology 15 (3),
pp. 445–468. [1]

Alabert, F. (1987) Stochastic Imaging of Spatial Distributions Using Hard and Soft Information. Master's
Thesis, Stanford University, Department of Applied Earth Sciences, p. 197. [2]


Emery, X. (2004) Properties and Limitations of Sequential Indicator Simulation. Stochastic Environmental
Research and Risk Assessment 18 (6), pp. 414–424. [3]

Matheron, G., Beucher, H., de Fouquet, C., Galli, A., Guérillot, D. & Ravenne, C. (1987) Conditional Simulation of the Geometry of Fluvio-Deltaic Reservoirs. In: 62nd Annual Technical Conference and Exhibition of the Society of Petroleum Engineers, Dallas, Texas. spe 16753, pp. 123–131. [4]

Geman, S. & Geman, D. (1984) Stochastic Relaxation, Gibbs Distributions and the Bayesian Restoration of Images. ieee Transactions on Pattern Analysis and Machine Intelligence 6 (6), pp. 721–741. [5]

Galli, A., Beucher, H., Le Loc'h, G., Doligez, B. & Heresim Group (1994) The Pros and Cons of the Truncated
Gaussian Method. In: Armstrong, M. and Dowd, P.A., eds., Geostatistical Simulations. Kluwer
Academic, Dordrecht, pp. 217–233. [6]

Armstrong, M., Galli, A., Le Loc'h, G., Geffroy, F. & Eschard, R. (2003) Plurigaussian Simulations in Geosciences. Springer, Berlin, p. 160. [7]

Verly, G. (1983) The Multigaussian Approach and its Applications to the Estimation of Local Reserves. Mathematical Geology 15 (2), pp. 259–286. [8]

Srivastava, R. M. (1992) Reservoir Characterization with Probability Field Simulation. In: 67th Annual Technical Conference and Exhibition of the Society of Petroleum Engineers, Washington. spe 24753. spe Formation Evaluation 7 (4), pp. 927–937. [9]
Using Geologic Models to Support
Resource and Reserve Estimation of
Marble Deposits in Complex Settings

abstract
Mario Baudino
Cementos Avellaneda S.A. & Universidad Nacional de San Luis, Argentina

Carlos Gardini
Universidad Nacional de San Luis, Argentina

Mario Rossi
GeoSystems International, Inc., USA

The marble deposits operated by Cementos Avellaneda S.A. in the Sierra del Gigante, approximately 100 km NW of the city of San Luis, Argentina, are complex, folded and fractured deposits with varying marble quality. In addition to the challenge of modelling the complex geology, the resource and reserve estimation of the deposit has to take into account the chemical composition of the rock, and the effect it causes on the processing kilns.
The use of a detailed geologic model is considered paramount for an accurate prediction, but at the same time the modelling has to represent as accurately as possible those aspects that are most consequential to kiln performance. In this sense, the geologic variables of interest are somewhat different than those interpreted and modelled for exploration purposes.
The detailed geological modelling of the folded structures presents
many challenges, and it is difficult to do using software tools alone.
The interpretation and evaluation of the model behaviour requires
careful analysis. It generally requires adjustments by smoothing
the different contacts in the global model.
The geostatistical methods used in resource estimation have to take into account the complex geometry and the zonation of the quality variables of the marble; while the contacts with non-marble rock tend to be hard contacts, within the marble itself the properties of interest show more or less smooth transitions that should be accurately modelled.
This paper discusses the intricacies of modelling marble
deposits in complex settings, and the impact of quality variables
in kiln performance.

introduction
The deposit is located in the Sierra del Gigante (Figure ➊), 100 km northeast of the city of San Luis, capital of the province of the same name, in central-west Argentina, and comprises several individual deposits, the most important being Cerro Redondo, Cerro La Calera, and Cerro Impuro II.

Figure 1 Location map.

The operating company (Cementos Avellaneda S.A.) applies 3-d resource models, obtained after careful geologic modelling and geostatistical grade estimation for each deposit mined. From these, the reserves are estimated, as well as the materials to be consumed during the process, and the Long-term (10 year horizon) and Short-term (one year horizon) mine plans are implemented.
The Short-term mine plan is fed back iteratively with assay results from blast holes. The samples are obtained after proper reduction and splitting of the material collected from drilling 10 m benches.
The Long-term model in particular is a reasonable representation of the carbonate content of the mined material, but it does not perform as well in predicting the contaminants fed into the processing kilns. This paper analyses the differences between the Long- and Short-term models in relation to their impact on mine planning and mine operation. The combined use of the geologic and geostatistical models, in addition to the blast hole data, allows for an improved prediction of the material that is loaded onto the homogenisation stockpile.

geologic description of the deposits


The marble deposits belong to the El Gigante metamorphic complex [1, 2], comprised of an alternating sequence of schists and marble with minor intercalated quartzite and amphibolites, which are more dominant in the North and Central areas of the mountain range (Figures ➋ and ➌).

Figure 2 Marble outcrop illustrating the degree of deformation commonly found in the area.

Figure 3 Mica-schists, abundant in the Northern part of the El Gigante mountain range.

The metamorphic rocks originated from pressure and temperature; after these processes, significant deformation at different stages produced folding and overturning of the entire sequence. As a consequence, a series of tight, asymmetrical geometries was generated, with isoclines orientated on average to the South and an axis dip to the East. There is also a significant disharmony between the schists and marbles due to their rheological contrast and, in addition, the amphibole bodies add further complications.
An additional issue to consider is that the marble's ductility complicates the understanding and interpretation of its geometries, because deformation and metamorphism produce stretching and thickening of the isoclinal hinges (Figure ➍).


Figure 4 Details of a fold in the Northern area of the Sierra.

Figure 5 Folded structure of the marble units (top) and profile showing folding of the marble units with significant disharmonies (middle) and its representation (below).

The marble's sequence, particularly towards the North, is relatively homogeneous, with
70° to 100° azimuth. The relative hardness of the marble with respect to the schists
explains the morphology of the area, such as the hills that are currently being mined.
Folding generates designs in S, Z, and M shapes, with the first one being the most
common in the mine areas. Figure   ➎ shows the crest of the Central area.
With the deformation model and field work to confirm and obtain detailed mine-scale
observations, a final deformational model was developed for the entire district, applicable
to the interpretation of the marble layers and sequences, which was done on sections.
The significant deformation observed makes correlation of the stratigraphy difficult, both from the geometric standpoint and considering the compositional variability of the subunits. Figure ➏ illustrates the different structural characteristics encountered in each deposit, while Figure ➐ shows a schematic of the interpretative model applied.

Figure 6 Schematic showing structural types in different deposits.

Figure 7 Structural model for marble sequences.

chemical data and mineralogy analysis


To characterise the deleterious elements that affect kiln performance, several assays are obtained, which include SiO2, Al2O3, Fe2O3, K2O, Na2O and SO3. The assays were obtained from exploration drill hole core material, as well as blast hole cuttings and samples taken from the mine faces. Also, the mineralogy was described using optical microscopy.
The main lithologies present in the three areas currently being mined (Cerros La Calera, Redondo, and Impuro) were described and their mineralogy characterised. The most significant impurities are quartz (SiO2), with the chalcedony and opal phases being a minor content; muscovite (KAl2(Si3Al)O10(OH,F)2); and graphite (C).
The phyllosilicates are most easily identified and quantified under microscope, since
these are muscovite-type micas. Graphite, on the other hand, is present on almost every
unit defined, but is generally not abundant.
The different units are defined based on KST (Lime Standard), which quantifies the
ratio of calcium oxide to hydraulic factors, and is defined according to Equation (1) :

(1)

In the case of the Cerro Redondo, for example, four main units are defined and shown
in Table 1 ; other deposits have slightly different definitions.

Table 1 Lithology units defined for Cerro Redondo

Lithologic Unit KST Comments

Marble 1 >90 High grade unit


Marble 2 65-90 Medium grade unit
Marble 3 50-65 Low grade unit
Schists 0-50 Barren

A key parameter of interest is the GS (degree of sulfation), which is defined as:

(2)

Another parameter that is an indicator of kiln performance is MS, the silicate modulus, which represents the weight ratio of silica to the sum of alumina and ferric oxide. MS varies between 1.9 and 3.2; higher values imply a lowering of the liquid phase in the kiln, with calcination and clinkering conditions worsening, which results in cements that require more time to harden and set. MS is defined as:

$$MS = \frac{SiO_2}{Al_2O_3 + Fe_2O_3} \qquad (3)$$

The flux modulus (MF) is the weight ratio of the minerals that result in the liquid phase:

(4)

There is up to 25% SO3 in units for which the only sulphur minerals, FeS2 and CuFeS2 (pyrite and chalcopyrite), do not exceed 2% of the total sulphur content. The alkali content, mostly as Na2O, is high in relation to the rest of the mineralisation for the lithology units of interest.

geostatistical analysis and modelling


Figure  ➑ shows the relative average content of SiO2, SO3, Na2O, and K2O for each of
the three deposits being mined, showing their relative abundance and variability across
deposits. In this figure all results from the exploration drill holes and the production
information (blast holes) for the 2009 calendar year are included.

Figure 8 Average oxide content for silica, sulphur, sodium and potassium per ore type.

The assays indicate that the greater S values (mostly as gypsum and pyrite) are found in
Cerro La Calera deposit (high grade) and Cerro Impuro (low grade).
The higher silica concentrations are found in Cerro Impuro in the form of quartz
(mostly), with minor chalcedony. The larger amounts of alkalis are found in Cerro Impuro
and Cerro La Calera, as a mixture of muscovite mica and K-feldspar.
To date, modelling techniques have included applying the Inverse Distance Squared
method (ids) to obtain estimated values for at least three key indicators (KST, MS, and
GS), rather than the individual deleterious materials.
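For reference, a minimal sketch of an inverse distance squared estimate of the kind mentioned above is shown below for a single block; the coordinates and KST values are illustrative, and no search-neighbourhood or declustering logic is included.

import numpy as np

def ids_estimate(block_xyz, sample_xyz, sample_values, power=2.0, eps=1.0e-6):
    """Inverse distance weighting of nearby samples to a block centroid."""
    d = np.linalg.norm(sample_xyz - block_xyz, axis=1)
    w = 1.0 / (d + eps) ** power          # inverse distance squared weights
    return float(np.sum(w * sample_values) / np.sum(w))

block = np.array([100.0, 250.0, 50.0])                       # block centroid (illustrative)
samples = np.array([[90.0, 240.0, 45.0],
                    [120.0, 260.0, 55.0],
                    [80.0, 270.0, 60.0]])                     # sample locations (illustrative)
kst_values = np.array([92.0, 71.0, 58.0])                     # KST assays at those samples
print(ids_estimate(block, samples, kst_values))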
Block models are used to estimate grades from the exploration drill holes and production blast holes.
processing and final product quality have necessitated a closer look at the long- and short-
term modelling practices. Figure   ➒ shows an isometric view of the Cerro Impuro model,
color-coded by KST values, obtained by a direct estimation of KST values from drill holes
and production blast holes into blocks.
Future upgrades to the resource estimate should consider more explicitly the intricacies of
the deposit geology, as well as the spatial distribution of the individual deleterious elements.

Figure 9 Isometric view of the KST model, Cerro Impuro.

discussion and conclusions


The Sierra El Gigante marble deposits are located within a massif with high structural complexity, complicating their geologic modelling and the prediction of grades and key kiln performance indicators. In addition to the modelling difficulties, three deposits with very different geologic characteristics are being mined simultaneously.
The chemical assays are used to define a material mixture to be sent into the processing plant; this stockpile has to be accurately predicted, and thus the short-term information coming from mining faces and the blast hole assays is key; the practice of visually estimating a grade, even under a microscope, although helpful, is insufficient to predict the high level of variability.
Having recognised the complexity of the deposits and the intrinsic difficulties in accurately modelling their geology and predicting grades, the added value of improving the modelling methodology should become self-evident.
The first significant improvement was recognising the need to incorporate the complicated structural geology to develop a good predictive model; although detailed descriptions of the geologic and structural characteristics of the deposits were available before (see, for example, [2]), only in the last two to three years has a concentrated effort in geologic interpretation and modelling been successfully implemented. This is summarised in this paper.
Next in the path of continuous improvement is the search for a better estimation
technique. First, the estimated models should be upgraded from using Inverse Distance
Squared estimation (ids) to some form of linear or non-linear kriging estimator, depending
on the distribution of the variables being estimated. Second, the blocks in the resource
model should be populated with estimates of the individual elements or minerals, and
after this estimation, at the block level, the key kiln performance indicators KST, GS, and
MS should be calculated according to Equations (1) to (4) . Since the kiln performance
indicators result from ratios, direct block estimation using a linear estimation technique
of these variables is theoretically incorrect.
From a pragmatic viewpoint, it is not yet understood whether the resulting KST, GS,
and MS estimates can still be acceptable approximations of the unknown true values. But
it would be safer to estimate directly the individual elements, which do upscale linearly,
and then combine them at the block level to provide the key indicators block values. A
worthwhile exercise would also be to compare the two approaches, since the differences
provide insight into the short-scale spatial distribution of each variable.
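As a sketch of this "estimate the elements, then combine at the block level" approach, the snippet below computes the silicate modulus MS block by block from individually estimated oxide grades, following the ratio given in the text; the block values are illustrative placeholders, and KST, GS and MF would be derived analogously from their constituent oxides.

import numpy as np

def silicate_modulus(sio2, al2o3, fe2o3):
    """MS computed block by block from individually estimated oxide grades (wt %)."""
    return sio2 / (al2o3 + fe2o3)

# individually estimated oxide grades for a few blocks (illustrative values)
sio2  = np.array([4.2, 6.1, 3.3])
al2o3 = np.array([1.1, 1.8, 0.9])
fe2o3 = np.array([0.6, 0.9, 0.5])
ms_block = silicate_modulus(sio2, al2o3, fe2o3)
# KST, GS and MF would be computed the same way from block estimates of their
# constituent oxides, rather than being estimated directly as ratios
print(ms_block)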
Another group of techniques worth investigating is compositional data analysis, since the chemical and mineralogical assays can be considered to sum up to 100% of the rock. The potential advantage of this type of estimation is that the variables are estimated such that their relative proportions are maintained.
From the processing perspective, the variability of the deleterious elements can
be more critical than the absolute content itself. This implies that additional future
upgrades to the geostatistical analysis and modelling would be a greater focus on
the prediction of the expected variability from the long- and short-term mine plans.
The best method to accomplish this is an uncertainty and risk analysis based on
geostatistical conditional simulations.

references
Gardini, C. (1991) Geología del Basamento de la Sierra de El Gigante, Provincia de San Luis. Unpublished
Doctoral Thesis, Departamento de Geología, Universidad Nacional de San Luis., p. 249. [1]

Criado Roque, P., Mombru, C. & Ramos, V. (1981) Estructura e Interpretación Tectónica. In Yrigoyen,
M. (Ed): “Geología y recursos naturales de la Provincia de San Luis: viii Congreso Geológico
Argentino”, Relatorio: pp. 155–192. [2]
Collocated Cosimulation with
Multivariate Bayesian Updating:
A Case Study on the Olympic
Dam Deposit

abstract
Mario Rossi
GeoSystems International, Inc., USA

Colin Badenhorst
Shane O'Connell
BHP Billiton, Australia

The Olympic Dam deposit is the world's largest single uranium resource, the world's fourth largest copper resource and Australia's largest gold resource, and it has been exploited by underground mining methods for more than two decades.
Conditional Simulation (cs) models were developed to provide models of uncertainty which could be compared to the existing resource classification scheme, as well as to provide an assessment of mine plan risk and short-term variability of metallurgical feed profiles to the plant.
profiles to the plant.
Developing the cs models presented several challenges.
In addition to the massive size of the deposit and the spatial
correlation among the different commodities, very large datasets
and models provided some unique logistical challenges.
The uncertainty in the key geological variables (haematite abundance and sulphide mineral species) is characterised by 30 realisations derived using Sequential Indicator Simulation and the Maximum A-posteriori Selection (maps) algorithm. Simulation domains were then constructed for each realisation and used to condition the simulation of copper (Cu), uranium oxide (U3O8),
gold (Au), silver (Ag), sulphur (S), and in-situ bulk density (SG). The
spatial correlation between Cu, U3O8, S and Au was modelled using
Gaussian collocated co-simulation with Bayesian updating, while
the remaining grade variables were independently simulated
using Sequential Gaussian Simulation (sgs).
The resulting cs models are used to evaluate the resource
classification scheme, provide an analysis of recoverable resources,
and provide mill feed grade profiles for different time periods. The
ability to evaluate this level of information at an early stage of
the expansion project is invaluable to its progress and subsequent
decision making.

introduction
The Olympic Dam orebody represents the world's largest single uranium resource, the
world's fourth largest copper resource and Australia's largest gold resource. The January
2009 resource model for the Olympic Dam deposit recorded a total resource of 9.080 billion
tonnes at 0.87% Cu, 0.27kg/t U3O8 and 0.32 g/t Au, within approximately 12 km3 of the
mineralised Mesoproterozoic crystalline basement of the eastern margin of the Gawler
Craton. It is covered by approximately 350 metres of flat-lying Neoproterozoic to Cambrian
sedimentary rocks, separated from the basement by a major sub-horizontal unconformity.
The principal host for mineralisation is the Olympic Dam Breccia Complex (odbc), which
describes all breccias and related lithologies associated with the Olympic Dam mineralised
environment. This complex is hosted within, and is largely composed of the Roxby Downs
Granite, and includes lesser contributions of the felsic to intermediate volcanics and
numerous intrusions of mafic/ultramafic and felsic dykes. Copper sulphide (bornite,
chalcocite and chalcopyrite), coeval and contiguous uranium (uraninite, brannerite and coffinite) mineralisation occurred approximately 1,590 million years ago, and is characterised by a continuum of weakly to strongly haematised breccias. Central to the deposit is a zone of intense haematite-quartz breccia which is largely void of mineralisation.
The major controls on mineralisation of the Olympic Dam deposit are described by
the inter-relationship between haematite abundance and sulphide mineral species.
Importantly, there is a significant and unequivocal spatial correlation between Cu, U3O8,
Au (associated with sulphide) and Ag mineralisation across the deposit as a consequence
of the co-precipitation of these elements as several minerals. Thus, the controls on Cu
mineralisation are indeed the same controls on U3O8, Au (associated with sulphide) and Ag.

Figure 1 Depiction of an idealised cross-section through the Olympic Dam deposit showing the
distribution of sulphide mineral species within the ODBC.

methodology
Simulation of geologic variables
The simulated geologic model was developed based on haematite (DomLit) and Cu mineral
species (DomMin) variables that are used to define the estimation and simulation
domains. The categorical sis method [1] with a local-varying mean (lvm) uses the same
principles as the Multiple Indicator Kriging (mik) technique for grade simulation and
estimation, except that it deals with a categorical variable [2] .
The simulation parameters are not only dependent on the software used for simulation,
but also are part of the “uncertainty model”, in the sense that the different parameters
(search ellipsoids, number of samples and simulated values to be used, the definition of
octant searches, etc.) may provide different levels of uncertainty. Thus, these parameters
need to be carefully considered [3] .
Pangeos software (www.statios.com) was used for both geological and grade simulation.
The main steps used to simulate the geological variables were:

• The categorical variables are transformed into a series of indicators: seven for the
haematite categories and four for the sulphide mineral species categories. The
indicators defined the presence or absence of each category.

• The basic statistics (proportions or relative abundance) of each category are obtained,
as well as the corresponding directional indicator variogram models.

• A Locally-Varying Mean (lvm) model was created using Inverse Distance cubed, and
the subsequent local mean value at each simulation node was then used to condition
the simulated values. This local mean value aids the simulation process in accounting
for trends and departures from the strict stationarity assumption required by the
simulation method.

• At each node being simulated, a cumulative frequency curve representing the probability of each category present at that location is obtained using mik. A random number between 0 and 1 is then drawn and the corresponding categorical value is then selected accordingly (see the sketch after this list).

• After incorporating the previously simulated node into the simulation database, the
process is repeated until all nodes on 10 x 10 x 5m spacing are simulated. The resource
block model uses blocks that are 30 x 30 x 15m, which means that there are 27 simulated
nodes within each resource model block. This process is repeated independently for
each domain, culminating in a model comprising 29.5 million nodes.

• A post-processing routine (maps, [4] ) was used to locally modify the simulated values
such that low probabilities for some categories in areas of unlikely occurrences
were cleaned up. This is important particularly in those areas where the category is
known to be a singularly massive unit, but the simulated model may present non-
existing simulated categories stemming from very low probabilities of occurrence.
The maps clean-up changes about 1 to 2% of the simulated values for each category, and thus provides a small improvement in the reproduction of the original statistics.
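The category-drawing step referred to above can be sketched as follows; the local probabilities are illustrative, and the clipping/renormalisation line is only a guard for the example, not part of the published workflow.

import numpy as np

def draw_category(probabilities, rng):
    """Monte Carlo selection of one category from its local mik probabilities."""
    p = np.clip(np.asarray(probabilities, dtype=float), 0.0, None)
    p = p / p.sum()                      # guard: enforce a valid probability vector
    cdf = np.cumsum(p)                   # cumulative frequency curve
    u = rng.random()                     # random number between 0 and 1
    return int(np.searchsorted(cdf, u))  # index of the simulated category

rng = np.random.default_rng(42)
local_probs = [0.10, 0.55, 0.25, 0.10]   # e.g. four sulphide mineral species categories
print(draw_category(local_probs, rng))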

A total of 30 realisations were obtained for both categorical variables. The combinations, at each node, of these two variables define the 30 simulation domains used to uniquely condition each grade simulation. Thus, a measure of the uncertainty introduced by the geologic model is also introduced into the simulated grade model.


Validations
The simulated values should reproduce the basic statistics of the original data (5m
composites), as well as the variogram models used in the simulation. In the case of
the categorical (geological) variables, the basic statistics to be reproduced are simply
the proportions of each category within the original database. Generally, these are well
reproduced with the exception of volumetrically small domains.
The reproduction of the spatial variability model used to simulate each indicator is
also checked. Figure  ➋ shows the comparison for the chalcopyrite-pyrite (cpy-py) unit
in Domain 320. The overall tendency, with few exceptions, is for the simulated values
to exhibit more continuity than suggested by the 5 m composites, although within
acceptable margins.
Statistically, there is a tendency of the simulated values to disguise the differences in
proportions observed in the 5 m composites. This is partially due to local data clustering
and difficulties in obtaining representative statistics for each domain. In addition, the
variability in some domains is relatively high, such as domains 310, 320, and 340, which
are both small and narrow in certain directions with respect to variogram ranges.
Spatially however, the simulated values honour well the original data and reproduce
the spatial textures and patterns of connectivity observed in the 5 m composites and the
resource model. As examples, Figure  ➌ shows the DomLit and DomMin variables for
realisation No. 1, Bench -792.5m. The variability observed is deemed representative of
that observed in the drill hole data and confirmed by underground geological mapping.

Figure 2 CPY-PY indicator variogram with 5m composite models — Domain 320, realisation No.10.

Figure 3 DomLit (left) and DomMin (right) realisation No.1, bench -792.5mRL.

Collocated co-simulation with multivariate Bayesian Updating


Cu, U3O8, Au and S were co-simulated using collocated cokriging [5] with Bayesian updating, while Ag and in situ density (sg) were simulated independently using Sequential Gaussian Simulation [6]. As with the sis with lvm categorical simulations, 30 realisations
were obtained. The steps used to obtain the grade simulations were:

• The basic statistics and variogram models were obtained, and declustering, despiking,
Gaussian transformations and Gaussian variogram modelling were completed on the
database.

• The collocated simulation was established with Cu being the first independently
simulated variable. The correlation between U3O8 and Cu was modelled by simulation
domain, and U3O8 was then simulated using the previously simulated Cu as a secondary collocated variable.

• S was then simulated after calculating the combined correlation of Cu and U3O8 as a linear combination of the individual correlations. This is called the super-secondary variable (ssv) of Cu-U3O8 [7], and is used as the collocated secondary variable. The use of the ssv is justified by the fact that the dependencies between variables are linear; in the case of non-linear dependencies, the Stepwise Conditional transformation [8] may be more appropriate. Figure ➍ shows a schematic summarising the process for simulating S (a sketch of the ssv construction follows this list).


• After the simulation for S is completed, the process is repeated for Au. The corresponding
ssv variable is generated as a linear combination of Cu, U3O8, and S, as well as the
collocated correlation from the 5 m composites.

• Note that each co-simulation uses updated collocated correlation values in the Bayesian
sense, generating an ssv to account for the multiple correlations among the variables
being simulated. The algorithm uses an LU decomposition method to account for the
correlation between the data and the simulated values.
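A hedged sketch of one way to build a super-secondary variable from two collocated secondaries (here Cu and U3O8 in Gaussian units) before co-simulating S is given below. The weights are assumed to solve the correlation system among the secondaries, with the merged correlation then used in the collocated cokriging; this is an illustration under that assumption, not the exact recipe used for the Olympic Dam model, and all correlation values are invented.

import numpy as np

def super_secondary(secondaries, corr_sec, corr_prim):
    """
    secondaries : array (n_nodes, n_sec) of collocated secondary values (Gaussian units)
    corr_sec    : (n_sec, n_sec) correlation matrix among the secondaries
    corr_prim   : (n_sec,) correlations of each secondary with the primary
    Returns the merged (super-secondary) values and their correlation with the primary.
    """
    weights = np.linalg.solve(corr_sec, corr_prim)      # linear-combination weights (assumed form)
    merged_corr = float(np.sqrt(corr_prim @ weights))   # correlation of the ssv with the primary
    merged = secondaries @ weights
    merged = merged / np.sqrt(weights @ corr_sec @ weights)   # rescale to unit variance
    return merged, merged_corr

# invented correlations, for illustration only (not the project values)
corr_sec = np.array([[1.0, 0.8], [0.8, 1.0]])           # Cu-U3O8
corr_prim = np.array([0.6, 0.5])                         # Cu-S and U3O8-S
sims = np.random.default_rng(0).multivariate_normal([0.0, 0.0], corr_sec, size=5)
ssv, rho = super_secondary(sims, corr_sec, corr_prim)
print(rho)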

After validation, the 30 realisations were regularised to the same block size as the
resource model, such that block-to-block comparisons could be made. The cs model was
also used in several studies, which require the models to be on the same block size as
the resource model.

Figure 4 Accounting for multivariate correlations using the super-secondar y variable concept.

Figure ➍ shows the correlations considered to simulate S, Cu and U3O8.

Simulation plans
The simulation models were obtained using search radii of 300 x 300 x 300m or
350 x 350 x 350m, depending on the variable being simulated. An isotropic search was
used to provide sufficient opportunity for data from all directions to contribute to the
simulated value. The variogram models were left with the task of reproducing the
spatial anisotropies observed.
Between 10 and 12 total values were used as conditioning data, combining both original
composites and previously simulated nodes. This was found to be sufficient, since the
simulated values did not change significantly with more conditioning data. Several other
simulation options were tried during the development of the models, including the option of restricting the simulation to areas where at least one 5m composite (original data) was found. A multiple grid search was used, but no octant searches were applied.
Variogram models for Gaussian data were developed for each grade variable and
Domain. The data used for the transformation was the despiked 5m composites. Also,
the minimum and maximum grades used in the back transformation from the simulated
Gaussian space to the original were modified according to the ranges observed in the 5m
composites and by looking at the corresponding probability plots.

Simulation statistics and validations


The reproduction of the basic statistics and histogram of simulated Cu grades (not
shown here) is in general very good in terms of the mean and the median, as well as
the variance and thus the coefficient of variation.
Figure  ➎ shows a Q-Q plot comparing simulated Cu values and 5m composites for
simulation No. 25, Domain 4406, with a very good match shown. This is generally the
case for most of the simulations and domains.

Figure 5 Q-Q plot, realisation No. 25 vs. 5m composites, Cu, Domain 4406.
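The Q-Q comparison can be sketched as follows; both arrays are illustrative stand-ins for the 5m composites and the simulated node values of one domain.

import numpy as np

def qq_pairs(simulated, composites, n_quantiles=99):
    """Matching quantiles of two samples; points near the 1:1 line indicate a good match."""
    q = np.linspace(1, 99, n_quantiles)
    return np.percentile(composites, q), np.percentile(simulated, q)

rng = np.random.default_rng(7)
composites = rng.lognormal(0.0, 0.6, 500)      # stand-in for 5m composite Cu grades
simulated = rng.lognormal(0.02, 0.62, 5000)    # stand-in for simulated node values
x, y = qq_pairs(simulated, composites)
print(float(np.max(np.abs(x - y))))            # largest quantile departure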

The correlograms of the simulated values tend to show more variability than the models obtained from the 5m composites. This is the opposite of what was observed in the case of the geologic variables. Figure ➏ shows three directions for Cu, realisation 1, Domain 4109. These are some of the domains where the correlogram model is better reproduced. The overall conclusion is that the level of reproduction of the spatial variability is acceptable, and thus that the simulation model adequately reflects the information used.
The other important aspect that should be analysed is the overall spatial patterns
observed in the simulated values. As was done with the DomLit and DomMin categorical
variables, the simulated grades were visualised in plan and sectional views. Figure ➐
shows Cu simulation 1 at elevations -590m. The grade distribution is correctly reproduced
in a global and local sense, validating the methodology applied.


Figure 6 Directional Cu correlograms from simulated values with models from original 5m composites, Domain 4109, realisation 1.

Figure 7 Cu realisation No. 1, -590mRL.

In addition to the univariate validations, it is also important to check whether the correlations among variables are reproduced. This was the case for most domains, although the general tendency is for the simulations to reproduce less correlation (as measured by the linear correlation coefficient) than the original drill hole data shows. Comparisons based on rank correlations were sometimes significantly better. This is believed to be partly due to the higher variability of the simulated nodes, and partly due to the lack of robustness of the correlation coefficients derived from composites in small or highly variable domains.
Table 1 shows the matrix of the overall linear correlation coefficients comparing simulation 1 and the original data. The comparison is made by taking the correlation coefficient found in simulation 1 and subtracting the corresponding correlation of the 5m composites. A negative value implies that the simulation has less correlation than the original data. Note that the simulated Cu, U3O8, and S have globally reproduced the correlations well, although on a domain-by-domain basis the differences are larger. Au is less correlated to the other three to begin with (between 0.25 and 0.30 globally), so realisation 1 has more trouble reproducing that lower correlation.

Table 1 Global correlation matrix as differences between realisation No. 1 and original 5 m composites

      | Cu    | U3O8  | S     | Au
Cu    | 0.00  | -0.06 | -0.03 | -0.14
U3O8  | -0.06 | 0.00  | -0.04 | -0.17
S     | -0.03 | -0.04 | 0.00  | -0.19
Au    | -0.14 | -0.17 | -0.19 | 0.00
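The validation behind Table 1 can be sketched as the correlation matrix of one realisation minus that of the 5m composites (so negative entries mean the realisation shows less correlation); the synthetic arrays below are illustrative only.

import numpy as np

def correlation_difference(realisation, composites):
    """Both inputs: arrays (n_samples, n_variables) ordered as Cu, U3O8, S, Au."""
    return np.corrcoef(realisation, rowvar=False) - np.corrcoef(composites, rowvar=False)

rng = np.random.default_rng(3)
cov_data = np.array([[1.0, 0.7, 0.6, 0.3],
                     [0.7, 1.0, 0.6, 0.25],
                     [0.6, 0.6, 1.0, 0.3],
                     [0.3, 0.25, 0.3, 1.0]])
cov_sim = np.array([[1.0, 0.65, 0.55, 0.2],
                    [0.65, 1.0, 0.55, 0.1],
                    [0.55, 0.55, 1.0, 0.15],
                    [0.2, 0.1, 0.15, 1.0]])
composites = rng.multivariate_normal(np.zeros(4), cov_data, 1000)
realisation = rng.multivariate_normal(np.zeros(4), cov_sim, 1000)
print(np.round(correlation_difference(realisation, composites), 2))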

conclusions
The conditional simulation model for the Olympic Dam deposit has been developed
using a co-simulation methodology that accounts for several correlated grade variables.
The observed local variability is high, as expected, which is partially induced by the
geologic variability. With a few exceptions, both the simulated geological and grade
variables (and the correlation matrix between the grade variables) validate well against
the original 5m composites used in the simulation.

The key findings of this work and the resulting uncertainty and risk analyses are:

• In some local cases, the cs model tends to be more optimistic in terms of grades and
tonnages above cutoff than the resource model. This is particularly noticeable for Au
mineralisation.

• The continuity and spatial characteristics observed in the 5m composites are reproduced well in the haematite and sulphide mineral species simulations (the underlying geological controls on mineralisation).

• The cs model has proven to be a useful tool for studying variability (uncertainty) and
its associated risks. In this case, three specific applications were developed:

––A comparison of the uncertainty model (cs) with the resource model suggests that for
volumes greater than 15 to 20 million tonne parcels, the grade variability decreases
rapidly.
––The development of daily, weekly, and monthly concentrator and smelter material
feed profiles. The method developed using the simulated grades and tonnages
to obtain feed profiles shows the level of variability that can be expected for key
variables such as the copper-sulphur (Cu:S) ratio, as well as the individual grades.
––A comparison of the grade tonnage curves for different domains and mining periods
with respect to the resource model. The most significant difference is observed
with respect to Au grades, which is expected given its higher variability and the
methodology used to estimate Au in the resource model.


The model developed has very few precedents in terms of methodology, and is challenging
primarily because of its size. The uncertainty and risk analysis derived from the model
highlights some areas of improvement required for subsequent resource models. The
simulation model is also a useful tool that, when used in conjunction with the resource
model, will allow the most uncertain aspects of mine development to be focused on, and
thus provide a foundation for risk mitigation.

acknowledgements
BHP Billiton is gratefully acknowledged for allowing us to publish this paper. Other
members of the Olympic Dam Resource Team, in particular A. Bottrill and M. Smith,
are also gratefully acknowledged for their support at various stages during project
development.

references
Alabert, F. (1987) Stochastic Imaging of Spatial Distributions Using Hard and Soft Information, MSc. Thesis,
Stanford University, Stanford, ca, p. 197. [1]

Rossi, M. E. (2005) Indicator Simulations of Categorical Variables, Proc. of the 32nd International
Symposium on Applications of Computers and Operations Research in the Mineral Industries
(apcom), Tucson, Arizona, March 30-April 1. [2]

Rossi, M. E. (2003) Practical Aspects of Large-Scale Conditional Simulations, Proceedings of the 31st apcom, Cape Town, South Africa, May 13–15. [3]
Deutsch, C. V. (1998) Cleaning Categorical Variable Realisations with Maximum A-Posteriori Selection, Computers and Geosciences, 24(6), pp. 551–562. [4]

Almeida, A. S. & Journel, A. G. (1994) Joint Simulation of Multiple Variables with a Markov-Type
Coregionalization Model, Mathematical Geology, Vol. 26, No. 5, pp. 565–588. [5]

Isaaks, E. H. (1990), The Application of Monte Carlo Methods to the Analysis of Spatially Correlated Data,
PhD Thesis, Stanford University, Stanford, CA, p. 213. [6]

Babak, O. & Deutsch, C. V. (2009) Collocated Cokriging Based on Merged Secondary Attributes, Mathematical Geosciences 41(8), pp. 921–926. [7]

Leuangthong, O. & Deutsch, C. V. (2003) Stepwise Conditional Transformation for Simulation of Multiple
Variables, Mathematical Geology, 35(2), pp. 155–173. [8]
Indicator Cokriging for Construction
and Conditional Cosimulation for
Comparison of Resources Models

abstract
Rodrigo Riquelme
GeoInnova Consultores Ltda., Chile

Carlos Cisterna
Codelco Norte, Chile

Quetena is a porphyry copper deposit located in the Chuquicamata district. The oxide zone has an interesting economic potential for Codelco Norte. For this reason, the correct volume and characterisation of the copper solubility ratio is a critical factor for the Quetena project, presently in the conceptual (scoping) study stage.
Currently, the project has two available resource models. The
first model has been built using a definition unit exclusively based
on the geological mapping. This model was created by manual
interpretation (deterministic). The second model uses a definition
unit based on geological mapping, total copper (CuT) and soluble
copper (CuS) contents, and solubility ratio (CuS/CuT). The spatial
extent of the units was performed in a probabilistic way using
indicator cokriging of the units. In both models, the total and
soluble copper grades are estimated within the geological units.
In order to compare the robustness of both models in relation
to solubility ratio and indirect definition of the units, a global
conditional cosimulation (using turning bands) of the CuT and
CuS grades was done at a point support. For cosimulations 100
realisations were generated. At each node, the expected conditional
solubility ratio was calculated at point support by averaging the
solubility ratio (CuS/CuT) for each realisation. This solubility ratio
has been compared within the two resources models: probabilistic
and deterministic.
In this work, the advantages and disadvantages of using deterministic and probabilistic models at an early stage of a mining project are presented. It is noted that the probabilistic models are complementary and can guide the construction of the traditional geological models in a quicker, more precise and reliable way.

introduction
Quetena is a porphyry copper deposit located in the Chuquicamata district, discovered in 2002 close to the Toki deposit.
Codelco Norte Division (dcn) has resources of over 17,000 million tonnes at 0.52% total copper grade; the resources are mostly sulphide minerals. Codelco Norte will face the depletion of leachable ore reserves in the next decade, because Mina Sur and Radomiro Tomic will exhaust their oxide reserves, leaving hydrometallurgical plants available. For this reason, the oxides of the Quetena project are strategic to continue the sx-ew production lines. In consequence, the correct volume and characterisation of the oxide zones and the copper solubility ratio are critical factors for the feasibility of the project. In contrast, the sulphide ore of the project is not a high priority for Codelco Norte because the sulphide mineralisation (chalcopyrite and bornite) has low leaching potential.
Currently, the project is passing from the advanced exploration stage to a scoping study. During the last infill drill hole campaign (2008), over 10,000 metres of diamond core were drilled.
Historically, the Quetena project has presented problems with estimating the proportion of the structurally leached unit because of the low data density (drill hole grid spaced over 100 x 100 m).
The project developed two resource models. The geological units used as mineralisation controls for both models were mineral zones, which are associated with supergene processes. The first model has been built using a definition of units based exclusively on the geological mapping, regardless of grades. This model was created by manual interpretation (deterministic) on sections spaced every 100 metres in the North–South direction and plans each 30 levels. The second model (probabilistic) uses a definition of units based on geological mapping, Total Copper (CuT) and Soluble Copper (CuS) contents, and the solubility ratio (CuS/CuT). The spatial extension of the units was modelled in a probabilistic way using indicator cokriging of the units. In both models, the total and soluble copper grades are estimated within the geological units.
Both models were constructed based on the same database, with two unit codings. A stochastic approach was applied with the goal of validating and comparing the robustness of both models in relation to the solubility ratio and the indirect definition of the units. A global conditional cosimulation (using turning bands) of the CuT and CuS grades was done at a point support. For the cosimulation, 100 realisations were generated. At each node, the expected conditional solubility ratio was calculated at a point support by averaging the solubility ratio (CuS/CuT) over the realisations. This solubility ratio has been compared with the two resource models: deterministic and probabilistic.

data information
The current spacing between diamond core drill holes is on the order of 100 x 100 m. Quetena has over 36,000 metres of diamond drill holes with geological mapping records and grade analyses. The database is based on samples with a 1.5-metre support.
Figure ➊ shows, in plan view, the drill hole traces, distinguishing the historical and the latest campaigns.

Figure 1 Plan view with old drill holes and the 2008 drill hole campaign.

Geology and units


Quetena has a gravel overburden of 100 metres thickness on average. The mineralisation was emplaced by means of a tonalitic porphyry, and the host rocks (composed mainly of tonalites and andesites) were also mineralised.

Oxides zones units


Both approaches (probabilistic and deterministic) model the same oxide units, but these were defined with different geological attributes:

• Green Oxides (oxv): This unit is characterised by a high presence of chrysocolla, malachite, pseudomalachite and atacamite, and a low abundance of black oxides. This unit should have control over the distributions of high total copper grade and high solubility ratio (CuS/CuT).

• Black Oxides (oxn): unit with abundance of copper wad, copper black oxides, tenorite
and others. This unit is located around the green oxides units. Also, this unit has an
important presence of limonite. The solubility of black oxides is medium, explained
by the presence of minerals with low leaching kinetics.

• Leach (lix): this unit is characterised by the presence of disseminated limonite and structurally controlled limonite. Structural leaching is associated with faults, dykes or highly permeable zones, where fluids were channelled, altering and leaching the mineralisation, producing iron oxides and removing copper oxides. Copper-bearing mineralisation is scarce. This unit presents a low solubility ratio.

• Mixes (mix): this unit represents the coexistence of sulphide and oxide mineralisation, i.e. it is possible to find sulphide mineralisation with the presence of oxide minerals. The solubility ratio is low in this unit.

The green oxides lie in the centre of the ore body, surrounded by the black oxides. These
units are crossed by the structural leach, and the amount of limonite increases towards
the outer part of the deposit.


The historical geological mapping presents quality deficiencies in the logging and
consequently in the classification of the oxide zones among green oxides, black oxides,
leached and mixed. The visual characterisation of oxide minerals is complex and requires
good training and experience to distinguish them.

methodology
For the project, two resource models have been developed:

Deterministic model: the first model was built with recoded units based exclusively
on geological mapping, without consideration of the total and soluble copper grades. The
model was created by plan-view interpretation of the mapped information using a set of
control sections; the plan-view interpretations were then modelled and finally extruded
(the traditional way). The model was then estimated using ordinary kriging of the total
and soluble copper grades within each unit and finally reblocked to 25 x 25 x 15 m.

Probabilistic model: the second model uses a different unit recoding based on
geological mapping, total copper (CuT) and soluble copper (CuS) contents, and the solubility
ratio (CuS/CuT). The spatial extension of the units was modelled in a probabilistic way using
indicator cokriging (ick) of the units. Indicator cokriging provides proportions or
probabilities for each unit in each block. CuT and CuS grades were then estimated for
each unit. Finally, the proportions and grades were weighted to obtain the total and
soluble copper grade estimates, and the final block model was reblocked to 15 x 15 x 15 m.
Indicator cokriging has advantages over independent indicator kriging for models where
the units show a border effect or are nested [1]. Direct and cross indicator variograms
of the four units are shown in Figure ➋.

Figure 2 Direct and cross indicator variograms. Dashed lines indicate sample variograms and authorized envelopes for cross-variograms; solid lines indicate modelled variograms.

Figure ➌ presents a perspective view of the green oxides proportions (co)estimated by ick.

Figure 3 Isometric view, probability or proportions of green oxides and drillhole data.

Comparison with solubility ratio calculated from cosimulation of CuT and CuS
The robustness of both models was compared with respect to a global estimate of the spatial
distribution of the solubility ratio obtained in an indirect way. The solubility ratio was
calculated in each block by means of conditional simulation, which allows comparing the
definition of the units. A conditional Gaussian cosimulation (using turning bands [2]) of the
CuT and CuS grades was done at point support without considering the geological units,
generating 100 realisations. At each node, the expected conditional solubility ratio was
calculated at point support by averaging the solubility ratio (CuS/CuT) over the
realisations: this is the conditional expectation estimator [3]. This solubility ratio was
then compared to both available resource models.
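The conditional expectation estimator amounts to averaging CuS/CuT over the realisations at every node. A minimal sketch, assuming the cosimulated grades are stored as arrays of shape (number of realisations, number of nodes); the names are illustrative:

import numpy as np

def expected_solubility_ratio(cus_sim, cut_sim, eps=1e-12):
    """Average CuS/CuT over realisations at each node (conditional expectation).
    cus_sim, cut_sim: arrays of shape (n_realisations, n_nodes)."""
    ratio = cus_sim / np.maximum(cut_sim, eps)   # guard against division by zero CuT
    return ratio.mean(axis=0)                    # one expected ratio per node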
The cosimulation was done over the whole oxide zone, bounded at the top by the rock–gravel
contact and at the bottom by the zone of limited sulphide presence.
The scatter plots of total versus soluble copper show a good correlation, both for the raw
data and for the normal score transforms.


Figure 4 Scatter plot of CuT and CuS on composites 1.5 m with conditional
expectation curve (CuS/CuT) on raw data and normal score transformation.

The variograms of the normal score transforms of CuT and CuS are shown in Figure
➎. There is no evidence of anisotropy, owing to the low density of information; to
avoid imposing an artificial preferential trend, omnidirectional variograms of CuT and CuS
were calculated.
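The normal score transformation used before the Gaussian cosimulation maps the sample values to standard normal quantiles through their empirical ranks. A minimal sketch of that pre-processing step (ties and despiking are handled crudely here; this is not the project's exact implementation):

import numpy as np
from scipy.stats import norm

def normal_score_transform(values):
    """Rank-based normal score transform of a 1-D array of grades."""
    values = np.asarray(values, dtype=float)
    n = values.size
    ranks = np.argsort(np.argsort(values))   # 0 .. n-1
    return norm.ppf((ranks + 0.5) / n)       # standard normal scores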

Figure 5 Bivariate experimental and modelled variograms of the Gaussian transforms of CuS and CuT.

Resources
Both estimations give very similar resources. Some observations can be made:

• The probabilistic model presents more copper metal at a cutoff grade of 0.35% CuT
than the deterministic model; this is explained by a mixture of units in the latter.

• The indicator cokriging model presents more tonnage at cutoffs lower than 0.25% CuT
than the traditional model. This difference can be explained by the fact that the
coestimation of the units was done using only drill hole samples without a boundary limit,
i.e. the indicator cokriging was extrapolated across the domain. Artificial control points
could be added in the waste zone at the outer part of the deposit to avoid this
extrapolation of low grades. The relative copper metal of the two models is shown in Figure ➏.

Figure 6 Relative copper metal vs. total copper cutoff for both models.

Comparison of the solubility ratio calculated from cosimulations


The results of the cosimulation of CuT and CuS reflect the global trends of the spatial
distribution of the solubility ratio (zones of high and low values). The cosimulation was
done over the oxide domains without accounting for the geological units (Figure ➐).
The following aspects can be stated:

• The cosimulated solubility map shows a better match with the units of the probabilistic
model. The green oxides unit of the probabilistic model presents a similar shape to the
high solubility zone, whereas the black oxides unit is associated with medium solubility
ratios around the green oxides.

• For the deterministic model, the green oxides are present over the whole domain with
a wide range of solubility ratios, mixing high and low values.

• The solubility approach exhibits a north–south trend caused by a structural leach unit that
is present in the deterministic model; the probabilistic model does not reflect that unit.


• The conditional cosimulation does not directly consider the order relation (CuS ≤ CuT)
between the cosimulated variables. For this reason it is possible to find cosimulated grades
where the soluble copper is greater than the total copper, a situation without physical
sense. This mainly happens in the outer part of the deposit, associated with low grade
zones. A conditional cosimulation approach that considers constraints between the
variables should be studied and developed.

• The high solubility zone could be used as a guide for the 2-D manual interpretation
of the green oxides unit, reducing the time and complexity of the 2-D interpretation
and consequently of the 3-D construction.

• The solubility ratio could also be obtained from independent cosimulations of the CuT
and CuS grades within each geological unit, and then used to compare and review the
consistency of the definition of each unit.

Figure 7 Level 2125: (a) deterministic model units, (b) probabilistic model units and (c) solubility ratio map from cosimulation of CuT and CuS with isocurve.

conclusions
The solubility ratio calculated using cosimulation of CuT and CuS provides a global
picture of the trends of the high and low solubility zones and generally shows a better
match to the probabilistic model. To improve these results, the solubility ratio could be
calculated for each geological unit using the data of each domain. The results of this
work indicate that the deterministic and probabilistic models provide very similar copper
metal resources from a global point of view.
The probabilistic models are especially adequate when the geology of the deposit
exhibits a simple geometry and when the decisions about the project are global rather
than highly detailed. The generation of probabilistic models is faster than the
traditional way, with a consequent cost reduction, which allows decisions to be made in
the early stages of the project. Also, the construction process is fully reproducible, auditable
and easily modified or updated in the light of new data or a new geological concept.
In addition, the probabilistic models can be used as a guide for the geologist when
performing the geological interpretation, in this case study of the mineral domains.
There are several methods available to build a probabilistic model of geological
domains, such as indicator kriging, indicator cokriging or truncated Gaussian kriging [4].
The key aspect is choosing the most suitable method depending on the geological setting
of the deposit.
However, the probabilistic models are not a panacea; purely data-driven numerical
modelling could provide an incomplete or unrealistic picture of the deposit, because there
is a lot of geological knowledge beyond the sample data that should be considered and
incorporated into the probabilistic approach. The incorporation of that knowledge could
be achieved by adding control points and by sampling the available geological
interpretation [5].

acknowledgements
The authors thank Enrique Chacon P., Roberto Fréraut C. and Ricardo Boric P. of codelco
norte for their help, support and development of this project and for authorising and
publishing this work. We also thank Serge Séguret and Alejandro Cáceres for their
comments and recommendations.

nomenclature
CuT = total copper grade
CuS = soluble copper grade
OXV = green oxides
OXN = black oxides
LIX = leached
MIX = mixed
ICK = indicator cokriging

references
Rivoirard, J. (1994) Introduction to Disjunctive Kriging and Nonlinear Geostatistics. Oxford University
Press, Oxford. [1]

Chilès, J. P. & Delfiner, P. (1999) Geostatistics: Modeling Spatial Uncertainty, Wiley, New York,
pp. 465–476. [2]

Carrasco, P., Chilès, J. P. & Séguret, S. (2008) Additivity, Metallurgical Recovery, and Grade. In Ortiz,
J.M., Emery, X., eds., geostats 2008, VIII International Geostatistics Congress, Gecamin Ltda,
Santiago, Chile, pp. 465–476. [3]

Cáceres, A., Emery, X. & Riquelme, R. (2010) Truncated Gaussian Kriging as an Alternative to Indicator
Kriging. In minin 2010, IV International Conference on Mining Innovation, Santiago, Chile. [4]

Cáceres, A. & Emery, X. (2010) Conditional Cosimulation of Copper Grades and Lithology at Río Blanco –
Los Bronces Copper Deposit. In minin 2010, IV International Conference on Mining Innovation,
Santiago, Chile. [5]

Image Segmentation for
Mineral Identification in
an Oxide Copper Deposit

abstract
Álvaro egaña Mine plans and the performance of metallurgical processes
Julián ortiz define the success of a mining endeavour. These plans are based
Universidad de Chile on a block model that characterises the spatial distribution of
relevant variables. Geometallurgical variables are increasingly
being considered in these models due to their importance in the
definition of the processes and destination of each block in the
mine plans. The correct characterisation of the mineralogical
proportions of ore and gangue is extremely important for this
purpose. This is currently done manually, during the logging of
drill cores. This procedure is slow, does not provide satisfactory
precision for its use in quantitative analysis and is generally
biased towards ore mineral species, rather than gangue, the
latter being more relevant for mineral processing and metallurgy.
Nonetheless, this procedure allows for acquiring knowledge useful
for the genetic interpretation of the deposit and the definition of
the spatial distribution of geological units.
We propose an automated approach for determining the
mineralogical proportions in drill cores, from high resolution
digital images, within the visible spectrum. The method consists
in extracting the colour characteristic of each pixel of the image
and segmenting by the determination of thresholds in the colour
histogram. This is done by using a non-supervised statistical
procedure. This methodology can be easily extended to incorporate
other features such as edges and textures. The results are applied
to images of drill cores from a Chilean oxide copper deposit to
identify the mineral species relevant for the metallurgical
processes. We show results and their statistical validations,
which demonstrate the value of this information, ease the logging
and increase the amount of information available as input for a
predictive geometallurgical model.
This methodology is seen as the base of a larger, semi-supervised
mineralogical proportions logging system that can help the
geologist understand the deposit and improve its interpretation,
as it allows for generating abundant information from the logging
procedure and reducing the time to gather that information.

introduction
Mining an ore deposit requires knowing the distribution of the resources, having
a strategy for their extraction in time through a mine plan and understanding the
metallurgical performance that these reserves will have through the processes considered
for the recovery of the elements of interest. For all these steps, a correct characterisation
of the resources and reserves is required. In practice, the characterisation is based on
the logging of drill holes and the integration of several sources of geological, structural
and geophysical information.
Drill holes are an important source of information to understand the geology and
the distribution of grades in a mineral deposit. Diamond drilling provides very rich
information because the recovered core can be logged for the geological characterisation
of rock types, mineralisation, alteration and lithologies. Furthermore, the core is split
and part of it goes to destructive tests through sample preparation for chemical analysis
to determine the grade of the elements of interest and impurities.
Logging of cores becomes of paramount importance [1, 2] , since this is the process
where the geology team learns about the deposition conditions, types of geological events,
and chronological sequence and superposition of mineralisation and alteration events.
This requires a careful inspection of the cores.
Current practice considers a qualitative assessment of mineral proportions within
the core and definition of alteration type based on the appreciation of the sample
mineralogy. These assessments are often backed up by more expensive analyses, including
spectrophotometry, X-ray diffraction analysis and scanning electron microscopy. Several
systems of quantitative mineralogical analysis exist that provide a detailed description of
the minerals, grain sizes and relationships for liberation analysis purposes, among others
[3] . These attributes are extremely relevant for the geometallurgical characterisation
of the different geological units. Additionally, it is common practice to photograph the
samples to keep a record, which can be beneficial for geotechnical assessments. However,
this image is not systematically used as an information source.
Overall, the process is lengthy, very demanding of geological expertise and only partial
information is finally translated to the numerical model.
We focus on a procedure to improve the speed of geological logging and enrich the
information that is finally transferred to the data base and to the numerical model that
characterises the deposit, hence improving its quality and forecasting capacity in terms
of the plant performance.

Quantitative mineralogy
The correct characterisation of the mineral species existing in an ore has been
highlighted as a very relevant aspect to improve the knowledge and understanding of a
mineral deposit. Understanding the mineralogical changes in the ore, alterations and
lithologies can be a significant help for defining exploration targets and identifying
the alteration halo in which a sample is taken, in order to improve the genetic model of
deposition of ore in a mineral occurrence [4] .
Traditional techniques often rely on manual counting of particles and estimations
of their volume through microscopic analysis of thin or polished sections, generating
problems related to their statistical representativeness in the case of highly variable
mineralisations with coarse particles, and with the subjective nature of the manual
work by a mineralogist.

Several approaches are available for the quantitative mineralogical determination


with application to many fields: determination of gold mineralogy [3] , investigation of
contaminated sites [5] , characterisation of rocks for the metals and energy industries
[6–8] , geometallurgy, environmental and biological applications [9] , flotation
performance forecasting and optimisation [10] , comminution modelling [11] .
The analytical response of the sample must be compared to a reference response for
known mineral species. Each technique works well under certain conditions; however,
these analyses must be supported by other complementary techniques to validate their
results [3].
In all the cases, the use of automatic systems allows for:

• Increasing the available information extracted from the samples.


• Reducing the problem of statistical representativeness, as many analyses can be done
at a low cost.

• Reducing the subjectivity related to a manual analysis, where fatigue, experience of


the mineralogist and the focus of the analysis can bias the results.

Currently, the most popular techniques for quantitative mineralogy are based on
scanning electron microscopy. These techniques are generally costly and cannot process
a very large amount of samples to provide a reliable statistical characterisation of the
ore. Notwithstanding this limitation, the precision and quality of the information that
can be retrieved from these analyses remains unquestioned and they should be used as
calibration of faster and less costly methods. In the next section we discuss the use of
digital photographs for the determination of mineralogy.

Image analysis
Automated mineralogical species characterisation starting from drill core digital images
is, from a computer science point of view, a feature recognition and classification problem.
The latter are complementary functions that lie at the high end in the field of image
analysis. In our case, classification deals with the procedure of establishing criteria to
distinguish different populations of mineralogical species represented by some kind of
well defined features. Recognition is the process which uses classification tools to find
a particular mineralogical species within an image.
In image analysis and computer vision, image segmentation is one of the most
important tasks because it is usually the starting point of more complex techniques
such as feature recognition and classification [12] . The goal of image segmentation
algorithms is to partition the image into a number of disjoint classes with similar nearly-
uniform properties. The output should always be a set of visually distinguishable regions
within the image. For a drill core digital image, those regions are the superset which
contains the different candidates to be classified and recognised as mineralogical species.
A feature is any property that can be extracted from an image. Texture and colour are two
such image properties, probably the most natural ones, and they have received significant
attention from the research community. Most of the initial research has examined these two
properties as separate entities, because considering them as a single descriptor has been
more difficult than initially anticipated [13], but recent attempts have addressed the
problem of joining colour and texture into a single feature with rather good results [14].
Since our research is at an early stage, we choose colour as the starting feature and plan
to also include texture in the near future.


Colour has received significant attention from the research community motivated by
the ever-increasing advances in imaging and hardware processing techniques and the
proliferation of digital colour cameras. Colour itself has been used in the development of
algorithms for applications including feature recognition, skin detection, image indexing
and retrieval, and product classification.
Colour segmentation algorithms can be divided into three main classes:

• Pixel-based techniques are built on the assumption that colour is a property locally
constant in the image that can be examined using statistical methods such as
histogram thresholding or clustering.

• Area-based techniques divide the image in an arbitrary initial number of regions, often
randomly generated, to start a merging process based on uniformity criteria until a
stability condition is reached.

• Physics-based techniques work on the assumption that the (commonly irregular)
illumination conditions, the shapes of the scene objects and the reflection characteristics
are all known properties; in this way, colour is treated as the result of the highlights and
shadows generated by those factors.

We focus on a combination of pixel-based and area-based algorithms, because using only
the former tends to produce over-segmentation at a fine level. In our case this is seen as
an advantage, owing to the extremely irregular shape of mineralogical species, but it is not
enough to produce suitable results. Thus, a pixel-based histogram thresholding algorithm
is used to produce an initial rough segmentation, which feeds an area-based algorithm
that produces a final segmentation in which the coherence of adjacent colours is
preserved. The inclusion of physics-based algorithms was deferred until we move forward,
should better control over the image capture process actually be needed.

methodology
The approach for data gathering from drill holes can be described as follows. Firstly, a
calibration and training process is required which will feed the expert system. Then, this
expert system will be supervised by the logging geologist, but will speed up the process
of data acquisition.
The training process considers the following steps:

• Capture of digital photograph of the drill hole core

• Automated image analysis to determine a number of distinct categories

• Supervised calibration of the number of categories


• Labeling of the categories, associating them to mineralogies

• Analysis of characteristic vector of each category based on colour and texture


distributions

• Training of expert system.


This work is required for the first few samples. Once the system has enough information,
the process becomes semi-supervised:

• Capture of digital photograph of the drill hole core

• Automated image analysis to identify known mineralogies and define unclassified categories

• Check identified mineralogies, update if necessary, and label new categories as new
mineralogies or assign to existing categories

• Update characteristic vector of each category

• Update the expert system.


The process of automated image analysis to define the categories that will then be
labelled by the logging geologist is shown in Figure  ➊.

Figure 1 Image analysis procedures.

It should be noted that the identification of mineral species is a difficult problem; in
most techniques a matching procedure is used, based on a database of characteristics
for each species, which has to be tailored to the specific site under investigation [10].

Example application and description


To illustrate the image analysis previously presented, an example application is
shown. Digital photographs from an oxide copper deposit are collected for a pilot test
of the methodology. The samples are logged to describe the abundance of the following
minerals: iron oxides intensity, copper oxides intensity, chrysocolla, atacamite, copper
pitch, copper wad, clays, magnetite, anhydrite, gypsum, carbonates.
The original picture (Figure ➋a) is first cleaned of spurious pixels by applying a
Gaussian filter. The filter parameters are set up to preserve colour contact zones while
smoothing the rest of the image as much as possible. A colour quantiser is then applied
to avoid filling up the image statistics with redundant colour information [15], and the
image is converted from the rgb space (red, green, blue) to hsv (hue, saturation,
value). Only the hue channel is used as the colour source. The colour histogram is
constructed and a first segmentation is done by considering each individual mode in
the curve (Figure ➋b) [16]. This process evidently over-segments the values, generating
too many classes (Figure ➋c). A statistical procedure to identify meaningful modes
reduces the number of classes (Figures ➋d and ➋e). Further refinement can be
obtained (Figure ➋f).
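A minimal sketch of that pre-processing chain, written with OpenCV and NumPy (the library choice, file name, kernel size and number of quantisation levels are illustrative assumptions, not the authors' implementation):

import cv2
import numpy as np

img = cv2.imread("core_photo.jpg")                   # BGR drill core photograph (hypothetical file)
smoothed = cv2.GaussianBlur(img, (5, 5), 0)          # remove spurious pixels, keep colour contacts

levels = 32                                          # coarse colour quantisation
step = 256 // levels
quantised = (smoothed // step) * step

hsv = cv2.cvtColor(quantised.astype(np.uint8), cv2.COLOR_BGR2HSV)
hue = hsv[:, :, 0]                                   # only the hue channel is used

# Hue histogram on which the mode-based segmentation operates (OpenCV hue range: 0-179)
hist = cv2.calcHist([hue], [0], None, [180], [0, 180]).ravel()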


Figure 2 (a) Original image; (b) segmentation based on the identification of modes in the colour histogram;
(c) first segmentation; (d) segmentation based on the identification of meaningful modes in the histogram;
(e) improved segmentation; and (f) refined segmentation based on calibration defined by the user.

The procedure to find meaningful modes in a histogram can be summarised as follows:

• First, a definition of a meaningful interval is needed [17]. We say that an interval [a, b]
is statistically meaningful if there is a value c inside it such that the interval [a, c]
is statistically increasing and the interval [c, b] is statistically decreasing.

• There are many ways to define what statistically increasing (decreasing) means. In our
case, we say that the increasing (decreasing) property is preserved if the interval
contains no statistically meaningful valleys or peaks. We check this by comparing
the histogram interval against the same interval in a known increasing (decreasing)
density function.

• Once we know how to find meaningful intervals, we can say that a meaningful
mode is the statistically unique mode of a meaningful interval.

• With the previous definition, a process can be outlined to find all the meaningful
modes in a histogram:

––Observe that the interval between two local minima is a meaningful interval
according to our definition. Having this in mind, we define the set of intervals
between all local minima as the finest histogram partition which is statistically
acceptable.
––The finest histogram partition is then refined by looking at all the adjacent intervals
that can be merged preserving the meaningful interval property. The process stops
when there are no more intervals to merge.

Figure ➋b shows the finest partition and the result of the merging process.
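A minimal sketch of that partition-and-merge loop on a 1-D histogram. The statistical test against a known increasing (decreasing) density is replaced here by a crude unimodality check, so this is only an illustration of the control flow, not of the actual test in [17]:

import numpy as np

def finest_partition(hist):
    """Boundaries of the finest acceptable partition: all local minima of the histogram."""
    bounds = [0]
    for i in range(1, len(hist) - 1):
        if hist[i] <= hist[i - 1] and hist[i] <= hist[i + 1]:
            bounds.append(i)
    bounds.append(len(hist) - 1)
    return bounds

def is_meaningful(hist, a, b):
    """Placeholder test: the interval [a, b] increases up to its mode and decreases after."""
    seg = np.asarray(hist[a:b + 1], dtype=float)
    c = int(np.argmax(seg))
    return bool(np.all(np.diff(seg[:c + 1]) >= 0) and np.all(np.diff(seg[c:]) <= 0))

def meaningful_modes(hist):
    """Merge adjacent intervals of the finest partition while the union stays meaningful."""
    bounds = finest_partition(hist)
    merged = True
    while merged and len(bounds) > 2:
        merged = False
        for i in range(1, len(bounds) - 1):
            if is_meaningful(hist, bounds[i - 1], bounds[i + 1]):
                del bounds[i]          # drop the inner boundary, i.e. merge the two intervals
                merged = True
                break
    return list(zip(bounds[:-1], bounds[1:]))   # one (start, end) pair per meaningful mode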

conclusions
Mining endeavours require appropriate information, models and plans to perform
correctly. The characterisation of the ore sent to the different processes may help forecast
the performance and define the best process for each zone of the deposit. One of the most
critical aspects of the ore characterisation is understanding its mineralogy. Current
practice bases most of the decisions on a qualitative logging of the mineralogy. Cores
from diamond drill hole samples may be used to improve the knowledge of the deposit,
providing a wealth of information about mineralogies and other rock characteristics.
We presented an approach based on the analysis of digital images from drill hole cores,
which can discriminate between different mineralogical species in a semi-supervised
mode, constituting the basis of an expert system currently under development.
The segmentation is based on the colour channel of the image. Meaningful modes are
identified and statistically tested, and the image is pre and post processed to segment in
a number of classes that can be controlled by the user.
Current results are promising in terms of the discrimination capacity of the algorithms
developed; however, there are many steps to be completed. Incorporating textures is an
important step to improve the characterisation. The matching between the automatic
classification and the mineralogical species, done by the geologist, must feed the expert
system that has yet to be tested. Finally, a user-friendly interface is also required to
facilitate the work.

references
Stephenson, P. R. & Vann, J. (2001) Common Sense and Good Communication in Mineral Resource and Ore
Reserve Estimation, in Mineral Resource and Ore Reserve Estimation – The AusIMM Guide to Good
Practice (Ed: A C Edwards), pp. 13–20 (The Australasian Institute of Mining and Metallurgy:
Melbourne). [1]

Lewis, R. W. (2001) The Resource Database: Now and In the Future, in Mineral Resource and Ore Reserve
Estimation–The AusIMM Guide to Good Practice (Ed: A C Edwards), pp. 43–48 (The Australasian
Institute of Mining and Metallurgy: Melbourne). [2]


Goodall, W. R. & Scales, P. J. (2007) An overview of the advantages and disadvantages of the determination
of gold mineralogy by automated mineralogy. Minerals Engineering 20, pp. 507–517. [3]

Lyon, R. J. P. & Tuddenham, W. M. (1959) Quantitative mineralogy as a guide in exploration. Mining


Engineering, pp. 1233–1237. [4]

Hillier, S., Roe, M. J., Geelhoed, J. S., Fraser, A. R., Farmer, J. G. & Paterson, E. (2003) Role of quantitative
mineralogical analysis in the investigation of sites contaminated by chromite ore processing residue. The
Science of the Total Environment 308, pp. 195–210. [5]

Ward, C. R. & Taylor, J. C. (1996) Quantitative mineralogical analysis of coals from the Callide Basin,
Queensland, Australia using X-ray diffractometry and normative interpretation. International Journal
of Coal Geology 30, pp. 211–229. [6]

van Alphen, C. (2007) Automated mineralogical analysis of coal and ash products – Challenges and
requirements. Minerals Engineering 20, pp. 496–505. [7]

Hoal, K. O., Appleby, S. K., Stammer, J. G. & Palmer, C. (2009) SEM-based quantitative mineralogical
analysis of peridotite, kimberlite, and concentrate. Lithos 112S, pp. 41–46. [8]

Hoal, K. O., Stammer, J. G., Appleby, S. K., Botha, J., Ross, J. K. & Botha, P. W. (2009) Research in
quantitative mineralogy: Examples from diverse applications. Minerals Engineering 22, pp. 402–408. [9]

Lotter, N. O., Kowal, D. L., Tuzun, M. A., Whittaker, P. J. & Kormos, L. (2003) Sampling and flotation
testing of Sudbury Basin drill core for process mineralogy modelling. Minerals Engineering 16,
pp. 857–864. [10]

Powell, M. S. & Morrison, R. D. (2007) The future of comminution modelling. International Journal of
Mineral Processing 84, pp. 228–239. [11]

Russ, J. C. (2007) The Image Processing Handbook, Fifth Edition, CRC Press, Boca Raton, Florida,
p. 817. [12]

Mirmehdi, M., Xie, X. & Suri, J. (2008) Handbook of Texture Analysis, Imperial College Press, London,
p. 413. [13]

Ilea, D. E. & Whelan, P. F. (2006) Color image segmentation using a self-initializing EM algorithm, in
Proceedings of the Sixth iasted International Conference Visualisation, Imaging and Image
Processing, Palma de Mallorca, Spain, pp. 417–424. [14]

Dekker, A. H. (1994) Kohonen Neural Networks for Optimal Colour Quantization, in Network:
Computation in Neural Systems, Vol. 5, 1994, pp. 351–367, Institute of Physics Publishing. [15]

Delon, J., Desolneux, A., Lisani, J-L. & Petro, A-B. (2005) Color Image Segmentation Using Acceptable
Histogram Segmentation, Lecture Notes in Computer Science (J.S. Marques et al., Eds.), Vol. 3523,
pp. 239–246. [16]

Delon, J., Desolneux, A., Lisani, J-L. & Petro, A-B. (2007) A nonparametric approach for histogram
segmentation, ieee Transactions on Image Processing, Vol. 16, No. 1, pp. 253–261. [17]
Multiple – Point Conditional
Unilateral Simulation for
Categorical Variables

abstract
Álvaro parra Simulation of categorical variables has several applications in
Julián ortiz mining problems. Simulation algorithms are a useful tool to
Universidad de Chile study the variability of categorical variables such as rock types and
presence or absence of a particular element. Classical approaches
are based on the truncation of a Gaussian random field, where
a continuous variable is simulated and truncated to generate a
categorical variable with the required spatial statistics. These
methods cannot reproduce some geological characteristics since
Gaussian simulations only reproduce relations involving two
points at a time through the variogram and cannot provide control
to simulate curvilinear structures and complex hierarchical
relationships between the categories. Multi-point simulation
algorithms overcome these issues by taking relations directly from
a training image in which the geological characteristics in study
are represented.
We propose a fast algorithm based on the theory of Markov
Random Fields and computer graphics techniques. The algorithm
starts with a grid informed only with conditioning data. The
uninformed nodes are visited following a unilateral path and
using two kinds of neighbourhood data: causal and non-causal
data, defined by a constant set of conditioning data (previously
simulated) in the former case, and by the conditioning data in
the latter case. Using the causal data, a probability distribution
function is inferred from the training image for the node to be
assigned. This distribution is changed in order to honour the
conditioning data represented by the no-causal information.
Finally, a category is simulated from that probability distribution
function by Monte Carlo simulation.
We show the performance for large template sizes in terms of
computing time and statistical performance for some geological
settings honouring the conditional data. Binary and multi-
category cases are illustrated and parallelisation approaches are
discussed that could improve the performance of the method.

introduction
Geological models are usually built based on a manual interpretation of sections and
the definition of solids representing the volumes of the geological and estimation units.
This approach does not provide a measure of the possible implicit error. Geostatistical
simulation methods provide several tools for constructing models reproducing the spatial
correlation of the units (their extent) and the relationships (contacts) among them,
honouring the conditioning data.
Conventional simulation methods based on indicators and truncation of Gaussian
fields summarise the transitions between categories with two-point statistics, not
accounting for complex relationships of multiple points simultaneously. Furthermore,
these techniques require computing many parameters that make them cumbersome for
practitioners. Object-based methods are more flexible, but conditioning may be a problem.
Multiple point simulation is still a developing research area, but very promising tools
are being proposed [1] . The most significant contributions include the Single Normal
Equation Simulation (snesim) approach [2, 3] and some of its variants that consider
direct simulation of patterns [4–6] . Other approaches use neural networks [7, 8] , indirect
integration of multiple point statistics within conventional simulation [9–11] or through
multivariate data integration [12] , and simulating using a Gibbs Sampler approach [13] .
A paradigm that falls outside this framework was proposed by Daly, who showed a
particular case of both a Markov random field model and a sequential simulation and
proposed the use of a so-called unilateral path [14, 15]. This approach is not new in
other engineering applications. In computer graphics, texture synthesis deals with a
similar multiple-point statistics problem [16] . One of the main differences between
the texture synthesis problem and the geostatistical problem is conditioning. We show
progress on an approach to simulate accounting for pattern statistics inspired by the
texture synthesis solution.

proposed approach
Consider the problem of simulating a number of categories in a domain. Let re be the
empty realisation containing the nodes to be simulated. The first step is to assign the
known hard data to re. Then, the approach consists of simulating each unassigned node
in a unilateral path order, taking information retrieved using a causal and a non-causal
template centred at the node to simulate ( Figure  ➊).
Figure 1 Illustration of the implementation of the causal and non-causal region for conditioning during simulation [17]: (a) template; (b) conditioning data; (c) simulation; (d) realisation.

Unilateral path
Each node re(u) of the realisation re is assigned in the unilateral path order, defined as a
vector of positions u_1, ..., u_n where the positions are ordered considering first the coordinate
X, then the coordinate Y and finally the coordinate Z. Figure ➋ shows an example of the
unilateral path order.

Figure 2 Unilateral path. Left: a 3-D grid example; right: the unilateral path order in which the grid is visited.
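A minimal sketch of such an ordering for a regular grid; the exact nesting of the loops (which coordinate varies fastest) is a convention choice, taken here with X as the primary ordering key as described above:

def unilateral_path(nx, ny, nz):
    """Visit order u_1, ..., u_n for an nx x ny x nz grid, ordered by X, then Y, then Z."""
    return [(x, y, z) for x in range(nx) for y in range(ny) for z in range(nz)]

# Example: the first visited nodes of a 3 x 2 x 2 grid
# unilateral_path(3, 2, 2)[:4] -> [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]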

Causal and non-causal template


A causal template is a template that ensures that data-events obtained in a unilateral
path order contain only pre-assigned node values. A pattern obtained using a causal
template is named a causal pattern.
Let T be a template defined by a box of dimension c_x × c_y × c_z; its central position is
defined by Equation (1). We define the causal template T_n of dimension n (with
n = c_x × c_y × c_z) as the subset of the template nodes that precede the central node in
the unilateral path order. The non-causal template is defined as the complement of the
causal template within the box, excluding the central node. The shapes of the causal and
non-causal templates are shown in Figure ➌.

Figure 3 Non-causal pattern (light contours) representing the region where hard data are searched for conditioning. The causal pattern is depicted with black contours and the arrow shows the direction the unilateral path moves.


Pattern acquisition
In order to accelerate the algorithm, a pattern data base is needed, storing the frequencies
of the categories of the central node for all the patterns found in the training image with
a given causal template. The data base allows retrieving the frequencies of the patterns
most similar to the causal data-event, such that the conditioning hard data found in the
non-causal pattern are honoured. With the retrieved frequencies, a distribution function
can be built and a category can be simulated by Monte Carlo at the visited node.
The main data structure used to store the central node frequencies in our implementation
is a linked list; a sketch of an equivalent structure is given after the list below. Each node
of this linked list contains:

• A causal pattern that identifies the node

• The frequencies of all the possible values for the non-conditioned case

• A map of conditioning non-causal patterns and the frequencies of all the possible values
for the conditioned case. All of these frequencies are conditioned to the causal pattern.
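A minimal sketch of that structure, using a dictionary keyed by the causal pattern instead of a literal linked list (the interface and names are illustrative, not the authors' implementation):

from collections import defaultdict

class PatternDatabase:
    """Frequencies of the central-node category, unconditional and conditioned to
    non-causal patterns, for every causal pattern found in the training image."""

    def __init__(self, n_categories):
        self.n_categories = n_categories
        self.entries = {}   # causal pattern (tuple) -> {"freq": [...], "cond": {...}}

    def add(self, causal, non_causal, central_value):
        """Register one training-image occurrence of (causal, non_causal, central value)."""
        node = self.entries.setdefault(causal, {
            "freq": [0] * self.n_categories,
            "cond": defaultdict(lambda: [0] * self.n_categories),
        })
        node["freq"][central_value] += 1              # non-conditioned frequencies
        node["cond"][non_causal][central_value] += 1  # frequencies given the non-causal pattern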

Edge effect
There are several problems related to the design of the data base. Firstly, for a given
template used to scan the training image, there are positions near the border of the domain
where patterns cannot be retrieved for the given causal template. One solution is to add
a buffer zone of random noise, but this produces a poor reproduction of the spatial
structure near the edge of the domain. It can be improved by adding a larger buffer
zone containing empty nodes to be assigned; this improves the realisations, but more
nodes need to be simulated and an extra parameter is required. The approach followed
in our proposal consists of storing all the data bases needed to account for the
configurations arising at the edges. At the beginning of the simulation, none of the
nodes within the area defined by the template contain information (if there are no hard
data), hence the only useful information is the prior probability of each category; this
case corresponds to the empty template. At the second node, only the one node to the left
is considered, so a data base related to that template is used. The process constructs data
bases as required in that fashion.
The total number of data bases needed can be determined for a training image of dimension
h_x × h_y × h_z. Figure ➍ shows the 13 templates needed for the causal template of
Figure ➌. We use this approach to consider all the available information; however, to
reduce memory use and scanning time, it is better to consider an approximate solution
that uses sub-templates to retrieve information from a single data base.

Figure 4 Templates to consider for the causal template given in Figure 3.

Pattern distance
We use a weighted distance where if two patterns differ in a certain node the
corresponding weight is added to the total distance. The weight depends on the location
of the node in the pattern with respect to the position of the node to be simulated. We
define the weights by setting the values according to the Euclidean distance where the
central node is the farthest. Then the matrix of weights in the pattern is normalised to
one and if there are some conditioning data involved, these nodes weights are set to one
to ensure that they are never discarded.
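A minimal sketch of that distance (the pattern arrays, weight matrix and hard-data mask are illustrative):

import numpy as np

def pattern_distance(p1, p2, weights, hard_mask=None):
    """Weighted mismatch between two patterns: each differing node adds its weight.
    Weights are normalised to sum to one; nodes carrying conditioning data are
    forced to weight one so they are never discarded."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    if hard_mask is not None:
        w = np.where(hard_mask, 1.0, w)
    return float(np.sum(w * (np.asarray(p1) != np.asarray(p2))))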

Conditioning
The conditioning is achieved using the non–causal template. Using this template,
frequencies are retrieved from patterns that respect the conditioning data. From the
data base corresponding to the template, a set of the most similar patterns is retrieved,
ensuring that this set is never empty. The next step is to select only the patterns and their
corresponding conditional frequencies from those that honour the conditioning data. If
none is found, the farthest conditioning node is dropped and the search is repeated until
the set of patterns is not empty.


Algorithm
The algorithm is summarised by the following pseudo code:
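Rendered as a minimal Python sketch, consistent with the steps described above and reusing the unilateral_path function sketched earlier (the helper functions and the pattern data base interface are illustrative assumptions, not the authors' exact listing):

import numpy as np

def simulate(grid_shape, hard_data, pattern_db, rng=None):
    """Unilateral-path categorical simulation (sketch).
    hard_data  : dict {(x, y, z): category}
    pattern_db : data base built from the training image (see Pattern acquisition).
    extract_causal_pattern, extract_non_causal_pattern and pattern_db.lookup are
    placeholders for the steps described in the text."""
    rng = rng or np.random.default_rng()
    real = -np.ones(grid_shape, dtype=int)               # -1 marks unassigned nodes
    for u, cat in hard_data.items():
        real[u] = cat                                     # assign conditioning data first
    for u in unilateral_path(*grid_shape):
        if real[u] >= 0:
            continue                                      # hard data are kept unchanged
        causal = extract_causal_pattern(real, u)          # previously simulated values
        non_causal = extract_non_causal_pattern(real, u)  # hard data ahead of the path
        freqs = np.asarray(pattern_db.lookup(causal, non_causal), dtype=float)
        real[u] = rng.choice(len(freqs), p=freqs / freqs.sum())   # Monte Carlo draw
    return real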

examples
Example 1: Unconditional simulation of channels over
background setting
A typical training image of sinuous channels over a background, which can represent high-
permeability sands over low-permeability shales, is used to generate a large unconditional
realisation. The results are visually extremely similar to the training image (Figure ➎).

Figure 5 Unconditional simulation of channels.

Example 2: Conditional simulation of channels over background setting
Using the same training image as in the previous example, and conditioning to two wells,
several realisations are generated and the expected channel facies is computed. Again,
the results are satisfactory both visually and statistically (Figure ➏). It should be pointed
out that variograms are not directly imposed in the simulation; nevertheless, their
reproduction is excellent.

Figure 6 Conditional simulation of channels: (a) training image; (b) conditioning data; (c) and (d) two realisations; (e) frequency of finding channel; (f) variogram reproduction.

conclusions
Geological heterogeneity characterisation is a relevant aspect of numerical modelling
for mining applications. The extent and continuity of geological domains may define the
success or failure of a mining endeavour. Conventional geological modelling does not take


advantage of quantitative measures of continuity. Geostatistical simulation provides tools


for generating possible configurations of the geological units, but remains complex for
practitioners, since many parameters are required to control the result. Multiple-point
geostatistical simulation has proven valuable and is still being developed.
We show a particular implementation of a simulation algorithm for categorical
variables. It is based on a unilateral path, with a fixed search domain. We adapted this
algorithm, which is well known in texture synthesis and computer graphics, to honour
conditioning data. Results demonstrate that without directly imposing lower order
statistics, such as the variogram, spatial continuity can be reproduced and models are
visually comparable to the training image that is used to define the geological setting.
Long range connectivity is preserved, without resorting to multiple grids, as is done
in the sequential approach using a random path. One critical aspect of implementing
this technique is selecting a pattern size large enough to impose the spatial structure
defined by the objects present in the geological setting. As usual, smaller patterns provide
worse reproduction of long range connectivity. Several issues have been solved for this
implementation, including a measure of similarity between patterns, to ensure that,
even when the exact pattern is not found, a simulated value can be generated that is
consistent with most of the available information. Also, the edge effect has been solved
by considering sub-patterns retrieved from the data base. Many other issues should be
investigated including non-stationarity, however, the framework presented in this paper
has proven valuable to continue research in this area.

acknowledgements
This research was funded by the National Fund for Science and Technology of Chile
(fondecyt) and is part of the project number 1090056. The authors would also like to
acknowledge the support of the Codelco Chair on Ore Reserve Estimation at the Mining
Engineering Department, Universidad de Chile.

references
Ortiz, J. M. (2008) An Overview of the Challenges of Multiple-Point Geostatistics, in Geostats 2008 –
Proceedings of the Eighth International Geostatistics Congress, J.M. Ortiz and X. Emery (eds.),
Gecamin Ltda., Santiago, Chile, Vol. 1, pp. 11–20. [1]

Strebelle, S. & Journel, A. G. (2000) Sequential simulation drawing structures from training images, In
6th International Geostatistics Congress, Cape Town, South Africa. Geostatistical Association
of Southern Africa. [2]

Strebelle, S. (2002) Conditional simulation of complex geological structures using multiple-point statistics,
Mathematical Geology, Vol. 34, No. 1, pp. 1–21. [3]

Arpat, G. B. & Caers, J. (2004) Reservoir Characterization using Multiple-Scale Geological Patterns, ecmor
ix, 9th European Conference on the Mathematics of Oil Recovery – Cannes, France, 30 August – 2
September 2004, A026, p. 8. [4]

Arpat, G. B. & Caers, J. (2007) Conditional simulation with patterns, Mathematical Geology, Vol. 39,
No. 2, pp. 177–203. [5]

Zhang, T., Switzer, P. & Journel, A. (2006) Filter-Based Classification of Training Image Patterns for Spatial
Simulation, Mathematical Geology, Vol. 38, No. 1, pp. 63–80. [6]

Caers, J. & Journel, A. G. (1998) Stochastic reser voir simulation using neural networks trained on outcrop
data. In 1998 spe Annual Technical Conference and Exhibition, pp. 321–336, New Orleans, la ,
September 1998. Society of Petroleum Engineers. SPE paper # 49026. [7]

Caers, J. & Ma, X. (2002) Modeling conditional distributions of facies from seismic using neural nets,
Mathematical Geology, Vol. 34, No. 2, pp. 143–167. [8]

Ortiz, J. (2003) Characterization of high order correlation for enhanced indicator simulation, Unpublished
doctoral dissertation, University of Alberta, p. 246. [9]

Ortiz, J. M. & Deutsch, C. V. (2004) Indicator simulation accounting for multiple-point statistics,
Mathematical Geology, Vol. 36, No. 5, pp. 545–565. [10]

Ortiz, J. M. & Emery, X. (2005) Integrating multiple point statistics into sequential simulation algorithms.
In: Leuangthong, O., and Deutsch, C.V., eds., Geostatistics Banff 2004, Springer, pp. 969–978. [11]

Hong, S., Ortiz, J. M. & Deutsch, C.V. (2007) Integration of disparate Data with Logratios of Conditional
Probabilities, Petroleum Geostatistics Conference, Portugal. [12]

Boisvert, J. B., Lyster, S. & Deutsch, C. V. (2007) Constructing Training Images for Veins and using them
in Multiple-Point Geostatistical Simulation, in 33rd International Symposium on Application of
Computers and Operations Research in the Mineral Industry, apcom 2007, E. J. Magri (ed.),
pp 113–120. [13]

Daly, C. (2005) Higher order models using entropy, Markov random fields and sequential simulation. In:
Leuangthong, O., and Deutsch, C.V., eds., Geostatistics Banff 2004, Springer, pp. 215–224. [14]

Daly, C. & Knudby, C. (2007) Multipoint Statistics in Reservoir Modelling and in Computer Vision, Petroleum
Geostatistics 2007, A32. [15]

Wei, L. Y. (2001) Texture Synthesis by fixed neighborhood searching, Ph. D. thesis, Stanford University,
p. 132. [16]

Parra, A. & Ortiz, J. M. (2009) Conditional Multiple-Point Simulation with a Texture Synthesis Algorithm,
iamg 09 Conference, Stanford University. [17]

Quantifying Uncertainty in
Resources Tonnage Using Multiple
Point Geostatistical Simulation

abstract
Sebastián hurtado Simulation of categorical variables has several applications in
Julián ortiz the mining industry. It is essential to have a reliable geological
Universidad de Chile model because it influences the subsequent stages such as the
design of an open pit, the caving or stopes in underground mining
and the metallurgical processing. Until today this has been done
considering a deterministic geological model. This approach does
not allow for characterising the uncertainty and the inherent risk
associated with mining projects and can provide an optimistic
view of the economic performance of the project. It is very
important to quantify the uncertainty in tonnages and in the
contacts between geological units.
There are currently many methods to simulate categorical
variables; most of these use the covariance model as a measure
of geological continuity. Covariance models are rarely sufficient
to depict patterns of geological continuity consisting of strongly
connected, curvilinear geological objects such as channels or
fractures. Multiple-point statistics (mps) are used to infer spatial
patterns by using many spatial locations within a given geometric
template scanning a training image. These mps have been used for
simulating geological settings in the petroleum industry.
We propose applying the Single Normal Equation Simulation
(snesim) based on mps to quantify uncertainty in tonnage in an
area of the copper deposit of El Teniente, owned by Codelco Chile.
This is done by using drillholes as hard data and a deterministic
geological model as a training image. The geological model has
three different rock types and the contacts between them are very
clear. The resulting simulations are interpreted as an assessment
of the uncertainty in the deterministic geological model.
The snesim results are compared against Sequential Indicator
Simulation categorical realisations.
We show the performance of both methods for categorical
simulation. Results indicate that snesim has a better performance
when counting the mismatch with a validation data set over the
same domain. This can be explained by the use of more complex
statistics than indicator simulation, which suggests a high
potential for these techniques in the mining industry.

introduction
The extent and continuity of geological units define the volume of material of interest in
an ore deposit [1] . Conventional practice considers modelling these units by deterministic
methods, that is, by interpreting cross sections and plan views and building a 3-D solid
by wireframing or other method. Once defined in this fashion, each geological unit has
a fixed volume and tonnage. There is an obvious lack of information for decision making,
if only these values are available. The interpreted volumes are subject to errors and these
possible fluctuations should be considered when making decisions about the design or
plan for the extraction of the ore.
Geostatistical methods provide tools for quantifying these fluctuations. Traditional
methods for modelling categorical variables include indicator techniques and simulation
of Gaussian fields that are truncated based on some rules to reproduce the relationships
between geological bodies [1] . However, newer techniques based on multiple-point
statistics (mps) are available and have seen some applications in the oil and gas industries
[2, 3] . In this paper, we explore the applicability of these techniques to a mining case.
We assess the fluctuations in tonnage over a region of the El Teniente Mine, specifically
at the Esmeralda Sector, which is currently being operated by Codelco Chile. A classical
indicator technique and a more sophisticated multiple-point simulation algorithm are
tested. Some sensitivities related to the choice of the training image are also performed.
Results are compared by jack-knife to evaluate the forecasting capacity of each method.

methodology
The approach is implemented using the following steps:

• A volume to be studied is selected. Since this research aims at assessing the problems
in applying multiple-point geostatistical simulation to quantify tonnage uncertainty in
mining applications, a volume with a relatively simple distribution of units was selected.

• Drill hole data within the selected volume are composited to a constant length,
assigning the most frequent geological unit to the composite.

• The data base is divided into two groups. The first group is used as conditioning
information for simulation, while the second group is kept aside to perform validations
later on, in order to assess the precision in the uncertainty quantification.

• Statistical and geostatistical analysis is performed:


––The proportions of the different categories are calculated and declustered to obtain
representative statistics.
––The geological model in current use is selected as the training image for the
mps methods. Since a training image is required and it must correspond to an
interpretation of the geological setting, the deterministic interpreted model was
used. This may seem circular, however the goal is to build alternate models that
provide a quantification of the possible fluctuations over the “average” model, in
this case, the deterministic interpreted model.
––The indicator variogram of each category is calculated and modelled. These models
are used to perform indicator simulation.

• Simulation:
––Sequential indicator simulation is used to generate realisations of the distribution of
geological units. This constitutes a more standard procedure in the mining industry
and is considered here for comparison purposes.

––Single normal equation simulation is used to generate realisations of the distribution


of geological units. This method does not require variogram models, only the
training image.

The resulting models are validated by comparing the mismatch in the assigned categories
using the different methods.

case study
El Teniente Mine
El Teniente Mine is located 80 km south of Santiago, Chile, at 2500 metres above
sea level. It started operating in 1904. Extraction is done from several underground
operations. Production reaches 380,000 metric tonnes of copper per year and 4500 metric
tonnes of molybdenum as a by-product.
Geologically, the following units can be characterised:

• cmet, Complejo Máfico El Teniente (El Teniente Mafic Complex): this lithology, formed
mostly by andesites, hosts the majority of the copper and molybdenum mineralisation
at the Esmeralda mine. Towards the west, cmet is intersected by breccias from the
Braden Breccias Complex, while to the east it is intermingled with tonalites.

• B. Braden, Complejo de Brechas Braden (Braden Breccias Complex): this complex is


located in the central area of the deposit, shaped as an inverted cone with a diameter
of 1200 m near the surface and over 1800 m of observed vertical continuity.

• Hydrothermal Breccias:
––B. Turmalina (Tourmaline Breccias): cemented by tourmaline, this breccia is composed
mostly of andesitic fragments.
––B. de Clorita (Chlorite Breccias): these breccias are in contact with cmet and are
located in a strip that varies from 5 to 30 m in width.
––B. de Biotita (Biotite Breccias): located at the east side of the Esmeralda mine.
––B. de Anhidrita y Turmalina (Anhydrite and Tourmaline Breccias): located at the
contacts with cmet.

All lithologies have been affected to different degrees by hydrothermal alteration, which is
responsible for the concentration of the elements of economic interest.
The identification of lithological types is relevant not only for the metallurgical performance
of the ore, but also for its geotechnical behaviour and caving properties.

Sample information
The available information is depicted in Figure  ➊. The composite categories are colour-coded
according to the prevailing rock type. Samples are clustered around exploration
drifts and some areas are poorly represented. The geological understanding of the deposit
is used during the modelling procedure of the units. The actual volumes of each unit are
poorly represented by the proportions of the available composites. This is later taken into
account in the analysis by using declustering techniques.

Geological model
The geological model is an interpretation of the extent and location of the lithological
units. It is the result of a lengthy process of modelling over plan views and cross
sections, which are combined into solids by wireframing. As such, it is used as a
training image for the mps simulation runs. Figure  ➋ shows an isometric view of
the model.

Figure 1 Available data.


Figure 2 Geological model, used as the training image for MPS methods.

Table 1 shows the proportions of each category, from the data, after polygonal
declustering and from the geological model. It can be seen that declustering does a fair
job and allows representing each category with a proportion close to the one defined by
the geological interpretation, over the domain.

Table 1 Proportions from the composites, from the model and validation sets, after declustering and from the geological model

               Composites    Model (75%)    Validation (25%)    Declustered    Geological Model
               %             %              %                   %              %
CMET           66.24         66.22          65.91               28.69          27.17
B. Braden      25.21         25.24          25.00               59.34          69.23
B. Turmalina    8.55          8.54           8.52               11.97           3.6

Simulation
The snesim algorithm in S-Gems [4] is used to create 100 realisations of the distribution
of geological units. The input parameters required are:

• Conditioning data: 75% of the composites are used to build the model.

• Training image: in the first case, the interpreted geological model is used as training
image. A second case is done considering a smaller training image (a portion of the
full geological model in the domain, Figure  ➌). The proportions of that second image
are shown in Table 2 .

• Target proportions: the declustered proportions are used as target (Table 1).

• Servosystem factor: this parameter ensures that the resulting realisations match
the target proportions.

Table 2 Proportions from the smaller training image, obtained as a portion of the geological model

  %
CMET 37.36
B. Braden 53.64
B. Turmalina 9.00

Results are summarised in Table 3 , showing the statistics obtained for the proportions
from the realisations, considering the first and second training image. Figure  ➍ shows
two realisations of each case. It can be seen that the models built with a smaller training
image are noisier.


Figure 3 Training images for the first run of SNESIM (left: geological model)
and for the second run (right: a portion of the geological model).

Table 3 Summary statistics from 100 realisations using SNESIM with both training images

               SNESIM 1                              SNESIM 2
               Min      Max      St. Dev.   Avg.     Min      Max      St. Dev.   Avg.
CMET           23.97    33.30    1.88       26.84    24.70    29.33    1.88       27.09
B. Braden      56.37    70.10    3.25       66.21    59.37    68.43    3.25       64.05
B. Turmalina    4.40    11.77    1.63        6.95     5.80    11.73    1.63        8.86

Figure 4 Top: two realisations using SNESIM and the full geological model as training image;
Bottom: two realisations using SNESIM and a portion of the geological model as training image.

Sequential indicator simulation [5] is run as an alternative, since it is a more
conventional approach for simulating categorical variables. In this case, the use of
spatially biased data when computing experimental variograms results in variograms
with sills that are inconsistent with the declustered proportions; the sills are therefore
rescaled based on the proportions inferred from polygonal declustering. Once modelled,
these variograms are input to the simulation program. Because some areas are spatially
undersampled, they lack conditioning data during simulation, which generates categories
that should not appear there. Using a map of local proportions, computed with a search
radius of 200 m, as secondary information solves this problem. Results are presented in
Table 4 and Figure  ➎.

Table 4 Summary statistics from 100 realisations using BlockSIS

               BlockSIS
               Min      Max      St. Dev.   Avg.
CMET           27.90    48.93    3.29       34.92
B. Braden      40.13    61.40    4.01       53.59
B. Turmalina    4.53    26.17    3.75       11.49

Figure 5 Two realisations using BlockSIS.

Validation
Validation is done by comparing the simulated categories in blocks where a validation
composite was available, with the actual category of that composite. The validation set is
displayed in Figure   ➏. Table 5 summarises the number of coincidences and percentages
when the simulated category matches the category in the validation composite.
From these numbers, it is clear that simulation using SNESIM with the geological
model as the training image provides the best result. The category B. Turmalina
carries the most uncertainty and is the hardest to predict: only 43.27% coincidence is
achieved in the best case. This makes sense, since the location of the contact between
this unit and its neighbouring units is subject to error.
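As an illustration only (not the authors' code), the short Python sketch below shows how matching percentages of the kind reported in Table 5 can be tabulated, assuming hypothetical arrays sim (simulated categories at the validation blocks) and truth (validation composite categories).

```python
# Illustrative sketch: per-category match percentages between simulated
# categories and validation composites (array names are hypothetical).
import numpy as np

def match_table(sim, truth, categories):
    """sim: (n_blocks, n_realisations) simulated category codes at validation
    blocks; truth: (n_blocks,) validation composite codes."""
    out = {}
    for c in categories:
        mask = truth == c                      # validation blocks of category c
        n_pairs = mask.sum() * sim.shape[1]    # block-realisation pairs
        n_match = (sim[mask, :] == c).sum()    # simulated category also c
        out[c] = 100.0 * n_match / n_pairs if n_pairs else float("nan")
    return out

# Dummy example: 0 = CMET, 1 = B. Braden, 2 = B. Turmalina
rng = np.random.default_rng(0)
truth = rng.integers(0, 3, size=500)
sim = rng.integers(0, 3, size=(500, 100))
print(match_table(sim, truth, categories=[0, 1, 2]))
```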


Figure 6 Blocks used for validation, based
on the composites that were kept aside.

Quantification of uncertainty in tonnage


A useful result of this exercise is the assessment of the expected range of tonnage in each
category obtained with the different methods (Table 6). Fluctuations in tonnage can be larger
than 10% of the expected tonnage, which may have a very significant impact on the mine plans.
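A minimal sketch of this tonnage bookkeeping is given below; the block dimensions and rock density are assumed values for illustration and are not taken from the paper.

```python
# Illustrative sketch: tonnage per category for each realisation, then the
# min-max range across realisations (block size and density are assumptions).
import numpy as np

BLOCK_VOLUME_M3 = 20.0 * 20.0 * 20.0   # assumed block dimensions
DENSITY_T_PER_M3 = 2.7                 # assumed rock density

def tonnage_ranges(realisations, categories):
    """realisations: (n_realisations, n_blocks) category codes."""
    ranges = {}
    for c in categories:
        counts = (realisations == c).sum(axis=1)                 # blocks per realisation
        kt = counts * BLOCK_VOLUME_M3 * DENSITY_T_PER_M3 / 1e3   # kilotonnes
        ranges[c] = (kt.min(), kt.max())
    return ranges

rng = np.random.default_rng(1)
reals = rng.integers(0, 3, size=(100, 50_000))   # 100 realisations of 50,000 blocks
print(tonnage_ranges(reals, categories=[0, 1, 2]))
```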

Table 5 Matching statistics for all cases

Category in                SNESIM 1                  SNESIM 2                  BlockSIS
validation set             Number of cases    %      Number of cases    %      Number of cases    %
CMET
  CMET                     8420           72.59      8330           71.81      7949           68.53
  B. Braden                   1            0.02         0            0.00       800           18.18
  B. Turmalina              101            6.73       203           13.53       100            6.67
B. Braden
  CMET                        0            0.00         1            0.009      614            5.29
  B. Braden                2983           67.80      2977           67.66      1900           43.18
  B. Turmalina               50            3.33        87            5.80       400           26.67
B. Turmalina
  CMET                      380            3.28       369            3.18       237            2.04
  B. Braden                  16            0.36        23            0.52       300            0.07
  B. Turmalina              649           43.27       610           40.67       300           20.00

The numbers in bold are the percentages of matching between the simulated models and the validation set for a given category

Table 6 Tonnage range for each unit obtained with the different methods

               Geological        SNESIM 1 [kton]        SNESIM 2 [kton]        BlockSIS [kton]
               model [kton]      Min.       Max.        Min.       Max.        Min.       Max.
CMET           16,954            14,957     20,779      15,413     18,302      17,410     30,532
B. Braden      43,200            35,175     43,742      37,047     42,700      25,041     38,314
B. Turmalina    2,246             2,746      7,344       3,619      7,320       2,827     16,330

conclusions
Accounting for the variability in grade is well understood in the mining industry;
however, tonnages of the relevant geologic units are still determined by a deterministic
geological model. Since a single interpretation of the distribution of units is available,
the tonnages are locked and the variability due to the geological heterogeneity is not
incorporated in the modelling or the decision making process.
Geostatistics provides several tools to quantify and use that variability. Simulation of
categorical variables can be done with conventional methods based on indicators or on
truncation of Gaussian fields. However, these methods are cumbersome for practitioners.
Multiple point geostatistical simulation provides new tools that are easier to use and
that perform well.
We presented a case study in which we implemented the Single Normal Equation Simulation
algorithm considering two training images, both based on the geological interpretation
materialised in the geological model. These models were compared with a conventional
indicator simulation. The mismatch computed for each method indicates that narrower
units are more variable and harder to predict. Additionally, we have shown that the
fluctuations that can be expected in the tonnages of the geological units may exceed
10% of their expected tonnage, which may have a significant impact on the operation
and planning of the mine.
mps simulation methods provide powerful tools for mining applications and their
potential as easy-to-apply tools should be recognised.

acknowledgements
This research was funded by the National Fund for Science and Technology of Chile
(fondecyt) and is part of the project number 1090056. The authors would also like to
acknowledge the support by the Codelco Chair on Ore Reserve Estimation at the Mining
Engineering Department, Universidad de Chile.

references
Chilès, J. P. & Delfiner, P. (1999) Geostatistics: modeling spatial uncertainty. Wiley, New York. [1]

Strebelle, S. (2002) Conditional Simulation of Complex Geological Structures Using Multiple-Point Statistics,
Mathematical Geology, 34(1). pp. 1–21. [2]

Ortiz, J. M. (2003) Characterisation of high order correlation for enhanced indicator simulation. Ph.D. thesis,
University of Alberta. [3]

Remy, N. (2004) Geostatistical Earth Modeling Software (sgems), User's Manual. [4]

Deutsch, C. V. (2006) A sequential indicator simulation program for categorical variables with point and
block data: BlockSIS. Computers & Geosciences, 32. pp. 1669–1681. [5]

Distributed-Multiprocess
Implementation of Kriging for the
Estimation of Mineral Resources

abstract
Exequiel sepúlveda One of the most popular methods for the estimation of mineral
Julián ortiz resources is kriging. It allows for estimating a variable in a block
Universidad de Chile model from a set of conditioning data and makes use of the
spatial continuity through the variogram. In many cases, these
models consider tens of millions of blocks and are conditioned
to hundreds of thousands of samples. Therefore, any estimation
software must be able to handle this amount of information,
and, in addition, it must be capable of computing the model in
a reasonable time. Most available software have been designed
and implemented under a sequential programming paradigm,
and consequently do not take advantage of the available capacity
offered by today's computers. Nowadays, these are based on
multicore architecture. We propose a distributed - multiprocess
implementation to improve the performance of this estimation
algorithm, considering two main focuses: (1) Use of efficient
algorithms for the different issues involved in the estimation
by kriging (search and solving of systems of linear equations),
and (2) Implementation of the algorithm in a parallel setting, in
order to distribute the computation effort in several processes.
The first focus is approached using oct-trees and specific
algorithms for the solution of systems of linear equations
with symmetry. The second focus is resolved by modifying
the kriging algorithm to fit specific strategies for the use of
multiple processes and distribution of the computation load,
thereby significantly reducing the computation time for large
estimation models. In addition to this, some tools are used for
specific homogeneous systems of processors (clusters) to further
reduce the running time of the estimation.
We show a case study to demonstrate the improvements
in computation time from three different perspectives: (1)
using the multicore capacity; (2) improving the performance
by adapting the algorithm to run in a distributed framework
(cluster); and (3) increased speed due to the use of specific tools
for a homogeneous cluster.

introduction
Resource and reserve quantification calls for interpolation techniques to predict a variable
at unsampled locations from available samples in a neighbourhood, and accounting for
the change of support involved and the spatial continuity. These techniques are based on
the theory of regionalised variables [1] and are generically called kriging [2, 3] . Since the
first introduction of these procedures in the mining industry, kriging techniques saw a
rapid development, with the introduction of many variants to account for peculiarities
of the data. In addition to simple and ordinary kriging, there are many non-linear variants,
such as indicator, lognormal, multi-Gaussian and disjunctive kriging, to name just a
few. Nonetheless, all these approaches have the same basic structure:

• For each block, a search of nearby samples is required.


• A system of linear equations is established and solved.
This procedure requires knowledge and modelling of the spatial continuity through a
variogram or covariance model. Furthermore, the final goal is always to obtain a best
estimate given by an optimality criterion.
Currently, it is common to find mining districts with very complex geology, which need
a small support (block size) for modelling [4], or massive low-grade deposits, which require
the construction of very large models in order to provide a realistic view of the district's
potential for mine design, scheduling and optimisation. The construction of these very
large block models is difficult due to time and software constraints and provides little
chance for doing tests and validations.
The application of computers to the calculation of these numerical models goes back
to the early days of geostatistics; as a result, most implementations are based on a
sequential programming paradigm adequate to the technology of those days. Current
technology provides very accessible multi-core computers and clusters of computers with
multiple processors.
Parallel and distributed computing has been used in geostatistics in some applications,
mainly oriented towards simulation, which is computationally more expensive.
We present a distributed-multiprocess framework that takes advantage of the
amenability of kriging to parallelisation and that can be used on a regular desktop computer
with one multi-core processor or on a cluster of several nodes with multi-core processors.

Kriging algorithm and its optimisation


Kriging is performed over a specific domain, given by an estimation unit. This estimation
unit is the result of a geological analysis and must have enough samples to perform
statistical inference of a model of spatial continuity. The domain is discretised in blocks.
Estimation is done on a block by block basis, by centring the search of nearby data on the
node to be estimated (Figure   ➊). Once some of the samples found in the neighbourhood
are selected, a system of linear equations is set up to find the weights to be assigned to
each sample. Constraints and transformations may be required, depending on the
type of kriging to be used.

Figure 1 Domain, block model, data and search neighbourhood for kriging.

When analysing the computation time used in this process, the following time
distribution can be found for a typical kriging set up:

• Initial setup and conditioning data pre-process: 2%


• Search for conditioning data: 35%
• Setup, solution of the linear system of equations and computation of the estimated value and
variance: 60%
• Post-process and writing output: 3%
The optimisation of the computation time must focus on the most demanding issues, in
this case, the search of nearby data, and the set up and solution of the kriging system.

Spatial search with oct-trees


Our implementation of kriging considers oct-trees for the spatial search of nearby data.
An oct-tree, abbreviated simply as octree, is a search tree where each parent node is
divided into eight child nodes, and it can be used for indexing spatial data in 3-d. Each
child node is associated with a spatial octant. Octrees are a particular case of kd-trees,
with dimension 3.

Figure 2 Octree indexing: each node is divided into eight child nodes.

Finding k nearby data within a given distance D using an octree is an O(k log n) operation [5, 6],
while other approaches, such as the superblock search implemented in gslib, are O(kn).
When a large number of data is available, the gain of using an octree compared to a
superblock search is very significant, as illustrated in Figure  ➌.


Figure 3 Comparison of an
algorithm O(klogn) and O(kn)
as n increases.
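As a rough illustration of this type of spatial indexing, the sketch below uses SciPy's kd-tree (octrees being the 3-d special case discussed above); the data, search radius and number of neighbours are illustrative and are not those of the implementation described here.

```python
# Illustrative sketch: neighbour search with a kd-tree instead of a superblock scan.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
samples = rng.uniform(0.0, 1000.0, size=(200_000, 3))   # composite coordinates (x, y, z)
tree = cKDTree(samples)                                  # built once, O(n log n)

block_centre = np.array([500.0, 500.0, 500.0])
dist, idx = tree.query(block_centre, k=48)               # 48 nearest samples, ~O(k log n)
within = tree.query_ball_point(block_centre, r=200.0)    # all samples within 200 m
print(len(idx), len(within))
```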

Solving systems of linear equations


The second problem addressed to optimise the computation time of kriging is solving the
linear system of equations. In general, the system can be expressed as Ax = b, where A is
the square, symmetric matrix of data-to-data covariances, x is the vector of weights and
Lagrange parameters (depending on the type of kriging used), and b is the vector of
covariances between the data and the block or point to be estimated.
Typically, available software utilises generic solvers for this problem, the most
common being the LU decomposition of the matrix A. However, since the system of equations
is symmetric, particular implementations of solvers can take advantage of this fact,
providing optimised solutions in terms of computation time. In addition to the
classical triangular decomposition (LU), a spectral decomposition (LDLᵀ) and the Cholesky
approach (LLᵀ) can be used (Table 1); a small sketch contrasting these solvers is given after the table.

Table 1 Order of algorithms for the decomposition of linear systems of equations

Linear System Type                  Decomposition    Order
Generic case                        LU
Symmetric positive semi-definite    LDLᵀ
Symmetric positive definite         LLᵀ
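The sketch below contrasts a generic LU solve with a Cholesky solve on a synthetic symmetric positive definite system; it is a minimal illustration of the idea in Table 1, not the solver implemented by the authors.

```python
# Illustrative sketch: generic LU solve versus a Cholesky solve that exploits symmetry.
import numpy as np
from scipy.linalg import lu_factor, lu_solve, cho_factor, cho_solve

rng = np.random.default_rng(3)
n = 48                                  # number of conditioning samples
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)             # stand-in for the symmetric data-to-data covariance matrix
b = rng.standard_normal(n)              # stand-in for the data-to-block covariances

x_lu = lu_solve(lu_factor(A), b)        # generic path
x_ch = cho_solve(cho_factor(A), b)      # symmetric positive definite path, roughly half the work
assert np.allclose(x_lu, x_ch)
```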

Parallelisation and distribution of computations


Parallelisation in computing is a technique that allows executing a set of tasks in
different processors at the same time. Under ideal conditions, if a sequential process
takes a time T to compute, using P processors, the time would be reduced to T/P if the task
is 100% parallelisable. However, in practice, computers do not provide this ideal scenario
in terms of their architecture, due to communications between CPUs and memories.
Another problem is that the operating system must manage the CPU time assigned to each
process. In spite of these problems, the most relevant aspect is that algorithms usually
have not been designed to be executed in parallel, and not all parts of a computational
problem can be parallelised.

Figure 4 Comparison of the different solving algorithms.

Although computers increasingly provide multiple CPUs, this has a practical limit. The
only way to have a large number of CPUs available is through a cluster of computers, which
brings additional challenges, such as synchronisation of processes, and data transmission.
For all practical purposes, a distinction is required between a Shared Memory parallel
algorithm, designed to run on a single computer using several CPUs that share memory,
and a Distributed Memory parallel algorithm, designed to run on several interconnected
computers (a cluster).

• Shared Memory Algorithm: The idea is to execute several processes (threads)
simultaneously, using the available CPUs and sharing the memory. As an example, a schematic
of a parallel application of a for loop using four threads is presented below (Figure 5),
and a minimal code sketch is given after this list.

Figure 5 Schematic of a
parallelisation of a for loop.

• Distributed Memory Algorithm: a simple scheme for implementing a distributed
algorithm is to designate one node as the master node, with the objective of preparing the
data (which means partitioning the problem), connecting with the slave nodes, sending the
information to each slave, synchronising the execution status (including the processing
done by the master node itself), and collecting the data to produce the final result.
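A minimal shared-memory sketch of the parallel for loop of Figure 5 follows; Python worker processes are used here instead of threads, and estimate_block is only a stand-in for the per-block task.

```python
# Illustrative sketch: a for loop over blocks spread across four workers.
from concurrent.futures import ProcessPoolExecutor

def estimate_block(block_id):
    # placeholder for the per-block kriging task described in the text
    return block_id, sum(i * i for i in range(10_000))

if __name__ == "__main__":
    block_ids = range(1_000)
    with ProcessPoolExecutor(max_workers=4) as pool:   # the four "threads" of Figure 5
        results = dict(pool.map(estimate_block, block_ids))
    print(len(results), "blocks estimated")
```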

By applying any of these approaches, a time reduction in the execution of a program as
compared with the sequential execution is obtained. This improvement is quantified
with the speed-up, which is defined as the ratio between the sequential execution time
and the parallel execution time of a task:

S = T_sequential / T_parallel    (1)


Amdahl's law establishes a maximum for the speed-up [7, 8]:

S_max = 1 / ((1 - P) + P / N)    (2)

where P is the portion of the program that can be parallelised and N is the number of
processors. Naturally, as P approaches 0, the speed-up approaches one, meaning that no
improvement can be achieved.

Figure 6 Speedup and parallel portions.

An empirical approach to calculate the parallelisable portion of an algorithm is given by:

P = (1/S - 1) / (1/N - 1)    (3)

where S is the speed-up measured using N processors.

Sequential kriging algorithm


The basic algorithm for applying kriging in a sequential fashion, that is, without
parallelising, can be summarised as follows:

• Read the parameters, including the definition of the set of point/blocks to be estimated
• Process the parameters and load conditioning data
• For every point/block to be estimated:
––Search for data within neighbourhood
––Build the kriging system
––Solve for the unknown weights
––Compute the kriging estimate and kriging variance.

• Write out the results.
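For illustration, a self-contained sketch of this sequential loop is given below, assuming simple kriging with an isotropic exponential covariance and synthetic data; it is not the authors' implementation and it omits the kriging variance.

```python
# Illustrative sketch of the sequential loop: search, build system, solve, estimate.
import numpy as np
from scipy.spatial import cKDTree
from scipy.linalg import cho_factor, cho_solve

def cov(h, sill=1.0, a=100.0):
    return sill * np.exp(-3.0 * h / a)                 # exponential covariance model

rng = np.random.default_rng(4)
data_xyz = rng.uniform(0, 500, size=(2_000, 3))        # sample locations
data_val = rng.standard_normal(2_000)                  # sample values (zero mean)
blocks = rng.uniform(0, 500, size=(100, 3))            # points/blocks to estimate
tree = cKDTree(data_xyz)

estimates = np.empty(len(blocks))
for i, x0 in enumerate(blocks):
    _, idx = tree.query(x0, k=24)                                     # neighbourhood search
    d = np.linalg.norm(data_xyz[idx, None] - data_xyz[None, idx], axis=2)
    A = cov(d) + 1e-9 * np.eye(len(idx))                              # data-to-data covariances
    b = cov(np.linalg.norm(data_xyz[idx] - x0, axis=1))               # data-to-point covariances
    w = cho_solve(cho_factor(A), b)                                   # kriging weights
    estimates[i] = w @ data_val[idx]                                  # simple kriging estimate
```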



Proposed distributed multiprocess implementation


To explain our proposed method, we first differentiate between parallelisation and
distribution of the computation load.

• Parallel Kriging Algorithm: Parallelisation focuses on the task performed over every
point/block to be estimated. This task is a function of the location of the point/block,
the input data, the variographic model, and the search parameters. Loading parameters
and data, and writing the output, are not part of the parallelisation. If N processors are
available, N simultaneous tasks can be performed, which would suggest an expected
speed-up of N. The actual parallel portion is estimated as 0.86, using one to four nodes
and one to eight CPUs, which by Amdahl's law bounds the speed-up at approximately seven.

• Distributed Kriging Algorithm: In this case, the points/blocks of the model are
divided into portions, each of which is sent to a different node, within which the
estimation is parallelised as explained above. Each portion is not necessarily fed
with the same data.

Using both concepts, a kriging problem consisting of B points/blocks to be estimated
can be solved considering N nodes with C cores each, as follows: each node processes one
portion of the model, consisting of B/N points/blocks. Within each node, these blocks
are processed in parallel by the C cores.
Figure  ➐ shows a schematic of the solution of the block model presented in
Figure  ➊, using three nodes and two threads per node. The block model is divided into
portions and each portion is parallelised within a node.
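One possible way to sketch this node-level partitioning is shown below using mpi4py; the paper does not state which message-passing tool was used, so this is an assumption made purely for illustration, with the per-node work reduced to a stub.

```python
# Illustrative sketch: master node partitions the block list, each node processes
# its portion, and the results are gathered back (run with e.g. mpirun -n 4).
from mpi4py import MPI
import numpy as np

def estimate_portion(block_ids):
    # placeholder: within a node this would itself be parallelised over the C cores
    return {int(b): 0.0 for b in block_ids}

comm = MPI.COMM_WORLD
rank, n_nodes = comm.Get_rank(), comm.Get_size()

if rank == 0:                                        # master prepares the partition
    portions = np.array_split(np.arange(100_000), n_nodes)
else:
    portions = None

my_blocks = comm.scatter(portions, root=0)           # one portion of B/N blocks per node
my_result = estimate_portion(my_blocks)              # node-level work
all_results = comm.gather(my_result, root=0)         # master collects the final model
```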

Performance tests
To test the performance of the implementation of kriging, a case study is developed
considering a copper deposit, where around 230,000 drill hole samples are available.
Two block models are considered to assess the speed-up obtained with the distributed
multiprocess algorithm. The first grid has five million blocks and the second, constituting
a real-world case, has 75,168,000 blocks. Each block is discretised into 2 x 2 x 2 points to
perform block kriging and a variogram consisting of a nugget effect plus two anisotropic
nested structures is used.
The first test considers a sequential application using the small block model (five
million blocks) with industry standard search parameters, and considering only one rock
type. Execution times are compared with the open-source software gslib [9]. A 62% time
reduction is obtained (Figure  ➑), i.e. a speed-up of roughly 2.6, meaning that the optimisation
of the code allows our implementation to run more than twice as fast as gslib when a single
process is used.


Figure 7 Parallelisation and distribution of the problem.

A second test considers the Shared-Memory mode, where a single node is used and several
processes (threads) are launched (Figure  ➒).

Figure 8 GSLIB v/s sequential version.

Figure 9 GSLIB v/s shared memory version (one node, one to eight processors).

A third test evaluates the use of a distributed approach, where each node parallelises
the tasks within its processors (eight in each node). Figure  ➓ shows the results.

Figure 10 GSLIB v/s distributed memory version (one to four nodes and eight processors).

The final test uses the large block model (over 75 million blocks) to evaluate the time used
to perform kriging considering five different estimation units, three kriging runs with
different search parameters, and up to 48 nearby samples for solving each kriging system.
The total time for running the entire model is 260 seconds, that is, just over four minutes.

conclusions
Kriging is still the most used tool for the estimation of mineral grades in the mining
industry. However, models include more and more variables and their size increases due
to the complexity of the geological settings, the requirement for higher selectivity or the
exploitation of lower-grade mining districts of large dimensions. Current commercial
software usually considers a sequential implementation of the algorithm and does not
take advantage of the newly available multi-core processors. We discuss a distributed
multiprocess implementation of kriging and show the advantage in terms of computing
time that can be achieved. This implementation proves faster in the sequential application
due to optimisation of the key aspects of the code, namely the search for nearby data
and solving of the linear system of equations. When implemented in a parallelised and
distributed fashion, time reductions can be of one or more orders of magnitude. The
implementation is flexible enough to provide faster computation of block models by
kriging in a multi-core computer or in a cluster of multiple nodes.
The availability of faster algorithms to run geostatistical tools provides the possibility
to run several kriging plans and perform validations, as well as checking several
combinations of kriging parameters in several runs. This analysis ensures that a
satisfactory block model is fed to the subsequent steps of the deposit evaluation.


references
Journel, A. G. & Huijbregts, C. J. (1978) Mining Geostatistics, Academic Press, London, p. 600. [1]

Glacken, I. M. & Snowden, D. V. (2001) Mineral Resource Estimation, in: Edwards, A.C., ed., Mineral
Resource and Ore Reserve Estimation — The AusIMM Guide to Good Practice: The Australasian
Institute of Mining and Metallurgy, Melbourne, pp. 189 – 198. [2]

Sinclair, A. J. & Blackwell, G. H. (2002) Applied Mineral Inventory Estimation, Cambridge University
Press, Cambridge, p. 381. [3]

Jara, R. M., Couble, A., Emery, X., Magri, E. J. & Ortiz, J. M. (2006) Block Size Selection and Its Impact
on Open Pit Design and Mine Planning, Journal of The South African Institute of Mining and
Metallurgy, Vol. 106 (3), pp. 205 – 211. [4]

Friedman, J. H., Baskett, F. & Shustek, L. J. (1975) An Algorithm for Finding Nearest Neighbors, ieee
Transactions on Computers, Vol. 24(10), pp. 1000 – 1006. [5]

Hjaltason, G. R. & Samet, H. (1995) Ranking in Spatial Databases, in: Egenhofer, M.J. and Herring,
J.R., eds., Proceedings of the Fourth Symposium on Spatial Databases, ssd'95. Lecture Notes in
Computer Science, Vol. 951, Springer, Berlin, pp. 83 – 95. [6]

Amdahl, G. (1967) Validity of the Single Processor Approach to Achieving Large-Scale Computing
Capabilities. afips Conference Proceedings. [7]

Hill, M. & Marty, M. (2008) Amdahl's Law in the Multicore Era. ieee Computer, July 2008. [8]

Deutsch, C. V. & Journel, A. G. (1998) gslib: Geostatistical Software Library and User's Guide, 2nd edn,
Oxford University Press, New York, p. 369. [9]
The Business Case for Integrated
Materials Characterisation in
Mining

abstract
Karin Olson hoal Materials characterisation is an important part of mining projects
Jane stammer and takes many forms. Typically, materials are first characterised
Jocelyn ross
geologically and chemically by assay, followed by metallurgical
Colorado School of characterisation in the form of breakage, leach and flotation
Mines, USA
tests, mining blast tests and environmentally focused acid
accounting tests. In a geometallurgical context, we would argue
for integrating these test methods so that the response of the
materials at each stage could be quantified and used to predict
future behaviour. Many companies wish to implement some form
of geometallurgical program, but they may lack the expertise,
funds or management approval to commit major resources to
such an endeavour. What has been missing to date is a business
model for creating an effective characterisation program from
beginning to end, essentially for the life of a project. In this
paper, we present the business case in terms of a characterisation
stage-gate structure with interactive loops of reconciliation and
re-characterisation. We propose a simple method in which the
mineralogical and textural characterisation of materials acts as
a platform upon which other characterisation efforts are built
throughout the life of a project. The stage-gate structure allows
for evaluation stages and important decisions to be made by a
cross-functional team before going forward in a project. In the
current socioeconomic climate, the importance of data integrity
and best practices in operations applies to both the visibility and
sustainability of a project. This method incorporates materials
characterisation into the business discussion of the overall project
risk and cost structure.

introduction
In 1998, T. McNulty discussed the critical components of mining innovation as being
truly revolutionary in nature, as opposed to evolutionary or developmental, and requiring
companies that are early-adaptors to really implement change [1] . A 2006 study by the cim
Society for Innovative Mining Technology showed, however, that mining professionals
looked toward developments in information technology and mechanisation to drive
innovation rather than the creation and adaption of new ideas. They further looked to
companies, and not to individuals, to develop innovation for the industry, citing barriers
to innovation as management attitudes toward change and risk [2] . At the present time,
companies have an exceptional opportunity to take advantage of innovative thinking and
applications. Truly innovative ideas can be rapidly exchanged, tested and applied through
the availability of new instrumentation, data management capabilities and improved
communications systems. Environmental and sustainable resource requirements are
further forcing the industry to be proactive in this area.
One area on the brink of transformational change in the mining and energy industries
is that of materials characterisation. Through the increased availability of scanning
electron microscopy (sem)-based quantitative mineralogy techniques such as mla and
qemscan, characterisation has the potential to be the platform for truly innovative
changes to expedite operations and reduce costs. In this paper, we present a template for
using materials characterisation as the basis for project management and decision making.

characterisation and geometallurgy


During the life cycle of a mining project, different forms of material characterisation
information are generated by specific technical groups and at different stages in the
project, perhaps overlapping in sequence and with limited communication between
groups. It is clear that knowledge of mineralogical and geological relationships improves
metallurgical testing, as shown by the recent high level of interest in geometallurgical
(geomet) studies. Flotation tests otherwise conducted by experimentation benefit from
using mineralogical data to predict recoveries. Leach tests traditionally conducted on
the basis of assay-calculated mineralogy benefit from actual mineral data that may
indicate different leach conditions are required.
The result of multiple characterisation efforts is that the same material (ore)
is characterised geologically, mineralogically, chemically, metallurgically, and
environmentally through independently operating procedures and by different people.
The potential for experimentation, repetition of tests and development of unworkable
flow sheets, as well as the scheduling of these characterisation steps, results in significant
and unnecessary costs to the project.
Geomet programs demonstrate how technical divisions can collaborate and
communicate more effectively, for example linking metallurgical responses of materials
to mineralogical controls. Geomet relies on quantitative mineralogy that transforms
geological information (rock type, alteration, mineralogy, texture) into a data format
that can be integrated into process flow sheets and utilised by engineers. In practice,
geomet is commonly conducted by a technical division, perhaps the geology or metallurgy
team, independently interested in reconciling recovery with mineralogy, or relegated to
an outside group. It is rare in a company that the potential cost benefits of focussing on
characterisation across all divisions are understood internally; this potentially handicaps
many geomet programs that may not receive management support and therefore may
not achieve desired goals.

There is value in having the earliest possible understanding of ore and gangue
variability from which to predict metallurgical responses. Early-stage full characterisation
and integration of existing core data into a 3-d geological model quantifies the mineral
associations, textural attributes of the ore, alteration, deleterious elements, and
anticipated recovery issues. The cost benefits realised at an earlier stage than normal
might be in the reduction of the number of drill holes, or in changes to the grinding
circuit through prediction of hardness. In order to fully adapt the integrated approach
across the mine life cycle, however, companies tell us they are limited by the challenges
of effectively utilising enormous data sets, correlating small sample sizes to mine-
scale operations, and justifying the costs of additional analyses, instrumentation, or
personnel. Going beyond localised geomet efforts to a full-project characterisation
solution requires a structure that can be carried through the entire project and that
is related to defined business outcomes such as cost, risk, and roi. This paper outlines a
characterisation method using a modified stage-gate structure with defined phases,
check points and iterative pathways for re-characterisation.

methodology
The basis of this discussion is summarised in two key points: (1) naturally occurring
ore is composed of minerals with variable abundances and textural associations, and (2)
the response of the ore to extraction is controlled by point 1. Understanding the mineral
chemistry, mineral associations, texture, and rock fabric of the ore (and gangue) is important
in order to predict grinding, leaching, flotation, and environmental response. Data sets
made up of mineralogical characterisation information are the linkage between these
stages, and serve as points at which projects can be assessed before proceeding further. The
evaluation of characterisation data sets provides check points in a project, opportunities for
planned, focused, objective and data-centred analysis of operational effectiveness.
In a decision management context, the cost benefit potential of integrated
characterisation can be recognised early and carried through the life of project.
The business case for the program is made by measuring the value of integrating
characterisation data sets collected during specific phases of the project by different
technical divisions. It is not enough to have a geomet specialist on staff; the geologists
and engineers should collaborate, as a cross-functional group is best capable of assessing
the progress of a project on the basis of technical data. Innovation occurs through
communication and the integration of data sets for multiple uses during defined
phases of a project, when parallel operations (geology and metallurgy, for example) may
be testing and evaluating the characteristics of ore materials simultaneously and in
different ways. Objective and empirical assessment points (gates), on the basis of results
produced at given intervals, allow for an assessment of methods and costs. This method
helps eliminate redundant steps, and enables development of an accessible database
that is useable by each group during evaluation of project development. This allows
for objective decisions regarding advancing or terminating a project to be based on the
materials data, which can then be correlated with economic, risk, or other criteria. The value
of basing a business case on materials characterisation is that decisions are based on
the nature of the ore itself, since each site will differ in terms of operating costs, cost of
implementing new protocols, and environmental constraints.


characterisation platform
The characterisation platform described here utilises a stage-gate structure [3]
(Figures ➊ to ➌). The three boxes in Figure ➊ represent three phases of a project
that are subdivided into characterisation components. Initial characterisation and
domain determination is made largely by the geological team and includes information
on lithology, alteration, and mineralisation. Preliminary assessment of parameters such
as range of grain size, texture, presence of minerals that might impact extraction, and
other key features are identified through core logging, optical microscopy, and other
methods such as hand-held xrf and nir. This information is put into a 3-d geological
model in which domains and ore types are identified. The business case for developing
the deposit is initiated at this point, based on estimates of cost, risk and roi derived
from these preliminary data.

Figure 1 Project phases: schematic illustration of the evolution of a project in which mineral-based
characterisation forms an integral part of the process at each phase.

The original geological model for the project is modified during the course of data
acquisition from further drilling, analysis and metallurgical test work in the second
and third phases of the project (Figure ➊). Through geochemical and quantitative
mineralogical analysis of drill core, the ore is characterised, quantified, and preliminary
metallurgical constraints are identified by stage 2, while the metallurgical responses of
the materials are determined by stage 3. This approach requires model development and
refinement through continued ore definition, in order to predict the behaviour of the
ore. The business outcomes are defined in terms of cost, risk, and impact on the basis of
technical data. Iterative loops through which materials are re-characterised, reconciled
with other data sets, and correlated with tests are shown as gray lines in Figure ➊.
The goal is to use the end results to predict further behaviour in this or other projects
(black line in Figure ➊).
The inverted triangles represent characterisation stop points or gates where cross-
functional teams review the data required to move the project forward during the project.
Information gathered by parallel operations is assessed and decisions are made regarding
progress and development of the process. Figure ➋ illustrates the datasets for review at
theses stages. Gate 1 includes review of the 3-d model; metallurgical tests are validated
with mineralogical and geochemical data at gate 3.

Figure 2 Project gates: example of a gate-like structure for a project.

Figure 3 Project deliverables: 3-D digital models at each gate build upon earlier data sets.

Figure ➌ illustrates the deliverables for each gate which use increasingly larger data
sets to refine the 3-d geological model. The model, including alteration, mineralogy and
texture, also holds the data for grain size distribution, mineral association, geochemical-
mineralogical parameters and metallurgical response data sets such as liberation,
hardness, flotation and leach characteristics, and recovery. These data are accessible for
examination by all the technical groups.

results and discussion


The business case
The concepts illustrated in Figure ➌ are adaptable into a project management framework
in which methods, procedures, tests, deliverables and business outcomes are
quantified and predicted for each project [4] while providing control over data quality.
The structure provides points of assurance where methods are evaluated and potentially
improved upon, since it may be easy to overlook quality control gaps in analytical and
testing operations when they are routine and isolated. For a sustainability initiative,
for example, high standards of analytical practice provide for better prediction of
environmental impact. The reasoning for committing financial resources is identified
and the associated costs at each check point are compared and quantified against the
costs of not implementing the plan.
The business case is to demonstrate that an investment in early and enhanced
characterisation has value and will be properly managed, and that it is in the best
interest of the company to use those data as a platform upon which decisions can be made
going forward. What is the context of using the method, and how can it be compared to
standard operating procedure in terms of cost effectiveness? The value proposition of the
method is determined through quantifying the predicted business outcomes (cost, risk,
roi) of using the method compared to not using the method. In application, the details
will be a function of parallel characterisation efforts (geological and metallurgical, for
example) and how communications between these groups can simplify testing and data
evaluation. What groups are actually impacted, and how can the individual phases and
gates be structured in time to optimise operations? By dividing up the characterisation-
based steps into discrete units, there is better opportunity for cost estimation and cost
control, since in each phase of the project, parallel operations can utilise the data set
and communicate results rapidly. These factors, as well as determining team makeup,
leadership of the project, and funding, will vary from project to project. Fewer divisions
and management layers, together with improved creative thinking, will have cost benefits.

Overall results
Consider a hypothetical example of a development project (Table 1). In one domain there
have been 10 drill holes drilled, with traditional optical microscopy and geochemistry.
The sample interval is 2 m for 100 samples per hole, which will vary as a function of
ore variability and complexity. The owners wish to understand the ore variability in
order to predict metallurgical response of the material; focusing on characterisation
before proceeding with additional drilling and metallurgical tests. Because acquisition of
traditional information is already part of the budget in mapping, sampling, microscopy,
geochemistry, drilling, etc., the additional cost items are in refining the core logging
procedures, and adding quantitative mineralogical and additional geochemistry to
support the mineralogy by gate 2. Automated core loggers combined with mla/qemscan
aid visual core logging in acquisition of obscure mineral and chemical data, and fewer
(if any) thin section analyses are required. Direct benefits are in understanding ore
domains and in the early identification of metallurgical response and environmental
impact, with associated revenue benefits from higher throughput, recovery, and reduced
risk. Additional drilling can be re-evaluated and the metallurgical testing program will
be better defined at an early stage; reductions in drilling and additional test work costs
potentially pay for the additional analyses early on. Other examples of benefits include
early determination of how blended material will respond on the basis of the individual
ore types [5] , mill parameters and grind size determined by understanding mineralogical
and textural variability of materials [6, 7] , and improved flotation circuit parameters [8] .

Table 1 Example of potential value of a characterisation platform, US$

Typical Item                                 Costing                            Potential Benefits by using Full Characterisation
Drilling, $150/m, 10 300-m holes             $450,000                           Fewer drill holes, e.g. 10% project savings; direct savings $45,000
Visual core logging                          Time, materials, re-logging        Identify metallurgical parameters early; less time, more relevant data, no re-logging
NIR, core scanners, digital logging          Time, data capture, model          Identify obscure mineralogy/chemistry; time savings, integrate with geochem and mineralogy
Thin section analysis, @$200/sample          $200,000                           Fewer thin section analyses, e.g. 50%; direct savings $100,000
Geochemistry, $40/sample                     $40,000                            Digital link of geochem with mineralogy; model integration, predict deleterious/other elements
MLA/QEMSCAN, $500/sample                     $500,000                           Understand/quantify ore variability; predict metallurgical responses, integrate model
Comminution tests (e.g. SMC, ball mill)      Time, materials, $2000/ea (SMC)    Predict conditions, energy, throughput; 2% increase in throughput, large mine: 3,000 t/d
Metallurgical tests (e.g. float, leach)      Time, materials, service fees      Reduce number of tests, improve recovery; $ benefits of increase in recovery, per %
Environmental tests (ABA, humidity cell)     Time (months), materials           Link to mineralogy, concurrent with ops; reduce costs of planning, mitigate risk

conclusions
Mineralogical characterisation can be a fundamental part of decision making and not
an ancillary item. The additional up-front cost savings for the geology team include
the ability to make decisions onsite during drill programs and to predict metallurgical
response. At the metallurgical stage, cost savings are achieved by refining test work
and process development, potentially increasing throughput and recovery; we are
aware of examples in which millions of USD in unnecessary test work could have been
saved through early mineralogical characterisation. The data sets are also the basis for
reclamation and environmental monitoring decisions [9] , as an early and quantitative
protocol for waste planning and reduced environmental risk and impact.
Characterisation-based steps can be identified throughout a project, with opportunities
for re-characterisation and reassessment. Break points, or gates, inserted at intervals
provide a basis on which cross-functional teams can objectively evaluate and determine
the progress of a project on the basis of the technical data. Specific deliverables produced
at these points are in the form of data sets and 3-d models that relate to the desired
business outcomes of costs, risk, and roi. Each project will be different and variably
complex. However, a careful analysis of the business case for basing project development
on characterisation is likely to demonstrate unrealised value to the project.


acknowledgements
This paper benefitted from discussions in the amrc geomet group, particularly with M.
Gregory, Pebble Partnership, and S. Pichott, Anglo American Chile, and with colleagues
at sgs Minerals G. Turner-Saad and J. Richardson.

references
McNulty, T. P. (1998) Innovative Technology: Its Development and Commercialization. M.C. Kuhn (ed.),
Managing Innovation in the Minerals Industry, Society for Mining, Metallurgy and Exploration,
pp. 1–14. [1]

CIM (2006) Innovation. CIM Magazine, Canadian Institute of Mining, 1 (1) pp. 10, 31–33. [2]

Cooper, R. G. (2009) How Companies are Reinventing their Idea-to-launch Methodologies. Reference Paper
38, Research, Technology Management 52 (2) pp. 47–57. [3]

PMI (2004) A Guide to the Project Management Body of Knowledge. Project Management Institute,
Newtown Square, Pennsylvania, USA, Third Edition p. 390. [4]

Gregory, M. J. & Lang, J. R. (2009) Hydrothermal Controls on Copper Sulphide Speciation, Metal Distribution
and Gold Deportment in the Pebble Porphyry Cu-Au-Mo Deposit, Southwest Alaska. SGA2009: Smart
Science for Exploration and Mining, Society for Geology Applied to Mineral Deposits, Townsville,
Australia, p. 3. [5]

Ross, J., Appleby, S. K., Hoal, K. & Botha, P. (2009) Quantitative Mineralogical Study of Ore Domains at
Bingham Canyon, Utah, USA. Preprint 09-108, Society for Mining, Metallurgy and Exploration,
Littleton, Colorado, USA, p. 8. [6]

Ross, J. (2010) A Geometallurgical Study at the Bingham Canyon Mines, Utah Analyzing Mineralogical and
Textural Parameters Impacting Rock Breakage. Unpublished MSc thesis (draft), Colorado School of
Mines, Golden, p. 211. [7]

Pichott, S., Godoy, S. & Holmgren, C. (2009) Advanced Mineralogical Description of the Copper Feed,
Concentrate and Tailings at Los Bronces Division, Anglo American Chile, geomin 2009, Antofagasta,
10–12 June. [8]

Hoal, K., Stammer, J. G., Smith, K. S., Walton-Day, K. & Russell, C. C. (2009) Application of Quantitative
Micro-mineralogy to Tailings and Mine Waste. D. Sego, M. Alostaz and N. Beir (eds.), Tailings and
Mine Waste 2009, University of Alberta Geotechnical Center, pp. 703–709. [9]
Simulation of Geologically Complex
Deposits: New High-Order Models
through Spatial Cumulants

abstract
Hussein mustapha Earth sciences and engineering-related phenomena such as
Roussos dimitrakopoulos geologic units, grade content and other properties of mineral
McGill University, Canada
deposits represent complex geological systems distributed in
space. Their spatial distributions are currently predicted from
finite measurements and second-order spatial statistical models,
which are limiting, as geological systems are highly complex, non-
Gaussian and exhibit non-linear patterns of spatial connectivity.
Non-linear and non-Gaussian high-order geostatistics is a
new area of research based on higher-order spatial connectivity
measures such as spatial cumulants. A high-order spatial
stochastic modelling framework is outlined herein starting
with the definitions of high-order spatial statistics vis-a-vis of
spatial cumulants. The inference and interpretation of high-order
statistics and cumulants is developed based on alternative spatial
templates. Spatial cumulants are shown to capture the directional
multiple-point periodicity and spatial architecture of geological
processes. In addition, it is shown that only a subset of all the
cumulant templates has to be computed in order to characterise
complex spatial patterns. Finally, complex deposits are simulated
using a nonparametric Legendre series approximation with the
coefficients calculated in terms of spatial cumulants. Examples
from the simulation of complex deposits are used to elucidate the
new high-order simulation approach presented.

introduction
In earth sciences and engineering, measurements of phenomena under study represent
complex non-Gaussian systems distributed in space. Frequently, it is required that geo-
environmental attributes are modelled and their spatial distributions predicted from a
limited set of measurements. Random field models and stochastic data analysis, termed
geostatistics, have long been established and used as the key approach to modelling
and predicting natural phenomena in a variety of earth sciences and engineering fields
[1–6] . Despite the considerable developments over the past three decades, modelling
approaches are based on second-order statistics and the spatial information these
contain. Concerns articulated during the last decade suggest that current modelling
frameworks are limited in their ability to account for the spatial complexity of the
natural phenomena being modelled, which are critical to modelling and predicting
spatially-distributed, location-dependent data [7] . Several attempts to develop new
techniques dealing with spatial complexity include the multiple-point approach [8, 9] ,
Markov random field based approaches [10, 11] and others. These developments replace
the two-point covariance with a Training Image (ti) so as to account for high-order
dependencies. Although these are novel approaches, there is a need for a well-defined
spatial stochastic modelling framework capable of dealing with the complexity of geo-
environmental phenomena. The approach advocated herein is based on cumulants
[12, 13] , which are combinations of moment-statistical parameters that allow for the
complete characterisation of non-Gaussian random variables. In multiple-point statistics,
Training Images are used as a model for high-order joint distributions. However, this
model does not necessarily represent the true joint distribution of the random field under
consideration. A multiple-point statistic is a particular case of a high-order moment and is
not derived from a concrete statistical theory. In contrast, cumulants, which have more
advanced statistical properties than moments, are explored in this paper and will be
essential in future work. Spatial cumulants are a new concept that is introduced here
because cumulants completely characterise non-Gaussian stationary and ergodic spatial
random fields, and thus can provide a new consistent framework in addressing the issues
mentioned above. Related works on cumulants of one-dimensional random function
models have been developed to deal with the identification, analysis and testing of non-
linear signals [14, 15] . This paper outlines basic definitions, summarises approaches to
calculating anisotropic spatial cumulants with templates, shows some selected examples
and comments on some duality relations between cumulants and natural processes. In
addition, simulations of complex non-Gaussian and non-linear geological patterns
are presented based on the use of spatial cumulants in the high-dimensional space of
Legendre polynomials [16] .

sequential simulation with high-order spatial cumulants
Approximation of a joint probability density using
Legendre series
This section discusses the approximation of continuous densities using Legendre series.
A square-integrable, real, piecewise smooth function f defined on D = [-1, 1] can be
formally written as a series of Legendre polynomials

f(z) = \sum_{m=0}^{\infty} L_m \bar{P}_m(z)    (1)

where P_m(z) is the m-th order Legendre polynomial, with norm ||P_m||, defined as [16]

P_m(z) = \frac{1}{2^m m!} \frac{d^m}{dz^m}\big[(z^2 - 1)^m\big], and ||P_m||^2 = \int_{-1}^{1} P_m(z)^2 dz = \frac{2}{2m + 1}    (2)

The Legendre polynomials P_m(z) obey the following recursive relation

(m + 1) P_{m+1}(z) = (2m + 1) z P_m(z) - m P_{m-1}(z)    (3)

where P_0(z) = 1, P_1(z) = z, and m ≥ 1. The set of Legendre polynomials {P_m(z)}_m forms a complete
orthogonal basis set on the interval [-1, 1]. The orthogonality property is defined as

\int_{-1}^{1} P_m(z) P_n(z) dz = \frac{2}{2m + 1} \delta_{mn}    (4)

where \delta_{mn} is the Kronecker delta. To avoid numerical instability in polynomial computation,
we normalised the Legendre polynomials by utilising the square norm. The set of normalised
Legendre polynomials is defined as

\bar{P}_m(z) = \sqrt{\frac{2m + 1}{2}} P_m(z)    (5)

The coefficients L_m in Equation (1) of the Legendre series, the so-called Legendre cumulants,
can be determined using the orthogonality property in Equation (4) as

L_m = \int_{-1}^{1} f(z) \bar{P}_m(z) dz,   m = 0, 1, 2, ...    (6)

where the resulting combination of moments can be rewritten in terms of c_i, the i-th order
cumulant of f. Writing P_m(z) = \sum_{i=0}^{m} a_{m,i} z^i, with coefficients a_{m,i} following
from Equation (2), the expression of L_m is shown in [17] to be

L_m = \sqrt{\frac{2m + 1}{2}} \sum_{i=0}^{m} a_{m,i} E[Z^i],   m = 0, 1, 2, ...    (7)

where each moment E[Z^i] is expressed, through the standard moment-cumulant relationship,
as a combination of the cumulants c_0, ..., c_i.


Theoretically, the series in Equation (1), with coefficients L m calculated from Equation (6),
converges to ƒ (z) at every continuity point of ƒ (z) as demonstrated by [16] . Finally, if
only cumulants of order smaller than or equal to ω are given, then the function ƒ(z) in
Equation (1) can be approximated as follows:

f(z) ≈ f_ω(z) = \sum_{m=0}^{ω} L_m P_m(z)   (8)

The above is detailed for three-dimensional spaces in [17] .
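The following Python sketch is an illustration of Equations (1) to (8) rather than the hosim implementation; it assumes numpy and a hypothetical bimodal target density, builds the Legendre polynomials with the recursion of Equation (3), computes the coefficients of Equation (6) by numerical integration and evaluates the truncated series of Equation (8).

import numpy as np

def legendre_basis(z, omega):
    # Evaluate P_0 ... P_omega at the points z using the recursion of Equation (3)
    z = np.asarray(z, dtype=float)
    P = np.zeros((omega + 1, z.size))
    P[0] = 1.0
    if omega >= 1:
        P[1] = z
    for m in range(1, omega):
        # (m + 1) P_{m+1} = (2m + 1) z P_m - m P_{m-1}
        P[m + 1] = ((2 * m + 1) * z * P[m] - m * P[m - 1]) / (m + 1)
    return P

def legendre_coefficients(f_vals, z, omega):
    # L_m = (2m + 1)/2 * integral of f(z) P_m(z) over [-1, 1] (Equation 6),
    # approximated with a simple rectangle rule on a fine regular grid
    dz = z[1] - z[0]
    P = legendre_basis(z, omega)
    return np.array([(2 * m + 1) / 2.0 * np.sum(f_vals * P[m]) * dz
                     for m in range(omega + 1)])

def approximate_density(coeffs, z):
    # Truncated series f_omega(z) = sum_m L_m P_m(z) (Equation 8)
    P = legendre_basis(z, len(coeffs) - 1)
    return coeffs @ P

# hypothetical bimodal density on [-1, 1], used only to exercise the sketch
z = np.linspace(-1.0, 1.0, 2001)
f = 0.6 * np.exp(-0.5 * ((z + 0.5) / 0.15) ** 2) + 0.4 * np.exp(-0.5 * ((z - 0.6) / 0.1) ** 2)
f /= np.sum(f) * (z[1] - z[0])          # normalise to a valid density
f_hat = approximate_density(legendre_coefficients(f, z, omega=20), z)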

A high-order simulation method


This section describes the high-order conditional simulation method (hosim) based on
spatial cumulants. A sequential procedure simulating values at un-sampled locations that
are randomly visited is used here. The Legendre series approximation is used to estimate
the cpdfs [17] . This expression uses Legendre polynomials which are orthogonal on the
finite interval [-1,1]. Then, the training images and the data values are first scaled to
[-1,1]^d, where d is the dimension of the problem (i.e., d = 1, 2 or 3).

Figure 1 (1) Training image; (2) hard data locations.

Figure 2 (1) Training image with hard data; (2) template TEMP used for a global calculation of the spatial cumulants.

The hosim method first combines the ti used and the samples (Figure ➊) to infer the
high-order spatial cumulants. A global calculation procedure is performed based on a
given maximal template size (temp) (Figure   ➋ (2)). This step consists of calculating all
the spatial cumulants needed by the Legendre series approximation. The main steps of the
hosim method are as follows (a schematic sketch of the sequential loop is given after the list):

1. Scan the training image and the sample data (Figure   ➋ (1)) and store the spatial
cumulants in a global tree.

2. Define a random path visiting once all un-sampled nodes.


CHAPTER V 455

3. Define the template shape T for each un-sampled location x 0 using its neighbours. The
conditioning data available within TEMP are then searched (Figure   ➋). The high-
order spatial cumulants are read from the global tree in Step 1 and are used to
calculate the coefficients of the Legendre series. These coefficients are used to build
the cpdf of Z 0.

4. Draw a uniform random value in [0,1] to read from the conditional distribution a
simulated value, Z (x 0), at x 0.

5. Add x 0 to the set of sample hard data and the previously simulated values.

6. Repeat Steps 3 to 5 for the next points in the random path defined in Step 2.

7. Repeat Steps 2 to 6 to generate different realisations using different random paths.

8. The random path defined in Step 2 concerns only the un-sampled locations. Thus,
the final realisation obtained after step 7 honours the conditioning data.
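A schematic sketch of the sequential loop of Steps 2 to 7 is shown below. It is an illustration only, not the authors' code: the callables find_neighbours and build_cpdf_from_cumulants are hypothetical placeholders standing in for the template search and the Legendre-series cpdf construction of Steps 1 and 3.

import numpy as np

def hosim_realisation(grid_nodes, hard_data, find_neighbours, build_cpdf_from_cumulants, seed=0):
    # One hosim-style realisation: visit un-sampled nodes along a random path,
    # build a conditional distribution at each node and draw from it by inverse sampling.
    # hard_data maps node -> value and is honoured exactly.
    rng = np.random.default_rng(seed)
    simulated = dict(hard_data)                       # conditioning data are kept as-is
    unsampled = [n for n in grid_nodes if n not in simulated]
    rng.shuffle(unsampled)                            # Step 2: random path over un-sampled nodes
    for node in unsampled:                            # Steps 3 to 6
        neighbours = find_neighbours(node, simulated)             # data available within TEMP
        values, cdf = build_cpdf_from_cumulants(node, neighbours)  # cpdf of Z(x0)
        p = rng.uniform(0.0, 1.0)                     # Step 4: uniform random value in [0, 1]
        simulated[node] = np.interp(p, cdf, values)   # read the simulated value from the cdf
    return simulated                                  # Step 7: repeat with other paths for more realisations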

Conditional simulation using cumulants and training images


In this section, the simulation of a horizontal 2-D section of a fluvial reservoir (Figure  ➌(1))
is shown so as to illustrate the high-order conditional simulation using spatial cumulants.
Moreover, the developed method can be applied to mineral deposits. The example presented
herein used 25 sample data (Figure   ➌(2)), the training image in Figure ➌(3) and the
conditional simulation algorithm discussed above.

Figure 3 Simulation of a horizontal 2-D section of a fluvial reservoir: (1) exhaustive (true) image, (2) 25 sample data, and (3) training image.

Different realisations are presented as shown in Figure   ➍. This figure shows that the
main characteristics of the exhaustive image are reproduced using a sparse data set (about
0.85% of the total number of points). The 2-D sections presented here have particular and
complex distributions as shown by the bimodal histogram in Figure ➎(1). This figure
shows the comparison between the generated realisation histograms and the data set
histogram. In addition, the realisations reproduced the variograms along the ew and ns
directions of the data set as shown in Figure  ➎(2) and (3). The developed method is
also validated by comparing the high-order statistics of the data set, exhaustive image
and the different realisations obtained. For example, the third-order spatial cumulant
maps of the exhaustive image, data set and realisations (1) and (2) are very close as
shown in Figure   ➏. This last result is obtained because the new conditional simulation
algorithm uses different cumulant orders in the Legendre series and this will guarantee
the reproduction of not only the histogram and variograms of the sample data, but also
their high-order statistics.

m i n i n 2 0 1 0 •  s a n t i a g o , c h i l e
456 Simulation of Geologically Comple x D eposit s: Ne w High- Order...

Figure 4 Realisations (1) and (2) obtained by hosim.

Figure 5 Histograms (1), NS (2) and EW (3) variograms of 10 hosim realisations. The circles refer to the data set and the solid lines refer to the realisations.

Figure 6 Third-order spatial cumulant maps of (1) the true image, (2) the hard data set, (3) and (4) the realisations 1 and 2, respectively. Values 0.01, 0.015 and 0.020 are the isovalue contours.

conclusions
This paper has presented developments towards a new alternative approach to modelling
complex, non-linear, non-Gaussian earth sciences and engineering data, as required
in most applications. The new alternative framework is founded upon concepts from
high-order statistics that are introduced herein in a spatial context. The simulation of a
complex 2-D image is presented using a new high-order sequential simulation method
which is based on the concept of high-order spatial cumulants. The results have shown a good
reproduction of the main features of the exhaustive image using a small data set. The
realisations generated reproduced the histogram, variogram and high-order statistics of
the data set. A key aspect of the simulation method based on spatial cumulants is the
compliance of the simulated realisations with all statistics (any order) of the available
data and avoidance of possible conflicts between training images and dense data sets
commonly available in mining studies.

acknowledgements
The work in this paper was funded by the NSERC CDR Grant 335696 and BHP Billiton,
as well as the nserc Discovery Grant 239019. Thanks are in order to Brian Baird, Peter
Stone, and Gavin Yates, as well as BHP Billiton Diamonds and, in particular, Darren Dyck,
for their support, collaboration, and technical comments.

references
David, M. (1977) Geostatistical Ore Reserve Estimation. Elsevier, Amsterdam. [1]

Journel, A. G. (1989) Fundamentals of Geostatistics in Five Lessons. AGU, San Francisco. [2]

Ripley, B. D. (1987) Stochastic Simulation. John Wiley & Sons, Inc., New York. [3]

Cressie, N. A. (1993) Statistics for Spatial Data. John Wiley, New York. [4]

Kitanidis, P. K. (1997) Introduction to Geostatistics – Applications in Hydrogeology. Cambridge Univ.


Press, New York. [5]

Goovaerts, P. (1997) Geostatistics for Natural Resources Evaluation. Oxford, New York. [6]

Guardiano, J. & Srivastava, R. M. (1993) Multivariate Geostatistics: Beyond Bivariate Moments. In,
Geostatistics Tróia '92, A. Soares, ed. Kluwer, Dordrecht, Vol. 1, pp. 133–144. [7]

Strebelle, S. (2002) Conditional Simulation of Complex Geological Structures Using Multiple Point Statistics.
Mathematical Geology, Vol. 34, pp. 1–22. [8]

Zhang, T., Switzer, P. & Journel, A. (2006) Filter-based Classification of Training Image Patterns for Spatial
Simulation. Mathematical Geology 38, pp.63–80. [9]

Daly, C. (2004) Higher Order Models Using Entropy, Markov Random Fields and Sequential Simulation. In,
Geostatistics Banff 2004, Springer, pp. 215–225. [10]

Tjelmeland, H. & Eidsvik, J. (2004) Directional Metropolis: Hastings Updates for Posteriors with Nonlinear
Likelihoods. In, Geostatistics Banff 2004, Springer, pp. 95–104. [11]

Mustapha, H. & Dimitrakopoulos, R. (2010) A New Algorithm for Geological Patterns Recognition Using
High-order Spatial Cumulants, Computers & Geosciences. DOI: 10.1016/j.cageo.2009.04.015. [12]

Dimitrakopoulos, R., Mustapha, H. & Gloaguen, E. (2010) High-order Statistics of Spatial Random
Fields: Exploring Spatial Cumulants for Modelling Complex Non-Gaussian and Non-linear Phenomena.
Mathematical Geosciences. DOI: 10.1007/s11004–009–9258–9. [13]

Mendel, J. M. (1991) Use of High-order Statistics (Spectra) in Signal Processing and Systems Theory:
Theoretical Results and Some Applications. ieee Proc., Vol. 79, pp. 279–305. [14]


Nikias, C. L. & Petropulu, A. P. (1993) Higher-order Spectra Analysis: A Nonlinear Signal Processing
Framework. ptr Prentice Hall, Upper Saddle River, nj, p. 538. [15]

Lebedev, N. N. (1965) Special Functions and Their Applications. Prentice-Hall Inc, New York. p. 308. [16]

Mustapha, H. & Dimitrakopoulos, R. (2010) Conditional Simulations of Complex Geological Patterns Using
High-order Multi-point Spatial Cumulants. Mathematical Geosciences. In press. [17]
Geological Modelling and
Metallurgical Prediction Supported
by Linear and Non-Linear Statistics

abstract
Sebastián carmona
SC Mining, UK

Julián ortiz
Universidad de Chile

The amount of information and number of variables available in
a geological database grant researchers new ways of analysing
data. The use of multivariate statistical analysis can improve
the predictability of models for geological and geometallurgical
characterisation, but requires finding the right procedure to
achieve this goal.
In this paper a number of linear and non-linear statistical
analyses, in which the independent variables are the alteration
minerals, are used firstly to validate and compare two geological
models defined to control the copper grades, and secondly to see
if these models can help predict the metallurgical response in a
copper concentration plant with the restrictions of limited data.
The data belong to Radomiro Tomic copper mine, an oxide and
sulphide mine owned by codelco Chile.
For the first objective, results indicate that non-linear
techniques are better for this kind of analysis and that the
newest geological model based on alteration features provides
better predictability for grades. For the second objective, primary
sulphide results indicate that the geological modelling has good
predictability for copper recovery and Starkey index (a measure of
hardness on sag mills).

introduction
In the estimation of ore reserves, the geology team must define a model for the estimation
units that accounts for the spatial distribution and consistency of the units and their
capacity to discriminate the mineral grade behaviour. This definition depends heavily
on the particular interpretation and may change depending on the geology team's
background and experience [1–3] .
Radomiro Tomic (rt) is a copper mine in northern Chile. Since 1995, it has been
producing copper cathodes by heap leaching, and in the last three years the possibility
of processing the sulphides has been under study. Since 2008 this ore has fed the
Chuquicamata concentration plant. For the geological control of mineralisation, the
lithological units in the deposit are affected by different types and levels of alterations. A
model of alteration minerals (a) was used as a guide for the construction of geological
units. Since 2006, a new model also based on alteration minerals (b) has been used. The
alteration classification for the latest model (b) consists of:

• Argillic Supergenes (as): Characterised by abundant clay as a selective replacement


of feldspar. Located mostly in the upper side of the porphyry, near the oxide zone.
Subdivided into two types of clay: smectite (as-es) and kaolinite (as-ca).

• Advanced Argillic (aa): This alteration contains abundant clay (pyrophyllite, alunite
and kaolin). It has lower importance due to its low development.

• Quartz Sericite Penetrative (qsp): This alteration consists of aggregates of sericite,


quartz, pyrite and kaolin. It is associated with the principal hydrothermal event with
pyrite and chalcopyrite.

• Sericite Gray Green (sgv): It consists of small veinlets with bornite, pyrite and
chalcopyrite in the primary sulphide zone. It is commonly correlated with high copper
concentrations.

• Green Sericite Chalcopyrite (svcp): It is an event posterior to SGV but prior to QSP. It is
correlated with copper concentrations. It consists of aggregates of sericite, quartz and illite.

• Early Dark Micaceous (edm): It consists of micro veins of secondary biotite, potassium
feldspar, quartz and sericite; this alteration is not abundant in the deposit, but it has a
good correlation with copper grades.

• KSIL: It consists of an aggregate of secondary feldspar, quartz and albite. It is not very
common in the deposit.

• Potassium Background (pf): It is the earliest alteration event in the deposit; it has not
undergone later alteration processes and is related to low copper grades. It is associated with
secondary biotite and potassium feldspar.

• Marginal Chlorite (cmh): It is located in the halo of the deposit, and has no copper
concentration. This unit is characterised by the principal association of chlorite-
epidote, calcite and pyrite.

The first model (a) has fewer categories and is defined by: Argillic Supergenes (as),
Advanced Argillic (aa), Marginal Chlorite (clm), Late Sericite (st), Late Quartz Sericite
(qst), Silica-Potassium Feldspar (ksil), and Potassium Background (kf).

methodology
Several linear and non-linear statistical techniques are available for pattern analysis [4–8] ,
and they have been applied to geological and Earth Sciences problems with
positive results [9–14] .

Linear statistics
In this paper two linear tools were used: a correlation matrix that provides a quick way to
check which variables are significantly correlated and a multiple linear regression that
shows the best fit including all the independent variables [15, 16] .
A Fisher test is used to check the significance of the regression

y = β_0 + β_1 x_1 + β_2 x_2 + ... + β_M x_M + ε   (1)

where there are N samples and M independent variables. The hypothesis description
would be:

H_0: β_1 = β_2 = ... = β_M = 0   (2)

H_1: at least one β_i is different from 0   (3)

The Fisher test consists firstly of dividing the total sum of squares into a sum
related to the model and a sum related to the error:

SST = \sum_{i=1}^{N} (y_i - \bar{y})^2, total sum of squares   (4)

SSM = \sum_{i=1}^{N} (\hat{y}_i - \bar{y})^2, sum related to the model   (5)

SSE = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2, sum related to the error   (6)

SST = SSM + SSE   (7)

Then a variable that follows a Fisher distribution is defined with M-1 and N-M degrees
of freedom:

F = \frac{SSM / (M-1)}{SSE / (N-M)}   (8)

The null hypothesis (H_0) will be rejected if the F value exceeds the critical value for
the significance level assumed. A regression can be characterised using the R^2 and
the adjusted R^2 coefficients. The multiple determination coefficient (R^2) takes values
between 0 and 1 and measures how well the dependent variable is explained by the regression
model obtained.

R^2 = \frac{SSM}{SST} = 1 - \frac{SSE}{SST}   (9)

However, when the number of independent variables M increases, this indicator tends
to grow; therefore, comparisons must be made between models with the same number of
independent variables. The adjusted multiple determination coefficient compensates
for the number of variables and can be used alternatively.


R^2_{adj} = 1 - (1 - R^2)\, \frac{N-1}{N-M}   (10)

The contribution of each variable into the regression can be assessed using some of the
coefficients and statistics reported in the regression:

• Beta: These are the regression coefficients if all the variables are normalised. The
importance of this indicator is that it allows comparing the relative contribution of
each independent variable.

• B: These are the coefficients that weight each variable in the equation.
• Test of the individual coefficients in the regression (t test): It is used to evaluate the
individual significance of the coefficient in a multiple regression model. Adding a
significant variable to the model improves its efficiency, while the inclusion of a
non-significant variable decreases its effectiveness.

H_0: β_i = 0   (11)

H_1: β_i ≠ 0   (12)

This test is based on the T distribution:

t = \frac{\hat{β}_i}{s(\hat{β}_i)}   (13)

where s(\hat{β}_i) is the standard error of the coefficient. The analysis will fail to reject
the null hypothesis if the T statistic falls into the acceptance zone:

-t_{1-α/2,\, n-k-1} ≤ t ≤ t_{1-α/2,\, n-k-1}   (14)

where n is the number of samples, k is the number of variables and α is the level of
significance.

• P-Value: It is the probability of observing a result at least as extreme as the computed T
value in a set of data in which the variable has no effect. A P-value of 5% or less is generally
taken as grounds for rejecting the null hypothesis, i.e. there is 95% confidence that the
independent variable has an effect on the dependent variable.
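The regression diagnostics above can be illustrated with the following Python sketch (an illustration on a synthetic data set, not the analysis of the Radomiro Tomic data): it fits an ordinary least-squares regression with numpy and scipy and reports the F statistic, R², adjusted R² and the t statistics and P-values of the coefficients, using the usual degrees of freedom for a model with an explicit intercept.

import numpy as np
from scipy import stats

def regression_diagnostics(X, y):
    # Ordinary least squares with intercept and the statistics discussed above
    n, m = X.shape                          # n samples, m independent variables
    Xd = np.column_stack([np.ones(n), X])   # design matrix with intercept
    beta, _, _, _ = np.linalg.lstsq(Xd, y, rcond=None)
    y_hat = Xd @ beta
    sst = np.sum((y - y.mean()) ** 2)
    ssm = np.sum((y_hat - y.mean()) ** 2)
    sse = np.sum((y - y_hat) ** 2)
    df_model, df_error = m, n - m - 1       # df convention for an explicit intercept
    F = (ssm / df_model) / (sse / df_error)
    p_F = 1.0 - stats.f.cdf(F, df_model, df_error)
    r2 = 1.0 - sse / sst
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / df_error
    # standard errors, t statistics and two-sided P-values of the coefficients
    sigma2 = sse / df_error
    cov_beta = sigma2 * np.linalg.inv(Xd.T @ Xd)
    se = np.sqrt(np.diag(cov_beta))
    t = beta / se
    p_t = 2.0 * (1.0 - stats.t.cdf(np.abs(t), df_error))
    return {"F": F, "p_F": p_F, "R2": r2, "R2_adj": r2_adj, "t": t, "p_t": p_t}

# synthetic example: a grade explained by two hypothetical alteration variables
rng = np.random.default_rng(1)
X = rng.uniform(0, 100, size=(200, 2))
y = 0.2 + 0.005 * X[:, 0] - 0.003 * X[:, 1] + rng.normal(0, 0.1, 200)
print(regression_diagnostics(X, y))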

Non-linear analysis
Among the non-linear tools available, the two-step cluster analysis [17] was used
to characterise the geological domains because of two of its features: it can handle
continuous and categorical variables simultaneously, and it is able to determine the
number of classes automatically.
Clustering is done considering the log likelihood distance between two clusters j and s,
which is defined by:

d(j, s) = ξ_j + ξ_s - ξ_{⟨j,s⟩}   (15)

The three variables that appear in the latest expression are defined, for a generic cluster v
(v = j, s or ⟨j,s⟩, the cluster obtained by merging j and s), as follows:

ξ_v = -N_v \left( \sum_{k=1}^{K^A} \frac{1}{2} \log\big(\hat{σ}_k^2 + \hat{σ}_{vk}^2\big) + \sum_{k=1}^{K^B} \hat{E}_{vk} \right)   (16)


\hat{σ}_{vk}^2 = \frac{1}{N_v} \sum_{i \in v} (x_{ik} - \bar{x}_{vk})^2   (17)

\hat{E}_{vk} = -\sum_{l=1}^{L_k} \frac{N_{vkl}}{N_v} \log \frac{N_{vkl}}{N_v}   (18)

ξ_v is a dispersion measure inside a cluster v, where N_v is the number of objects in that
cluster. ξ_v is composed of two parts, one for the K^A continuous variables and a second one
for the K^B categorical variables. The first part measures the dispersion \hat{σ}_{vk}^2 of the
continuous variables inside the cluster v. If only the expression \frac{1}{2}\log(\hat{σ}_{vk}^2)
were used, this would be the exact decrease of the log likelihood function after joining the
clusters j and s. The term \hat{σ}_k^2, the overall variance of variable k, is added to avoid the
singular situation when \hat{σ}_{vk}^2 = 0. The entropy \hat{E}_{vk} is used in the second part
as a dispersion measure of the categorical variables, where N_{vkl}/N_v is the probability table
that summarises the frequency of category l of categorical variable k in cluster v.
As in hierarchical clustering, in each step the clusters with the smallest distance are
merged. Then

l_J = \sum_{v=1}^{J} ξ_v   (19)

where l_J is the sum of the dispersions of all J clusters.


To compute the optimal number of clusters, an estimator is used: the Bayesian
Information Criterion

BIC(J) = -2 \sum_{v=1}^{J} ξ_v + m_J \log(N)   (20)

where m_J is the number of independent parameters. This estimator allows calculating the
maximum number of clusters: it is taken as the number of clusters J at which the ratio of
successive BIC changes becomes less than 0.04. In
the second stage, a change distance ratio is defined

R(J) = \frac{d_{min}(C_J)}{d_{min}(C_{J+1})}   (21)

where d_{min}(C_J) is the smallest distance if J clusters are merged into J-1 clusters. The optimum number
of clusters is obtained for the highest ratio change

\frac{R(J_1)}{R(J_2)}   (22)

computed for the two highest values of R(J) obtained in the first step.

Finally each object is assigned deterministically to the closest cluster.
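The following Python sketch illustrates the log likelihood distance described above. It is an illustrative reconstruction based on the formulation in [17], not the spss implementation, and the two clusters used to exercise it are hypothetical.

import numpy as np

def cluster_dispersion(cont, cat, sigma2_overall):
    # xi_v for one cluster: continuous part uses 0.5*log(sigma_k^2 + sigma_vk^2),
    # categorical part uses the entropy of the within-cluster category frequencies
    n = cont.shape[0]
    cont_term = 0.5 * np.log(sigma2_overall + cont.var(axis=0)).sum()
    cat_term = 0.0
    for k in range(cat.shape[1]):
        _, counts = np.unique(cat[:, k], return_counts=True)
        p = counts / n
        cat_term += -(p * np.log(p)).sum()          # entropy of categorical variable k
    return -n * (cont_term + cat_term)

def log_likelihood_distance(c1, c2, sigma2_overall):
    # d(j, s) = xi_j + xi_s - xi_<j,s>: the loss of fit when the two clusters are merged
    cont = np.vstack([c1[0], c2[0]])
    cat = np.vstack([c1[1], c2[1]])
    merged = cluster_dispersion(cont, cat, sigma2_overall)
    return (cluster_dispersion(*c1, sigma2_overall)
            + cluster_dispersion(*c2, sigma2_overall) - merged)

# hypothetical clusters: (continuous attributes, categorical attributes)
rng = np.random.default_rng(0)
a = (rng.normal(0, 1, (40, 2)), rng.integers(0, 2, (40, 1)))
b = (rng.normal(3, 1, (50, 2)), rng.integers(0, 3, (50, 1)))
sigma2 = np.vstack([a[0], b[0]]).var(axis=0)        # overall variances of the continuous variables
print(log_likelihood_distance(a, b, sigma2))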

To compare models obtained with this technique, let us consider x to be the variable of
interest (e.g. copper grade). The model i defines the following parameters:

• m_{x,k}^{i} is the mean of the k-th cluster in the i model for variable x

• s_{x,k}^{i} is the standard deviation of the k-th cluster in the i model for variable x

• K_i is the number of clusters created by the algorithm in the model i

The first step is to set an order from high to low of the means m_{x,k}^{i} for every k.


Then we estimate the Discrimination Power of the i model as:

D_i = \frac{1}{K_i - 1} \sum_{k=1}^{K_i - 1} \left[ \big(m_{x,k}^{i} - m_{x,k+1}^{i}\big) - \big(s_{x,k}^{i} + s_{x,k+1}^{i}\big) \right]   (23)

The model with the larger D_i has the higher discrimination power. The D_i can also be
interpreted in terms of the probability of pertinence to a cluster.
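A short sketch of the Discrimination Power arithmetic, as reconstructed in Equation (23), is given below; the cluster means and standard deviations are taken from Table 1 only to show the calculation.

import numpy as np

def discrimination_power(means, sds):
    # D_i: average over consecutive clusters (ordered by decreasing mean) of
    # (m_k - m_{k+1}) - (s_k + s_{k+1}); larger values mean less overlap between clusters
    order = np.argsort(means)[::-1]
    m, s = np.asarray(means, float)[order], np.asarray(sds, float)[order]
    pairs = (m[:-1] - m[1:]) - (s[:-1] + s[1:])
    return pairs, pairs.mean()

# copper grade (CUT) means and standard deviations of the three clusters in Table 1
pairs, D = discrimination_power([0.74, 0.63, 0.34], [1.64, 0.45, 0.20])
print(pairs.round(2), round(D, 2))   # approx. [-1.98, -0.36] and -1.17, cf. Table 7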

results and discussion


Experiment 1- Testing for variables that control the grade
For the (a) model, the historic drill holes data set was used, and after compositing at
1.5 metres, over 1,500 samples were obtained for the area of interest. For the (b) model, 300
samples were gathered in 2007 in the same area. All the samples include the presence of
each category of alteration mineral in percent and the copper grade. All the analyses were
done by mineral zone: (1) the enrichment zone, composed of strong and weak secondary
sulphides, and (2) the primary zone of sulphides.

Linear regression

After a correlation matrix analysis, it was clear that in both analyses there is a strong
negative correlation between potassium and quartz. This result can be explained by the
geological alteration process which defined background potassium as the first event and
quartz as the latest; so in order to avoid an ill-conditioned matrix, quartz was eliminated
from the analysis. The other alteration minerals did not show enough correlation to
interfere in the linear regression analysis.

• Primary Sulphides (sp): Model (a) shows a very low determination coefficient (0.19),
while model (b) shows an adjusted R 2 of 0.44. In both regressions the highest beta is
related to the potassium alteration.

• Weak Secondary Sulphides (ssd): Both regressions show higher adjusted R 2 than the
ones obtained on primary sulphides. The explanatory power is better in model (b) (0.58
versus 0.30) and the copper grade is still controlled basically by potassium and sericite
alterations.

• Strong Secondary Sulphides (ssf): The determination coefficient is very low in both
regressions mainly because of the low presence of potassium in the enriched zone next
to mixed and oxide areas, thus the copper grade is explained by other phenomena.
Still, model (a) exceeds model (b) performance.

Non-Linear analysis

• Primary Sulphides: In the model (a) three clusters were found. Cluster one has 90
samples (7.2%); cluster two has 385 samples (31%) and cluster three has 766 samples (61.7%).
Table 1 shows that the algorithm was capable of creating a cluster (three) with low
copper grade (mean and standard deviation). The samples that belong to this cluster
also had very high KF (mean of 94%) and low QST and ST. In model (b) there are two
clusters. Cluster one has 30% of the samples. The one with low copper grade is also
characterised by high PF and low values in all the minerals with sericite (Table 2).

Table 1 Primary sulphides non-linear model (a)

Variables AS CLM AA QST


Cluster Mean SD Mean SD Mean SD Mean SD
1 18.16 14.94 0.91 2.86 – – 19.11 19.11
2 4.72 4.98 – – – – 46.91 39.06
3 2.14 3.52 – – – – 1.47 4.62
Combined 4.10 6.94 0.07 0.80 – – 16.84 20.64
Variables CUT KF KSIL ST
Cluster Mean SD Mean SD Mean SD Mean SD
1 0.74 1.64 35.32 26.14 12.64 20.81 4.91 10.36
2 0.63 0.45 15.34 25.20 0.48 2.41 4.61 5.85
3 0.34 0.20 93.67 7.39 0.12 1.02 2.47 2.07
Combined 0.46 0.55 65.14 40.21 1.14 6.62 3.32 4.70

Table 2 Primary sulphides non-linear model (b)

Variables CUT CMH PF EDM


Cluster Mean SD Mean SD Mean SD Mean SD
1 0.69 0.33 0.06 0.34 62.11 17.71 1.17 2.02
2 0.49 0.17 – 0.02 89.15 6.48 1.40 1.63
Combined 0.55 0.25 0.02 0.19 80.97 16.68 1.33 1.76
Variables KSIL SGV SVCP QSP
Cluster Mean SD Mean SD Mean SD Mean SD
1 0.78 1.51 3.74 4.64 2.01 4.97 16.03 18.51
2 0.10 0.32 1.76 1.91 0.57 1.16 2.28 3.64
Combined 0.31 0.92 2.36 3.13 1.00 2.96 6.44 12.32
Variables AS-ES AA AS-CA
Cluster Mean SD Mean SD Mean SD Mean SD
1 6.45 8.25 – 0.03 7.64 7.38 – –
2 3.46 3.54 – – 1.28 2.09 – –
Combined 4.37 5.56 – 0.01 3.20 5.28 – –

• Weak Secondary Sulphides: In this type of sulphides model (a) fails to discriminate
(Table 3), creating four clusters where the copper grade means and standard deviation
overlap each other. Model (b) was able to create two groups with no overlap in the
copper grade, with remarkable differences also in PF, QSP and SGV (Table 4).


Table 3 Weak secondary sulphides non-linear model (a)

Variables AS CLM AA CUT


Cluster Mean SD Mean SD Mean SD Mean SD
1 0.01 0.12 – – – – 0.67 0.41
2 2.03 3.06 – – – – 0.49 0.27
3 11.64 11.76 – – – – 1.05 0.70
4 3.78 2.78 0.89 2.18 – – 0.68 0.36
Combined 3.32 7.07 0.01 0.22 – – 0.66 0.48
Variables KF KSIL QST ST
Cluster Mean SD Mean SD Mean SD Mean SD
1 0.15 2.10 – – – 0.05 0.42 0.99
2 91.40 9.25 – – 2.18 6.20 4.28 4.88
3 18.11 20.47 – – 58.80 25.82 7.72 6.10
4 51.56 38.42 5.11 4.55 25.45 34.27 12.33 23.47
Combined 47.08 44.01 0.05 0.65 13.00 26.26 3.80 5.56

Table 4 Weak secondary sulphides non-linear model (b)

Variables CUT CMH PF EDM


Cluster Mean SD Mean SD Mean SD Mean SD
1 1.01 0.38 – – 59.61 19.14 0.06 0.20
2 0.50 0.14 0.06 0.33 90.12 4.55 1.29 2.11
Combined 0.68 0.35 0.04 0.27 79.35 18.83 0.86 1.79
Variables KSIL SGV SVCP QSP
Cluster Mean SD Mean SD Mean SD Mean SD
1 0.02 0.09 4.90 3.57 0.97 1.36 21.50 16.36
2 0.42 0.85 1.64 2.02 0.08 0.24 2.44 2.51
Combined 0.28 0.71 2.79 3.07 0.40 0.92 9.17 13.40
Variables AS-ES AA AS-CA –
Cluster Mean SD Mean SD Mean SD Mean SD
1 3.12 4.03 – – 9.82 8.58 – –
2 3.07 2.54 – – 0.97 1.29 – –
Combined 3.09 3.11 – – 4.03 6.69 – –

• Strong Secondary Sulphides: The copper grade on these is strongly correlated to oxides,
so the alteration minerals were unable to discriminate effectively in either model (a) or
(b). Still, some difference can be seen in Table 6 (model b), driven by PF and QSP: two
clusters are created, with copper grade means of 0.91 and 1.21, respectively.

Table 5 Strong secondary sulphides non-linear model (a)

Variables AS CLM AA CUT


Cluster Mean SD Mean SD Mean SD Mean SD
1 2.87 4.03 – – – – 0.72 0.42
2 12.55 12.72 – – – – 0.86 0.73
3 – – – – – – 1.23 0.70
Combined 2.15 5.71 – – – – 1.02 0.67
Variables KF KSIL QST ST
Cluster Mean SD Mean SD Mean SD Mean SD
1 89.36 11.26 – – 2.96 7.59 4.81 5.61
2 13.33 15.80 – – 69.60 17.61 4.52 6.43
3 – – – – – – 0.38 1.08
Combined 32.46 42.72 – – 7.36 20.92 2.30 4.47

Table 6 Strong secondary sulphides non-linear model (b)

Variables CUT CMH PF EDM


Cluster Mean SD Mean SD Mean SD Mean SD
1 0.91 0.42 – – 85.01 10.61 0.44 0.86
2 1.21 0.49 – – 51.74 30.95 0.91 1.88
Combined 0.95 0.44 – – 80.85 18.03 0.50 1.02
Variables KSIL SGV SVCP QSP
Cluster Mean SD Mean SD Mean SD Mean SD
1 0.07 0.21 2.90 2.72 0.24 0.59 6.39 7.95
2 0.46 1.12 4.46 4.20 0.20 0.45 35.54 33.42
Combined 0.11 0.44 3.10 2.94 0.24 0.57 10.04 16.48
Variables AS-ES AA AS-CA –
Cluster Mean SD Mean SD Mean SD Mean SD
1 3.82 4.54 – – 1.13 1.77 – –
2 0.97 0.87 0.09 0.23 5.63 8.44 – –
Combined 3.46 4.35 0.01 0.08 1.69 3.57 – –

Finally, Table 7 summarises the results using the Discriminating Power Index.

Table 7 Discriminating power comparison

Sulphides   Models   D (1st – 2nd)   D (2nd – 3rd)   D (3rd – 4th)   D(i)
SP          b        -0.29           -               -               -0.29
SP          a        -1.97           -0.36           -               -1.16
SSD         b        0               -               -               0
SSD         a        -0.68           -0.76           -0.51           -0.65
SSF         b        -0.61           -               -               -0.61
SSF         a        -1.07           -1.01           -               -1.04

Model (b) overcomes model (a) in every type of sulphides.

Experiment 2- Testing metallurgical predictability


with limited data
To estimate the ore resources and reserves, a significant number of samples is required to
define geological units and apply geostatistical estimation and simulation techniques to
create a block model [18–21] . In our case study, the information is not enough to perform
a geostatistical study therefore the tools used in the first experiment can be helpful.
However, results shown next are relevant only in primary sulphides.
Although there was some research in the late 90's to characterise mineral types and metallurgical
recovery [22, 23] , little new research has been published since. The
variables to be measured are Copper Recovery in flotation, Molybdenum Recovery in
flotation, Bond Index (work index for ball mills) and Starkey Index (work index for sag mills).
Table 8 shows the descriptive statistics of the data used for this second case (number
of samples, minimum, maximum and the Pearson's parameters).


Table 8 Descriptive statistics for primary sulphides metallurgical results

Variables             N     Min     Max     Mean    SD      Skewness (Value / Std. Error)   Kurtosis (Value / Std. Error)
Copper Recovery       182   42.36   92.88   82.30   6.51    -2.12 / 0.18                    8.96 / 0.36
Molybdenum Recovery   182   8.00    96.30   43.27   15.89   0.39 / 0.18                     0.31 / 0.36
Bond Index            182   9.80    14.80   12.56   0.74    -0.05 / 0.18                    1.09 / 0.36
Starkey Index         181   1.33    7.74    4.71    1.21    -0.17 / 0.18                    -0.26 / 0.36

The correlation matrix shows poor linear correlation (-0.3) between the clay alteration
minerals (AS-ES and AS-CA) and copper recovery. Bond index is related with PF (0.26) and
AS-CA (-0.28). Finally, Starkey index is related to PF (0.61), QSP (-0.51) and AS-CA (-0.48).
The only metallurgical variable with relevant relationships is the Starkey Index.
The linear regression with the Starkey index shows that the only variable relevant is
PF with a beta coefficient of 0.51. In this case, PF is strongly linearly correlated with QSP,
so the two variables are analogous in the analysis.
The non-linear analysis created two clusters for each dependent variable and can be
summarised in Table 9, where the Discriminating Power is divided by the mean of the
variable of interest to compare the results. It is clear that the only variable that can be
predicted by non-linear statistics is the Starkey Index.
As expected, the difference between clusters in the Starkey index lies in PF and QSP.
This result is very useful and has a geological interpretation. Since PF was the first event
and QSP the last one, PF has suffered almost no alteration, and is related to hard rock.
The opposite happens with the QSP which is almost always found in rock with several
alterations. One interesting question is why this property of the rocks is sensitive to
the Starkey index but not to the Bond index. To answer this, it is important to incorporate an
understanding of the metallurgical process. The Bond index measures the energy used
in comminution in ball mills, where rock breakage is mostly produced by steel
balls, while the Starkey index applies to sag mills, where the rocks break against each other.

Table 9 Discriminating power

Variables D Combined Mean D/Combined Mean

Copper Recovery -9.36 82.30 -11%


Molybdenum Recovery -31.46 43.27 -73%
Bond Index -0.94 12.56 -7%
Starkey Index -0.07 4.71 -1%

conclusions
The use of statistical tools to compare geological models has shown an efficient and
robust response. These techniques can help improve the understanding of the deposit
geology and the effects of alterations in the mineralisation. They provide an iterative
process of selection of alteration models, indicating the alterations that must be mapped.
These results confirm that the use of statistical tools in the preliminary stages of the ore
deposit characterisation is valuable.
The statistical techniques presented in this paper have proven useful for predicting
the metallurgical response when the sample information is too scarce for the construction of
a block model with geostatistical techniques.
In this specific study, the alteration model defined considering its control over copper

grades was able to predict the Starkey Index. This approach can also be used in:

• Projects or prospects with marginal grades, where an early assessment of the recovery
and work index associated with milling can help to draw conclusions about their feasibility.

• Geological modelling, in order to define alteration controls or geological units based


on the metallurgical response, rather than in the copper grade alone.

references
Duke, J. H. & Hanna, P. J. (2001) Geological Interpretation for R esource Estimation. The Ausimm Guide
to Good Practice, pp. 147–156. [1]

Carras, S., Wigin, M. & Denham, K. (1990) Grade Control and Mining Policy: Their Effect on Orebodies and
Ultimate Reserves. Strategies for grade control, aig Bulletin, 10: pp. 43–48. [2]

Watchorn, R. (1990) Open Pit Mapping Aspects of Grade Control: Advantages and Techniques. Strategies
for grade control aig Bulletin, 10: pp. 27–29. [3]

Tukey, J. (1977) Explorator y Data Analysis. Addison- Wesley. Reading, ma. [4]

Mc Lachlan, G. J. (1992) Discriminant Analysis and Statistical Pattern R ecognition. Wiley. [5]

Journel, A. G. (2002) Combining Knowledge from Diverse Sources: An Alternative to Traditional Data
Independence Hypothesis. Mathematical Geology, Vol. 34, No 5. [6]

Breiman, L. (1984) Classification and Regression Trees. Chapman & Hall/crc. [7]

Henríquez, M. (2005) Notes from IN540, Statistical Methods for Economy and Management. Universidad
de Chile. [8]

Harff, J. & Davis, J. C. (1990) Regionalization in Geology by Multivariate Classification. Mathematical


Geology, Vol. 22, No 5. [9]

Davis, J. C. (2002) Statistics and Data Analysis in Geology. Wiley. [10]

Desbarats, A. J. & Dimitrakopoulos, R. (2000) Geostatistical Simulation of Regionalized Pore-Size
Distributions Using Min/Max Autocorrelation Factors. Mathematical Geology, Vol. 32, No 8. [11]

Leal, F. (1998) Application of Correspondence Analysis in the Assessment of Groundwater Chemistry.
Mathematical Geology, Vol. 30, No 2. [12]

Sajn, R. (2006) Factor Analysis of Soil and Attic-dust to separate Mining and Metallurgy Influence, Meza
Valley, Slovenia. Mathematical Geology, Vol. 38, No 6. [13]

Shaw, P. J. A. (2003) Multivariate Statistics for the Environmental Sciences. Hodder – Arnold. [14]

Gujarati, D. (1999) Econometrics. Third Edition, McGraw Hill. [15]

White, H. (1980) A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for
Heteroscedasticity. Econometrica, 48, pp. 817–838. [16]

Bacher, J., Wenzig, K. & Vogler, M. (2005) spss Two Step Cluster- A First Evaluation. Universitat Erlangen-
Nunberg. [17]

White, A. (1998) The Commercial Imperatives for Exploration and Resource Acquisition. Proceedings
Ausimm Annual Conference, pp. 123–127. [18]

Mackenzie, D. H. & Wilson, G. I. (2001) Geological Interpretation and Geological Modeling. The Ausimm
Guide to Good Practice, 2001, pp. 111–118. [19]

Wackernagel, H. (2003) Multivariate Geostatistics. Springer. [20]

jorc Committee (2005) jorc Code. [21]


Bulatovic, S. M., Wyslouzil, D. M. & Kant, C. (1998) Operating Practices in the Beneficiation of Major
Porphyry Copper/Molybdenum Plants from Chile: Innovated Technology and Opportunities, a Review.
Minerals Engineering, Vol. 11, No 4, pp. 313–331. [22]

Bulatovic, S. M., Wyslouzil, D. M. & Kant, C. (1999) Effect of Clay Slimes on Copper, Molybdenum Flotation from
Porphyry Ores. Copper 99 International Environment Conference. Vol. ii. [23]

An Indicator Geostatistical
Approach to Support Mineral
Resource Classification

abstract
Diniz ribeiro
Marcelo guimarães
Débora roldão
Cid monteiro
Vale, Brazil

Classical geostatistical techniques applied to grade variables do
not take into account the uncertainty of the geometry of the ore
bodies. The resource classification criteria based only on kriging
variance (S²K) or based on other parameters derived from S²K
(confidence interval, variance of realisations of simulations, etc)
do not consider the risk of changes in the geological interpretation
of the boundary. This work presents a simple indicator kriging
technique which permits us to obtain a “Risk Index” (ri). This
index combines estimation errors and geological continuity
according to the following formula:

RI = \sqrt{(1 - I_K)^2 + (S^2_{IK})^2}

where I_K is the kriging of the “ore” indicator at support u and
S²IK is the respective kriging variance. The samples with grades
above the cut off grade assume the value 1 and those with values
below the cut off assume the value 0 (zero). This technique was
adapted from the original proposal of Amorim and Ribeiro [1]
and is applied in all ferrous resource classification done by the
Vale Ferrous Mine Planning Department (gelpf). The ri method
is used after the QA/QC procedures, database validations and
kriging grade evaluations. The simplicity of the method and its
representativeness permits its general application by using any
available geostatistical software not requiring specific resources.

introduction
This paper presents a methodology to be used on mineral resource classification taking
into account the geological continuity and the ore regionalisation. The main concepts
used in this paper are based on Amorim and Ribeiro [1] who proposed an index (Risk
Index, ri) to classify mineral resources by combining geological uncertainty and kriging
variance. Some modifications of the original proposal were done in order to simplify
the understanding of the technique and to adapt it to the currently accepted mineral
resource classification system.
The geological continuity, according to Sinclair and Blackwell [2], is linked to the concept
of mineralisation homogeneity in a specified domain of interest. Geostatistics deals with
spatial homogeneity when considering a second-order stationary hypothesis for a random
function in a certain spatial domain (mean and variance are invariant by translation). The
random function can be simplified by the indicator which is a good tool to measure geological
continuity. The ri is obtained from indicator kriging and represents a complementary tool
to be used in addition to the official codes of mineral resource classification.

mineral resource statement –jorc code


The main mineral resource and ore reserve classification system employed in the mining
industry is the Australasian Code for Reporting of Exploration Results, Mineral Resources
and Ore Reserves (the ‘jorc code’) [3] . This code defines resources as a concentration or
occurrence of material with intrinsic economic interest that can be organised, in order of
increasing geological confidence, into three categories: inferred, indicated and measured.
Despite the universal acceptance of the jorc mineral resource categories, there are
no numerical criteria in this code to define the limits of each resource class. The code
uses general terms to differentiate resource categories, for example, the resource
can be considered as measured, indicated or inferred when there are, respectively,
high, reasonable and low levels of confidence of tonnage, grade and mineral content
estimations. These levels of confidence, according to the code, are based on exploration,
sampling and testing information gathered through appropriate techniques from
outcrops, trenches, pits, workings and drill holes whose locations are spaced closely
enough (measured), too widely or inappropriately (indicated) or insufficiently (inferred),
to confirm geological and grade continuity. The final decision concerning the appropriate
category of Mineral Resource must be determined by a qualified professional.
There are other resource classification systems but all of them are based on the degree
of knowledge of tonnage and grades.

geostatistics and resource classification


Geostatistical methods permit the calculation of confidence limits on grades, however,
there is no agreement about how to obtain such limits for the tonnage. The French school
of geostatisticians has derived a quasi-experimental relationship which produces an
estimate of the average uncertainty to be expected in the total area [4] .
The main geostatistical parameters used to support mineral resource classification are:
kriging variance, mean weight, slope of regression between actual (Z) and estimated data
(Z*), correlation between Z and Z*, variance between simulations, confidence intervals,
etc. All of these parameters can be obtained after the solution of the ordinary or simple
kriging matrix system. The most used parameter is the ordinary kriging variance, which
depends on the geometry between samples and blocks, and on the variogram.

The kriging system offers a solution to the estimation problems by use of a continuous
stochastic model of space variability. This technique accounts for the space variability
of the data through variogram models. The kriging system is a particular case of linear
regression, where the estimated block values (Z*) are obtained from neighbouring samples
multiplied by kriging weights, minimising |Z - Z*|² [5] . The kriging variance is usually
used to categorise mineral resource in different ways: frequency curve [6] , confidence
interval [7] , kriging error [8] , etc. All of these methods use categorical cut off over the
probability density function of the normalised kriging variance.
Mean weight and slope of regression line are obtained with, respectively, simple and
ordinary kriging methods [9, 10, 11] . The methods that use mean weight, sample
influence area, interpolation/extrapolation, etc, are directly related to the sampling
density.
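To make the role of the kriging variance concrete, the following Python sketch (a simplified illustration with an assumed exponential covariance model, not the estimation workflow of any particular mine) solves a small ordinary kriging system and returns both the estimate and its kriging variance.

import numpy as np

def exp_cov(h, sill=1.0, a=100.0):
    # Exponential covariance model C(h) = sill * exp(-3h / a), with practical range a
    return sill * np.exp(-3.0 * np.asarray(h, dtype=float) / a)

def ordinary_kriging(xy, z, x0):
    # Ordinary kriging of one target location x0 from samples (xy, z);
    # returns the estimate Z* and the ordinary kriging variance S2K
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d)          # sample-to-sample covariances
    A[n, n] = 0.0                   # unbiasedness (Lagrange multiplier) row/column
    c0 = exp_cov(np.linalg.norm(xy - x0, axis=1))
    b = np.append(c0, 1.0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    z_star = w @ z
    s2k = exp_cov(0.0) - w @ c0 - mu   # kriging variance: C(0) - sum(w_i C_i0) - mu
    return z_star, s2k

# three samples around a target location (distance units are arbitrary)
xy = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 60.0]])
z = np.array([0.8, 0.6, 0.9])
print(ordinary_kriging(xy, z, np.array([25.0, 25.0])))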

identifying risky areas


During the earliest stage of development of a new mine, one deposit may be considered as
known from the exploration point of view, and that means that the available information
is enough to support a long term mining plan, without any important deviation from
reality. Two major problems can be observed in terms of deviation from reality: i) under
or overestimation of the ore quality (assays) and ii) a local volume error in the geological
interpretation (finding waste where ore is supposed to be, or vice versa).
The first problem depends basically on the sampling spacing and on the sample
quality assurance and its solution can be found with the use of traditional geostatistical
procedures. The second problem, on the other hand, depends on the sampling spacing as
well as on the geological (geometrical) complexities of the ore body. The aim of this work
is to present a solution for both problems. This means identifying the areas of high risk
in the geological interpretation, here called “risky areas”.
In general, the identification of such areas is achieved subjectively, depending on the
experience of the geologist; however, due to its subjectivity, this experience is not always
considered during the elaboration of a mining plan, which may lead to unexpected
results. To avoid such a problem, a practical and useful procedure has been developed.
The following characteristics were requested in search of an appropriate method:

• It should be numerical to avoid different interpretations for the same problem, and
permit its inclusion in a mining plan.

• It should be simple and consequently easy to be understood and implemented.

• It should be coherent with the geological expectation, reflecting the accumulated


experience of the geologists involved in the project.

• It should take into account the geological complexities of the orebodies, which is an
intrinsic characteristic of each deposit.

Indicator kriging
According to Rivoirard [12] , the expression of a random function may be simplified by
using the indicator. This indicator method has been frequently used when a given cut
off grade is considered [13] . The samples whose grade is above the cut off grade assume
the value 1 (one) and those with grade below the cut off assume the value 0 (zero).
The geological complexity can be represented by the variogram of the indicator.
Figure ➊ shows two empirical models chosen to demonstrate different variograms for

different orebody geometries. Orebody A is more continuous than orebody B and its
normalised indicator variogram, with a unit sill, has longer range than the corresponding
variogram of orebody B.

Figure 1 Schematic example of indicator variograms (solid line orebody B and dashed black line orebody A).

It is important to note that adopting a unit sill variogram introduces standard
assumptions about the sampling density. The unit sill can also be found in Gaussian
grade variograms, with mean zero and variance one.
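A small Python sketch of the indicator coding and of an experimental indicator variogram along a regularly spaced 1-D line of samples is given below (an illustration under simplifying assumptions, not the variography workflow used for the deposits in the case study).

import numpy as np

def indicator(grades, cutoff):
    # Code samples as 1 at or above the cut off grade and 0 below it
    return (np.asarray(grades) >= cutoff).astype(float)

def experimental_variogram(values, spacing, n_lags):
    # Experimental semivariogram of regularly spaced 1-D data:
    # gamma(h) = 0.5 * mean[(v(x) - v(x+h))^2] for h = spacing, 2*spacing, ...
    # Dividing gamma by the indicator variance gives the unit-sill (normalised) variogram.
    v = np.asarray(values, dtype=float)
    lags, gamma = [], []
    for k in range(1, n_lags + 1):
        diffs = v[k:] - v[:-k]
        lags.append(k * spacing)
        gamma.append(0.5 * np.mean(diffs ** 2))
    return np.array(lags), np.array(gamma)

# hypothetical drill-hole composites at 10 m spacing
grades = np.array([0.3, 0.4, 1.2, 1.5, 1.1, 0.9, 0.2, 0.1, 0.3, 1.4, 1.6, 1.3])
i = indicator(grades, cutoff=1.0)
print(experimental_variogram(i, spacing=10.0, n_lags=5))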
The ordinary indicator kriging takes into account the block support and the result for
each block will be a weighted average of samples that, by definition, assume only the
values 0 (zero) and 1 (one), and so, 1 will be the maximum possible result, and 0 will be
the minimum, except in a few situations with negative kriging weights.
Blocks with kriging result 1 have the highest probability to be composed of ore and
blocks showing the value 0 have a high probability of being composed of other materials. The
intermediary values between 0 and 1 indicate some probability for both lithologies. If the
kriging result indicates the most probable lithology, the kriging variance will indicate
the degree of confidence of this assumption. Both parameters must be considered in the
final analysis of the problem.

The Risk Index calculation and resources classes


The ri is a vector calculated from two parameters: [1 - I_K] and [S²_IK], where I_K is the
kriging of the ore indicator at support u and S²_IK is the respective kriging variance (using a
variogram with unit sill).
In the Cartesian plane formed by the pair [1 - I_K] and [S²_IK] (Figure ➋B), the ri vector is
the distance to the origin, and so the value can be obtained with Equation (1):

RI = \sqrt{(1 - I_K)^2 + (S^2_{IK})^2}   (1)

In this Cartesian bidimensional space, it is possible to evaluate four situations of each


block, according to their position on this plot (Figure ➋ B). The blocks whose pair
[(1-Ik), S2ik] are located around region I show high geological continuity and low kriging
variance, indicating low risky area; region II —high geological continuity and high
variance— indicates risk due to the lack of information; region III —low geological
continuity and low variance— indicates risk due to ore/waste contact proximity; region IV
—low geological continuity and high variance— indicates high risk due to both reasons.
The horizontal section of Fabrica Nova Mine (Figure ➋A) exemplifies the final results.

Figure 2 Resource classes of Fabrica Nova Mine (A) and RI vector (B).

Finally, the resource of each mining block can be classified according to its risk index.
At this point, after a detailed analysis of the data by the qualified professional and, in
order to obtain coherence with the geological expectation, the following values have
been defined as ideal:

Table 2 Mineral resources classification

Class of resources Risk index range

Measured RI < 0.6


Indicated 0.6 < RI < 0.9
Inferred 0.9 < RI

The proposed method is not absolute. It permits classifying resources on a relative basis,
which is useful for the identification of the risky areas of a deposit. Since it is relative, the
limits of 0.6 and 0.9 for the risk index were set according to our geological conditions.
If the method is applied elsewhere, these factors shall be re-evaluated and
they may change according to the local geological expectation.
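A minimal sketch of the Risk Index calculation and of the classification thresholds of Table 2 is shown below; it is illustrative only, and the block values of I_K and S²_IK are assumed to come from an indicator kriging run with a unit-sill variogram.

import numpy as np

def risk_index(ik, s2ik):
    # RI = sqrt((1 - I_K)^2 + (S2_IK)^2), Equation (1)
    ik = np.asarray(ik, dtype=float)
    s2ik = np.asarray(s2ik, dtype=float)
    return np.sqrt((1.0 - ik) ** 2 + s2ik ** 2)

def classify(ri):
    # Resource class per Table 2: measured (RI < 0.6), indicated (0.6-0.9), inferred (> 0.9)
    ri = np.asarray(ri, dtype=float)
    return np.where(ri < 0.6, "measured", np.where(ri < 0.9, "indicated", "inferred"))

# hypothetical kriged indicator and kriging variance for four blocks
ik = np.array([0.95, 0.70, 0.40, 0.10])
s2ik = np.array([0.10, 0.30, 0.50, 0.80])
ri = risk_index(ik, s2ik)
print(list(zip(ri.round(2), classify(ri))))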

case study
Two different iron deposits were selected in two different mineral provinces: Deposit B
from Quadrilátero Ferrífero (qf) and Deposit A from Carajás Province (cp). These provinces
are the most important high grade iron producer areas in the world, responsible for most
of the Brazilian iron ore production. The iron ore of these areas is mainly supergenic
in origin, being the result of the weathering over a Banded Iron Formation, bif, locally
called itabirite (in qf) and jaspelite (in cp).
Itabirite and jaspelite are banded rocks composed of quartz and iron oxide bands
that can vary from a few millimetres to a few centimetres in thickness, averaging one
centimeter. The dolomitic itabirite variety is composed of iron oxides layers interbedded


with dolomite instead of silica. The natural leaching of silica and/or carbonates of both
types of itabirite has generated large bodies of high grade hematite ore, an almost pure
monomineralic mass, and friable itabirites, surrounded by low grade hard itabirites or
jaspelites and volcanic rocks, phyllites, schist, quartzite, etc.
The geological contacts between hematite ore, friable itabirites, and hard itabirites or
jaspelites are irregular, generating complex shapes of which accurate prediction from
borehole information is difficult.
Two cut offs based on lithological description were applied to create the indicators:
for the Northern system (cp) the hematite samples (Fe>60%, in general) assume the value
1 and the other lithological samples assume the value 0. For the Southern system (qf),
hematite and friable itabirites assume value 1 and the hard itabirite and others rocks
assume value zero.
The two mineral deposits have different degrees of geological complexities: Deposit A,
more homogeneous, has 115,387 m of drilling, and deposit B, more complex, with 23,204
m of drilling. In general, the drilling grid is spaced at 100 m x 100 m, being more regular
in Deposit A than Deposit B. Deposit A has relatively more drilling information than
deposit B. The variograms of Figure ➌ confirm the difference, in terms of geological
continuity, between the deposits.

Figure 3 Indicator variograms for both deposits (normalized variograms below).

After the indicator kriging, once the Risk Index is associated with each mining
block, it was possible to identify the “risky areas” in the model and use them to classify the
resources (Figure ➍).
The total resources are two billion tonnes and 1.5 billion tonnes for Deposits A and B,
respectively. The distributions of the resource classes are 71% measured, 20% indicated
and 9% inferred for Deposit A, and 20% measured, 22% indicated and 58% inferred for
Deposit B. The difference between the resource class distributions is due to the following
factors: the geological complexity and the sampling density.
Once these areas are identified, the next step is to check the mining plan. It should not
be concentrated in risky areas and, if it is, the previous knowledge of this fact will alert to
the need of making a decision: changing the mining plan and/or doing some extra geological
work in time. This procedure will avoid unexpected conditions.

Figure 4 Estimated RI and resources classes in a horizontal section and RI histograms (Deposit A above, and Deposit B below).

conclusions
The ri methodology considers the intrinsic characteristics of each mineral deposit. The
case study reveals that the ri is a good parameter to measure geological continuity. The
indicator variogram of Deposit A shows better continuity than Deposit B. Deposit A, apart
from its better geological continuity, has higher sampling density than Deposit B, as can
be clearly seen in the ri histograms.


The Vale Ferrous Department has used the ri methodology to classify most of its iron
deposits and mines. In 2008, Vale reported 14 billion tonnes of iron reserves: 63% proven
(from the measured resources) and 37% probable (from the indicated resources).
Through the method, it is possible to simulate different sampling spacing for different
deposits, according to the local geological complexities, in order to optimise future
exploration works.
In the short-term mining plan, the method can be used to identify risky areas that
will demand additional work.
Changing the block size will change the results (kriging variance). The simplicity
of the method permits its general application by using any available geostatistical
software and not requiring specific resources.

references
Amorim, L. Q. & Ribeiro, D. T. (1996) An Useful Ore Reserve Classification Criterion Based on Indicator
Kriging. Proceedings of Mine Planning and Equipment Selection, Hennies, Ayres da Silva &
Chaves (eds), Balkema, Rotterdam, pp. 117–121. [1]

Sinclair, A. J. & Blackwell, G. H. (2002) Applied Mineral Inventory Estimation. Cambridge, New York,
Melbourne: Cambridge University Press, p. 381. [2]

JORC CODE (1999) Australasian Code for Reporting of Mineral Resources and Ore Reserves. p. 16. From:
http://www.jorc.org/jorc_code.asp. [3]

Journel, A. G. & Huijbregts, C. J. (1978) Mining geostatistics. Academic Press. [4]

Souza, L. E. (2007) Proposição geoestatística para quantificação do erro em estimativas de tonelagens e


teores. Tese (Doutorado em Engenharia de Minas, Metalúrgica e de Materiais) - Universidade
Federal do Rio Grande do Sul, p. 193. [5]

Annels, A. E. (1991) Mineral deposit evaluation, a practical approach. Chapman and Hall, London,
p. 436. [6]

Goria, S., Armstrong, M. & Gallia, A. (2002) Using a Bayesian approach to incorporate new information when
estimating resources. In: 8th annual conference of the International Association for Mathematical
Geology, 15-20 September 2002, Berlin, pp. 75–79. [7]

Diehl, P. & David, M. (1982) Classification of ore reserves/resources based on geostatistical method.
Canadian Mining, and Metallurgical Bulletin. Vol. 75, No. 838, pp. 127–136. [8]

Rivoirard, J. (1987) Two key parameters when choosing the kriging neighborhood. Mathematical Geology,
Vol. 19, No. 8, pp. 851–856. [9]

Krige, D. G. (1994) An analysis of some essential basic tenets of geostatistics not always practised in ore
valuations. Proceedings, Regional apcom: Symposium on Computer Application in the Mineral
Industries, Slovenia, pp. 15–18. [10]

Armstrong, M. (1998) Basic Linear Geostatistics. Springer, Berlin, p. 153. [11]

Rivoirard, J. (1994) Introduction to disjunctive kriging and non-linear geostatistics. Clarendon Press,
Oxford. [12]

Isaaks, E. H. & Srivastava, R. M. (1989) An Introduction to Applied Geostatistics. Oxford University


Press. [13]
Adding Flexibility to Support
Vector Classification for Modelling
Categorical Variables

abstract
Miguel cuba
Oy leuangthong
University of Alberta, Canada

Julián ortiz
Universidad de Chile

This document highlights the flexibility of support vector classification (svc) for
modelling categorical variables via the implementation of additional structural
information variables in the training dataset. With the use of these extra variables the
resulting model can be customised according to the requirements of the user. A 3-d
dataset with binary categorical information is presented as a case study and the goal is
to generate a geologic model that looks realistic according to the user's expertise. The
process of customisation of the svc models, alternatives of generating the extra variables
and some aspects of the conventional implementation approach are discussed.

introduction
In recent years many applications of svc for building geologic models have been proposed,
such as [1, 2] . In this paper the flexibility of svc is highlighted by dealing with
additional attributes of the categorical variable in the dataset. It is shown that the resulting
models can be customised according to the user's requirements by adding local structural
information in the absence of information relevant to the categorical variable. In the
document, a 3-d dataset with exploratory boreholes is used. The only attributes considered
are the 3-d coordinates of the sampled intervals and a categorical variable that indicates
presence or absence of a certain rock type. The implementation of svc was done using
libsvm [3] . There are many variants of it that use different interfaces, e.g. standalone,
r, Python and matlab among others.
The initial dataset consists of 36 inclined boreholes that identify two rock types in a
three-dimensional domain (Figure ➊). The information consists of three-dimensional
coordinates and a categorical binary variable. For modelling, the domain is defined
within the limits shown in Table 1 .

Table 1 Domain limits in units of distance

Minimum Maximum
East 875 1150
North 1650 1975
Elevation 650 1175

Figure 1 Conditional dataset that consists of 36 boreholes with categorical data; the two rock types, coded as 0 and 1, are coloured red and gray respectively.

Support vector machine (svm) has gained popularity in modern machine learning theory
since it was introduced by Vapnik in 1992 [4]. The theoretical aspects of svm are described
in many books such as [4–6]. Usually, when svm is used for classification it is referred to
as svc. Initially it was proposed for classifying two linearly separable groups or classes.
The decision line that is selected is the one that produces the widest margin 2m of all
the possible lines that separate the two classes. The margin widens until it reaches the
closer points of each class. These points are called the support vectors sv. Consider x as the
attributes, w as the weights of the separation hyper plane and y i the target value. The length
of vector x is the number of attributes of the target value y i that participate in the analysis
of the problem. By making the two categories -1 and +1, the limits of the separation
margin can be expressed as w^T x + b = -1 and w^T x + b = +1 respectively, where b is the offset
of the linear separator or bias weight, and the separation margin is 2M = 2 / ||w||.


Finding the largest margin is equivalent to maximising 2M or minimising w·w.
Considering the latter, the constraint of the optimisation problem is y_i (w^T x_i + b) ≥ 1. The
problem can allow for some errors by measuring the distances of the n misclassified
points to their respective margin limits, ε_i, ∀ i = 1,…,n, and penalising them with a constant C (Figure  ➋).
In doing this, the optimisation problem consists of minimising w·w + C \sum_{i=1}^{n} ε_i subject to
y_i (w^T x_i + b) ≥ 1 - ε_i and ε_i ≥ 0. To be solved, the problem is expressed in quadratic terms by
using Lagrange multipliers. Finally it is changed to a maximisation problem by finding
its dual form by using the Karush-Kuhn-Tucker construction [4] . The problem in dual
form is expressed as max \sum_{i=1}^{n} α_i - 0.5 \sum_{i=1}^{n} \sum_{j=1}^{n} α_i α_j y_i y_j x_i·x_j subject to 0 ≤ α_i ≤ C and
\sum_{i=1}^{n} α_i y_i = 0, where α_i is a Lagrange multiplier. There are cases when the two classes
in the dataset cannot be separated linearly. svc deals with this problem by moving the
dataset to a higher dimensional space where the two classes can be separated linearly.
The translation is done by replacing the x i . x j part in the maximisation problem by
ϕ( x i ). ϕ(x j), where ϕ(x)is a function of the attribute x. Actually, the dot product can
be replaced by a licit kernel. This is also known as the kernel trick.
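As a minimal illustration of the kernel trick (a Python sketch using numpy, not the libsvm implementation used in this work), the rbf kernel evaluations that replace the dot products can be computed directly:

```python
# Illustrative sketch of the kernel trick: the dot products x_i . x_j are
# replaced by RBF kernel evaluations K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2).
import numpy as np

def rbf_gram_matrix(X, gamma):
    """Return the n x n matrix of RBF kernel values for the rows of X."""
    sq_norms = np.sum(X ** 2, axis=1)
    # ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 * x_i . x_j
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.clip(sq_dists, 0.0, None))

X = np.random.rand(5, 3)           # five samples with 3-d coordinate attributes
K = rbf_gram_matrix(X, gamma=2.0)  # symmetric 5 x 5 matrix, ones on the diagonal
```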

Figure 2 Sketch of SVM classification with tolerance that allows for some errors. The separation line wT x + b = 0 is the classifier that produces the largest margin M with respect to the support vectors SV.

The goodness of the svc model is judged by its capacity for generalisation, that is, its ability to predict new outcomes given the same attributes that were supplied in the training process. Checking generalisation is a way to avoid over-fitting in the training stage. K-cross validation is used to verify the generalisation accuracy of the model: it sub-divides the dataset into equal sub-groups and predicts each of them using a model trained with the remaining sub-groups, for a given kernel and penalty parameters [7]. Thus, the resulting measure of
accuracy is related to the generalisation of the trained model. On the other hand, data
accuracy is a measure of performance on the training dataset. It informs the percentage
of the training dataset that is correctly classified with the given parameters. Recall that
due to the tolerances allowed, some of the training points may be misclassified.
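The two measures can be illustrated with a short sketch; it assumes scikit-learn, whose SVC class wraps libsvm, and uses synthetic stand-in data rather than the borehole dataset:

```python
# Sketch of the two accuracy measures with scikit-learn (whose SVC wraps libsvm).
# The data below are synthetic stand-ins for the scaled borehole attributes.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))        # stand-in for scaled 3-d coordinates
y = (X[:, 2] > 0.5).astype(int)       # stand-in for the binary rock-type code

model = SVC(kernel="rbf", C=10.0, gamma=2.0)

# k-cross (k-fold) validation accuracy: generalisation on held-out sub-groups
cv_accuracy = cross_val_score(model, X, y, cv=5).mean()

# training data accuracy: fraction of the training data correctly re-classified
data_accuracy = model.fit(X, y).score(X, y)

print(f"k-cross accuracy: {cv_accuracy:.3f}, data accuracy: {data_accuracy:.3f}")
```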

Conventional Implementation
In [7] a practical workflow is proposed when implementing svc for real datasets. In this
document some of those suggestions are followed. Prior to implementation of svc, the
scaling of the dataset is important. In geometric terms, the attributes of the dataset are

considered as the position vector x i of the categorical variable y i. The scaling process tries to ensure that attributes with large numerical values do not diminish the influence of attributes with small numerical values [7]. For example, consider that the dimensional unit of one of the attributes is millimetres and the dimensional unit of another attribute is metres. Even when the numerical values of both attributes may be similar after their units are standardised, not taking the scale of the values into account tends to reduce the contribution of attributes with small numerical values when the inner product is calculated. In mining, an attribute that represents the content of a mineralogical characteristic may be expressed in different ways, e.g. gold grade can be expressed as pounds/tm, kilograms/tm, etc.; they all represent the same content but with different numerical values. The scaling stage is up to the modeller and many strategies can be considered for it. In the same way that scaling can be beneficial to the learning process because it avoids the negative effect of dimensional units, it can also be harmful if it is not properly carried out [8]. The selection of the kernel and its parameters is important for implementing svc. The radial basis function (rbf) kernel (1) is preferred because it only has one parameter γ and, combined with the penalty parameter C, allows sensitivity analysis of the svc model in a much more flexible manner by using two-dimensional search-grids of parameters. The selection of the values of the pair of parameters (C, γ) is done by choosing the pair that gives the best accuracy in terms of k-cross validation [4, 7]. More complex strategies for selecting the parameters have been proposed in [2].

K (xi , xj ) = exp ( -γ || xi - xj ||2 ), ∀γ > 0 (1)

For the example dataset, the k-cross validation and the training data accuracy are presented using a grid search (Figure  ➌). The range of the penalty values (log2 C) varies from -5 to 8 and that of the rbf kernel parameter (log2 γ) from 1 to 13, which is similar to the range suggested in [7]. Finding a pair of parameters that only maximises the data accuracy has the side effect of over-fitting and poor generalisation of the model. Conversely, finding the set of parameters that only maximises the k-cross validation may lead to a model that does not look realistic. However, in mining applications it is very important to try to honour the input dataset. Therefore, the initial criterion for selecting the model that is suited to the requirements is that the model generalises well and at the same time has high training data accuracy. Both criteria have to be satisfied and balanced in selecting the parameters.
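A sketch of this selection procedure is given below; it assumes scikit-learn, uses illustrative accuracy thresholds, and is not the exact workflow of [7]:

```python
# Sketch of the (C, gamma) grid search: scale the attributes, scan the grid of
# log2 values mentioned above and keep pairs that balance k-cross validation
# accuracy with training data accuracy. Thresholds are illustrative only.
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def grid_search(X, y, log2_C_range=range(-5, 9), log2_g_range=range(1, 14)):
    Xs = MinMaxScaler().fit_transform(X)     # scaling avoids unit/scale effects
    candidates = []
    for log2_C in log2_C_range:
        for log2_g in log2_g_range:
            svc = SVC(kernel="rbf", C=2.0 ** log2_C, gamma=2.0 ** log2_g)
            cv_acc = cross_val_score(svc, Xs, y, cv=5).mean()
            data_acc = svc.fit(Xs, y).score(Xs, y)
            if cv_acc > 0.975 and data_acc > 0.99:
                candidates.append((log2_C, log2_g, cv_acc, data_acc))
    return candidates
```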

Figure 3 Grid-search of Gaussian kernel and penalty parameters for k-fold cross validation (left) and data accuracy (right) for the initial dataset.

From the grid-search maps (Figure  ➌) it can be seen that the maximum values of the k-cross accuracy are around 98%, whereas those of the training data accuracy nearly reach 100% for
high values of the rbf kernel and penalty parameters. This behaviour is evidence of the
complexity of the categorical variable in the spatial configuration of the input dataset. For
the example dataset, the different alternative models that correspond to the combination
of pairs of parameters in the grid-search maps find a balance of the selection criteria in
the region of high training data accuracy values (greater than 99%) since it is contained
within the region of high values of the k-cross accuracy (greater than 97.5%). In order to
show the effect of the parameters γ and C in building a categorical model, four pairs of
parameters are drawn from the candidate region in the search - grid and the results are
plotted and discussed (Figure  ➍). The parameter pairs (C, γ) selected are: (0,12) for the
top - left model, (6,12) for the top - right model, (3,9) for the bottom - left model and (6,8) for
the bottom - right model. As the rbf kernel parameter γ increases (Figure  ➍  - top row),
the boundaries tend to wrap only the dataset with category one. It is due to the penalty
parameter C that the isolated locations of category one are left misclassified. The larger
the penalty parameter, the lower the number of misclassified points that are allowed
to occur. For the spatial configuration of category one in the dataset, the intervals of
very small length along the boreholes are the ones that are more difficult to honour.
Even though these two models may satisfy the initial criteria from the perspective of k-cross and training data accuracy, they do not seem very realistic for a geologic rock formation. Notice there is no extra information that tells the system what is realistic and what is not; under the provided conditions the results are perfectly correct. In this
case, that level of judgement comes only from the modeller and is subjective. Using high
values of the kernel parameter makes the process of building the categorical model slower
since more support vectors are necessary. By reducing the values of the kernel parameter
the system becomes computationally less expensive to solve, because fewer support vectors
are necessary and the boundaries tend to cover a larger region in the domain while
still satisfying the k-cross and training data accuracy criteria (Figure  ➍ -bottom row).
Isolated samples with category one still remain misclassified by the model due to their
spatial configuration. It means that the generated model does not account completely
for all the data when building categorical volumes.
At first glance, it may seem that the results are smooth and do not look geologically
realistic due to the complexity of the dataset. There are additional variables that are not
present in the dataset that describe the occurrence of the categories at their respective
positions. svc results are good according to the information supplied, which is limited
for a good prediction. With the use of additional geologically relevant information to
the occurrence of a certain rock type, the classification may improve; that is, it would become easier to separate the dataset in the higher-dimensional space. One question the
modeller may ask is: What other characteristics contributed to the genesis of the two
rock types? Perhaps, other additional variables such as some mineralogical content, rock
quality characteristic, etc., which can be found in rock-log reports can be used. This extra
information, if relevant, will give better results than models generated using limited
information. However, the degree of goodness of the final model remains subjective, depending on the expert who analyses the results.


Figure 4 Four 3-D solids generated using SVM with four combinations of penalty and Gaussian kernel
parameters (C, γ ), the pairs of the selected parameters are:
(0, 12 – top left), (6, 12 – top right), (3, 9 – bottom left) and (6, 8 – bottom right).

methodology
svc, as a machine learning technique, has the ability to handle many attributes. As more attributes are added to the learning process, prediction performance can improve, provided that the additional attributes contribute relevant information to the prediction of
the variable of interest. For example, considering the case of a skarn copper deposit, if
mineralogical information such as presence of bornite, chalcopyrite, etc. is added to the
analysis, the prediction of total copper content at an unsampled location would be more
accurate than if only the spatial position of the samples is used.
For the example dataset, an additional artificial variable is added that captures information about the spatial structure of the contacts of the intervals with category one (2), giving 4-d attributes. For simplicity, an inverse distance weighting expression is used, so that the additional variable takes larger values where samples with category one are close to category 0 intervals and/or the length of the category one interval is small (Figure  ➎). A power p of one and a search radius r of 7.5 units are used to calculate the
extra information in the dataset.

(2)

where s(i, y i ) is the additional variable as a function of the categorical value of the i-th sample, nR is the number of samples in the search region, j is the index of the samples in the search region, d j is the distance of the j-th sample to the location with index i, y j is the j-th categorical value in the search region, r is the radius of the search region and p is the power of the distance weights.
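As an illustration only, one plausible form of such a structural variable (inverse-distance weights over opposite-category samples within the search radius) could be computed as sketched below; the exact expression of equation (2) may differ:

```python
# Hypothetical sketch of a structural variable s_i: inverse-distance weights over
# opposite-category samples within a search radius r, so that s_i grows where
# category one lies close to category 0. This is not necessarily equation (2).
import numpy as np

def structural_variable(coords, category, r=7.5, p=1.0):
    coords = np.asarray(coords, dtype=float)
    category = np.asarray(category)
    s = np.zeros(len(coords))
    for i in range(len(coords)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        neighbours = (d > 0) & (d <= r) & (category != category[i])
        s[i] = np.sum(1.0 / d[neighbours] ** p)   # larger near category contacts
    return s
```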

Figure 5 Sectional view of boreholes with intervals coloured by categorical variable (left) and with additional structural information (right).

This extra information makes the small intervals of category one easier to model than in
the previous case in which no extra information was used. That is, the component of this
extra variable increases the Euclidean distances between the samples close to the contacts
between category one and category 0. Therefore, separating the two categories becomes
a much easier task. The extra variable can be built in several ways and many of them
can be used at the same time, according to the spatial information that is required. For
example a measure of entropy, presence of veins, small fractures and directional regional
faults could be considered. After calculating the extra variable the two search-grids that
correspond to k-cross validation and training data accuracy are built (Figure  ➏). Notice
the accuracy values for the k-cross validation increase with respect to the initial case.
The same occurs in the training data accuracy search-grid; lower values of the rbf kernel
parameter are necessary to get high accuracy values.
As expected, the addition of the extra information made the dataset easier to classify. This can be observed by comparing the search-grids in Figure  ➌ and
Figure  ➏ . However, the extra information must be transferred to the domain since it
is now part of the attributes. At unsampled locations all the attributes have to be provided
before the svc can predict the categorical variable.
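A minimal sketch of this transfer-and-predict step is shown below, assuming an idw interpolation of the extra variable onto grid nodes and an already trained svc model; names such as idw_transfer are illustrative:

```python
# Sketch of the transfer step: interpolate the extra variable onto grid nodes
# with inverse distance weighting and predict the category with a trained SVC.
import numpy as np

def idw_transfer(sample_xyz, sample_value, node_xyz, r=30.0, p=5.0):
    """IDW interpolation of the extra variable onto unsampled grid nodes."""
    out = np.zeros(len(node_xyz))
    for k, node in enumerate(node_xyz):
        d = np.linalg.norm(sample_xyz - node, axis=1)
        near = d <= r
        if near.any():
            w = 1.0 / np.maximum(d[near], 1e-6) ** p
            out[k] = np.sum(w * sample_value[near]) / np.sum(w)
    return out

# usage sketch, assuming svc_model was trained on [east, north, elev, s]:
# s_nodes = idw_transfer(sample_xyz, s_samples, node_xyz)
# rock_type_nodes = svc_model.predict(np.column_stack([node_xyz, s_nodes]))
```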


Figure 6 Grid search of Gaussian kernel and penalty parameter for k–fold cross validation
(left) and data accuracy (right) for the fixed extra information variable in the dataset.

For simplicity, the transfer is carried out by using the idw approach. The kriging approach
was discarded since it requires a considerable time to solve the corresponding sets of
equations to get the estimates. In the end, that would make the overall modelling
process computationally expensive. Varying the parameters of idw for interpolating the
extra information variable is enough to give flexibility to the resulting svc models. The
use of anisotropic patterns also influences the results. However, using a unique global
pattern gives less room for customising the resulting models than using local anisotropic
patterns. These local anisotropic patterns can be implemented as extra variables as they
capture local structural information. For all the cases of the presented dataset, a power
of 5 and an isotropic search radius of 30 units were used. Four different models were
generated using svc, and are compared using vertical cross sections (Figure  ➐). In the
first case, the extra information is transferred using the initial settings of idw and
the pair of parameters (6, 7) from the search grid in Figure  ➏ (Figure ➐ b). Notice
that the two small intervals in the borehole #1795 that were not modelled using the
conventional approach (Figure  ➐ a) are now included in the model. Compared to the
conventional case, the shape of the boundary of category 1 is quite different, basically
because the interpolation of the extra information guides the generation of the contours.
The second case considers the use of a maximum number of samples per quadrant.
Even after selecting the same pair of parameters as in the previous case, the boundaries
appear a bit noisier and less rounded than in the first case (Figure  ➐ c) since fewer
samples participate in the transfer of the extra information. Finally, the fourth case
considers the combination of the results obtained using the conventional approach for
the upper part and the result of using the additional information variable for the lower
part (Figure  ➐ d).
There is no parameter other than the user's expert judgement that can say whether
the categorical model is correct or not. The system uses only the available attributes to
solve a problem, whereas the expert geologist has an extensive geologic background that
allows him/her to analyse globally and locally what the deposit may look like. All the
additional variables and conditions that the geologist considers were not fed into the
system to produce the models.

Figure 7 Sectional views of four 3-D models built using SVC.

In Figure  ➐, from left to right, the first plot corresponds to a model built using only the 3-d coordinates as attributes; in the second and the third the extra information variable is used and transferred using two strategies of idw; and in the fourth plot the extra information variable is partially used.
Notice that to combine the two approaches (Figure  ➐), the extra information in
the segments of category one of the two boreholes that delineate the upper part of the
boundary is removed by making it zero. The extra information is present only in the
segments of the boreholes that delineate the lower part. The boundaries generated by
the conventional approach are retrieved by removing the values of the extra information
variable in certain segments of selected boreholes and their respective regions of influence
in the domain. Therefore the delineation of the conventional approach prevails in such
regions (Figure  ➑). This is one example of how the svc can be guided according to the
user requirements by using extra information variables.

Figure 8 Influence of the use of the extra information variable in the modelling process using SVC for the semi-automatic case (left) and the manually guided cases (right).

The extra information variable used for the case study tries to capture the occurrence
of isolated small intervals of category one, giving them more importance in the


classification. This variable was built in such a way that it considers the surrounding
information around each existing sample, hence it can be considered as a structural
informative variable. In the mining industry, additional information exists on the
sampled intervals. Some of this information is relevant to the occurrence of certain
types of rocks, e.g. mineralogical characteristics, hardness, etc. This extra information can also be used for svc modelling. However, these variables are different from the one used
in the case study because they are informative at a point scale since they do not capture
surrounding structural features but rather information pertaining to the sample itself.
In both cases, the problem still remains in the transfer process. The way the transfer
process of the extra information is made guides the svc modelling. For the case of the
extra information variables relevant to the occurrence of the rock type, the transfer
process should depict in a realistic manner the occurrence of these additional variables
in the domain, whereas for the structural informative variable it is up to the modeller.
In the presence of several variables that are relevant to the occurrence of the rock type, the use of structural informative variables becomes unnecessary because there tends to be enough information to describe the surrounding geologic structures.
In the absence of additional relevant information on the rock type, the use of a structural
informative variable is convenient, as shown in the case study. However, customising the
geologic model implies training the model every time the extra information variable is
modified. The selection of a new pair of parameters to implement svc is made based on
each new search - grid resulting after each round of modification of the extra information
in the boreholes. This is the part in the proposed modelling process that takes most time
to compute. However, comparing it to manual contouring or modelling using conventional
geostatistical techniques at a finer resolution, svc is still a good option. Figure  ➒ shows
the search - grids that were used to select the pair of parameters (C, γ) to build the model
in Figure  ➐-d. The modification of the extra information in certain segments of the
boreholes was done manually by visiting various vertical and planar sections of one
selected model resulting from the semi - automatic case.

Figure 9 Grid search of Gaussian kernel and penalty parameter for k -fold cross validation (left) and data
accuracy (right) for the manually fixed extra information variable in the dataset.

discussion
The svc technique can be used to build realistic geologic models which can be customised
to suit the user's requirements. This can be linked to a manual contouring process where
the geologist decides on the local and global structures of the deposit, since the user can

guide the result. svc tries to mimic the manual process by giving freedom to the user to
decide. The advantage is that it is much faster and more flexible than hand contouring for
two reasons: (1) small changes can be implemented easily without compromising the rest
of the modelling, as happens in the manual case where modifications in a vertical section
compromise the rest of the sectional drawings; (2) it is not required to draw horizontal
and / or vertical sections and link them with the use of any triangulation method.
Also, compared with semivariogram - based conventional geostatistical techniques, it is
easier to implement since there is no assumption of stationarity that reduces flexibility
of the customisation of the model. Calculation time of svc is faster compared to the
conventional geostatistical approach, and additional attributes can be considered in the
modelling process.
The svc models can be customised by modifying the extra variables, both for artificial
and geological information. The transfer of a geological informative variable has to depict
a realistic behaviour of that variable in the domain, which relies on the expert judgement
of the geologist. The artificial variables try to give flexibility to the modelling process
when geologic informative variables are not available. They guide the model according
to the user requirements. The other parameters used to implement svc are selected from
the search - grid following the conventional approach.
Using svc for modelling is relatively new and has been gaining popularity in recent
years. Many applications for modelling are yet to come. One of the advantages compared
to non - machine learning methods is the integration of relevant attributes to the target
variable in a logical manner, if properly implemented. This makes the results easier to accept from the user's perspective.

acknowledgements
Thanks to alges, ccg and their sponsors for providing financial support for this research,
and a special thanks to Enrique Gallardo for the discussions about Machine Learning
topics and the many potential applications of it in the various fields of engineering.

references
Smirnoff, A., Boister, E. & Paradis, S. J. (2008) Support Vector Machine for 3-d Modelling from Sparse
Geological Information of Various Origins. Computers & Geosciences, pp. 127–143. [1]

Gallardo, E. C. (2009) m. sc . Thesis: Support Vector Classification for Geostatistical Modeling of Categorical
Variables. Edmonton: University of Alberta. [2]

Chang, C. C. & Lin, C. J. (2001) National Taiwan University. Retrieved November 24, 2009, from libsvm:
a library for support vector machines: www.csie.ntu.edu.tw/~cjlin/libsvm. [3]

Marsland, S. (2009) Machine Learning: An Algorithmic Perspective. Boca Raton: Chapman & Hall/crc. [4]

Vapnik, V. N. (1995) The Nature of Statistical Learning Theory. New York: Springer. [5]

Hastie, T., Tibshirani, R. & Friedman, J. (2009) The Elements of Statistical Learning. New York: Springer. [6]

Hsu, C. W., Chang, C. C. & Lin, C. J. (2008) National Taiwan University. Retrieved September 12, 2008,
from Department of Computing Science: www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf. [7]

Sarle, W. S. (2002) comp.ai.neural-nets faq. Retrieved November 23, 2009, from Part 2 of 7: Learning:
www.faqs.org/faqs/ai-faq/neural-nets/part2/. [8]

Numerical Modelling in Coal
Roof Fracture Mechanics in
Fully Mechanised Top Coal
Caving Technology

abstract
Pan weidong
China University of Mining and Technology, China

Dimitre antonov
South Dakota School of Mines and Technology, USA

Fully Mechanised Top Coal Caving Technology (fmtct) is one of the considerably advanced coal mining methods, which are widely used in the present world, and plays an important role on mining in the thick coal seams underground. However, due to the different reserve conditions and physical and mechanical properties of coal, it is difficult to ensure a high extraction ratio and efficient production without a comprehensive evaluation of coal roof fracture mechanics. In this paper, theoretical fracture mechanics of coal roof were analysed, and the top coal caving process was simulated using flac (Itasca Consulting Group). The stress and displacement distributions around the face were presented, and the caving mechanics of coal roof with different physical and mechanical parameters were evaluated. The results have great reference significance on the prediction of caving properties of coal roof and the determination of the pre-fracture method.

introduction
Fully Mechanised Top Coal Caving Technology (fmtct) is one of the considerably advanced coal mining methods, widely used around the world, and plays an important role in mining thick underground coal seams. fmtct can turn the inherent disadvantage of a thick coal seam into technical and economic advantages. However, theoretical research on longwall top coal caving has progressed relatively slowly, and much of it is limited to physical and numerical modelling in the laboratory. The connection between the working parameters and the mechanical parameters has not yet been established, and it is still difficult to describe the deformation, fracture and drawing processes of the top coal during caving by means of mechanics theory.
The deformation and failure process of the roof coal is one of the basic questions in top coal caving. The roof coal changes step by step from a continuum to a discontinuum; this is a very complicated, non-linear mechanical process and is difficult to analyse by analytical methods alone. In 1975, the mechanical properties of top coal under vertical stress were studied using damage mechanics [1]. The roof coal was divided into four zones: elastic, plastic, fracture and falling zones [2]. Meanwhile, the horizontal and vertical movement equations of roof coal were established [3]. The displacement of top coal under coal seams of different hardness was measured, and the results showed that the deformation range and the distance between the initial motion point of the top coal and the working face are larger for soft coal than for hard coal [4]. In the research on stress distribution, a roof beam model on damaged ground was established based on the relation between displacement and damage parameters; the model cannot work unless the real values of the vertical stress in the roof strata are obtained [5]. N.E. Yasitli and B. Unver (2005) built a numerical model of a longwall panel at an underground mine, and found that the characteristics of the stress distribution measured in situ coincided with the results presented in previous research [6].
In this study, we analyse theoretical fracture mechanics of coal roof and simulate the
top coal caving process using flac2d. The distributions of stress and displacement around
the face were presented, and the caving mechanics of coal roof with different physical
and mechanical parameters were evaluated.

fracture mechanics of coal roof


Fully mechanised top coal caving technology is applied on the underground mining of
thick coal seams. The principal point is to arrange a long wall working face along the floor
of the thick coal seam. When the coal in the working face is cut down by the shearer, the
top coal is caved behind the supports as a result of vertical stresses. Compared with slicing mining, the top coal caving system has a layer of coal above the supports, which is thick, loose and can be caved through the windows of the shields, so the strata behaviour and basic mechanics problems are also different (Figure   ➊).
Figure 1 Sketch of fully mechanised top coal caving technology (after Wang J.C. (1999)) [7] (figure labels: vertical stress, damaged and fractured coal, roof coal, loose coal).

To research the fracture mechanics of the coal roof, we established the mechanical constitutive relation of a coal sample under load. In two dimensions, the roof coal was separated into many micro-elements. All the micro-elements of coal were assumed to be linear elastic, but their macroscopic mechanical behaviour is nonlinear. Because every element contains different discontinuities, the strengths of the coal micro-elements differ but follow a Weibull statistical distribution [8]:

P(ε) = (m/ε0)(ε/ε0)^(m−1) exp[−(ε/ε0)^m] (1)

Where:
m = a shape parameter of the Weibull statistical distribution.
ε0 = a measure of average strain.
ε = strain of a micro-element.
P(ε) = the probability density of the element strengths.
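As a simple numerical illustration of the density in equation (1) (using assumed values of m and the average strain measure), it can be checked that the distribution integrates to one:

```python
# Numerical check of the density in equation (1) for assumed values of the
# shape parameter m and the average strain measure eps0: it integrates to one.
import numpy as np

def weibull_density(eps, m, eps0):
    return (m / eps0) * (eps / eps0) ** (m - 1) * np.exp(-((eps / eps0) ** m))

eps = np.linspace(1e-6, 0.2, 20000)
print(np.trapz(weibull_density(eps, m=3.0, eps0=0.01), eps))  # ~1.0
```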
In three dimensions, if it is accepted that the intermediate principal stress is equal to the minor principal stress (σ2 = σ3), we can obtain the damage constitutive relation:

(2)

Where:
= Young's modulus.
= the axial stress of coal.
= the axial strain of coal.

Figure 2 The mechanical model of coal roof samples in top coal caving.

However, during top coal caving the stresses in the coal zones change as caving proceeds. As shown in Figure  ➋, the horizontal stress in the direction of the panel decreases to zero as caving advances. The real values of the major and minor principal stresses are difficult to measure in situ. In this research, we evaluated the mechanical constitutive relation of the top coal using a numerical simulation method.

numerical modelling of top coal caving


Modelling procedure
flac is a widely used numerical code for stress and deformation analysis of both rock and coal. The software is based on the finite difference method with a Lagrangian calculation scheme. The finite difference method is considered more suitable for modelling the stress distribution and deformation of coal than other numerical techniques. flac2d is a commercially available code capable of modelling in two dimensions.


Modelling for research on fracture mechanics of coal roof in top coal caving is
performed in four steps:

• Define the objectives for the model analysis


• Define the basic constitutive models of strata and material properties
• Determination of the boundary and initial conditions
• Presentation and analysis of modelling results.

Objective for the modelling analysis


The research is based on the data received from the No.4103 working face of Xinliu coal
mine, which is located in the middle part of China. On the working face, average depth
below surface was around 240 m and the 6 m thick coal-seam had no slope. Coal has been
produced with the long wall top coal caving method where a 2.5 m high long wall face
was operated at the floor of the coal-seam. Top slice coal having a thickness of 3.5 m was
caved and produced through windows located at the back of shields.
In order to determine the physical and mechanical parameters of coal and surrounding
rocks in the working face, a laboratory test programme was carried out on the samples
taken from the roof, floor and coal seam in the No. 4103 working face. The results are
presented in Table 1 .

Table 1 Physical and mechanical properties of coal and surrounding rocks

Formation     Lithology   Density    Uniaxial strength   Elastic modulus   Poisson's   Cohesion   Friction     Tensile strength
                          (kg/m3)    (MPa)               (GPa)             ratio       (MPa)      angle (°)    (MPa)
Basal roof    limestone   2612       76.7                31.5              0.21        11.46      43           5.45
Roof          shale       2106       39.2                17.6              0.26        7.25       36           3.16
Coal          coal        1450       18.6                8.3               0.4         2.77       30           1.33
Floor         claystone   2433       45.5                21.4              0.31        9.12       38           4.04

In this research, the objectives of model analysis are the coal and surrounding rocks in
the No. 4103 working face. The distribution of front abutment stress will be presented
along with the caving, and the relation between different caving properties and stress
and displacement distributions of coal roof will be evaluated.

Assessment of material properties


The Mohr-Coulomb model, which is included in flac, is the conventional model used to represent shear failure in rock and coal. Other researchers have shown that the Mohr-Coulomb model represents the behaviour of coal more correctly than other models [9–11]. The ubiquitous-joint model is an anisotropic plasticity model that includes weak planes of specific orientation embedded in a Mohr-Coulomb solid. Considering that there are many more discontinuities in the coal than in the surrounding rocks, we used the ubiquitous-joint model to model the deformation behaviour of the coal roof.
It is crucial to properly assess material properties in order to obtain acceptable results
in modelling of flac. Therefore, the physical and mechanical properties of each geological
unit must be properly determined. In general, intact rock properties are determined with
laboratory tests. The data presented in Table 1 provide some of the parameter values of the coal seam and surrounding rocks, including density, cohesion, friction angle and
tensile strength. The other parameters —bulk modulus, shear modulus, dilation angle
and joint properties— have to be defined before modelling.

We have obtained the values of elastic modulus and Poisson's ratio of coal seam and
surrounding rocks. The values of bulk modulus and shear modulus were calculated with
the formulas:

K = E/[3(1 − 2ν)],  G = E/[2(1 + ν)] (3)

Where:
K = bulk modulus
G = shear modulus
E = elastic modulus
ν = Poisson's ratio.
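For example, evaluating equation (3) for the coal seam values in Table 1 (E = 8.3 GPa, ν = 0.4) reproduces the corresponding moduli listed in Table 2:

```python
# Equation (3) evaluated for the coal seam (E = 8.3 GPa, nu = 0.4 from Table 1).
def bulk_modulus(E, nu):
    return E / (3.0 * (1.0 - 2.0 * nu))

def shear_modulus(E, nu):
    return E / (2.0 * (1.0 + nu))

print(bulk_modulus(8.3, 0.4))   # ~13.8 GPa, as in Table 2
print(shear_modulus(8.3, 0.4))  # ~3.0 GPa, as in Table 2
```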

The dilation angle (ψ) controls the amount of plastic volumetric strain developed during plastic shearing and is assumed constant during plastic yielding. The value ψ = 0 corresponds to volume-preserving deformation while in shear. In most cases the assumption ψ = 0 can be adopted. For non-cohesive soils (sand, gravel) with internal friction angle φ, the value of the dilation angle can be estimated as ψ = φ − 30° [9]. In this research, we calculate the values of the dilation angles with this formula, and the results are: ψ is 13° for limestone, 6° for shale, 0° for coal and 8° for claystone.
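The same estimate can be checked against the Table 1 friction angles with a short sketch:

```python
# Dilation angles estimated as phi - 30 degrees from the Table 1 friction angles;
# the results match the values quoted in the text.
friction_angles = {"limestone": 43, "shale": 36, "coal": 30, "claystone": 38}
dilation_angles = {rock: phi - 30 for rock, phi in friction_angles.items()}
print(dilation_angles)  # {'limestone': 13, 'shale': 6, 'coal': 0, 'claystone': 8}
```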
Joint properties are conventionally derived from laboratory testing (e.g., triaxial and direct
shear tests). These tests can produce physical properties for joint friction angle, cohesion,
dilation angle and tensile strength, as well as normal and shear stiffness. The joint cohesion
and friction angle correspond to the parameters in the Coulomb strength criterion.
Values for normal and shear stiffness for rock joints typically can range from roughly
10 to 100 MPa/m for joints with soft clay in-filling, to over 100 GPa/m for tight joints in
granite and basalt. Published data on stiffness properties for rock joints is limited, and
summaries of data can be found in Kulhawy (1975), Rosso (1976) and Bandis et al. (1983). In this research, the values of the joint properties of coal were accepted as joint cohesion 69 kPa, joint friction angle 30°, normal stiffness for coal joints 6.27 GPa/m, shear stiffness for coal joints 8.53 GPa/m, joint dilation angle 0° and joint tension 119.5 kPa.
The physical and mechanical properties of coal and surrounding rocks used for
modelling are presented in Table 2 .

Table 2 The input parameters regarding coal and surrounding rocks used in numerical modelling

Parameter        Limestone     Shale         Coal          Claystone
Density          2612 kg/m3    2106 kg/m3    1450 kg/m3    2433 kg/m3
Bulk modulus     18.1 GPa      12.2 GPa      13.8 GPa      18.8 GPa
Shear modulus    11.1 GPa      7.0 GPa       3.0 GPa       8.2 GPa
Cohesion         11.46 MPa     7.25 MPa      2.77 MPa      9.12 MPa
Friction angle   43°           36°           30°           38°
Dilation angle   13°           6°            0°            8°
Tension          5.45 MPa      3.16 MPa      1.33 MPa      4.04 MPa

Joint parameters (coal): joint angle 30°, joint cohesion 69 kPa, joint friction angle 30°, normal stiffness 6.72 GPa/m, shear stiffness 8.53 GPa/m, joint dilation angle 0°, joint tension 119.5 kPa

Setting of boundary and initial conditions


The actual panel length of the No. 4103 working face is 1330 m. However, the front abutment vertical stress more than 30 m ahead of the working face is too small to be significant in situ. In this research, we therefore built a model 60 m in length. The height of the model is 21 m, including four strata: 5 m for the floor stratum (claystone), 6 m for the coal seam, 2 m for the roof


stratum (shale) and 8 m for the basal roof stratum (limestone). A finer mesh (more zones per unit length) was applied to the coal seam, which is the focus of this research.
The weight of geological strata from the basal roof to ground surface was applied on
the top of the model as a vertical stress, whose value was calculated by the formula:

σv = ρ g h (4)

Where:
σv = vertical stress, Pa
ρ = density of roof strata, kg/m3
g = gravity, 9.81 m/s2
h = thickness of roof strata, m.

The average depth of the No. 4103 working face is 240 m, and the density of the geological strata is accepted as 2000 kg/m3. The calculated vertical stress is approximately 4.7 MPa.
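This value follows directly from equation (4):

```python
# Equation (4) with the stated values: 2000 kg/m3, 9.81 m/s2 and about 240 m of cover.
def vertical_stress(rho, g, h):
    return rho * g * h                                   # Pa

print(vertical_stress(2000.0, 9.81, 240.0) / 1e6)        # ~4.7 MPa
```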

Figure 3 The boundary and initial conditions of top coal caving model.

In order to prevent displacements at the beginning, the right-hand and left-hand sides of the model were fixed in the x-axis direction, and the bottom of the model was fixed in the y-axis direction, as shown in Figure  ➌. The coal seam of the model was excavated in 6 m increments from the right-hand side to the left-hand side, as in situ. The actual shield supports of the working face were modelled in the form of structural shell elements, which are regarded as rigid bodies.

Presentation and evaluation of modelling results


Stress distributions

As a result of stepwise modelling, stress distributions were calculated under various


conditions. The vertical stress distributions are presented after face advances of 10 m,
15 m and 20 m.

Figure 4 Vertical stress distribution after 10 m of face advance.

Figure 5 Vertical stress distribution after 15 m of face advance.

Figure 6 Vertical stress distribution after 20 m of face advance.

As shown in Figures  ➍, ➎ and ➏, the largest vertical stress appeared just ahead of the working face, and its value increased from 10 MPa to 20 MPa.


Figure 7 Vertical stress distributions ahead of the face: (a) found by numerical modelling; (b) found by other researchers.

The vertical stress distributions obtained from the model in Figures  ➍, ➎ and ➏ can
also be presented together. As shown in Figure  ➐ (a), after 10 m of face advance, the front
abutment stress increased up to a distance of 2 m in front of the face and reached a maximum stress level of 10 MPa. After 15 m of face advance, the maximum value of the abutment stress
increased to 15 MPa. When the working face advanced to 20 m, the maximum abutment
stress was 20 MPa, and decreased to field stress at a distance of 7 m in front of working face.
Based on in situ measurements, abutment vertical stress around long wall faces has
been studied by various researchers [10–12]. As can be seen in Figure  ➐ (b), the vertical stress increased in front of the working face and gradually decreased to a value equal to the field stress. Following the eventual failure of the coal seam in the region of maximum front abutment stress, the maximum stress would tend to shift a couple of metres further ahead of the working face.
The value of field stress was calculated as 3.7 MPa and presented with a dashed line
in Figure  ➐(a). After comparing with the vertical stress distribution in Figure ➐(b),
it is obvious that the characteristics of stress distributions obtained by numerical
modelling are in good agreement with the results of actual measurements in
underground conditions.

conclusions
In this study, theoretical fracture mechanics of coal roof was analysed. The fracture form
of the coal roof depends on the distributions of the major and minor principal stresses, which change along with the advance of the working face and are difficult to obtain by measurement in situ.
The results of flac 2d numerical modelling of No. 4103 working face at Xinliu
underground mine were presented. Modelling results of vertical stress and displacement
vector distributions visually presented the coal roof response to mining operation in thick
coal seams. Results revealed that the front abutment vertical stress increased along with
the face advancing. When face advanced 10 m, the maximum vertical stress was 10 MPa
and formed at a distance of 2 m in front of the face. When the distance of face advance
increased to 20 m, the vertical stress reached a maximum stress level of 20 MPa. After
reaching the highest value, the front abutment pressure gradually decreased towards the
initial field stress of 3.7 MPa. Characteristics of stress distribution found by numerical
modelling coincided with the results of in situ measurements given in the literature.

The results presented in this paper showed that the flac2d code is an effective tool for researching the caving mechanics of the coal roof in fully mechanised top coal caving technology. With a comprehensive evaluation of the caving properties of the coal roof, the results provide a valuable reference for the determination of the pre-fracture method for top coal caving.

references
Kulhawy, Fred H. (1975) Stress Deformation Properties of Rock and Rock Discontinuities. Engineering
Geology. 9, pp. 327–350, 1975. [1]

Fang, Z. & Harrison, J. P. (2002) Development of a Local Degradation Approach to the Modelling of Brittle
Fracture in Heterogeneous Rocks. International Journal of Rock Mechanics and Mining Sciences, 39,
pp. 443–457. [2]

Hakami, H. (2001) Rock Characterisation Facility (RCF) Shaft Sinking — Numerical Computations Using
FLAC. International Journal of Rock Mechanics and Mining Sciences, 38(1), pp. 59–65. [3]

Holland, K. L. & Lorig, L. J. (1997) Numerical Examination of Empirical Rock-mass Classification Systems.
International Journal of Rock Mechanics and Mining Sciences, 34(3), pp. 1–14. [4]

Yumlu, M. & Ozbay, M. U. (1995) A Study of the Behaviour of Brittle Rocks Under Plane Strain and Triaxial
Loading Conditions. International Journal of Rock Mechanics & Mining Sciences, 32(7), pp. 725–
733. [5]

Yasitli, N. E. & Unver, B. (2005) 3D Numerical Modelling of Long Wall Mining With Top-coal Caving.
International Journal of Rock Mechanics and Mining Sciences, 42(2), pp. 219–235. [6]

Wang, J. C. (1999) The Basic Mechanics Problems and the Development of Long Wall Top Coal Caving
Technique in China. Journal of Coal Science and Engineering, China. 5(2), pp. 1–7. [7]

Chen, Z. H., Zhao, X. Q. & Zhang, Y. (1999) Damage Analysis on the Deformation and Failure of Top-coal
During Top-Coal caving. Proceedings of 1999 International Workshop on Underground Thick-Seam
Mining, 1999, pp. 66–71. [8]

Fine Ltd. On-line Contextual Help of Civil Engineering Software, retrieved 10 September 2009 from http://www.finesoftware.eu/geotechnical-software/help/fem/angle-of-dilation/. [9]

Rosso, R. S. (1976) A Comparison of Joint Stiffness Measurements in Direct Shear, Triaxial Compression, and
In Situ. Int. J. Rock Mech. Min. Sci. & Geomech., 13, pp. 167–172. [10]

Bandis, S. C., Lumsden, A. C. & Barton, R. N. (1983) Fundamentals of Rock Joint Deformation. Int. J. Rock
Mech. Min. Sci. & Geomech, 20(6), pp. 249–268. [11]

Singh, R., Mandal, P. K., Singh, A. K., et al. (2001) Cable-bolting-based Semi-mechanised Depillaring of a
Thick Coal Seam. International Journal of Rock Mechanics & Mining Sciences, 38, pp.245–257. [12]

Calculation of Pillar Stress
in Room and Rib Pillar Mine
during the Ore Exploitation

abstract
Saeed dehghan
Islamic Azad University, Iran

Kourosh shahriar
Parviz maarefvand
AmirKabir University of Technology, Iran

Kamran goshtasbi
Tarbiat Modares University, Iran

The stability design and stress distribution on the pillars of an underground mine located in Iran is described. The exploitation was carried out by the rooms and rib pillars method. The widths of the rooms and pillars are five and seven metres, respectively. The study of stress and deformation around the mine rooms and pillars was carried out by applying the two-dimensional Flac2D. In this respect, the pillar stress and the vertical displacements in the rooms and pillars were determined as the ore was extracted. The results obtained from the numerical model are then compared with a model that applies the tributary area method. The results show that the calculated average vertical stresses from the analytical method are within a close range of the maximum values obtained from the numerical analysis.

introduction
The Faryab Mine is located 143 km northeast of the town of Bandar-e-Abbas, in the province of Kerman. The Faryab Corporation began exploration in 1940, with chromite production beginning in 1953. The operation included surface and underground mines, but the open pit mines ceased operating some time ago. The Fetr6 is the biggest underground mine in operation at present. There are two major faults acting on the ore in this mine, and the ore is divided into three zones called Phases One to Three. Figure ➊ shows a 3-d view of the Fetr6 orebody.

Figure 1 3-D view of fetr6 orebody wire frame [1] .

Room and pillar mining was used in Phase One, and secondary exploration and a conceptual design of Phase Two have been carried out. According to the exploration carried out, the orebody in Phase Two has a length of 180 m, a height of 20 m and a width of 80 m. The mining method employed for this phase is designed for 100% extraction with complete pillar recovery. This mining method includes rib pillars with delayed backfill. It is a simple, common, low cost and safe mining method, integrating mining and backfilling systems. Figure   ➋ shows a schematic view and Figure ➌ shows a 3-d view of the designed method in Phase Two of the Fetr6 underground mine.
As shown in Figures   ➋  and ➌, the mine consists of sub-parallel rooms separated by rock rib pillars. The width and height of the rooms are 5 and 20 metres, respectively, while the pillars are 7 m wide. Considering the orebody and room geometry, there will be 14 parallel rooms at the end of primary mining.
The sequence of operations used to exploit each room is as follows [1] :

• Driving two tunnels from the secondary tunnels to the planned end of the room.
• Exploitation of the remaining ore between the top and bottom tunnels by bench blasting, with a maximum bench height of 8 m.

Figure 2 Schematic view of the room and rib pillar mining method
with delayed backfill in a Fetr6 underground mine.

Figure 3 3-D view of the room and rib pillar mining method in a Fetr6 underground mine [1] .

For the stability analysis and the calculation of pillar loads, the study of stress and deformation around the mine rooms and pillars was carried out by applying both the two-dimensional finite difference code Flac2d and the tributary area analytical method. A two-dimensional model is suitable for the analysis of this problem because the exploited rooms are developed mainly in the longitudinal direction, and it is therefore possible to disregard the three-dimensional effect of the excavation face once the rooms have been excavated [2]. The numerical models were developed as the rooms were excavated, and the effect of the room extraction sequence on the pillar loads was determined. The results of the numerical modelling were then compared with those of the analytical method.

GEOMECHANICAL PROPERTIES
Detailed investigations, both in the laboratory and in situ, were carried out to provide the reliable data needed for the numerical analyses.
Laboratory investigations were carried out in order to determine the physical and mechanical properties of the intact rocks. Mechanical and physical tests were carried out on samples from the hanging wall, orebody and footwall. Accordingly, uniaxial, triaxial and shear strength tests were carried out in accordance with the isrm suggested methods [3].
Table 1 shows the intact rock properties.


Table 1 Intact rock properties [1]

Property                 Orebody   Hanging wall   Footwall
U.C.S. (MPa)             29        50             112
Young's modulus (GPa)    15.9      16.2           32
Poisson's ratio          0.05      0.04           0.22
Cohesion (MPa)           4.2       4.8            6.4
Friction angle (deg)     53        55.4           55.3
Unit weight (kN/m3)      38        27.1           27.1

Field investigations were carried out to determine the rock mass conditions. According to the Bieniawski rock mass classification [4], the calculated Rock Mass Rating (rmr) value is 45 (Class III) and the Geological Strength Index (gsi) is 40 for the orebody and the hanging wall.
The results of the in-situ and laboratory investigations were then processed using the Roclab code (Ver. 1.0, Rocscience) and the rock mass parameters were determined (Table 2).

Table 2 Rock mass properties from Rocklab [1]

Property Orebody Surrounded rock

Modulus of deformation (GPa) 7.5 8

Poisson's ratio 0.25 0.22

Cohesion(MPa) 2.5 2.9

Friction angle(deg) 32 33

NUMERICAL ANALYSIS
There are various numerical methods and programs available, each of which has its own
applicability regarding the rock mass and discontinuities encountered in underground
structures. In this research, the study of stress and deformation on pillars and around
the mine rooms was carried out by applying two-dimensional Flac2D.
The following stages were considered for developing the model:

• Creation of a base model. The model is 1000 m wide and 100 m high, based on the orebody's geometry.

• Application of the natural field stress.


• Simulation of the excavation of the upper part of the first room.

• Simulation of the other stages of room exploitation, as mentioned above. At the end of this stage, the stress distribution on the pillars was recorded.

• Repetition of stages two and three until the end of the mine exploitation (excavation
of 14 rooms).

Numerical models were analysed under various horizontal-to-vertical stress ratios (K),
ranging from 0.33 to 1.0. The comparison of the results obtained from different horizontal-
to-vertical stress ratios with observations and measurements that were taken locally,
especially in Phase One, shows that K = 0.5 produced much closer results. Therefore,
this stress ratio was utilised for all numerical models.
The developed numerical models, which simulate the excavation of 14 rooms, allowed the rock rib pillar stress and displacement to be verified. Figure  ➍  shows the basic model geometry and Figure  ➎ shows the final model geometry in which all the rooms have been excavated.

Figure 4 Basic numerical model geometry [1].

Figure 5 Geometry of the numerical model at the end of primary mining (14 rooms excavated) [1].

The results of the numerical analysis are obtained in terms of stress and displacement. Two points in the pillars, located at the top and middle, and one point at the roof of the rooms were considered, and the vertical stresses and vertical displacements were recorded as ore extraction developed. Figures   ➏  to 13 show the results obtained from the numerical model.
Figures  ➏  to ➒ show that the induced stress in the pillars increased as the room excavation was completed, and that the maximum vertical stress was applied on the middle pillars (pillars No. 7, 8 and 9) when all rooms were excavated (at the end of primary mining). The maximum stress at the top and middle points of pillar No. 8 was 4.2 and 5.4 MPa, respectively. Figures 10 to 13 show that the vertical displacements in the pillars and rooms increased progressively as the room excavation developed. The results also show that the maximum displacement occurs in the middle pillars and rooms.


Figure 6 Effect of ore extraction sequences on stress distribution. (Top of pillars) [1] .

Figure 7 Effect of ore extraction sequences on stress distribution. (Middle of pillars) [1] .

Figure 8 Effect of pillar position and sequence of ore extraction on pillar stress. (Top of pillars) [1] .

Figure 9 Effect of pillar position and sequence of ore extraction on pillar stress. (Middle of pillars) [1] .

Figure 10 Effect of ore extraction sequences on the vertical displacement in the pillars. (Top of pillars) [1].

Figure 11 Effect of pillar position and sequence of ore extraction on the vertical displacement in the pillars. (Top of pillars) [1].


Figure 12 Effect of ore extraction sequences on the vertical displacement in the rooms [1].

Figure 13 Effect of pillar position and sequence of ore extraction on the vertical displacement in the rooms. (At roofs) [1].

The results obtained from the numerical models were verified by applying the tributary
area method. The analytical equation used for the calculation of the vertical stress acting on the rib pillars is defined by Hoek [5], as summarised by Equation (1):

σP = γ H (Wo + WP)/WP (1)

where σP is the vertical stress on the rib pillars, γ is the overburden unit weight, H is the mine depth, and Wo and WP are the room and pillar widths, respectively.
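As an illustration, Equation (1) can be evaluated with a short sketch; the room and pillar widths are those of the mine (5 m and 7 m), while the unit weight and depth below are placeholder values only:

```python
# Tributary area estimate of the average vertical pillar stress (equation (1)).
# Room and pillar widths follow the paper; unit weight and depth are placeholders.
def tributary_pillar_stress(gamma_kn_m3, depth_m, room_w_m, pillar_w_m):
    """Average vertical stress on a rib pillar, in MPa."""
    return gamma_kn_m3 * depth_m * (room_w_m + pillar_w_m) / pillar_w_m / 1000.0

print(tributary_pillar_stress(27.0, 90.0, 5.0, 7.0))   # MPa, illustrative inputs
```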
The results of the tributary area calculations and the numerical modelling evaluations, Figures   ➏ and ➑, are summarised in Table 3, while Figure 14 shows the maximum stress acting on the rib pillars as calculated by the two methods.

Table 3 Vertical stress of the pillars obtained using the tributary area
method and numerical modelling [1]

Pillar position   Stress calculated by (MPa)
                  Numerical model   Analytical method
First pillar      3.1               4.2
Middle pillar     4.2               4.2
End pillar        3.3               4.2

It is clear that for the middle pillars there is hardly any difference between the maximum vertical stress (according to the numerical method) and the average vertical stress (according to the tributary method), but meaningful differences (up to 30%) occur between the pillar stresses computed by the above-mentioned methods for the side pillars.

Figure 14 Stress on rib pillars computed by the numerical method and the tributary method [1] (vertical axis: stress, 0 to 4.5 MPa; categories: first pillar, middle pillar, end pillar).

conclusion
To verify the stability condition of an underground room and rib pillar mine, it is necessary to carry out computations that are able to check the stress redistribution around the rooms and in the pillars. This computation was carried out for the underground chromite mine in Iran, and the stress redistribution was calculated by numerical modelling with Flac2D and by an analytical method. The results obtained from the numerical method were processed using Excel 2007 (Microsoft Office 2007).
From the results of the two methods, it is possible to point out that the average vertical stress values calculated according to the analytical method are close to the maximum values obtained from the numerical models. This is very important for the optimisation of pillar size or the estimation of the uniaxial compressive strength of the backfill material used for pillar recovery.

references
Dehghan, S., Shariar, K., Maarefvand, P. & Goshtasbi, K. (2009) Analysis and Numerical Modeling
of Cemented Backfill Pillars Behaviour in Stope and Pillar Mining Method. PhD thesis, Mining
Engineering Department, Science and Research Branch, Islamic Azad University, Poonak,
Hesarak, Tehran, Iran. [1]

Peila, D., Guardini, C. & Pelizza, S. (2008) Geomechanical Design of a Room and Rib Pillar Granite Mine.
Journal of University of Science and Technology Beijing, Vol. 15, No. 2, pp. 97–103. [2]


Brown, E. T. (1981) Rock Characterisation Testing & Monitoring. ISRM Suggested Methods, Pergamon Press. [3]

Bieniawski, Z. T. (1989) Engineering Rock Mass Classification, J. Wiley, New York. [4]

Hoek, E. (2000) Practical Rock Engineering, www.rocscience.com. [5]

www.mininglife.com. [6]
Is Geomechanics a Hindrance or
Opportunity in the Development
of a Mine Project?

abstract
Manuel rapiman
José valenzuela
Minera Escondida, Chile

Industrial development world-wide has led to increased consumption of primary natural resources, particularly mineral ores, and consequently, mining companies have been encouraged to maximise the mining of their mineral deposits.
Geotechnical engineering has faced many difficult challenges
to meet the ever increasing requirements of maximising ore
extraction, as mines in operation are currently larger and deeper
than they were historically. In addition, to satisfy the great
demand of ore volumes, mine operations are currently using large
and sophisticated equipment, yet safety is even more paramount
than in the past. Accordingly, mine sites are complex operations
with numerous considerations for success, amongst which
geotechnical engineering plays a vital role.
This paper will present the role of geotechnical engineering in
this process, especially in the development of large open pit mines,
specifically answering why geotechnical engineering is usually
considered a constraint to mine operations and yet, why its use
has increased over time. Furthermore, currently recommended
practices and the most recent geotechnical developments through
recent international research will be presented.
The examples and databases used in this presentation are
derived from the Minera Escondida Geotechnical Area, located
in the Second Region of Chile. This study will also demonstrate
the current state of geotechnical understanding and the need for
future development to achieve the short, mid and long term mine
plans that will result in an open pit that is over 1.5 km deep. In
conclusion, experience in large open pit mine operations such as
Escondida shows that best-practice geotechnical engineering and modern
geotechnical operational practices are not a hindrance, but
rather an opportunity to successfully aid in the development of
a mine project.

introduction
This paper presents the updated vision that the authors have regarding the role that
Geomechanics play in the mining of mineral deposits, particularly open pit mines, in
order to guide project engineers and mine planners in the geotechnical strengths and
weaknesses that characterise the rock mass to be mined.
The enormous growth in the size and depth of open pit mines poses challenges that
are becoming ever more relevant. Today's challenges have driven enhancements to
geotechnical analysis methodologies over time, and it is noted that there
are still important aspects left to be investigated.
Recommendations regarding the use of geomechanics at different stages of a project
are introduced from inception to implementation. Practical examples based upon
experience gained at Escondida are provided.
Conceptual issues are not treated in depth as this paper is primarily intended for non-
geotechnical professionals who are normally in charge of mining projects, including
operational and economic feasibility, a scenario where geotechnics play a fundamental role.
The objectives are as follows:

• To introduce in general terms the role of geomechanics and recent developments aimed
at improving support to mining projects.
• To present the strengths and weaknesses of geomechanics and to provide a better
understanding of its contribution to the different stages of a project.
• To recommend better practices for the application of geomechanics with the intent of
visualising the benefits that an appropriate application offers.
• To reinforce the concepts of geomechanics among the broad spectrum of
professionals involved in project design and mining extraction.

intrinsic aspects of geomechanics associated with slope stability
Of primary concern to geomechanics is the responsibility of evaluating material
resistance, typically of rock, with relatively few data, often of marginal quality compared
with the databases used to support the evaluation of an ore body. This scenario can
result in poor analysis, modelling and design, which could in turn result in costly slope
failures or even the loss of life.
Currently, slope failures, which are of significant concern in mine extraction,
continue to occur, and therefore it is worth asking: Why do they still occur? Why can't they
be avoided? Can they be predicted? Numerous questions arise in the face of
slope failures. Apart from the associated losses that these events cause,
they deteriorate the safety indices of companies, and safety is a priority goal for today
and for the future in the mining sector.
Apart from the quantity and quality of available information, it is also important to
highlight that the geotechnical parameters used for characterising a rock mass are
largely non-random variables, which is another obstacle to reliable interpolation.
This is typically not the case for ore body interpretation, where grades can be interpolated
by means of geostatistics.
In general, the variables used for geotechnical characterisation of a rock mass are
made up of the types of rocks present in a particular area, the types of mineralisation,
degrees of alteration present, the structural system (faults and joints) and the presence
of water. Each of these variables can generate important differences in a geotechnical

evaluation depending on the manner that they are applied, as explained in the
following points:

• It is important to keep in mind that for the type of ore body mined, i.e., sedimentary,
metamorphic and/or porphyry, or other, the sets of variables will have different
contributions. For instance, in the case of porphyritic bodies such as Escondida, it is
possible to determine some important and distinct alteration types that have different
associated geotechnical characterisations. For instance, for quartz-sericite alteration,
competency depends on the amount of sericite present.

• Regarding structural systems, interpretations rely heavily upon drilling information


that can result in uncertainty in assigning an appropriate extension and/or continuity
of geologic structures. Apart from new mine development, this situation is also
likely to be found in actively mined pits. New methodologies are being implemented
to mitigate part of this condition, based on the use of stereographic pictures for
mines in operation. At Minera Escondida, the geological structural model and a 3-d
geotechnical-structural model are continuously reviewed and revised based upon
proposed mine plans.

• In relation to the presence of water in the rock mass, often phreatic water levels are
used and pressures are considered hydrostatic with depth. A more realistic approach is
the use of pore-water pressures determined from piezometers constructed at different
levels within the aquifer. The installation of piezometers must be put in the context of the
geomechanics, i.e. all of the same variables must be considered (structure, jointing,
alteration and so on). For instance, faults may form barriers to groundwater flow
or may provide conduits to flow, and the type of alteration present will likely affect the
rock mass's ability to transmit water. In general, the degree of fracturing and the
connectivity of water within the rock mass affect how fast it will drain. Research
is still underway to understand the real effect that pore pressure has on the
rock mass and how to best represent it in the construction of geotechnical models.
Inadequate interpretation of water-rock interactions is perhaps one reason why failures
occur even though responsible geotechnical studies have been performed.

All these points are perhaps some of the reasons why collapses still occur in open pit mines,
even when responsible geotechnical supporting studies have been conducted.
Another significant aspect in geotechnical characterisation of a rock mass is the
determination of its geotechnical properties, which are related to the resistance to
rupture and deformation capacity of the rock mass. During the past few years significant
progress has been made in the generation of new procedures and methodologies that
have allowed for better laboratory and field measurement of geotechnical parameters.
Downhole geophysical methodology provides opportunities to evaluate in-situ
properties of the rock–mass. Apart from others, both acoustic and/or optical systems are
available for determining the direction of structures within the borehole as well as to
provide a digital photographic log of the borehole. Advancements in geophysical methods
provide opportunities to gain a better understanding of how rock-mass properties are
determined from intact rock.
The progress made in the determination of the geotechnical properties of rock masses
has been important and has made it possible to mitigate some limitations of the past.
Nevertheless, it is necessary to recognise that the issue of defining the properties of a
rock mass has not been fully resolved and this is an aspect that must be borne in mind
in geotechnical evaluation and/or modelling.


Recently at Itasca Consulting Group a model for evaluating synthetic rock masses has
been developed. This model, still under development and validation, may prove useful
in the future for 3-d modelling of large areas of a given slope.
Taking as a reference what has been described to this point, it can be concluded that
the geotechnical characterisation of a rock mass still lacks some precision. At present
this condition is managed according to the different stages of a mining project
(prefeasibility, feasibility, engineering and construction phases); nevertheless,
it is important to consider that even in the best cases, with good geomechanical
information, it will be hard to achieve an uncertainty of less than 20%.
All of the above has the sole intent of demonstrating to open pit mine designers and/or
planners that, in general terms, the shortfalls existing in the characterisation of a rock
mass, which is fundamental to geotechnical engineering, must be kept in mind. If there
is significant uncertainty with respect to stability, one must ask: Is the project still feasible?
Based upon what has been discussed, and taking into consideration that these are
concurrent facts in the evolution and execution of an open pit mining project, it is
possible to give some practical advice to partly mitigate geotechnical uncertainties so
that progress is made.

Determination of the range of geotechnical properties


Geotechnical rock tests are made on selected drill cores in a laboratory. Lab results
typically show the upper limit or maximum resistance of intact rock. It is also important
to know the lower limit of the rock–mass properties. Today, several world-wide rock
mechanics laboratories have the capability of making tests on rock of low competency.
Determining the minimum geotechnical properties of a rock mass ensures, to a
certain extent, the validity of evaluations that endorse a project. This practice has been
implemented at Minera Escondida and it has made it possible to find minimum values of
properties of intact andesite rock with strong argillic alteration. The table below provides
typical values.

Table 1 Lower-most values for andesite [1]

Type               Quantity   Lithology + Alteration   Cohesion (kPa)   Friction Angle (°)
Block rock cored   10         Andesite, argillic       59               26.8

Note: These values are slightly higher than the rock structure properties normally used in Escondida geotechnical models (cohesion: 60 kPa
and friction angle: 20°).

Definition of a structural geotechnical model


The definition of a structural model, at the level of a project under study, can hardly be
improved, since it depends on information mainly originating from drilling; nevertheless,
once the pit is in operation there are important opportunities for improvement. The
geotechnical engineer has the responsibility to review the geologic structural model,
which is typically built with the objective of understanding the genesis of the deposit. For a
geotechnical structural model, faults and the rock fabric must be explicitly defined to
determine possible failure modes (wedge, planar, toppling, sliding…). Escondida periodically
updates and validates its 3-d geotechnical models. Key to validating the model is good
structural information. Figure ➊ is a simplified structural representation of the
Escondida pit.

Figure 1 Three-dimensional geostructural model.

Scope of geotechnical tests


To avoid misinterpretation of rock-mass properties, care must be taken with the use of
various test methods which, through correlations that often lack formal backup, are
used to determine the basic parameters of rock competency. For example, the Point
Load Test (plt) can be applied to estimate the simple (uniaxial) compressive strength of the rock
through correlations. In fact, simple compressive strength itself is difficult to
interpolate and correlate, therefore it is difficult to qualify how this type of testing relates
to other tests or correlations used to determine rock-mass properties. The methodology can
be called into question because it relies on theoretical correlations; in addition, it does not correlate
well at the extremes of rock resistance, that is, it conflicts with the need to know the full
range of properties of the rocks analysed. This methodology is widespread in many
mines because of its simplicity, so it is vital that its limitations are recognised.
At Minera Escondida these practices are carried out with the intent of determining
competency trends of intact rock, with direct application to blasting operations;
nevertheless, they cannot be used directly in stability analyses or in the determination
of rock properties.
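To make the scale of that uncertainty concrete, the sketch below converts a point load index Is(50) into an estimated uniaxial compressive strength using a commonly quoted multiplier range of roughly 20 to 25. Both the multiplier range and the example Is(50) value are assumptions for illustration only, not Escondida calibrations, and the spread of the result is exactly why such correlations cannot replace direct testing.

```python
def ucs_range_from_point_load(is50_mpa, k_low=20.0, k_high=25.0):
    """Rough bracket on uniaxial compressive strength (UCS) from Is(50).

    Uses UCS ~= K * Is(50) with a commonly quoted K range of about 20-25;
    K is rock-dependent, so the width of the bracket is the point.
    """
    return k_low * is50_mpa, k_high * is50_mpa

low, high = ucs_range_from_point_load(2.5)   # hypothetical Is(50) of 2.5 MPa
print(f"Estimated UCS between {low:.0f} and {high:.0f} MPa")
```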

Recommendation from a ‘competent person’


It is unquestionably necessary and recommended that the characterisation of a rock mass
be endorsed by a geotechnical professional of vast experience, qualified as a competent
person, analogous to the procedures used to define mineral models for determining reserves. It
is advisable that this professional be an employee of the company, given the high
relevance of geotechnical issues to the mining business and the requirement of constant
verification, including strict ground control.


Guide manual for slope stability in open pit mines [2]


Escondida is one of several mining companies, consulting groups and independent
experts that have worked concurrently over the past five years to publish a book that addresses
geotechnical concerns for Large Open Pits (lop). The intent is to reduce uncertainty with
respect to geotechnical information. Upon publication, it is recommended that all mine
operators consider this guide-manual as a valuable resource. The lop book will be made
available through csiro, Australia. It is intended for a technical audience, with special
messages addressed to open pit mining company executives.

external aspects of geotechnics associated with slope stability
External aspects that affect geomechanics include those conditions, not inherent
to the rock-mass properties, that could result in instability of a rock mass. These include,
but are not limited to, pit designs that do not consider geomechanics, which can result in wall
instability, and uncontrolled blasting practices, which can weaken the rock mass and result in
wall failures. These two causes of instability are not related to the geotechnical nature of
the rock mass, but rather to how the project is managed from an engineering
and operations point of view. For this reason it is vital that companies endorse the active
participation of the geotechnical engineer in the development of the operation.
It is the job of the geotechnical engineer to provide the mine designers sufficient
detail to design a safe operation. Geomechanics should be considered part of the mining
business as a whole. From a geotechnical point of view, the success of a given project
depends upon teamwork.
Active participation of the geotechnical engineer in mine design and operations is
strongly recommended. This requires that the engineer has relevant knowledge of the
different stages of the mining process, namely long, medium and short-term planning.
The following are some important aspects to consider:

Long term planning


• For long term planning, the geotechnical engineer must understand and participate
in the creation of a final pit, so as to know the calculation algorithm and the reserve
model with the intent of driving the geotechnical recommendations in relation to the
business needs. Normally the recommendations are interramp and global angles for
different sectors for operational and final pit.

• The geotechnical engineer can also have an important role in the design of different
expansions, making sure that both their geometry, vertical and horizontal, are actually
in agreement with the geotechnical characteristics of each area.

• Lately, large production increases at some mines have required that the geotechnical
engineer also participate in the mining sequence strategy. From a stability point of view
it is very different to mine a single expansion versus multiple expansions
along the same wall. It is also geotechnically different to begin an
expansion now versus postponing it for five or ten years, owing to hydrological
depressurisation and deconfinement effects.

• A geotechnical engineer who is proactively involved in the different stages of the


mining business is a means for reducing the uncertainty of the geotechnical model
and will allow for appropriate direction with respect to collecting geotechnical data.

• In Escondida, the long-term planning process has improved in the last few years; it
has defined a series of enhanced processes or methodologies for the selection of the
best long- term business plan. A layout of the different stages of the planning process
is shown below in Figure  ➋:

Figure 2 Cycle of planning [3] .

Medium-term planning
Medium-term planning has the responsibility of developing two-year mine plans. A
minimum level of uncertainty is required to develop these plans, hence requiring ample
support from other areas. Only those aspects of a geotechnical character will be referred
to here.

• The uncertainty declared for the characterisation of a rock mass,
which can hardly be smaller than 20%, can be compensated by validation
and continuous improvement of the medium-term mine planning, adjusting the
recommendations previously given as new information or data originate from the
different stages of the business process.

• The scenario is different where the estimated geotechnical uncertainty is greater than
20%. In this case, consequences could be quite different from the above, mainly affecting
the design parameters, with the likely onset of instability during
the development of the mine. This is a typical case for mines that have repeated slope
stability problems.

• The prevention, mitigation and/or uncertainty level control mechanisms referred


to in the previous items are related to the geotechnical slope monitoring procedures
applied which are becoming more effective and precise, as is the case with radar that
is capable of measuring millimetres of slope displacement over 2,000 metres distance.

Effective and efficient geotechnical monitoring is capable of detecting early warning
signs of instability, which could allow for taking actions to mitigate the occurrence of
instability.
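As a schematic of how such early warning is typically triggered from monitoring data, the sketch below differentiates a displacement time series (such as a prism or radar record) into velocities and flags readings that exceed an alarm threshold. The threshold value, the sampling interval and the series itself are hypothetical; real alarm criteria are site-specific and are not taken from this paper.

```python
# Flag monitoring readings whose displacement velocity exceeds an alarm threshold.
# The data and threshold below are illustrative only.
disp_mm = [0.0, 0.4, 0.9, 1.5, 2.4, 3.8, 6.1, 9.7]   # cumulative displacement, mm
dt_hours = 12.0                                        # time between readings
alarm_mm_per_day = 4.0                                 # hypothetical alarm level

for i in range(1, len(disp_mm)):
    velocity = (disp_mm[i] - disp_mm[i - 1]) / dt_hours * 24.0   # mm/day
    status = "ALARM" if velocity > alarm_mm_per_day else "ok"
    print(f"reading {i}: {velocity:.1f} mm/day  {status}")
```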
There are other instances, often depending on the type of rock present (brittle rock),
where early warning systems do not always allow time for taking preventive actions.
Here the objective is to minimise damages and associated costs during the occurrence
of a failure.
With regard to the monitoring systems currently applied in open pit mines, the
following can be said:

• The occurrence of sustained or repeated failures or instabilities in a pit has not been
solved with increased geotechnical monitoring;

• Latest-generation monitoring allows for preventing fatalities and other major losses
generated by a collapse; however, it does not prevent the rock failure itself.


• Justification of monitoring must arise from a geotechnical plan, not from the
occurrence of failures.

Short-term planning and mine operations


In short-term planning and/or operational activities, the role of the geotechnical engineer
is perhaps the most widely known and disseminated activity in the realm of the mining
business and it is known as ground control. Responsibilities among others include safety
inspections and monitoring activities during the active mine life. The area provides
continuous input to all stakeholders.
There are three significant aspects to be highlighted in the realm of geotechnical
performance:

• There is a need to consider that the geotechnical characteristics of a rock mass are
natural properties that do not change. Perhaps the only variable that can be changed
is the hydrogeological one, via dewatering or depressurisation of the rock mass. With the
implementation of appropriate dewatering and depressurisation plans, the amount
of water in the rock can be reduced. Most important is that external variables do not
significantly reduce the properties or natural competency that the rock mass originally had.

• Mining of the rock mass generates a permanent process of deconfinement, which implies
relaxation, expansion or weakening of the in-situ rock.

• Operational processes such as blasting and loading of material also cause damage to the
intact rock mass.

All of the above can result in a change to the geotechnical scenario originally described.
To account for this effect, the geotechnical properties originally determined carry an
allowance for the additional external damage caused by mining; nevertheless, this is a
factor that must be properly controlled and/or verified during mine operations. This is
one of the main responsibilities of ground control, by means of inspection activities and
control of the recommendations and/or geotechnical restrictions for the mine areas in
operation.
At Minera Escondida, the geomechanics group has expertise in open-pit blasting and the
responsibility of defining all trim blasts at both pits currently in operation. This
information is passed on to the blasting group for execution and is later inspected to
verify that the activity has been fulfilled. A typical diagram of a blast pattern with its main
characteristics is shown in Figure  ➌.

Figure 3 Typical design of trim blasting, with electronic detonation, sequence of 42 ms/hole & 100 ms/row, presplit group of 8 holes each 17 ms.

Keeping permanent control over this matter allows damage caused by this external factor
to be reduced and limits uncontrolled blasting activities.
The following can be said with regard to external variables in geotechnical
characterisation of a rock-mass.

• External variables must be recognised.

• Stakeholders (planning, mine operations, hydrology etc…) must understand in general


terms the strengths and weaknesses of the geotechnical model, which is the input
for their processes.

• Geotechnical engineers must have a strong participation in mine planning and


operations.

• Mine planning and operating activities must have a permanent awareness of the
geotechnical scenario.

• There must be priority action plans intended for continuous control of damage related
to the rock-mass.

conclusions
According to the information presented, our conclusions are as follows:

• All geotechnical characteristics of a rock-mass have an uncertainty that at a minimum


is 20%, in agreement with the slope stability guide manual [2]. This is a reality that everyone must
know and observe in the definition of an open pit mining project.

• The minimum geotechnical uncertainty level of a rock mass does not eliminate the
risk of collapse or slope instability at an open pit mine, a condition that must also be
understood by all stakeholders responsible for the operation.


• Geotechnical engineers must have an understanding of each stage of the mining


process to partially mitigate uncertainties produced by the geotechnical model.

• It is advisable to keep active plans and training related to the role of geomechanics in
the areas of mine planning and operations.

• There must be development plans underway at all mining operations to empower


geotechnical professionals to the level of “competent person”, which is one of the best
options to mitigate geotechnical uncertainties.

• Is geomechanics an opportunity? The answer is Yes. It is an opportunity because it allows


mining projects to develop as anticipated. It is an opportunity because it also allows for
important benefits and cost improvements through slope optimisation in mining projects.

• Is geomechanics a hindrance? The answer is Yes. If geomechanics is not considered or


properly understood and no reasonable geotechnical characterisation exists, then yes,
geomechanics will be a hindrance.

It is left to the readers of this paper to make their own conclusions. Finally, the authors
wish to express their acknowledgements to minin 2010 for the opportunity to present
this subject.

references
Bro, A. (2008) Escondida Mine Weak Rock Triaxial Testing, pp.6 – 8. [1]

Read, J. & Stacey, P. (2009) Guide Manual for Slope Stability in Open Pit Mines. [2]

Supcia. Mediano Plazo Minera Escondida (2008) Manual de Planificación Mediano Plazo, pp.11–12. [3]
Probabilistic Stability Analysis of
Mine Waste Slopes

abstract
Julio Beniscelli
CODELCO, Chile

Alfredo Urzúa
John Christian
Prototype Engineering, Inc., USA

Mining and processing create waste materials that must be disposed of. When the wastes
are composed of particles of gravel or sand size, an attractive approach is to create piles
or embankments of the materials by dumping them along the faces of existing rock
slopes. Some of the resulting slopes in the present case are expected
to reach heights of several hundred metres. To estimate the
probability of failure of the slopes, a probabilistic seismic hazard
analysis was first performed to develop statistical parameters of
the seismic loading, expressed as a horizontal seismic factor. This
is obtained by computing a weighted average of the accelerations
developed when the seismic time histories are amplified through
the slopes of different heights. Then the statistical descriptions
of the seismic loading are combined in an event tree with the
stochastic descriptions of the material's shear strength, its unit
weight, and the water table to develop a computational model
that reflects the interaction of these parameters. The actual
computations are performed by the Point-Estimate method and
confirmed by the first order second moment method. Two modes
of failure were considered: the infinite slope model and the
single-plane Culmann model. Results are expressed in the form
of probabilities of failure for different heights of slope. The most
critical slopes are not the highest.

introduction
An inevitable consequence of mining and processing ores is the creation of waste
materials that must be disposed of. When the wastes are composed of particles of gravel
or sand size, an attractive approach is to create piles or embankments of the materials
by dumping them along the faces of existing rock slopes. Some of the resulting slopes in
the present case are expected to reach heights of several hundred metres. In view of the
size of the slopes, the criticality of the operation of the mine, and the strong probability
of significant seismic motion, a rational analysis of the probabilities of failure and of the
displacements resulting from earthquake shaking becomes imperative. This
paper describes such an analysis.

analytical methodology
The analysis started with a Probabilistic Seismic Hazard Analysis (psha). The purpose
of this study was to use the latest ground-motion attenuation relations and the most
updated information on the earthquake activity in the vicinity to compute the expected
ground motions from large earthquakes that might affect the site. From the psha, both
the Peak Ground Acceleration (pga) and the spectral response accelerations at 5% damping
(sa) at a spectrum of natural periods were computed at several different probability levels
of occurrence. The ground-motion results that are presented in this study can be used to
estimate the Operating Basis Earthquake (obe) and the Maximum Credible Earthquake
(mce) that might affect the site. Synthetic ground motions based on the results of the
psha were also generated. An event tree governed the computation of the response and
behaviour of the slopes. Some simplifying assumptions made it possible to reduce the
event tree to a form that could be solved by analytical methods avoiding the need for
extensive Monte Carlo simulations. In particular, the Point Estimate (pe) method worked
well for these cases. Two sets of cases were run. In the first set, the failure mechanism
was a single sliding plane, also known as the Culmann mechanism. In the second,
the slope slid as an infinite plane, which allowed the effects of water pressures to be
incorporated. In both the Culmann and infinite slope cases cumulative displacements due
to seismic loadings were estimated using the Whitman-Newmark sliding block analogy.

Probabilistic seismic hazard analysis


The psha was performed to develop statistical parameters of the seismic loading,
expressed as a horizontal seismic factor. It was based on Cornell's [1] methodology,
which involves selecting seismic source zones for regions that contain the site for which
the seismic hazard is to be determined or lie close enough to it that their seismic activity
can be felt at the site. For each seismic source zone, the rates of earthquake activity
at different magnitudes are determined, and the maximum and minimum magnitude
earthquakes that can occur within that source zone are specified. The distribution of
earthquake activity rate with magnitude is commonly specified using a Gutenberg-
Richter recurrence relation. The other important input for a psha analysis is a ground-
motion attenuation relation. The ground-motion attenuation relation is a mathematical
description of the strength of the expected ground shaking at some distance R from an
earthquake source with magnitude M. Attenuation relations are often determined for
Peak Ground Acceleration (pga) and Spectral Acceleration (sa) for various frequencies.
Some attenuation relations also compute the Peak Ground Velocity (pgv), Peak Ground
Displacement (pgd), or the Spectral Velocity (sv). In recent years, attenuation relations

have become much more detailed, and many contain multiple definitions of the source-
to-receiver distance, differentiate between different earthquake focal mechanisms, take
into account directivity of the earthquake source, and depend on the tectonic setting
where the earthquake takes place. Uncertainties in the attenuation relation in a psha
study are often the largest contributor to uncertainty in the results of a psha for a site.
The site is in central Chile. Chile is a country that lies atop an active subduction zone,
where the Nazca plate subducts beneath the South American continent. Earthquakes
greater than M 8 have taken place on this subduction zone, including the M 9.5 event in
1960 that was the largest earthquake ever recorded by seismographic instrumentation.
The seismicity was divided into three different seismic source zones: (1) an interplate
source zone encompassing the earthquakes that occur on the subduction zone boundary
between the Nazca and South American plates; (2) an intraplate source zone encompassing
those earthquakes that occur within the subducting Nazca plate; and (3) a crustal source
zone encompassing the earthquakes that occur in the shallow South American crust
overlying the South American plate. The three source zones for the Sur Sur site are the
same three source zones that were defined by Crempien [2] .
A number of different published attenuation relations were used in the psha analysis.
Four of these attenuation relations, those of Crempien [2] , Atkinson and Boore [3] ,
Youngs et al. [4] and Zhao et al. [5] were used because they were developed specifically
for subduction zones. The Crempien [2] and Zhao et al. [5] relations were developed
for interplate, intraplate and crustal seismicity at subduction zones, and so they could
be used for all three source zones. The Atkinson and Boore [3] and Youngs et al. [4]
attenuation relations were developed for interplate and intraplate source seismicity only,
so their use necessitated the application of an additional attenuation relation for the
crustal zone [6 – 8] . The Crempien [2] relation was developed for pga only, so it could not
be used to estimate response spectral ground motions. Different psha runs were made,
each with a different attenuation relation (or set of attenuation relations). These results
were then combined using a weighted average to compute the final psha result.
Calculations were carried out for three different levels of probability: 10% chance of
exceedance in 50 years (475–year mean repeat time), 2% chance of exceedance in 50 years
(2475–year mean repeat time), and 1% chance of exceedance in 50 years (4975–year mean
repeat time). The 2% in 50 years (also called the 2500 year event) calculation was carried
out because this level of probability is commonly used in other applications as the mce
earthquake. The acceleration spectra were used to generate synthetic time histories of
ground motion using the computer code simqke, which was developed at mit in 1976.
The corresponding response spectra are shown in Figure   ➊ for the 2500 year event.
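The quoted mean repeat times follow from the usual Poisson assumption, T = -t / ln(1 - p) for an exceedance probability p over a window of t years; the minimal sketch below reproduces the 475, 2475 and 4975 year values. This conversion is a standard relation stated here for the reader's convenience, not a formula reproduced from the paper.

```python
import math

def mean_return_period(p_exceed, window_years):
    """Mean repeat time for a Poisson process with exceedance probability
    p_exceed over window_years."""
    return -window_years / math.log(1.0 - p_exceed)

for p in (0.10, 0.02, 0.01):
    print(f"{p:.0%} in 50 years -> {mean_return_period(p, 50):.0f} year event")
```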

Figure 1 Response spectra for 2500 year event computed from PSHA.


Failure modes and material properties


Two modes of failure were analysed: sliding of the failure mass along a single plane (also
called the Culmann failure mode) and sliding along a single plane as an infinite slope.
Other, more complicated modes could also be used, but these provide useful insight into
the behaviour of the system and the relative importance of the parameters. Figure  ➋
and Table 1 illustrate the Culmann mode of failure, identify the relevant parameters,
and give the numerical values and distributive parameters used in the analyses.

Figure 2 Geometry and parameters for Culmann analyses.

Table 1 Material properties and distributions used in Culmann analyses

Parameter Value Comments

ah calculated calculated according to Chapter 7


H varies 20, 30, 60, 180, 300 m used
ψ 26° fixed value
θ varies iterated to find minimum
α 0 no top slope
γ μ = 18, σ = 2 kN/m3 normal distribution
c μ = 10, σ = 3 kPa normal distribution – corr. with φ = -0.3
φ μ = 35, σ = 2 deg normal distribution – corr. with c = -0.3

The factor of safety for the Culmann stability analysis is

(1)
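As an illustration of the mechanism behind Equation (1), the sketch below evaluates a standard pseudostatic single-plane (Culmann) factor of safety, sweeping the failure-plane angle θ to find the minimum, using the mean parameter values of Table 1 and an illustrative horizontal seismic coefficient. This is an assumed textbook formulation and may differ in detail from the authors' equation.

```python
import math

def culmann_fs(H, psi_deg, c_kpa, phi_deg, gamma, k):
    """Minimum pseudostatic factor of safety for a single-plane (Culmann) wedge.

    H slope height (m), psi face angle (deg), c cohesion (kPa), phi friction
    angle (deg), gamma unit weight (kN/m3), k horizontal seismic coefficient.
    The failure-plane angle theta is swept to find the critical (minimum) FS.
    """
    psi, phi = math.radians(psi_deg), math.radians(phi_deg)
    fs_min = float("inf")
    for i in range(1, int(psi_deg * 10)):            # theta from ~0.1 deg up to just below psi
        theta = math.radians(i / 10.0)
        W = 0.5 * gamma * H**2 * math.sin(psi - theta) / (math.sin(psi) * math.sin(theta))
        L = H / math.sin(theta)                       # failure-plane length
        driving = W * (math.sin(theta) + k * math.cos(theta))
        normal = W * (math.cos(theta) - k * math.sin(theta))
        fs_min = min(fs_min, (c_kpa * L + normal * math.tan(phi)) / driving)
    return fs_min

# Mean parameter values from Table 1, with an illustrative seismic coefficient.
print(round(culmann_fs(H=180.0, psi_deg=26.0, c_kpa=10.0,
                       phi_deg=35.0, gamma=18.0, k=0.113), 2))
```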

Figure  ➌ and Table 2 give the corresponding information for the infinite slope
analyses.

Figure 3 Geometry and parameters for infinite slope analyses.



Table 2 Material properties and distributions used in infinite slope analyses

Parameter Value Comments

ah calculated calculated according to Chapter 7


H varies 20, 30, 60, 180, 300 m used
ψ 12° fixed value
dw μ = H/3, COV = 0.3 normal distribution
γ μ = 20, σ = 2 kN/m3 normal distribution
c μ = 10, σ = 3 kPa normal distribution – corr. with φ = -0.3
φ μ = 35, σ = 2 deg normal distribution – corr. with c = -0.3

The factor of safety for the infinite slope stability analysis is

(2)
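Similarly, the sketch below shows one common pseudostatic infinite-slope factor of safety with slope-parallel seepage, evaluated at the Table 2 mean values and an illustrative seismic coefficient close to the 60 m entry of Table 3. The exact form of the authors' Equation (2) may differ, so this is offered only as orientation.

```python
import math

def infinite_slope_fs(z, psi_deg, c_kpa, phi_deg, gamma, dw, k, gamma_w=9.81):
    """Pseudostatic infinite-slope factor of safety, one common textbook form.

    z depth to the slip surface (m), psi slope angle (deg), c cohesion (kPa),
    phi friction angle (deg), gamma unit weight (kN/m3), dw depth to the water
    table (m), k horizontal seismic coefficient. Seepage is assumed parallel
    to the slope; the authors' formulation may differ in detail.
    """
    psi, phi = math.radians(psi_deg), math.radians(phi_deg)
    w = gamma * z * math.cos(psi)                        # column weight per unit slip-plane area
    u = gamma_w * max(z - dw, 0.0) * math.cos(psi) ** 2  # pore pressure on the plane
    normal = w * (math.cos(psi) - k * math.sin(psi))
    shear = w * (math.sin(psi) + k * math.cos(psi))
    return (c_kpa + (normal - u) * math.tan(phi)) / shear

# Mean parameter values from Table 2 for a 60 m slope; k is illustrative only.
print(round(infinite_slope_fs(z=60.0, psi_deg=12.0, c_kpa=10.0, phi_deg=35.0,
                              gamma=20.0, dw=20.0, k=0.2), 2))
```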

Event tree and point-estimate solution technique


The many possible combinations of the uncertain parameters can be investigated
by considering that they form an event tree. Figure   ➍ shows the event tree for the
Culmann analysis of a 300 m high slope with a 26° face slope.

Figure 4 Event tree for Culmann analyses.

Although in the general case it would be necessary to use Monte Carlo simulation to solve
for the probability of failure, the limited number of parameters in this case makes it
possible to use Rosenblueth's Point-Estimate Method (pem). Details of the technique are
set forth by Baecher and Christian [9]. In the present case, there are 24 points – 3 points for
the acceleration factor, 2 for the unit weight, and 2 each for the strength parameters. The
weighting factors for the unit weight are each 0.5. The weighting factors for the strength
parameters are 0.25 x (1±ρ), with the sign selected according to the procedures set out by
Baecher and Christian [9] . Figure   ➍ shows the full development of the simplified event
tree for this analysis. The probabilities for each branch are written above the branch, and
the values of the parameters for the branch are written below the branch.
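The two-variable building block of that 24-point scheme can be written in a few lines. The sketch below implements Rosenblueth's point estimates with the 0.25 x (1 ± ρ) weights quoted above; the limit-state function g used here is a made-up placeholder, and in practice the Culmann or infinite-slope factor of safety would be plugged in.

```python
from itertools import product
import math

def pem_two_vars(g, mu1, sd1, mu2, sd2, rho):
    """Rosenblueth point-estimate method for a function of two correlated
    variables: evaluate g at mu +/- sigma, weight each corner by
    0.25 * (1 + s1*s2*rho), and return the estimated mean and std of g."""
    mean = mean_sq = 0.0
    for s1, s2 in product((+1, -1), repeat=2):
        weight = 0.25 * (1.0 + s1 * s2 * rho)
        value = g(mu1 + s1 * sd1, mu2 + s2 * sd2)
        mean += weight * value
        mean_sq += weight * value ** 2
    return mean, math.sqrt(max(mean_sq - mean ** 2, 0.0))

# Placeholder limit-state function of cohesion (kPa) and friction angle (deg),
# combined with the Table 2 moments and the quoted correlation of -0.3.
g = lambda c, phi_deg: (c + 1000.0 * math.tan(math.radians(phi_deg))) / 500.0
print(pem_two_vars(g, mu1=10.0, sd1=3.0, mu2=35.0, sd2=2.0, rho=-0.3))
```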


Wedge acceleration factors


The seismic stability analyses require that an average horizontal acceleration factor
be applied to the failure mass. The base input motions are the synthetic earthquakes
described above. These are used as inputs to the amplification of seismic shear waves
through the potential sliding mass. The average acceleration in a failure wedge is the
weighted average of these results. The weights increase linearly with height above the
base of the failure mass because the mass is a triangular wedge. In the infinite slope case
the accelerations are weighted equally at all depths. The resulting acceleration factors
are listed in Table 3 . The standard deviations of the acceleration factors were assumed
to conform to the same coefficients of variation that were found for the peak ground
accelerations in the psha.
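A minimal sketch of that depth weighting is shown below: accelerations sampled between the base and the crest of the sliding mass are averaged with weights proportional to height above the base for the triangular Culmann wedge, and with uniform weights for the infinite slope. The acceleration profile is invented for illustration; the real inputs are the amplified synthetic motions described above.

```python
def wedge_average_acceleration(accels, triangular=True):
    """Weighted average of accelerations sampled from the base (first value) to
    the crest (last value) of the sliding mass. For the triangular Culmann wedge
    the weights are proportional to height above the base (taken here at layer
    mid-heights); for the infinite slope all depths are weighted equally."""
    n = len(accels)
    weights = [i + 0.5 for i in range(n)] if triangular else [1.0] * n
    return sum(w * a for w, a in zip(weights, accels)) / sum(weights)

profile_g = [0.05, 0.07, 0.10, 0.14, 0.20]   # hypothetical accelerations, base -> crest
print(wedge_average_acceleration(profile_g, triangular=True))   # wedge-weighted
print(wedge_average_acceleration(profile_g, triangular=False))  # uniform weights
```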

Table 3 Seismic acceleration factors for Culmann and infinite wedge analyses

Culmann wedge failure model


10% in 50 years 2% in 50 years 1% in 50 years
H μ σ μ σ μ σ
Base case 0.39 g 0.05 g 0.67 g 0.09 g 0.81 g 0.12 g
20 m 0.3336 0.0428 0.4566 0.0613 0.5526 0.0819
30 m 0.2412 0.0309 0.3193 0.0429 0.3585 0.0531
60 m 0.1551 0.0199 0.1689 0.0227 0.2095 0.0310
180 m 0.0904 0.0116 0.1130 0.0152 0.1368 0.0203
300 m 0.0603 0.0077 0.0777 0.0104 0.0939 0.0139
Infinite slope failure model
10% in 50 years 2% in 50 years 1% in 50 years
H μ σ μ σ μ σ
Base case 0.39 g 0.05 g 0.67 g 0.09 g 0.81 g 0.12 g
20 m 0.3236 0.0415 0.4648 0.0624 0.5663 0.0839
30 m 0.2475 0.0317 0.3488 0.0469 0.3891 0.0576
60 m 0.1653 0.0212 0.2017 0.0271 0.2481 0.0368
180 m 0.0862 0.0111 0.1186 0.0159 0.1428 0.0212
300 m 0.0668 0.0086 0.0963 0.0129 0.1196 0.0177

Displacements
The probability of failure, defined as an instantaneous instance of the factor of safety below
1.0 is not the most meaningful measure of the stability of the slope. If the factor of safety
just dropped below 1.0, there would actually be no displacement of the slope because the
factor of safety would be below 1.0 for a mere fraction of a second, providing no time
for displacements to occur. A more critical situation would exist if the factor of safety
were to become, say, 0.8. In that case, the slope would be in a state of failure for enough
time for some displacements to develop. The standard procedure for calculating expected
displacements in such a case is the Whitman-Newmark sliding block procedure (see,
for example, Kramer [10]). To calculate the displacement probabilities, the following
procedure was used: (1) compute the probabilities of occurrence of factors of safety of 1.0,
0.9, 0.8, etc., then (2) for each value of the factor of safety, compute the displacements in
the Whitman-Newmark algorithm with the time history corresponding to the appropriate
earthquake. This procedure involves propagating the earthquake up through the failure

mass and finding the computed displacements for sliding on intermediate planes. The
average of the displacements is taken as the best estimate of the displacements. The
displacements correspond to the probabilities of observing the corresponding factors of
safety.
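For readers unfamiliar with the sliding-block calculation, the sketch below is a simplified rigid-block Newmark integration: the block accumulates relative velocity whenever the input acceleration exceeds the yield acceleration (the level at which the factor of safety drops to 1.0), and displacement is the integral of that velocity. The record and yield value are invented; the authors additionally propagate each synthetic motion up through the failure mass before integrating, so this is only the core of the algorithm.

```python
def newmark_displacement(accel_g, dt, yield_accel_g, g=9.81):
    """Newmark rigid sliding-block displacement (m) for one horizontal record.

    accel_g: acceleration history in g, dt: time step (s), yield_accel_g:
    yield (critical) acceleration in g. Sliding accumulates while the relative
    velocity of the block is positive; only downslope sliding is tracked.
    """
    velocity = displacement = 0.0
    for a in accel_g:
        if velocity > 0.0 or a > yield_accel_g:
            rel_acc = (a - yield_accel_g) * g        # relative acceleration, m/s2
            velocity = max(velocity + rel_acc * dt, 0.0)
            displacement += velocity * dt
    return displacement

# Hypothetical record: a few strong pulses exceeding a 0.05 g yield acceleration.
record = [0.0, 0.08, 0.15, 0.10, -0.05, 0.12, 0.02, -0.10, 0.0]
print(round(newmark_displacement(record, dt=0.01, yield_accel_g=0.05), 5))
```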

results
The results for the Culmann single plane analysis of the 180 m slope and the 2% in 50
years earthquake are summarised in Table 4 and Figure  ➎. It can be seen that the
annual probabilities of displacements on the order of metres are quite small for this
earthquake. Note that the probabilities in Figure  ➎ are the annual probabilities, not
the probabilities of displacement given that the earthquake occurs. Calculations were
also carried out for the 30 m, 60 m, 180 m, and 300 m high infinite slopes with the input
acceleration corresponding to the 2% in 50 year event. Typical results are summarised in
Tables 5 and 6 and Figure  ➏. The tables show the probabilities of falling below different
levels of factor of safety for 60 m and 180 m slopes. The probabilities are expressed
as the probability in case the earthquake occurs and also as the annual probability,
which is simply the product of the first probability and the annual probability that the
earthquake occurs. The displacement for each threshold factor of safety is computed from
the Whitman-Newmark calculation in each amplification analysis.
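As a quick check of that product, the 60 m infinite slope in Table 5 has a conditional probability of 1.273 × 10⁻² of the factor of safety falling below 1.0; multiplying by the annual rate of the 2% in 50 years event, roughly 1/2475 per year, gives about 5.1 × 10⁻⁶ per year, which matches the tabulated annual probability.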

Table 4 Combination of Whitman-Newmark sliding block analyses with probabilities of exceedance for 180 m slope

Threshold FS W-N Displacement (m) Probability if EQ Occurs Annual Probability

1.0 0.000 2.63 x 10-4 1.05 x 10-7


0.9 0.027 3.83 x 10-6 1.53 x 10-9
0.8 0.111 2.11 x 10-8 8.45 x 10-12
0.7 0.265 4.35 x 10-11 1.74 x 10-14
0.6 0.592 3.32 x 10-14 1.33 x 10-17
0.5 1.376 9.34 x 10-18 3.74 x 10-21
0.4 2.849 9.66 x 10-22 3.86 x 10-25

Figure 5 Displacements for 2% in 50 years earthquake, 180 m high slope, Culmann wedge failure mode.


Table 5 Combination of Whitman-Newmark sliding block analyses with probabilities of exceedance for 60 m
infinite slope with water forces and 2% in 50 years earthquake

Threshold FS W-N Displacement (m) Probability if EQ Occurs Annual Probability

1.0 0.00002 1.273 x 10-2 5.143 x 10-6


0.9 0.00284 2.253 x 10-3 9.102 x 10-7
0.8 0.02159 2.841 x 10-4 1.148 x 10-7
0.7 0.12435 2.535 x 10-5 1.024 x 10-8
0.6 0.39327 1.593 x 10-6 7.240 x 10-10
0.5 0.87105 7.032 x 10-8 2.841 x 10-11
0.4 1.71650 2.174 x 10-9 8.783 x 10-13
0.3 3.28495 4.697 x 10-11 1.898 x 10-14
0.2 6.21350 7.084 x 10-13 2.861 x 10-15
0.1 14.72960 7.449 x 10-16 3.010 x 10-18

Table 6 Combination of Whitman-Newmark sliding block analyses with probabilities of exceedance for 180 m
infinite slope with water forces and 2% in 50 years earthquake

Threshold FS W-N Displacement (m) Probability if EQ Occurs Annual Probability

1.0 0.00000 6.915 x 10-5 2.794 x 10-8


0.9 0.00385 7.301 x 10-6 2.950 x 10-9
0.8 0.02467 5.929 x 10-7 2.396 x 10-10
0.7 0.11576 4.873 x 10-8 1.969 x 10-11
0.6 0.33075 1.765 x 10-9 7.133 x 10-13
0.5 0.86051 6.456 x 10-11 2.609 x 10-14
0.4 2.00927 1.805 x 10-12 7.291 x 10-16
0.3 4.62352 3.857 x 10-14 1.559 x 10-17
0.2 9.69848 6.297 x 10-16 2.544 x 10-19
0.1 22.60311 7.848 x 10-18 3.171 x 10-21

Figure 6 Displacement probabilities for 2% in 50 years event, infinite slope with water, all slope heights.

The slope stability and probability-of-failure analyses were also carried out for
the slope without flowing water. Displacement results for the 180 m slope are presented in
Table 7, and the displacements for all the slope heights are plotted in Figure  ➐.

Table 7 Combination of Whitman-Newmark sliding block analyses with probabilities of exceedance for 180 m
infinite slope without water forces and 2% in 50 years earthquake

Threshold FS W-N Displacement (m) Probability if EQ Occurs Annual Probability

1.0 0.00000 3.190 x 10-9 1.276 x 10-12


0.9 0.00385 1.118 x 10-10 4.471 x 10-14
0.8 0.02467 2.952 x 10-12 1.181 x 10-15
0.7 0.11576 5.870 x 10-14 2.348 x 10-17
0.6 0.33075 8.783 x 10-16 3.513 x 10-19
0.5 0.86051 9.883 x 10-18 3.953 x 10-21
0.4 2.00927 8.359 x 10-20 3.344 x 10-23
0.3 4.62352 5.313 x 10-22 2.125 x 10-25
0.2 9.69848 2.536 x 10-24 1.014 x 10-27
0.1 22.60311 9.091 x 10-27 3.636 x 10-30

Figure 7 Displacement probabilities for 2% in 50 years event, infinite slope without water, all slope heights.

An important observation is that the slope that is most likely to exhibit the largest
displacements is not the highest slope. In Figures  ➏ and ➐ the 30 m and 60 m slopes
have the largest motions.

conclusions
Preliminary results are in the form of probabilities of failure and probabilities of various
amounts of displacement during an earthquake. The tables and figures provide the details
of these results. Certain general trends can be observed.


First, the expected factor of safety in some cases falls at or below 1.0. This means that
the probability of a factor of safety below 1.0 in these cases is essentially the same as the
probability of occurrence of the earthquake. However, in other cases the probability of the
factor of safety falling below 1.0 is several orders of magnitude less than 1.0 even given
that the earthquake occurs, and the overall probability of failure is further reduced by
the low probability of occurrence of the earthquake in the first place.
Given that the earthquake occurs, the probability of significant displacement–that is,
displacement greater than about 0.3 m–in the infinite slope case for the earthquake with
2% exceedance probability in 50 years is on the order of 10-5 for a 30 m high slope. When
this is multiplied by the annual probability of the earthquake, the annual probability
is on the order of 10-9.
The lowest factors of safety, highest probabilities of failure, and greatest probabilities
of significant displacement are found for the lowest slope heights, not for the highest
slopes. This is a result of the de-amplification of seismic motions through the thicker
deposits of the higher slopes, which results in lower values of the seismic acceleration
factor in the higher slopes.
It is emphasised that these are preliminary results. Alternative scenarios must be
investigated. More important, the assumptions about material properties and geometry
must be verified or adjusted on the basis of local knowledge and experimental results.

references
Cornell, C. A. (1968) Engineering Seismic Risk Analysis, Bull. Seism. Soc. Am., Vol. 58, pp. 1583–1606. [1]

Crempien, J. (2007) Actualizacion Estudio de Riesgo Sismico Sector Rio Blanco – Sur Sur, Report to the
Andina Division of Codelco. [2]

Atkinson, G. M. & Boore, D. M. (2003) Empirical Ground-Motion Relations for Subduction-Zone Earthquakes
and Their Application to Cascadia and Other Regions, Bull. Seism. Soc. Am., Vol. 93, pp. 1703–1729. [3]

Youngs, R. R., Chiou, S.-J., Silva, W. J. & Humphrey, J. R. (1997) Strong Ground Motion Attenuation
Relationships for Subduction Zone Earthquakes, Seism. Res. Lett., Vol. 68, No. 1, pp. 58–73. [4]

Zhao, J. X., Zhang, J., Asano, A., Ohno, Y., Oouchi, T., Takahashi, T., Ogawa, H., Irikura, K., Thio, H.
K., Somerville, P. G., Fukushima, Y. & Fukushima, Y. (2006) Attenuation Relations of Strong Ground
Motion in Japan Using Site Classification Based on Predominant Period, Bull. Seism. Soc. Am., Vol. 96,
pp. 898–913. [5]

Boore, D. M. & Atkinson, G. M. (2008) Ground-Motion Prediction Equations for the Average Horizontal
component of pga, pgv, and 5%-Damped psa at Spectral Periods between 0.01 s and 10.0 s, Earthquake
Spectra, Vol. 24, pp. 99–138. [6]

Campbell, K. W. & Bozorgnia, Y. (2008) nga Ground Motion Model for the Geometric Mean Horizontal
Component of pga, pgv, pgd and 5% Damped Linear Elastic Response Spectra for Periods Ranging from
0.01 to 10 s, Earthquake Spectra, Vol. 24, pp. 139–171. [7]

Chiou, S.-J. & Youngs, R. R. (2008) An nga Model for the Average Horizontal Component of Peak Ground
Motion and Response Spectra, Earthquake Spectra, Vol. 24, pp. 173–215. [8]

Baecher, G. B. & Christian, J. T. (2003) Reliability and Statistics in Geotechnical Engineering, Chichester,
John Wiley & Sons. [9]

Kramer, Steven L. (1996) Geotechnical Earthquake Engineering, Upper Saddle River, nj, Prentice-Hall,
pp. 438–442. [10]
