
MIDLANDS STATE UNIVERSITY

FACULTY OF MINING AND MINERAL PROCESSING ENGINEERING

ZVISHAVANE, ZIMBABWE

CHIKONYE NYASHA CEPHAS (R177365J)

ANUKWAWO EMMANUEL (R178090Y)

ALLEN MBERIKUNASHE (R176776A)

LEVEL : 3.2

MODULE : HMINE321

LECTURER : MR GUMBO

ASSIGNMENT : 1
Q1) Characteristics of the following database modelling:

i) Stress field modelling


ii) Rock mass modelling
iii) Ore body modelling [15]
[i] Stress field modelling:

The vertical stress component σv originates from gravity, and its magnitude at a given depth is expressed as the weight of the overlying rock mass. The other two stress field components are the minimum horizontal stress σh and the maximum horizontal stress σH. A state of stress in which these three components are equal is known as lithostatic stress. A biaxial stress state assumes that the two horizontal stress components are equal to each other but differ from the vertical component.

The assumption of a lithostatic stress state as we penetrate deeper into the Earth's crust is known as Heim's rule. This may be expressed as:

σv = σh = σH = γ · z

where γ is the unit weight of the overlying rock mass and z is the depth below surface.

Many measurements confirm that a lithostatic stress state exists below a depth of about 3000 m, while horizontal stresses may be up to 3.5 times higher than the vertical stress at depths down to 300 m. Terzaghi and Richart (1952) came to the conclusion that in undisturbed sedimentary rock masses a biaxial stress field exists:

σh = σH = [ν / (1 − ν)] · σv

where ν is Poisson's ratio of the rock.
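To make the two relations above concrete, the following is a minimal Python sketch; the unit weight of 27 kN/m³ and Poisson's ratio of 0.25 are illustrative assumptions, not site-specific measurements.

```python
# Minimal sketch of the in-situ stress estimates discussed above.
# The unit weight (27 kN/m^3) and Poisson's ratio (0.25) are assumed
# illustrative values, not measured data.

def vertical_stress_mpa(depth_m, unit_weight_kn_m3=27.0):
    """Vertical stress from the weight of the overlying rock: sigma_v = gamma * z."""
    return unit_weight_kn_m3 * depth_m / 1000.0  # kN/m^2 -> MPa

def horizontal_stress_mpa(sigma_v_mpa, poissons_ratio=0.25):
    """Terzaghi-Richart estimate for undisturbed sedimentary rock: sigma_h = nu/(1-nu) * sigma_v."""
    return poissons_ratio / (1.0 - poissons_ratio) * sigma_v_mpa

if __name__ == "__main__":
    z = 1000.0  # depth in metres
    sv = vertical_stress_mpa(z)
    sh = horizontal_stress_mpa(sv)
    print(f"At {z:.0f} m: sigma_v = {sv:.1f} MPa, sigma_h = sigma_H = {sh:.1f} MPa")
```

At 1000 m this gives a vertical stress of about 27 MPa and horizontal stresses of about 9 MPa under the assumed values.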

The elastic modulus (Young's modulus) of rock is not constant, whether the specimen is tested for unconfined or triaxial compressive strength. Figure 1 illustrates the strain change for different load intervals prior to the unconfined compression test.

A stress field model has to consider the following aspects (a minimal configuration sketch follows the list):

• Choice of a suitable numerical simulation technique and code
• Incorporation of geological layering and formations (stratigraphy)
• Consideration of discontinuities such as faults, fractures, bedding planes and interfaces
• Choice of appropriate constitutive laws and parameters for describing the geological units and discontinuities
• Consideration of groundwater (pore and joint water pressure)
• Consideration of topography
• Incorporation of geological history, especially erosion
• Choice of appropriate boundary conditions, especially tectonic stresses
• Consideration of available measurement results and indicators for calibration
• Determination of appropriate model dimensions and meshing
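As a rough illustration of how these aspects might be gathered before setting up a numerical run, the sketch below collects them into a single configuration record. Every name and value in it is a hypothetical placeholder made up for this sketch, not the input format of any particular simulation code.

```python
# Illustrative only: a hypothetical configuration record collecting the
# aspects listed above for a stress field model. All names and values
# are assumptions for the sketch, not a real modelling code's input.

stress_field_model = {
    "numerical_code": "finite element",             # choice of simulation technique/code
    "stratigraphy": ["overburden", "ore zone", "footwall"],
    "discontinuities": ["fault_A", "bedding_planes"],
    "constitutive_law": "Mohr-Coulomb",             # per geological unit
    "groundwater": {"pore_pressure": True, "joint_water_pressure": True},
    "topography": True,
    "geological_history": {"eroded_thickness_m": 500},   # assumed erosion
    "boundary_conditions": {"tectonic_stress_MPa": 5.0},
    "calibration_data": ["overcoring measurements"],
    "model": {"dimensions_m": (2000, 2000, 1500), "mesh_size_m": 25},
}

for key, value in stress_field_model.items():
    print(f"{key}: {value}")
```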

[ii] Rock mass model

Among the empirical formulae for failure of rock materials, the Mohr-Coulomb failure criterion is generally accepted to be one of the most applicable means of strength analysis of rock materials. The theory is based on the Coulomb-Navier criterion for shear failure, expressed as:

τ = τ0 + μ · σn

where σn and τ are the normal and shear stresses across the failure plane, τ0 is the inherent shear strength (cohesion) of the rock and μ is the coefficient of internal friction of the material.

Generalising the above model, the Mohr principle allows a non-linear criterion for rock failure to be written in the form:

τ = f(σn)

where f is the Mohr failure envelope determined experimentally.
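The Coulomb-Navier criterion can be turned into a simple strength check, as in the Python sketch below; the cohesion (10 MPa) and coefficient of internal friction (0.6) are illustrative assumptions, not laboratory results.

```python
# A minimal sketch of the Coulomb-Navier shear failure check described above.
# tau0 (inherent shear strength) and mu (coefficient of internal friction)
# are illustrative values, not laboratory data. Stresses are in MPa.

def shear_strength(sigma_n, tau0=10.0, mu=0.6):
    """Coulomb-Navier shear strength: tau = tau0 + mu * sigma_n."""
    return tau0 + mu * sigma_n

def fails_in_shear(sigma_n, tau, tau0=10.0, mu=0.6):
    """Return True if the applied shear stress exceeds the available strength."""
    return tau > shear_strength(sigma_n, tau0, mu)

if __name__ == "__main__":
    sigma_n, tau = 20.0, 25.0
    print("Available strength:", shear_strength(sigma_n))  # 10 + 0.6*20 = 22 MPa
    print("Failure?", fails_in_shear(sigma_n, tau))         # 25 > 22 -> True
```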

[iii] Orebody modelling techniques are computerized representations of portions of the Earth's crust based on geological and geophysical observations made on and below the Earth's surface. Ore body models are the numerical equivalent of a 3D geological map, complemented by a description of physical quantities in the domain of interest.

Its objectives are:

• To better understand the shape of the orebody and make predictions as to further activity in the field.
• To interpret the subsurface, since the information involved is examined along both horizontal slices (plan maps) and vertical cross sections (drill sections).
• To predict the continuity of the subsurface geology by extrapolation from known sample data into areas that are not yet drilled.

It also involves compositing.

Compositing: this is the standard processing technique for regularizing the length or height of desurveyed drill hole samples. The sample assay data are combined by computing weighted averages over longer intervals to provide a smaller number of data of greater length for use in developing the resource estimate. It involves two processes (a minimal compositing sketch is given after the list):

(i) Compositing down drill holes

(ii) Compositing over benches
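The Python sketch below illustrates down-hole compositing by length-weighted averaging; the sample intervals, grades and the 2 m composite length are made-up values used only to show the calculation.

```python
# A minimal sketch of down-hole compositing: assay intervals are combined
# into fixed-length composites using length-weighted averages. The sample
# intervals and grades below are invented illustrative data.

def composite_downhole(intervals, composite_length=2.0):
    """intervals: list of (from_m, to_m, grade). Returns (from_m, to_m, grade) composites."""
    if not intervals:
        return []
    composites = []
    comp_from = intervals[0][0]
    hole_end = intervals[-1][1]
    while comp_from < hole_end:
        comp_to = min(comp_from + composite_length, hole_end)
        weighted, total = 0.0, 0.0
        for frm, to, grade in intervals:
            overlap = min(to, comp_to) - max(frm, comp_from)
            if overlap > 0:
                weighted += grade * overlap   # grade weighted by overlapping length
                total += overlap
        if total > 0:
            composites.append((comp_from, comp_to, weighted / total))
        comp_from = comp_to
    return composites

samples = [(0.0, 1.0, 1.2), (1.0, 2.5, 0.8), (2.5, 4.0, 2.0)]  # (from, to, g/t)
for frm, to, grade in composite_downhole(samples):
    print(f"{frm:.1f}-{to:.1f} m: {grade:.2f} g/t")
```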


Procedure in building a 3D model :

(a)Starts with an empty chunk of a 3-D space on a paper or computer screen

(b)Later on,the drill hole information is brought to that space as it exists in the real world

(c)The 3D space will be divided into zones representing different rock types and different zones
of mineralization

A block model is developed in order to determine the grade. A block model is composed of rectangular blocks or cells, each of which has attributes such as grade, rock type, average density, resource classification and oxidation codes. The blocks are constructed based on the wireframes and drill hole information previously created. The resulting block model is used when estimating grade into the model cells. Figure 1 below shows a block model.
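The Python sketch below illustrates the idea of such a block model; the block size, attribute codes and grades are assumed for illustration and do not come from any real deposit.

```python
# A minimal sketch of a block model as described above: regular cells, each
# carrying attributes such as grade, rock type and density. Block size,
# codes and grades are invented illustrative values.
from dataclasses import dataclass

@dataclass
class Block:
    i: int                           # block indices in the model grid
    j: int
    k: int
    grade: float = 0.0               # estimated grade, e.g. g/t
    rock_type: str = "WASTE"         # rock type / domain code
    density: float = 2.7             # average density, t/m^3
    classification: str = "INFERRED"
    oxidation: str = "FRESH"

# Build a small 3 x 3 x 2 model of 10 m cubic blocks (illustrative only).
model = [Block(i, j, k) for i in range(3) for j in range(3) for k in range(2)]

# Tag one block as mineralised, as a wireframe/estimation step would do.
model[0].rock_type, model[0].grade = "ORE", 1.5

ore_tonnes = sum(10**3 * b.density for b in model if b.rock_type == "ORE")
print(f"{len(model)} blocks, ore tonnage: {ore_tonnes:.0f} t")
```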

Q2) Briefly explain the listed data mining functionalities and give examples of each functionality, using a real-life database that you are familiar with.

i) Characterization

Data characterization is a summarization of the general features of objects in a target class and produces what are called characteristic rules. Note that with a data cube containing a summarization of data, simple OLAP operations fit the purpose of data characterization.

For example, one may want to characterize the OurVideoStore customers who regularly rent
more than 30 movies a year. With concept hierarchies on the attributes describing the target
class, the attribute-oriented induction method can be used, for example, to carry out data
summarization.
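A small Python sketch of this kind of summarization is shown below; OurVideoStore is the textbook example database, and the customer records are invented for illustration.

```python
# A minimal sketch of data characterization on a hypothetical OurVideoStore
# rental table: summarise the general features of the target class
# (customers renting more than 30 movies a year). Records are made up.

customers = [
    {"id": 1, "age": 23, "city": "Gweru",      "rentals_per_year": 42},
    {"id": 2, "age": 35, "city": "Zvishavane", "rentals_per_year": 12},
    {"id": 3, "age": 27, "city": "Gweru",      "rentals_per_year": 55},
    {"id": 4, "age": 41, "city": "Harare",     "rentals_per_year": 31},
]

target = [c for c in customers if c["rentals_per_year"] > 30]   # target class

summary = {
    "count": len(target),
    "avg_age": sum(c["age"] for c in target) / len(target),
    "cities": sorted({c["city"] for c in target}),
}
print(summary)   # a characteristic description of the frequent renters
```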

ii) Discrimination

Data discrimination produces what are called discriminant rules and is basically the comparison
of the general features of objects between two classes referred to as the target class and the
contrasting class.

For example, one may want to compare the general characteristics of the customers who rented more than 30 movies in the last year with those who rented fewer than 5 in the same period.
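The contrast between the two classes can be sketched in a few lines of Python; the OurVideoStore-style records below are invented, and the comparison of average age is just one possible discriminant feature.

```python
# A minimal sketch of data discrimination: contrast the general features of
# the target class (frequent renters) with the contrasting class
# (infrequent renters). The records are made up for illustration.

customers = [
    {"age": 23, "rentals_per_year": 42},
    {"age": 27, "rentals_per_year": 55},
    {"age": 61, "rentals_per_year": 3},
    {"age": 58, "rentals_per_year": 2},
]

def avg_age(group):
    return sum(c["age"] for c in group) / len(group)

target      = [c for c in customers if c["rentals_per_year"] > 30]  # > 30 rentals
contrasting = [c for c in customers if c["rentals_per_year"] < 5]   # < 5 rentals

# A (very small) discriminant observation: frequent renters are younger here.
print("avg age, target class:     ", avg_age(target))
print("avg age, contrasting class:", avg_age(contrasting))
```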

iii) Association
Association analysis is the discovery of what are commonly called association rules. It studies the frequency of items occurring together in transactional databases and, based on a threshold called support, identifies the frequent itemsets. Another threshold, confidence, which is the conditional probability that an item appears in a transaction when another item appears, is used to pinpoint association rules.

For example, it could be useful for the “OurVideoStore” manager to know which movies are often rented together, or whether there is a relationship between renting a certain type of movie and buying popcorn or pop.
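The support and confidence calculations can be sketched in a few lines of Python; the transactions below are invented, and the rule checked ("rented a horror movie implies bought popcorn") is only an example.

```python
# A minimal sketch of association analysis on made-up OurVideoStore
# transactions: compute support for an itemset and confidence for the rule
# {horror_movie} -> {popcorn}.

transactions = [
    {"horror_movie", "popcorn"},
    {"horror_movie", "popcorn", "pop"},
    {"comedy_movie", "pop"},
    {"horror_movie"},
    {"comedy_movie", "popcorn"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent):
    """Conditional probability of the consequent given the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

rule_from, rule_to = {"horror_movie"}, {"popcorn"}
print("support:   ", support(rule_from | rule_to))   # 2/5 = 0.4
print("confidence:", confidence(rule_from, rule_to)) # 0.4/0.6 ~ 0.67
```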

iv) Classification

Classification analysis is the organization of data in given classes. Also known as supervised classification, it uses given class labels to order the objects in the data collection. Classification approaches normally use a training set where all objects are already
associated with known class labels. The classification algorithm learns from the training set
and builds a model. The model is used to classify new objects.

For example, after starting a credit policy, the “OurVideoStore” managers could analyze the
customers’ behaviors vis-à-vis their credit, and label accordingly the customers who received
credits with three possible labels “safe”, “risky” and “very risky”.

The classification analysis would generate a model that could be used to either accept or reject
credit requests in the future.
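A minimal sketch of this train-then-classify workflow is given below, using scikit-learn's decision tree on an invented training set; the chosen features (rentals per year, late returns, unpaid balance) and the labels are assumptions made purely for illustration.

```python
# A minimal sketch of classification for the credit example above: train a
# decision tree on a made-up OurVideoStore training set, then classify a
# new customer. Features and labels are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Features: [rentals_per_year, late_returns, unpaid_balance]
X_train = [
    [40, 0, 0.0],    # safe
    [35, 1, 5.0],    # safe
    [10, 4, 40.0],   # risky
    [12, 3, 25.0],   # risky
    [2, 8, 120.0],   # very risky
    [1, 9, 90.0],    # very risky
]
y_train = ["safe", "safe", "risky", "risky", "very risky", "very risky"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Classify a new credit applicant and use the label to accept or reject.
new_customer = [[20, 2, 10.0]]
label = model.predict(new_customer)[0]
print("predicted label:", label)
print("decision:", "accept" if label == "safe" else "review or reject")
```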

v) Prediction

Prediction has attracted considerable attention given the potential implications of successful
forecasting in a business context. There are two major types of predictions: one can either try
to predict some unavailable data values or pending trends or predict a class label for some data.
The latter is tied to classification.

Once a classification model is built based on a training set, the class label of an object can be
foreseen based on the attribute values of the object and the attribute values of the classes.
Prediction is, however, more often referred to as the forecast of missing numerical values, or increase/decrease trends in time-related data.
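As a sketch of this numerical-forecast sense of prediction, the example below fits a linear regression to invented monthly rental counts for the hypothetical OurVideoStore and predicts the next, unavailable value.

```python
# A minimal sketch of numerical prediction: forecast next month's rentals
# from the trend of previous months using linear regression. The monthly
# figures are made-up illustrative data.
from sklearn.linear_model import LinearRegression

months = [[1], [2], [3], [4], [5], [6]]       # time index
rentals = [410, 430, 455, 470, 500, 520]      # rentals per month (invented)

model = LinearRegression().fit(months, rentals)
forecast = model.predict([[7]])[0]            # predict the unavailable value
print(f"forecast rentals for month 7: {forecast:.0f}")
```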

Q3) Briefly explain how a data warehouse is different from a database. (10)
Data warehouses and databases are both relational data systems, but were built to serve
different purposes.

i) Storage

A data warehouse is built to store large quantities of historical data and enable fast, complex
queries across all the data, typically using Online Analytical Processing (OLAP). A database
was built to store current transactions and enable fast access to specific transactions for ongoing
business processes, known as Online Transaction Processing (OLTP).

ii) Optimization

A database is optimized to maximize the speed and efficiency with which data is updated
(added, modified, or deleted) and enable faster analysis and data access. Databases use Online
Transactional Processing (OLTP) to delete, insert, replace, and update large numbers of short
online transactions. Other features include fast query processing, multi-access data integrity, and a high number of processed transactions per second. Databases performing OLTP transactions contain and maintain current, detailed data from a single source.

Data warehouses use Online Analytical Processing (OLAP) that is optimized to handle a low
number of complex queries on aggregated large historical data sets. Tables are denormalized
and transformed to yield summarized data, multidimensional views, and faster query response
times. Additionally, query response times are used to measure an OLAP system’s effectiveness.

iii) Data Structure

Databases use a normalized data structure. Data normalization means reorganizing data so that
it contains no redundant data, and all related data items are stored together, with related data
separated into multiple tables. Normalizing data ensures the database takes up minimal disk space while keeping response times fast.

The data in a data warehouse does not need to be organized for quick transactions. Therefore,
data warehouses normally use a denormalized data structure. A denormalized data structure
uses fewer tables because it groups data and doesn’t exclude data redundancies.
Denormalization offers better performance when reading data for analytical purposes.
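The contrast can be sketched with SQLite from Python's standard library; the table and column names below are invented for illustration, with a normalized pair of tables that must be joined for analysis against one denormalized table that answers the same analytical query directly.

```python
# A minimal sketch of normalized (database/OLTP) versus denormalized
# (data warehouse/OLAP) structures, using an in-memory SQLite database.
# All table and column names are made up for illustration.
import sqlite3

con = sqlite3.connect(":memory:")

# Normalized: no redundancy, related data split across tables.
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, city TEXT)")
con.execute("CREATE TABLE rentals (customer_id INTEGER, movie TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Gweru"), (2, "Harare")])
con.executemany("INSERT INTO rentals VALUES (?, ?)",
                [(1, "Movie A"), (1, "Movie B"), (2, "Movie A")])

# The analytical query needs a join across the normalized tables.
print(con.execute("""SELECT c.city, COUNT(*) FROM rentals r
                     JOIN customers c ON c.id = r.customer_id
                     GROUP BY c.city""").fetchall())

# Denormalized: the city is repeated on every row, but no join is needed.
con.execute("CREATE TABLE rentals_wide (customer_id INTEGER, city TEXT, movie TEXT)")
con.executemany("INSERT INTO rentals_wide VALUES (?, ?, ?)",
                [(1, "Gweru", "Movie A"), (1, "Gweru", "Movie B"), (2, "Harare", "Movie A")])
print(con.execute("SELECT city, COUNT(*) FROM rentals_wide GROUP BY city").fetchall())
```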

iv) Data Timeline


A database processes day-to-day transactions within an organization. Therefore, databases
typically don’t contain historical data—current data is all that matters in a normalized relational
database.

Data warehouses are used for analytical purposes and business reporting. Data warehouses
typically store historical data by integrating copies of transaction data from disparate sources.
Data warehouses can also use real-time data feeds for reports that use the most current,
integrated information.

v) Analysis

While databases are normally used for transactional purposes, analytical queries can still be
performed on the data. The problem is that the complexity of the data’s normalized
organization makes analytical queries difficult to carry out. A skilled developer or analyst will
be required to create such analytical queries. The depth of analysis is limited to static one-time
reports because databases just give a snapshot overview of data at a specific time.

The structure of data warehouses makes analytical queries much simpler to perform. No
advanced knowledge of database applications is required. Analytics in data warehouses is
dynamic, meaning it takes into account data that changes over time.

vi) Concurrent Users

An OLTP database supports thousands of concurrent users. Many users must be able to interact
with the database simultaneously without it affecting the system’s performance.

Data warehouses support a limited number of concurrent users compared to operational systems. The data warehouse is separated from front-end applications and it relies on complex queries, thus necessitating a limit on how many people can use the system simultaneously.
