Modeling Condition and Performance of Mining Equipment: Tad S. Golosinski and Hui Hu
The Gini index of a data set $S$, and of a binary split of $S$ into subsets $S_1$ (size $n_1$) and $S_2$ (size $n_2$) out of $n$ records:

$$gini(S) = 1 - \sum_j p_j^2$$

$$gini_{split}(S) = \frac{n_1}{n}\,gini(S_1) + \frac{n_2}{n}\,gini(S_2)$$
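As a concrete sketch of the criterion above (function names are illustrative, not from the paper), the Gini index and the weighted split index can be computed over lists of class labels:

```python
def gini(labels):
    """Gini index of a set: 1 minus the sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_split(left, right):
    """Weighted Gini index of a binary split into two subsets."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)
```

A pure split (each side holding one class) scores 0, while an even two-class mix scores 0.5; a tree builder picks the split with the lowest value.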
VIMS EVENT PREDICTION

[Figure: one-minute VIMS data snapshots for trucks 767_1 and 767_2, labeled Normal Engine Speed or High Engine Speed; predicted labels (Eng_1, Eng_2, Other) are keyed by Event_ID]
One-Minute Decision Tree
Total Errors = 120 (6.734%)
Predicted Class --> | Other | Eng1 | Eng3 | Eng2 | Eng4 | Eng6 | Eng5 |
----------------------------------------------------------------------------------------------------------------
Other | 1331 | 18 | 9 | 5 | 16 | 6 | 1 | total = 1386
Eng1 | 0 | 62 | 1 | 3 | 0 | 0 | 0 | total = 66
Eng3 | 0 | 11 | 51 | 2 | 2 | 1 | 0 | total = 67
Eng2 | 0 | 12 | 8 | 38 | 7 | 0 | 0 | total = 65
Eng4 | 0 | 3 | 7 | 2 | 55 | 0 | 1 | total = 68
Eng6 | 0 | 0 | 0 | 1 | 0 | 61 | 4 | total = 66
Eng5 | 0 | 0 | 0 | 0 | 0 | 0 | 64 | total = 64
--------------------------------------------------------------------------------------------------------------
1331 | 106 | 76 | 51 | 80 | 68 | 70 | total = 1782
Decision Tree: Training on One-Minute Data
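The "Total Errors" figures reported with each confusion matrix are simply the off-diagonal mass divided by the total count. A minimal sketch (the helper name is illustrative):

```python
def error_rate(matrix):
    """Overall error rate of a confusion matrix: off-diagonal count / total."""
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    return (total - correct) / total

# Training matrix for the one-minute data (rows: actual; columns: predicted).
training = [
    [1331, 18,  9,  5, 16,  6,  1],   # Other
    [   0, 62,  1,  3,  0,  0,  0],   # Eng1
    [   0, 11, 51,  2,  2,  1,  0],   # Eng3
    [   0, 12,  8, 38,  7,  0,  0],   # Eng2
    [   0,  3,  7,  2, 55,  0,  1],   # Eng4
    [   0,  0,  0,  1,  0, 61,  4],   # Eng6
    [   0,  0,  0,  0,  0,  0, 64],   # Eng5
]
# 120 errors out of 1782 records, i.e. 6.734%.
```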
Total Errors = 24 (24%)
Predicted Class --> | Other | Eng1 | Eng3 | Eng2 | Eng4 | Eng6 | Eng5 |
-----------------------------------------------------------------------------------------------------------
Other | 59 | 3 | 0 | 2 | 3 | 0 | 1 | total = 68
Eng1 | 4 | 1 | 0 | 1 | 0 | 0 | 0 | total = 6
Eng3 | 0 | 3 | 1 | 0 | 1 | 0 | 0 | total = 5
Eng2 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | total = 4
Eng4 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | total = 4
Eng6 | 0 | 0 | 0 | 0 | 0 | 7 | 0 | total = 7
Eng5 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | total = 6
-----------------------------------------------------------------------------------------------------------
65 | 9 | 2 | 5 | 5 | 7 | 7 | total = 100
Decision Tree: Test #1 on One-Minute Data
Decision Tree: Test #2 on One-Minute Data
Total Errors = 35 (17.86%)
Predicted Class --> | Other | Eng1 | Eng3 | Eng2 | Eng4 | Eng6 | Eng5 |
--------------------------------------------------------------------------------------------------------
Other | 141 | 9 | 2 | 4 | 4 | 0 | 0 | total = 160
Eng1 | 2 | 2 | 1 | 1 | 0 | 0 | 0 | total = 6
Eng3 | 2 | 1 | 2 | 0 | 1 | 0 | 0 | total = 6
Eng2 | 2 | 1 | 2 | 1 | 0 | 0 | 0 | total = 6
Eng4 | 1 | 0 | 1 | 1 | 3 | 0 | 0 | total = 6
Eng6 | 0 | 0 | 0 | 0 | 0 | 6 | 0 | total = 6
Eng5 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | total = 6
---------------------------------------------------------------------------------------------------------
148 | 13 | 8 | 7 | 8 | 6 | 6 | total = 196
Two-Minute Decision Tree
Total Errors = 51 (5.743%)
Predicted Class --> | OTHER | ENG1 | ENG2 | ENG3 |
---------------------------------------------------------------------
OTHER | 657 | 6 | 19 | 3 | total = 685
ENG1 | 0 | 62 | 10 | 0 | total = 72
ENG2 | 0 | 13 | 54 | 0 | total = 67
ENG3 | 0 | 0 | 0 | 64 | total = 64
---------------------------------------------------------------------
657 | 81 | 83 | 67 | total = 888
Decision Tree
Training on Two-Minute Data Sets
Total Errors = 14 (29.79%)
Predicted Class --> | OTHER | ENG1 | ENG2 | ENG3 |
---------------------------------------------------------------------
OTHER | 28 | 5 | 4 | 1 | total = 38
ENG1 | 1 | 0 | 0 | 0 | total = 1
ENG2 | 2 | 1 | 1 | 0 | total = 4
ENG3 | 0 | 0 | 0 | 4 | total = 4
---------------------------------------------------------------------
31 | 6 | 5 | 5 | total = 47
Decision Tree
Test #1 on Two-Minute Data
Total Errors = 15 (15.31%)
Predicted Class --> | OTHER | ENG1 | ENG2 | ENG3 |
---------------------------------------------------------------------
OTHER | 71 | 8 | 1 | 0 | total = 80
ENG1 | 3 | 3 | 0 | 0 | total = 6
ENG2 | 0 | 3 | 3 | 0 | total = 6
ENG3 | 0 | 0 | 0 | 6 | total = 6
---------------------------------------------------------------------
74 | 14 | 4 | 6 | total = 98
Decision Tree
Test #2 on Two-Minute Data
Three-Minute Decision Tree
Total Errors = 28 (4.878%)
Predicted Class --> | OTHER | ENG1 | ENG2 |
----------------------------------------------------
OTHER | 411 | 23 | 4 | total = 438
ENG1 | 1 | 65 | 0 | total = 66
ENG2 | 0 | 0 | 70 | total = 70
----------------------------------------------------
412 | 88 | 74 | total = 574
Decision Tree
Training on Three-Minute Data
Total Errors = 12 (19.05%)
Predicted Class --> | OTHER | ENG1 | ENG2 |
----------------------------------------------------
OTHER | 42 | 9 | 0 | total = 51
ENG1 | 3 | 5 | 0 | total = 8
ENG2 | 0 | 0 | 4 | total = 4
----------------------------------------------------
45 | 14 | 4 | total = 63
Decision Tree
Test #1 on Three-Minute Data
Decision Tree
Test #2 on Three-Minute Data
Total Errors = 9 (14.06%)
Predicted Class --> | OTHER | ENG1 | ENG2 |
----------------------------------------------------
OTHER | 47 | 5 | 0 | total = 52
ENG1 | 4 | 2 | 0 | total = 6
ENG2 | 0 | 0 | 6 | total = 6
----------------------------------------------------
51 | 7 | 6 | total = 64
Decision Tree Summary
- One-Minute model needs a more complex tree structure
- One-Minute model gives low accuracy of predictions
- Three-Minute decision tree model gives reasonable accuracy of predictions
- Based on test #1:
  - Other: 13% error rate
  - Eng1: 50% error rate
  - Eng2: 0% error rate
- Other approach?
Backpropagation
A Neural Network Classification Algorithm
[Figure: feed-forward network with an input layer, a hidden layer, and an output layer. Characteristic: each output corresponds to a possible classification.]

Some choices for the activation function $f(z)$:

$$f(z) = \frac{1}{1 + e^{-z}} \quad \text{(sigmoid)}$$

$$f(z) = \frac{1 - e^{-2z}}{1 + e^{-2z}} \quad \text{(tanh)}$$

Node detail: a node with inputs $x_1, x_2, x_3$ and weights $w_1, w_2, w_3$ computes

$$z = \sum_i w_i x_i, \qquad \text{output} = f(z)$$
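The node detail above can be sketched in a few lines (function names are illustrative assumptions, not from the paper):

```python
import math

def sigmoid(z):
    """f(z) = 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + math.exp(-z))

def tanh_act(z):
    """f(z) = (1 - e^-2z) / (1 + e^-2z), algebraically equal to tanh(z)."""
    return (1.0 - math.exp(-2.0 * z)) / (1.0 + math.exp(-2.0 * z))

def node_output(x, w, f=sigmoid):
    """Single node: weighted sum z = sum_i w_i * x_i, then activation f(z)."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return f(z)
```

With weights summing the inputs to zero, the sigmoid node outputs exactly 0.5, the midpoint between the two saturation levels.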
34
SSQ Error Function: Minimize the Sum of Squares

$$\min_W E = \frac{1}{2} \sum_{k=1}^{m} (t_k - y_k)^2$$

where $y_k$ (the output) is a function of the weights $w_{j,k}$, and $t_k$ is the true value.

Training adjusts each weight $W_{j,k}$, solving $\partial E / \partial W_{j,k} = 0$ for $W_{j,k}$.

In the graph: $E_p$ is the sum-of-squares error; $\nabla E_p$ is the gradient (the direction of maximum function increase).

Freeman & Skapura, Neural Networks, Addison Wesley, 1992
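A minimal sketch of the gradient-descent idea behind backpropagation, reduced to a single linear weight rather than a full network (the names `train` and `eta` are illustrative assumptions):

```python
def train(xs, ts, w=0.0, eta=0.01, steps=1000):
    """Gradient descent on E = 1/2 * sum_k (t_k - y_k)^2 with y_k = w * x_k."""
    for _ in range(steps):
        # dE/dw = -sum_k (t_k - w * x_k) * x_k
        grad = -sum((t - w * x) * x for x, t in zip(xs, ts))
        w -= eta * grad  # step against the gradient, toward lower error
    return w
```

For targets generated by t = 2x, the weight converges to 2; a full backpropagation network applies the same update to every weight, with the chain rule supplying each partial derivative.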
Neural Network Modeling Results
Three-Minute training set
Neural Network Modeling Results
Three-Minute set: Test #1 and Test #2
[Charts: Test #1 and Test #2 results]
NN Summary
- Insufficient data for one-minute and two-minute prediction models
- Three-minute network shows better performance than the decision tree model:
  - Other: 17% error rate
  - Eng1: 28% error rate
  - Eng2: 20% error rate
Conclusions

- A predictive model can be built
- The Neural Network model is more accurate than the Decision Tree one
  - Based on all data
- Overall accuracy is not sufficient for practical applications
- More data is needed to train and test the models
References

- Tad Golosinski and Hui Hu, "Failure Pattern Recognition of a Mining Truck with a Decision Tree Algorithm," Mineral Resources Engineering, 2002 (?)
- Tad Golosinski and Hui Hu, "Intelligent Miner: Data Mining Application for Modeling VIMS Condition Monitoring Data," ANNIE 2001, St. Louis
- Tad Golosinski and Hui Hu, "Data Mining VIMS Data for Information on Truck Condition," APCOM 2001, Beijing, P.R. China