Standard Definitions For The Benchmarking of Availability and Utilization of Equipment
Dedication
This report is dedicated to the memory of Ian Muirhead, who at the time of his
untimely passing was Director of the Department of Mining and Petroleum
Engineering at the University of Alberta. Ian understood industry's need to
conduct this study, and provided the impetus to get the project off the ground.
Credit goes to Blair Tuck, whose research for his Master's project formed the
basis for the conclusions reached within this report.
Credit is also extended to Chris Barclay of Luscar Ltd. and Denise Duncan of
Syncrude Canada Ltd., who as members of the industry steering committee for
the project provided the initial direction and assistance in conducting the
surveys.
Executive Summary
The survey found that the formulas and definitions for availability and utilization
parameters were similar; however, differences in the meanings behind the
formulas, and in the classification of events occurring in the course of
operating a mine, created inconsistencies in reporting. While it is possible to
derive common definitions for operating parameters, comparison is meaningless
without addressing the discrepancies that occur at a more fundamental level: the
classification of operating events into time categories.
With this finding, the project objectives shifted to identifying the fundamental
differences in the way operations classify normal operating events. The
results are summarized in the report.
Background
The motivation for this study was to enhance the collective efficiency of the
Canadian mining industry by enabling sharing of information on operating
performance.
One success story in mining industry collaboration is the Large Tire User Group.
Under the auspices of the Surface Mining Association for Research and Technology
(SMART), the Large Tire User Group established a multi-company large tire
database, which was successful in compiling consumption information, sharing
large tire testing data, and sharing procedures for tire and rim maintenance.
Despite its benefits, the mining industry in general has lagged other industries in
the adoption of benchmarking. Some of the barriers limiting the application of
benchmarking are:
Project
The project was coordinated through the University of Alberta, with the research
forming the basis for a graduate-level thesis. A steering committee was formed
from the project participants to direct the project.
The second and third phases were contingent on successful execution of Phase
1, which required the acceptance of proposed standard reporting definitions.
This report summarizes the conclusions of Phase 1, the survey of current practice,
and provides recommendations to enable action toward phases 2 and 3.
The first stage of data collection took place in late February and early March of
1998. This stage consisted of site visits to eight surface mines, allowing
participants to elaborate on their responses.
The original survey was revised into a mail survey and sent to forty-four large surface
mines in Canada and another fifty-five in the United States. Seventeen more
responses were received, for a total of twenty-five.
Results
For illustrative purposes, the flow of information from which the performance
definitions are derived is reflected in Figure 1. During the course of a day, various
planned and unplanned events occur. These events are recorded either
manually or electronically through an automated data collection system. Based
on established rules and guidelines developed over the history of the operation,
these events are coded to defined time classifications, again either manually or
electronically. These classifications are, for the most part, common to the
mining industry, and they make up the terms in the definitions of the performance
measures used in the industry.
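To illustrate this coding step, the following minimal sketch (in Python, with hypothetical event names, time classes, and rules that are not drawn from any surveyed operation) shows how recorded shift events might be mapped to time classifications by a site-defined rule table.

# Minimal sketch of the event-coding step described above. Event names, time
# classes, and the rule table are hypothetical; each operation maintains its own rules.
TIME_CLASS_RULES = {
    "haul_cycle": "net_operating",
    "fuelling": "operating_delay",
    "shift_change": "operating_delay",
    "no_operator": "standby",
    "planned_shutdown": "scheduled_outage",
    "breakdown_repair": "down",
}

def classify(events):
    """Accumulate hours in each time classification from (event, hours) records."""
    totals = {}
    for name, hours in events:
        time_class = TIME_CLASS_RULES.get(name, "unclassified")
        totals[time_class] = totals.get(time_class, 0.0) + hours
    return totals

shift_events = [("haul_cycle", 9.5), ("fuelling", 0.5),
                ("breakdown_repair", 1.5), ("no_operator", 0.5)]
print(classify(shift_events))
# {'net_operating': 9.5, 'operating_delay': 0.5, 'down': 1.5, 'standby': 0.5}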
In most cases, total hours consisted of either scheduled hours (the sum of operating,
delay, standby, and down hours) or calendar hours.
The other significant difference was in the use of the term "operating hour".
Several operations made the distinction between a net, or "pure", operating hour
and a gross operating hour, which includes operating delay. Among the respondents
using the term "operating hour" alone, the meanings varied from a pure operating
hour (similar to a net operating hour) to an operating hour which includes
delay.
The most significant difference between operations that affects the ability to
compare results is the allocation of events to the time classification terms
making up the formulas. For example, operations comparing on the basis of
mechanical availability, which excludes standby or idle time, may be affected
by differences in what is considered operating delay versus standby time at the
individual operations. Including planned downtime in idle or standby time
results in a different availability than treating planned outages as scheduled
outages.
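As a hedged numeric illustration, the sketch below assumes one common form of the calculation, availability = (scheduled hours - down hours) / scheduled hours, which is not necessarily the formula used at any surveyed operation. It shows how the same week of events yields different availability figures depending on whether an eight-hour planned outage is coded as a scheduled outage or left in standby time.

# Illustrative only: assumes availability = (scheduled - down) / scheduled hours.
CALENDAR = 168.0       # hours in the week
DOWN = 20.0            # unplanned downtime, identical at both mines
PLANNED_OUTAGE = 8.0   # the same planned shutdown, coded differently below

# Mine A codes the planned shutdown as a scheduled outage (removed from scheduled hours).
scheduled_a = CALENDAR - PLANNED_OUTAGE
avail_a = (scheduled_a - DOWN) / scheduled_a       # 140 / 160 = 87.5%

# Mine B codes the planned shutdown as standby (it remains in scheduled hours).
scheduled_b = CALENDAR
avail_b = (scheduled_b - DOWN) / scheduled_b       # 148 / 168 = 88.1%

print(f"Mine A: {avail_a:.1%}   Mine B: {avail_b:.1%}")   # same events, different figures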
To identify the differences in the way operations classify typical
events, Table 1 was developed to reflect the major time classifications used
within the mining industry.
All events encountered in the course of operating a mine would fall into one of
the time classifications. Table 2 is a summary of the classification by the survey
participants of some of the most common events.
Table 1. Major Time Classifications
Total Hours
Total hours is not used as a classification for this study; however, because it is
used by many of the operations participating in the study, its relationship to the
other parameters is noted.
The definition of total hours varied depending on how the operation classified
scheduled outages.
Where scheduled outages were part of the operation, total hours were
generally equal to scheduled hours, defined as calendar hours less scheduled
outages. Where there were no scheduled outages, or in cases where scheduled
outages were considered part of operating or standby time, total time equated
to calendar hours.
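A small worked illustration of the two conventions, using hypothetical figures:

# Hypothetical week: 168 calendar hours with one 8-hour scheduled outage.
calendar_hours = 168.0
scheduled_outage = 8.0

total_with_outage_class = calendar_hours - scheduled_outage   # 160 h where outages are classified separately
total_without_outage_class = calendar_hours                   # 168 h where outages fold into operating/standby time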
Calendar Hours
Scheduled Outages
Almost half the mines surveyed did not classify scheduled outage separately. At
those operations, planned shutdowns and scheduled downshifts were classified as idle or
standby time. This difference affects the calculated standby time.
Down Time
The distinction between down and available was quite clear throughout. In
most cases the unit was mechanically operable or it was not. Opportune
maintenance or maintenance taking place during planned shutdowns was in
almost all cases classified as down time.
Idle or Standby Time
Idle or standby time was in most cases considered the time the equipment was
available, but not manned or used.
The major discrepancies affecting idle time were the classification of planned
outages, as discussed above, and of safety and crew meetings, which were just as
often defined as operating delay, and, to a lesser extent, lunch breaks and power
outages.
Operating Hours
The majority of discrepancies occurred in the definition of operating hours, and in the allocation of
events among Operating Delay, Gross Operating Hours, and Net Operating Hours. Several
operations had a single classification, Operating Hours. In some cases Operating Hours incorporated
delay, reflecting the entire time the unit operated, while in others, Operating Hours referred
strictly to the time the unit was producing.
Gross Operating Hours (GOH) were generally calculated as available hours less idle or standby time;
GOH was generally defined as operating time plus operating delay.
Net Operating Hours, also referred to as operating time or production hours, are the difference
between GOH and operating delay.
Operating Delay generally referred to activity where the unit was available and manned, but
not involved in production.
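The relationships described above can be illustrated with a short sketch; the figures are hypothetical and are not drawn from any surveyed operation.

# Hypothetical figures illustrating the relationships described in the survey responses.
available_hours = 140.0    # scheduled hours less down time
standby_hours = 30.0       # available, but not manned or used
operating_delay = 10.0     # manned and available, but not producing

gross_operating_hours = available_hours - standby_hours         # 110 h
net_operating_hours = gross_operating_hours - operating_delay   # 100 h ("operating time")

# Equivalently, GOH = net operating hours plus operating delay.
assert gross_operating_hours == net_operating_hours + operating_delay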
Working hours was a term used by a number of mines, also with multiple meanings; at one
operation it equated to a GOH definition, while at another the definition reflected a Net
Operating Hour.
One of the major areas of disagreement was the classification of queue time as delay or
operating time. It was found that operations with manual time and data collection tended to
incorporate queuing as operating time up to an upper limit, beyond which it was classed as delay.
Operations with automated data collection systems were more likely to classify any queuing as
delay. Further discrepancies resulted from the definition of a queue. In some cases, if truck
waiting was caused by shovel repositioning or face cleanup, it was not defined as a queue, or the
delay was not considered a queue until more than one truck was waiting. These discrepancies
came to light after the surveys were completed, and were not further addressed.
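The two conventions can be sketched as follows; the ten-minute cap is a hypothetical value, since the actual limits varied by operation and were not captured by the survey.

# Hypothetical sketch of the two queue-time conventions described above.
QUEUE_CAP_MIN = 10.0   # illustrative upper limit for the manual convention

def split_queue_manual(queue_min):
    """Manual convention: queue counts as operating time up to a cap, then as delay."""
    operating = min(queue_min, QUEUE_CAP_MIN)
    delay = max(queue_min - QUEUE_CAP_MIN, 0.0)
    return operating, delay

def split_queue_automated(queue_min):
    """Automated convention: all queue time is classed as operating delay."""
    return 0.0, queue_min

print(split_queue_manual(25.0))      # (10.0, 15.0) -> 10 min operating, 15 min delay
print(split_queue_automated(25.0))   # (0.0, 25.0)  -> all 25 min as delay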
Maintenance Survey
The most common maintenance indicator is mechanical availability, while some operations use reliability to
varying degrees. Other indicators include maintenance ratio (maintenance hours to operating
hours), cost per hour, backlog, and PM compliance.
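For instance, the maintenance ratio named above is a simple quotient; the figures below are hypothetical.

# Hypothetical month of data for the maintenance ratio indicator.
maintenance_hours = 250.0
operating_hours = 500.0
maintenance_ratio = maintenance_hours / operating_hours   # 0.5 maintenance hours per operating hour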
All operations have maintenance management systems, though some are limited to work order
generation and history. Retrieval of historical information has been raised as an issue at some
operations.
All operations keep component histories; however, in most cases the history is limited to hours at
replacement. Half the operations surveyed keep a failure history, though in most cases this is
simply the failure cause. Two respondents indicated they kept records of failure analysis on major
components or accidents.
Some operations were either in the process of moving towards a function- or usage-based metric
for replacement as an alternative to hours, or were strongly considering it. Examples include tonne-km
for tires, tonnes for hoist ropes, and BCM for buckets.
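As a hedged sketch of such a usage-based metric, tonne-kilometres for a tire might accumulate as payload multiplied by loaded haul distance per cycle; the payloads, distances, and replacement threshold below are hypothetical.

# Hypothetical usage-based metric: tonne-kilometres accumulated by a tire.
cycles = [(220.0, 3.5), (235.0, 3.2), (228.0, 4.1)]   # (payload in tonnes, loaded haul in km)

tonne_km = sum(payload * distance for payload, distance in cycles)
TONNE_KM_LIMIT = 1_000_000.0   # illustrative replacement threshold, not an OEM figure

print(f"Accumulated: {tonne_km:.0f} t-km, due for replacement: {tonne_km >= TONNE_KM_LIMIT}")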
About half had some form of downtime analysis, most relating to the distribution of downtime by
equipment component or system. About a third documented maintenance time by activity, i.e.,
waiting for labour, waiting for shop space, cleaning, preventive, breakdown, warranty, etc.
The majority expressed interest in some form of maintenance information sharing, although
many were not sure what form it should take. Interest was expressed in sharing component
histories or common equipment problems.
Conclusions
1. Any comparison is meaningless due to the lack of consistency in the way in which operating
events are classified. Until this is resolved there is limited value in proposing common definitions
for availability and utilization.
The focus moving ahead must then be on the consistent allocation of operating events to
agreed-upon time classifications.
2. The consensus through the survey interviews was that there is strong interest in information
sharing and comparison; however, none of the operations felt they would be willing to adopt
new definitions for operating parameters or adopt new standards for allocation of operating
events in order to enable information exchange.
To enable comparison of data, information sharing must take place in such a way that existing
operating data collection and reporting systems at individual mines can operate unaffected,
and that access to historical data is protected.
To satisfy these constraints, a solution that utilizes the data storage and manipulation
capability of existing data collection systems could be implemented.
3. There is interest in pursuing some form of maintenance information sharing. Most operations
recognize the need to improve maintenance management systems and processes. The
development of maintenance performance management systems lags that of other production
tracking systems in mining. There appears to be little collaborative effort in this area; as a result,
most operations seem to be "reinventing the wheel". A study comparing maintenance practice
and the development of performance standards for maintenance would be of value to the
mining industry.
Other potential applications identified for the benchmarking infrastructure include:
The Large Tire User Group, which requires data common to this initiative;
The data management infrastructure could be used to improve reporting parameters
specific to OEM availability guarantee reporting;
Loss Control system benchmarking.
It was also suggested that the structure developed for this initiative could form the
framework for other web-based collaborative initiatives such as purchasing.
Path Forward
A decision to proceed is required; the benefits must be weighed against the resources needed
to establish the benchmarking infrastructure, as well as the ongoing upkeep of the system. The
value in the initiative will be realized by the ongoing participation of several operations.
The path forward is to develop a process which makes use of existing data management
systems to collect data on operating events, and to establish the infrastructure for transferring
event-based data from participating operations to an independent, central benchmarking data
warehouse. The proposed data management structure is reflected in Figure 2. Participating
operations will have access to the data at a high level, which can either be reported in the
definitions agreed to by a benchmarking steering committee, or inserted into their own formulas
to enable comparison with their own historical data. The definitions developed for the purpose
of comparison will represent a "straw dog" for industry-wide standardization. Participants will not
be obligated to adopt the proposed standards, as they will have access to the underlying data in the database.
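As a hedged illustration of what an event record forwarded to the central warehouse might contain, the sketch below uses hypothetical field names; it is not a proposed specification, and the actual record layout would be defined by the benchmarking steering committee.

# Hypothetical event record forwarded from a mine's existing data collection
# system to the central benchmarking warehouse. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class BenchmarkEvent:
    mine_id: str          # participant identifier (protecting the source operation)
    equipment_class: str  # e.g. "240 t haul truck"
    event_code: str       # site event code, mapped to the agreed classification rules
    time_class: str       # agreed-upon time classification
    hours: float          # duration of the event

record = BenchmarkEvent("mine_07", "240 t haul truck", "fuelling", "operating_delay", 0.4)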
The development requirements and costs of the proposal are summarized in Figure 3 and Tables
3 and 4.
For the initiative to advance, the following actions are recommended:
Identify a project manager to coordinate system design, help establish rules for allocation
of events within the benchmarking database, incorporate standardized reporting
parameters, and coordinate efforts at participating mines.
Determine whether the system would facilitate Large Tire User Group data collection, and
coordinate development to ensure the system accommodates the group's needs.
Agree on the location of the benchmarking data warehouse. The initial proposal was to establish
a central data store at the University of Alberta School of Mining. Another option is to
develop a database accessed through the SMART website. The location of the
database may have an impact on who will maintain the database on an ongoing basis.
Initiate design and coordinate database modifications with dispatch system vendors, or
onsite support.
Establish if interest exists for a similar study relating to maintenance practice and
performance measures.
Table 3 System Development Requirements
SMART/Steering Committee
Participants
o Create alternate benchmark data files and ASCII data file formats
for transfer (3 Days)
o Testing (3 Days)
Data Administrator
o Ongoing 2 Days/month
Development Cost
(Estimates assume contract labour; any in-kind support by participants will reduce cost)