Prognostics and Remaining Useful Life (RUL) Estimation: Predicting with Confidence, 1st Edition
Diego Galar, Kai Goebel, Peter Sandborn, Uday Kumar
Diego Galar
Kai Goebel
Peter Sandborn
Uday Kumar
Contents
Preface..............................................................................................................................................xxi
Authors..........................................................................................................................................xxvii
Index............................................................................................................................................... 449
Preface
Reliability, availability, and safety are determining factors in the effectiveness of industrial perform-
ance, thus calling for enhanced maintenance support. Traditional concepts like preventive and cor-
rective strategies are being replaced by new ones like predictive and proactive maintenance.
Fault detection, failure diagnosis, and response development are basic maintenance concepts,
corresponding to the need to “see” phenomena, to “understand” them, and to “act” appropriately.
However, rather than simply understanding a phenomenon which has just appeared, like a failure,
it is useful to anticipate it so that action can be taken as soon as possible. This can be defined as the
“prognostic process” – the topic of this book. Prognostics are defined by the International Organization
for Standardization (ISO) as “the estimation of time to failure and risk for one or more existing
and future failure modes”. Another way to put this is the prediction of a system’s lifetime or its
remaining useful life (RUL) before failure, given the current condition and past history. Prognostic
approaches to maintenance have the potential to reduce costs and increase reliability.
Many techniques have been proposed to support the prediction of RUL and prognosis, but
choosing a technique depends on classical constraints: data availability, system complexity, imple-
mentation requirements, and available monitoring devices. There is no general agreement on an
appropriate and acceptable set of metrics that can be employed in prognostics, and researchers and
condition-based maintenance (CBM) practitioners are still working on this. Building an effective
tool to measure RUL is not a trivial task: prognostics is an intrinsically uncertain process, and prog-
nostic tools must take this into account when performing decisional metrics to select preventive
actions.
Methods of data analysis can be:
1. Physical-model-driven.
2. Experience-based.
3. Data-driven.
Experience and expert opinion are still important, but the output generated from real monitoring
data tends to give more precise information. Today, experience-based analysis tends to supplement
and/or complement other types of analysis.
Data-driven methods rely on previously observed data to predict the system’s future state or
match similar patterns in the history to infer RUL. These include statistical methods, reliability
functions, and artificial intelligence (AI) methods, all of which are discussed in this book.
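The similarity-matching flavor of data-driven RUL estimation can be sketched as follows. Everything here is illustrative: the synthetic run-to-failure histories, the Euclidean distance measure, and the sample-based RUL are assumptions for the sketch, not material from this book.

```python
import numpy as np

def similarity_rul(histories, current):
    """Estimate RUL by matching the observed partial degradation
    trajectory against complete run-to-failure histories.

    histories : list of 1-D arrays, each a full degradation record
                ending at failure.
    current   : 1-D array, degradation observed so far on the unit
                whose RUL we want.
    Returns the RUL (in samples) implied by the closest history,
    or None if no history is long enough to compare.
    """
    n = len(current)
    best_rul, best_dist = None, np.inf
    for h in histories:
        if len(h) <= n:
            continue  # history too short to have lived past "now"
        # Euclidean distance between the observed segment and the
        # same-length prefix of the reference history.
        d = np.linalg.norm(h[:n] - current)
        if d < best_dist:
            best_dist, best_rul = d, len(h) - n
    return best_rul

# Three synthetic run-to-failure histories (linear wear, different rates).
histories = [np.linspace(0, 1, m) for m in (50, 80, 120)]
observed = np.linspace(0, 1, 80)[:30]       # unit degrading like the 80-step case
print(similarity_rul(histories, observed))  # closest match leaves 80 - 30 = 50 steps
```

Real pattern-matching prognostics use richer distance measures and weighted ensembles of matches, but the structure is the same: compare the partial trajectory to the library, and read RUL off the best-matching histories.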
For the last decade, the development of prognostics has been a popular research topic from both
an academic and an operational point of view. The vast majority of research has focused on the
prediction of the RUL of individual components. Moreover, much of this research has focused
on the propagation of a single mechanism leading to a single failure mode. However, industrial
equipment is complex and can have concurrent multi-failure modes and multi-failure mechanisms
involving several components and sub-components. The propagation of failure mechanisms may
also involve several components, and various diagnostic tools are used to detect and track them
at different system scales. Once predetermined degradation thresholds are reached, specific main-
tenance actions should be taken to avoid a system failure. Depending on the types of active failure
mechanisms and their progression, maintenance actions may not have the same effect in stopping or
slowing down their propagation toward their related failure modes. It is therefore important to
understand how the mechanisms have propagated and will propagate when we want to apply specific
maintenance tasks to extend the RUL of complex equipment.
One approach to predictive maintenance of complex equipment is physics of failure (PoF). A PoF
model uses knowledge of a product’s lifecycle loading and failure mechanisms to perform reliability
assessment. It provides insight into life and reliability aspects and addresses the root causes of
failure, such as fatigue, fracture, wear, or corrosion. PoF-based prognostics permit the assessment
of system reliability under its actual application conditions. The method integrates sensor data with
models that enable the in situ assessment of the deviation or degradation of a system from the
expected normal operating condition and the prediction of its RUL.
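A common ingredient of PoF models for fatigue is a damage-propagation law such as Paris' law, which can be integrated from the current crack size to a critical size to yield remaining cycles. The sketch below uses the standard closed-form integration; the material constants C, m, and Y and all the numbers are illustrative placeholders, not values from this book.

```python
import math

def paris_rul(a0, a_crit, delta_sigma, C=1e-12, m=3.0, Y=1.0):
    """Cycles to grow a crack from a0 to a_crit under Paris' law
    da/dN = C * (dK)^m, with dK = Y * dsigma * sqrt(pi * a).
    Closed-form integration, valid for m != 2. Units must be
    consistent (here: metres and MPa*sqrt(m)); constants are
    illustrative, not calibrated material data."""
    k = C * (Y * delta_sigma * math.sqrt(math.pi)) ** m
    p = 1.0 - m / 2.0
    # N = integral from a0 to a_crit of a^(-m/2) da / k
    return (a_crit**p - a0**p) / (k * p)

# Hypothetical case: 1 mm initial flaw, 25 mm critical size,
# 100 MPa stress range per cycle.
cycles = paris_rul(a0=1e-3, a_crit=25e-3, delta_sigma=100)
print(f"{cycles:.3g} cycles to critical crack size")
```

In a PoF-based prognostic, the stress range would come from monitored loading rather than a constant, and the crack size from in situ inspection or state estimation, so the remaining cycles are updated as the system operates.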
Since the prediction of RUL is critical to operations and maintenance decision-making, it is
imperative that it be estimated accurately. Prognostics deal with predicting future behavior. Several
sources of uncertainty influence such future prediction; therefore, it is rarely feasible to obtain
an estimate of the RUL with complete precision. In fact, it is not even meaningful to make such
predictions without computing the uncertainty associated with RUL. Because we obviously can’t
really make “future measurements”, predicting RUL necessarily entails propagating uncertainty.
Methods for quantifying uncertainty in prognostics and RUL prediction can be broadly classified
as applicable to two types of situations: offline prognostics and online prognostics. Methods for off-
line prognostics are based on rigorous testing before and/or after operating an engineering system;
methods for online prognostics are based on monitoring the performance of the engineering system
during operation.
In general, uncertainty management is the most significant challenge faced by monitoring
systems in an era of ever-increasing system autonomy. As decision-making in complex engineered
systems shifts toward highly evolved algorithms, simple threshold-based diagnostic decisions or
single-valued model or data-driven prognostics are insufficient. The various sources of uncertainty
inherent to diagnostics and prognostics must be accounted for in a probabilistic fashion for the
approach to make sense. The Bayesian tracking framework allows the estimation of state of health
parameters in prognostics, making use of available measurements from the system under consider-
ation. It models uncertainty in the measurement process and in the evolution of degradation.
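The flavor of Bayesian tracking can be illustrated with a one-dimensional Kalman filter on a synthetic linear-degradation signal. This is a minimal sketch under stated assumptions (linear degradation, Gaussian noise, a fixed failure threshold of 0.2), not the specific framework developed in this book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic truth: health degrades linearly at an unknown rate; the
# sensor observes health with Gaussian noise.
true_rate, q, r = 0.002, 1e-6, 0.05        # process / measurement noise
x = np.array([1.0, true_rate])             # true state: [health, rate]
F = np.array([[1.0, -1.0], [0.0, 1.0]])    # health decreases by rate each step
H = np.array([[1.0, 0.0]])                 # only health is measured

est = np.array([1.0, 0.0])                 # initial guess: no degradation
P = np.diag([0.1, 0.01])                   # initial state uncertainty
Q = np.diag([q, q])
for t in range(200):
    x = F @ x                              # simulate true degradation
    z = x[0] + rng.normal(0, r)            # noisy measurement
    # Kalman predict
    est = F @ est
    P = F @ P @ F.T + Q
    # Kalman update
    S = H @ P @ H.T + r**2                 # innovation covariance
    K = P @ H.T / S                        # Kalman gain
    est = est + (K * (z - H @ est)).ravel()
    P = (np.eye(2) - K @ H) @ P

# Extrapolate the tracked state to a failure threshold of 0.2.
rul = (est[0] - 0.2) / est[1]
print(f"estimated rate {est[1]:.4f}, RUL about {rul:.0f} steps")
```

Because P is carried along with the estimate, the same machinery yields a distribution over RUL rather than a single number, which is exactly the uncertainty propagation argued for above; nonlinear degradation models are handled analogously with extended Kalman or particle filters.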
Context awareness is also desirable for prognostics and can be extended to support monitoring of
multiple asset types. Context information in health monitoring can be used to improve the quality of
prognostics, use limited maintenance resources more efficiently, and better match the maintenance
services to the current asset conditions and needs of the machines under health monitoring.
For complex assets, a great deal of information must be captured and mined to assess the overall
condition of the whole system. Various types of information must be integrated to get an accurate
health assessment and determine the probability of a failure or slowdown. The data collected are not
only huge but are often dispersed across independent systems that are difficult to access, fuse, and
mine because of their disparate nature and granularity. If the data from these independent systems
were combined, this new set of information could add value to the individual data sources through
data mining.
System maintenance is obviously important to users of complex systems, with the potential to
improve reliability, cost, and safety. To improve the maintenance capabilities of a complex system,
an onboard health monitoring system must be deployed; it must provide enough informative data
to determine which type of maintenance operation is required. Complex systems are an assemblage
of heterogeneous components (continuous, discrete, and hybrid); each may require a different tech-
nique to monitor its health. A global health monitoring system would be useful but presents certain
difficulties.
Fortunately, monitoring systems continue to increase in complexity and spatial coverage.
Methods of digital signal processing, machine learning, knowledge representation, machine vision,
and machine reasoning have been adopted in architectures and system implementations. In addition,
modern integrated data communications systems include smartphones, allowing workers to commu-
nicate data immediately and permitting flexible production management for on-demand products.
Achieving high reliability and availability of complex systems is a crucial task. The goal is
to detect and correct problems before they become severe and shut down the system. One recent
technology that has the potential to transform the monitoring landscape is digital twin. Acting
as a mirror of the real world, digital twin provides a means of simulating, predicting, and opti-
mizing physical manufacturing systems and processes. Using digital twin, together with intelli-
gent algorithms, organizations can achieve data-driven operation monitoring and optimization,
develop innovative products and services, and diversify value creation and business models.
The explosion of Internet of Things (IoT) sensors makes digital twins possible. And as IoT
devices are refined, digital twin scenarios can include smaller and less complex objects, giving add-
itional benefits to companies. With additional software and data analytics, digital twins will opti-
mize an IoT deployment for maximum efficiency and help designers decide where things should go
or how they operate before they are physically deployed.
While the benefits that digital twins will bring to the built environment are clear, they can also
play a role in achieving sustainability and solving other societal challenges in the design and engin-
eering of civil infrastructure. The ultimate vision is the generalized development and adoption of
a higher-level digital twin able to autonomously learn from and reason about its environments.
The evolution of digital twin will help asset owners, managers, and larger society make informed
decisions on the basis of real-time data. While we currently have widespread automated warning
systems, these will evolve into widespread predictive systems and, finally, into reasoning twins.
Not surprisingly, given the increasing complexity of assets, organizations also face increasingly
complex problems. For example, there is a need to recognize the possibility of unlikely critical
situations –this is not easy, but organizations have access to huge amounts of data. This vast know-
ledge should be managed systematically to identify and eliminate unpredictable events or reduce the
undesirable consequences. Operations and maintenance may experience cases of extreme natural
degradation and man-made accidental or malevolent intentional hazards in critical facilities; these
can be handled by taking a risk-based approach, where risk is a function of the likelihood of event
occurrence and the resulting consequences. However, a “black swan effect”, an event that comes as a
surprise, has a major effect, is often inappropriately rationalized after the fact, and is not foreseeable
by the usual calculations of correlation, regression, standard deviation, reliability estimation, or pre-
diction. In addition, expert opinion has minimal use, as experience is inevitably tainted by bias. The
inability to estimate the likelihood of a black swan precludes the effective application of asset man-
agement and risk calculation, making the development of strategies to manage their consequences
extremely important. In the not-so-distant future, we believe Industrial AI-empowered digitalization
will be able to cope with the potential or actual consequences of these unforeseen, large-impact, and
hard-to-predict events.
The increased complexity of systems calls for the development of integrated systems for fault
detection, diagnostics, failure prognostics, maintenance planning, operation decision support,
and decision-making as Industry 4.0 and smart manufacturing become a reality. Extracting useful
knowledge from Big Data is one of the major challenges of smart manufacturing. In this sense,
advanced data analytics is a crucial enabler of Industry 4.0. Advanced analytics focus on perform-
ance improvement in the context of smart manufacturing in the information age. To succeed, this
evolution requires organizational commitment and work process adaptation on all levels, from the
board of directors to the field operators.
Tomorrow’s smart manufacturing will include the use of deep machine learning, prescriptive
analytics in industrial plants, and analytics-based decision support in manufacturing operations.
Advanced analytics will be able to answer an organization’s questions about its past activities
(descriptive analytics), probable future outcomes (predictive analytics/prognosis), and how to capit-
alize on short-, medium-, and long-term activities (prescriptive analytics).
Prescriptive analytics is an integral part of advanced analytics. It goes beyond predictive analytics
(i.e., prognostics) by using predictions to estimate the effect of possible actions. Prescriptive
analytics aim at going beyond the question “What just happened and why?” to answer the questions
“What should I do?” and “Why should I do it?” These analytics yield business value through
adaptive, time-dependent, and optimal decisions on the basis of accurate predictions about future
events. Prescriptive analytics are part of smart manufacturing and are considered to be the next evo-
lutionary step in increasing data analytics maturity for optimized decision-making ahead of time.
The most important challenges of prescriptive analytics include:
1. Addressing the uncertainty introduced by predictions, incomplete and noisy data, and subject-
ivity in human judgment.
2. Combining the knowledge acquired by machine learning and data mining methods with the
knowledge of domain experts.
3. Developing generic prescriptive analytics methods and algorithms utilizing AI and machine
learning instead of problem-specific optimization models.
4. Incorporating adaptation mechanisms capable of processing data and human feedback to
continuously improve decision-making processes over time and generate non-intrusive
prescriptions.
5. Recommending optimal plans from a list of alternative actions.
Although automated decisions with the use of prescriptive analytics may potentially yield more
extensive benefits than other advanced analytics, this possibility has only recently become of signifi-
cant interest. As the method becomes ubiquitous, it will undoubtedly lead to changes in the work-
place. It may lead to the replacement or extinction of certain work positions, for example, teams of
decision-makers searching for feasible solutions using trial-and-error procedures.
Advanced analytics have the potential to reduce costs, increase profit, and improve customer ser-
vices. For this, a mixed model-based and data-driven architecture for solving complex problems in
the industry must be combined with comprehensive information and communication technologies,
high-performance computing, and auxiliary mechatronics. The major challenge is to find and inte-
grate good quality data.
The massive volume, variety, and velocity of data are changing work processes. The timeliness,
consistency, and, above all, the integration of these Big Data are key considerations of advanced
analytics.
Good data are the basis of smart manufacturing, enabling better data reconciliation and better
parameter estimation for real-time optimization and control. If a company’s data are out-of-date
or non-integrated, there will be inconsistencies in both offline and online applications,
and the company will not benefit from the value of advanced analytics. The bottom line is the
data: well-polished and timely data seamlessly integrated into the decision-making body are the
foundation of smart manufacturing in the Industry 4.0 context and the secret to future success.
Authors
Diego Galar is Full Professor of Condition Monitoring in the
Division of Operation and Maintenance Engineering at Luleå
University of Technology (LTU), where he is coordinating sev-
eral H2020 projects related to different aspects of cyber-physical
systems, Industry 4.0, IoT or Industrial AI, and Big Data. He was
involved in the SKF UTC located in Luleå, which focused on SMART
bearings, and has been actively involved in national projects with Swedish
industry or funded by Swedish national agencies like Vinnova.
He is also principal researcher in Tecnalia (Spain), heading the Maintenance and Reliability
research group within the Division of Industry and Transport.
He has authored more than 500 journal and conference papers, books, and technical reports in
the field of maintenance. He also serves as a member of editorial boards and scientific committees,
has chaired international journals and conferences, and has actively participated in national and
international committees for standardization and R&D on the topics of reliability and maintenance.
In the international arena, he has been visiting Professor at the Polytechnic of Braganza (Portugal),
University of Valencia and NIU (USA), and the Universidad Pontificia Católica de Chile. Currently,
he is visiting professor at the University of Sunderland (UK), University of Maryland (USA), and
Chongqing University in China.
Kai Goebel is a Principal Scientist in the System Sciences Lab at the Palo Alto Research Center
(PARC). His interests are broadly in condition-based maintenance and system health management
for a broad spectrum of cyber-physical systems in the transportation, energy, aerospace, defense,
and manufacturing sectors. Prior to joining PARC, he worked at the NASA Ames Research Center,
and General Electric Corporate Research and Development Center. At NASA, he was a branch chief
leading the Discovery and Systems Health Tech area which included groups for machine learning,
quantum computing, physics modeling, and diagnostics and prognostics. He founded and directed
the Prognostics Center of Excellence, which advanced our understanding of the fundamental aspects
of prognostics. He holds 18 patents and has published more than 350 papers, including a book on
prognostics. He was an adjunct professor at Rensselaer Polytechnic Institute and is now adjunct pro-
fessor at Luleå Technical University. He is a co-founder of the Prognostics and Health Management
(PHM) Society, and associate editor of the International Journal of PHM.
Peter Sandborn is a Professor in the CALCE Electronic Products and Systems Center and the
Director of the Maryland Technology Enterprise Institute at the University of Maryland. His group
develops lifecycle cost models and business case support for long field life systems. This work
includes obsolescence forecasting algorithms, strategic design refresh planning, lifetime buy quan-
tity optimization, return on investment models for maintenance planning and system health man-
agement, and outcome-based contract design and optimization. He is the developer of the MOCA
refresh planning tool. He is an Associate Editor of the IEEE Transactions on Electronics Packaging
Manufacturing and a member of the Board of Directors of the PHM Society. He is the author of over
200 technical publications and several books on electronic packaging and electronic systems cost
analysis. He was the winner of the 2004 SOLE Proceedings, the 2006 Eugene L. Grant, the 2017
ASME Kos Ishii-Toshiba, and the 2018 Jacques S. Gansler awards. He has a B.S. in engineering
physics from the University of Colorado, Boulder, in 1982, and an M.S. in electrical science and a
Ph.D. in electrical engineering, both from the University of Michigan, Ann Arbor, in 1983 and 1987,
respectively. He is a Fellow of the IEEE, the ASME, and the PHM Society.
Uday Kumar is the Chair Professor of Operation and Maintenance Engineering, Director of
Research and Innovation (Sustainable Transport) at Luleå University of Technology and Director of
Luleå Railway Research Center.
His teaching, research, and consulting interests are equipment maintenance, reliability and
maintainability analysis, product support, lifecycle costing (LCC), risk analysis, system analysis,
eMaintenance, and asset management.
He has been visiting faculty at the Center for Intelligent Maintenance Systems (IMS), a center
sponsored by the National Science Foundation, Cincinnati, USA, since 2011. He is also External
Examiner and Program Reviewer for the Reliability and Asset Management Program of the
University of Manchester, Distinguished Visiting Professor at Tsinghua University, Beijing, and
Honorary Professor at Beijing Jiaotong University, Beijing. Earlier, he was visiting faculty at
Imperial College London, Helsinki University of Technology, Helsinki, and the University of
Stavanger, Norway.
He has more than 30 years’ experience in consulting and finding solutions to industrial problems
directly or indirectly related to maintenance of engineering assets. He has published more than 300
papers in international journals and conference proceedings dealing with various aspects of main-
tenance of engineering systems and has co-authored four books on maintenance engineering and
contributed to World Encyclopaedia on Risk Management.
He is an elected member of the Royal Swedish Academy of Engineering Sciences.
1 Information in Maintenance
1.1 TRADITIONAL MAINTENANCE: CORRECTIVE AND PREVENTIVE
1.1.1 Traditional Corrective Maintenance
Traditionally, the main tool of maintenance was the reactive response to equipment failures;
that is, repairing damage as quickly as possible. A good traditional maintenance plan included
the identification of critical parts of the machinery, a stock of spares, and the availability of
specialized personnel to solve breakdowns efficiently.
For example, if an engine suffered an irreparable breakdown, it was replaced by a new engine
(Sigma Industrial Precision, 2018).
Corrective maintenance may be grouped under the following five categories (Dhillon, 2006):
1. Fail repair: Restoring the failed item or equipment to its operational state.
2. Overhaul: Repairing or restoring an item or equipment to its completely serviceable state,
meeting requirements outlined in maintenance serviceability standards.
DOI: 10.1201/9781003097242-1
FIGURE 1.1 Corrective maintenance workflow: a PM is performed, an issue is detected, and if a fast fix is possible, the issue is fixed onsite; otherwise, a work order is created.
3. Salvage: Disposing of non-repairable materials and using salvaged materials from items that
cannot be repaired in the overhaul, repair, or rebuild programs.
4. Servicing: Performing maintenance required after a corrective maintenance action; for
example, engine repair can result in a requirement for crankcase refill, welding, and so on.
5. Rebuild: Restoring an item or equipment to a standard as close as possible to its original state
in appearance, performance, and life expectancy. This is accomplished through actions such
as complete disassembly, examination of all parts, replacement or repair of unserviceable
or worn components according to original specifications and manufacturing tolerances, and
reassembly and testing to original production requirements.
Corrective maintenance downtime is made up of three major components as shown in Figure 1.2.
Active repair time has six sub-components: checkout time, preparation time, fault correction time,
fault location time, adjustment and calibration time, and spare part obtainment time (Dhillon, 2006).
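With hypothetical timings (all numbers invented for illustration), the downtime decomposition described above can be tallied as follows:

```python
# Hypothetical durations (hours) for one corrective maintenance event.
active_repair = {
    "checkout": 0.5,
    "preparation": 0.7,
    "fault_location": 2.0,
    "fault_correction": 1.5,
    "adjustment_calibration": 0.8,
    "spare_part_obtainment": 3.0,
}
# The three major downtime components from Figure 1.2.
downtime = {
    "active_repair": sum(active_repair.values()),
    "administrative_and_logistic": 4.0,
    "delay": 1.0,
}
total_downtime = sum(downtime.values())
print(total_downtime)  # 13.5 hours for this event
```

Tracked this way over many events, the breakdown shows which component (often spare-part obtainment or administrative delay, in practice) dominates and is therefore the best target for reduction.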
To improve corrective maintenance effectiveness, it is important to reduce corrective mainten-
ance time. Useful strategies for reducing system-level corrective maintenance time include the
following (Dhillon, 2006):
1. Improve accessibility: Past experience indicates a significant amount of time is spent accessing
failed parts. Careful attention to accessibility during design can help to lower the accessibility
time of parts and, consequently, reduce the corrective maintenance time.
2. Improve interchangeability: Effective functional and physical interchangeability is important
when removing and replacing parts or components, as it lowers corrective maintenance time.
FIGURE 1.2 Components of corrective maintenance downtime: active repair time, administrative and logistic time, and delay time.
3. Improve fault recognition, location, and isolation: Experience shows that within a corrective
maintenance activity, fault recognition, location, and isolation consume the most time. Factors
reducing corrective maintenance time are good maintenance procedures, well-trained mainten-
ance personnel, well-designed fault indicators, and unambiguous fault isolation capability.
4. Consider human factors: During design, careful attention should be paid to human factors,
such as selection and placement of indicators and dials; size, shape, and weight of components;
readability of instructions; information processing aids; and size and placement of access and
gates. This can lower corrective maintenance time significantly.
5. Employ redundancy: Redundant parts or components should be designed to be switched in
during the repair of faulty parts so that the equipment or system continues to operate. In this
case, although the overall maintenance workload may not be reduced, the downtime of the
equipment could be impacted significantly.
1. Calendar-based maintenance:
A recurring work order is scheduled in the CMMS when a specified time interval is reached.
2. Predictive maintenance (PdM):
When work order data are logged in the CMMS, maintenance managers can predict when an asset
will fail based on historical events and create specific PMs to prevent this from happening again.
3. Usage-based maintenance:
Meter readings are used and logged in the CMMS. When a specific unit is reached, a work order is
created for routine maintenance.
4. Prescriptive maintenance:
This is similar to PdM, but in this case, machine learning software assists the maintenance manager
in prescribing PM.
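The triggering logic behind the calendar-based and usage-based variants can be sketched as follows. This is a simplified illustration of the idea, not the API of any particular CMMS; the function names and thresholds are invented for the sketch.

```python
from datetime import date, timedelta

def due_calendar(last_pm: date, interval_days: int, today: date) -> bool:
    """Calendar-based: a recurring work order becomes due once the
    specified time interval has elapsed since the last PM."""
    return today >= last_pm + timedelta(days=interval_days)

def due_usage(meter_at_last_pm: float, meter_now: float,
              interval_units: float) -> bool:
    """Usage-based: a work order becomes due once the meter has
    advanced by the specified number of units since the last PM."""
    return meter_now - meter_at_last_pm >= interval_units

today = date(2024, 6, 1)
print(due_calendar(date(2024, 3, 1), 90, today))  # True: 92 days elapsed
print(due_usage(12_000, 12_480, 500))             # False: only 480 units run
```

A real CMMS wraps such checks in a scheduler that creates and routes the work order automatically; predictive and prescriptive maintenance replace the fixed interval with a model-derived one.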
1.1.2.1.2 Preventive Maintenance Components and Principles for Choosing Items for
Preventive Maintenance
There are seven elements of PM (Dhillon, 2006).
The following formula can be used to decide whether to implement a PM program for an item or
system (Dhillon, 2006): the program is economically justified when Cpm < θ, with θ = 0.7(n)(Ca),
where n is the total number of breakdowns, θ is 70% of the total cost of breakdowns, Ca is the
average cost per breakdown, and Cpm is the total cost of the PM system.
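Under the usual reading of these definitions (θ = 0.7 × n × Ca, with the PM program justified when Cpm < θ), the screening rule can be applied as below. This is an interpretation reconstructed from the variable definitions, so the exact form should be checked against Dhillon (2006); the cost figures are invented for illustration.

```python
def pm_justified(n_breakdowns: int, avg_cost_per_breakdown: float,
                 pm_system_cost: float) -> bool:
    """Screening rule: a PM program is economically justified when its
    total cost is below theta, taken as 70% of the total cost of the
    breakdowns it would prevent (theta = 0.7 * n * Ca)."""
    theta = 0.7 * n_breakdowns * avg_cost_per_breakdown
    return pm_system_cost < theta

# 12 breakdowns/year at $5,000 each -> theta = $42,000.
print(pm_justified(12, 5_000, 30_000))  # True: $30,000 PM program pays off
print(pm_justified(12, 5_000, 50_000))  # False: PM costs more than 70% of losses
```

The 70% factor builds in a margin: PM is only recommended when it is clearly cheaper than the breakdowns it prevents, not merely break-even.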
• Identify and select areas: Identify and select one or two important areas on which to concen-
trate the initial PM effort. The main objective of this step is to obtain good results in highly
visible areas.
• Highlight PM requirements: Define the PM needs and develop a schedule for two types of
tasks: daily PM inspections and periodic PM assignments.
• Determine assignment frequency: Establish the frequency of assignments and review the
item or equipment records and conditions. The frequency depends on factors such as vendor
recommendations, the experience of personnel familiar with the equipment or item under con-
sideration, and engineers’ recommendations.
• Prepare PM assignments: Prepare the daily and periodic assignments in an effective manner
and get them approved.
• Schedule PM assignments: Schedule the defined PM assignments over a 12-month period.
• Expand the PM program as appropriate: Expand the PM program to other areas on the basis
of experience gained from the pilot PM projects.
FIGURE 1.3 The optimal amount of preventive maintenance occurs when the costs of corrective and preventive maintenance meet: on the cost curve, the optimal maintenance zone lies between insufficient and excessive preventive maintenance (UpKeep, 2020(b)).
• Perform PM only when the cost of maintenance is less than the cost of the failure. A conse-
quence of failure analysis can be used to determine this.
• Perform PM based on the condition (i.e., condition monitoring (CM)) of the equipment; how-
ever, RTF is acceptable when it is the most cost-effective maintenance procedure.
• Develop and document a standard corrective maintenance procedure.
1.2
CMMS AND IT SYSTEMS SUPPORTING MAINTENANCE FUNCTION
1.2.1 Computer Maintenance Management Systems (CMMS)
Many organizations use specialized software, a CMMS, to support management and to track
operations and maintenance (O&M) activities.
• Do you have an effective way to generate and track work orders? How do you verify the work
was done efficiently and correctly? What is the notification function upon completion?
• Are you able to access historical information on the last time a system was serviced, by whom,
and for what condition?
• How are your spare parts inventories managed and controlled? Do you have excess inventory
or are you consistently waiting for parts to arrive?
• Do you have an organized system to store documents (electronically) related to O&M
procedures, equipment manuals, and warranty information?
• When service staff are in the field, what assurances do you have that they are compliant with
health and safety issues and are using the right tools/equipment for the task?
• How are your assets, that is, equipment and systems, tracked for reporting and planning?
If the answers to these questions are not well defined or are entirely lacking, it may be worth
investigating the benefits of a well-implemented CMMS (Sullivan et al., 2010).
Many CMMS can now interface with existing energy management and control systems, as well as
enterprise resource planning or even supervisory control and data acquisition (SCADA) systems for
interaction from the managerial to the shop-floor level in maintenance decisions. Coupling these
capabilities allows condition-based monitoring and the creation of component use profiles (Sullivan
et al., 2010).
While CMMS can go a long way toward automating and improving the efficiency of most
maintenance programs, there are some common pitfalls (Sullivan et al., 2010).
CMMS is an information repository whose outcomes strongly depend on the quality of the data
stored in the system. The data in the CMMS could be wrong, or there may be no data at all. If managers do not have the right kind of data or enough of it, they must make decisions without all the
facts. The wrong decision could be costly.
To maximize CMMS use, managers need to continually revisit the system’s features and
functions, review and upgrade training for users, and reinforce among users the need for a steady
stream of reliable maintenance and repair data (Hounsell, 2008).
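Reinforcing that steady stream of reliable data can be partly automated: periodically auditing stored records for missing or implausible values catches problems before they reach a decision. A sketch, using a hypothetical record layout:

```python
def audit_record(record):
    """Return a list of data-quality problems in one maintenance record.

    `record` is a plain dict with hypothetical keys; an empty list means
    the record passed every check.
    """
    problems = []
    for key in ("asset_id", "date", "action"):
        if not record.get(key):
            problems.append(f"missing {key}")
    hours = record.get("labor_hours")
    if hours is not None and hours < 0:
        problems.append("negative labor_hours")
    return problems

records = [
    {"asset_id": "PUMP-07", "date": "2023-03-04", "action": "seal replaced",
     "labor_hours": 2.5},
    {"asset_id": "", "date": "2023-03-05", "action": "inspection",
     "labor_hours": -1},
]
# Keep only the records that failed at least one check.
report = {i: problems for i, r in enumerate(records)
          if (problems := audit_record(r))}
```

A report like this gives managers a concrete view of where their CMMS data cannot yet be trusted.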
Information in Maintenance 9
CMMS software is available on a variety of platforms. State-of-the-art systems give users the comprehensive ability to facilitate
the flow of maintenance information and to check the health of the maintenance organization at a
glance.
As the maintenance, repair, and operations software market continues to expand, vendors have developed solutions that focus on specific segments of asset and work management. Systems described as enterprise asset management, asset life cycle management, asset performance management, asset or enterprise reliability management, and CM are all focused on achieving the same goals: increasing equipment availability and performance, increasing product quality, and reducing maintenance expense. When a CMMS is implemented to facilitate an established process and standards, those goals can be realized (Crain, 2003).
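Of the goals just listed, equipment availability is the one most directly measurable from CMMS records: the fraction of time the equipment was usable, computed from logged uptime and downtime. A minimal sketch (the figures are illustrative):

```python
def availability(uptime_hours, downtime_hours):
    """Operational availability: uptime as a fraction of total time."""
    total = uptime_hours + downtime_hours
    if total == 0:
        raise ValueError("no logged hours for this period")
    return uptime_hours / total

# Example: 950 h running and 50 h down for maintenance over the period.
a = availability(950.0, 50.0)  # 0.95
```

Tracking this ratio per asset over successive periods shows whether the maintenance program is actually moving the availability goal.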
• Engage users in the process of collecting and “owning” the information entered into the
system.
• Take advantage of assemblies like safety or tailgate meetings to provide updates on the progress of the implementation.
• Explain how the system will benefit maintenance users, management, and the company as
a whole.
• Ask senior management to address the employees on the significance of the system and the
importance of their contributions.
• Promote question and answer sessions, possibly even short product demonstrations.
• Emphasize that the goal of the system is not to monitor employees, but to move the organization from a reactive to a proactive mode (Crain, 2003).
1. Can staff remove themselves enough from the existing way of business to evaluate ways to
make the old processes more efficient?
2. Does someone on the maintenance staff have enterprise software implementation experience?
If not, does the project schedule allow for the learning curve?
3. Do maintenance and information technology (IT) staff members have the time to add this project to their existing workload?
If the answer is no to any of these questions, or if the system needs to be fully functional in 6–12 months, a consultant may be helpful. As well as contributing years of lessons learned to the
implementation team, a seasoned consultant will help develop a project plan and guide teams
through the implementation process.
For a turnkey type implementation, it is critical to evaluate and approve key deliverables, such as business processes, nomenclature, customizations, data configuration, and the final system configuration. Regardless of the consultant's role, during testing, training, and going live, the existing
resources should visibly lead the effort (Crain, 2003).
1.2.1.6.3 Training
Training is where the “rubber meets the road”. End users are getting their first hands-on experience
with the system and developing first impressions. “Canned” data should never be used for training.
Instead, the training environment should mirror the production database. The format of training
should be role and process based. A work requestor will be needlessly confused if he/she is trained on all the functionalities of a work order instead of on the one screen and few fields required to
create a work request. The following techniques are also useful:
• Make sure that one or two implementation team members are available to help users who fall
behind during training.
• Follow up with those users during the “go-live” period and provide auxiliary training.
• Provide a “play” environment where users can practice and reinforce what they learned in
training without corrupting production data (Crain, 2003).
1.2.1.6.5 Post-Implementation
Following “go live” but before the transition to normal operation, review all the defined requirements
and determine whether they have been fulfilled. Ensure that the mechanisms to facilitate and
measure business processes are in place. If a vendor or consultant has been used, make sure all his/
her deliverables have been completed.
Develop a support and issues escalation path. Have users go to their supervisor or resident “power
user” with issues. If the issue can’t be resolved, escalate it to a site or company administrator. The
company administrator should be the sole point of contact between the organization and the vendor.
The administrator should be responsible for logging and tracking all open issues and enhancement
requests (Crain, 2003).
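The escalation path just described is an ordered chain of contacts, ending at the company administrator, who alone deals with the vendor. A sketch with hypothetical role names:

```python
# Escalation chain from the text: supervisor or resident "power user",
# then a site administrator, then the company administrator, who is the
# sole point of contact with the vendor. Role names are hypothetical.
ESCALATION_PATH = ["power_user", "site_admin", "company_admin"]

def next_contact(current_role):
    """Return the next role in the escalation chain, or None at the top.

    Raises ValueError if `current_role` is not in the chain.
    """
    i = ESCALATION_PATH.index(current_role)
    return ESCALATION_PATH[i + 1] if i + 1 < len(ESCALATION_PATH) else None
```

Encoding the chain explicitly, rather than leaving it to habit, makes it easy to verify that every unresolved issue eventually reaches the single person responsible for logging and tracking it.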
• System updates allow companies to make the changes necessary to stay at the top of their game, not only in technological aspects but also in legal ones, including the privacy and security of users, as business patterns and government rules may change.
1.3 SENSORS FOR HEALTH MONITORING AND SCADA SYSTEMS: OPERATIONAL TECHNOLOGIES (OTs)
1.3.1 Health Monitoring
Health monitoring is a non-destructive technique that allows the integrity of systems or structures
to be actively monitored during operation and/or throughout their lives to prevent failure and
reduce maintenance costs. Health monitoring tracks an aspect of a structure’s health using reliably
measured data and analytical simulations in conjunction with heuristic experience so that the current
and expected future performance of the composite part, for at least the most critical limit events,
can be described in a proactive manner. Minimum standards are required for analytical modeling
to ensure reliable computer simulations, and measurements, loads, and tests must be designed and
implemented in conjunction with the analytical simulations (Koncar, 2019).
Health monitoring systems were introduced in the 1970s in such applications as computer integrated
coal monitoring systems, a production monitoring system for the weaving industry, the MINOS long-distance monitoring system, a wireless interceptor with channel scan and a priority channel monitoring
system, an electric network in-time monitoring system, and a diesel engine CM system.
Current health monitoring technologies include intelligent monitoring, wireless monitoring, long-distance monitoring, real-time monitoring, and embedded monitoring (Xu & Xu, 2017), spanning both online monitoring techniques and off-line non-destructive detection and inspection techniques. In general,
online monitoring keeps track of the key parameters of the products or systems. For example, in an
aero engine, the vibration parameters, high and low compression rotation speeds, turbine exhaust
temperatures, fuel flux, and metal content in the oil need to be monitored, and baseline methods are
commonly used to determine the threshold values. Off-line non-destructive abnormal detection or
inspection techniques include hole detection, fluoroscopy, X-ray isotopes, borescope with remote
control, ultrasonic detection, oil physics/chemical analysis, and oil debris analysis techniques such
as iron spectrum, scanning electron microscopy energy spectrum, automated granule counts, and
spectroscopic analysis.
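The baseline methods mentioned above for setting threshold values typically derive an alarm limit from measurements taken in a known-healthy condition, for example the baseline mean plus a few standard deviations. A minimal sketch (the data and the choice k = 3 are illustrative, not from any cited engine):

```python
import statistics

def baseline_threshold(healthy_values, k=3.0):
    """Alarm threshold = mean + k * standard deviation of healthy data."""
    mu = statistics.mean(healthy_values)
    sigma = statistics.stdev(healthy_values)
    return mu + k * sigma

def exceedances(values, threshold):
    """Indices of measurements that exceed the alarm threshold."""
    return [i for i, v in enumerate(values) if v > threshold]

# Example: turbine exhaust temperature (arbitrary units) checked against
# a threshold learned from six healthy-condition readings.
baseline = [600, 602, 598, 601, 599, 600]
threshold = baseline_threshold(baseline, k=3.0)
alarms = exceedances([601, 640, 599], threshold)
```

The same pattern applies to any of the monitored parameters listed for the aero engine, with k chosen per parameter to balance missed detections against false alarms.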
life-supporting implants, and can enable bedside and remote monitoring of vital signs and other
health factors. New and different types of medical equipment being developed include sensors used
inside both equipment and patients’ bodies. Healthcare organizations want real-time, reliable, and
accurate diagnostic results provided by devices that can be monitored remotely, whether the patient
is in a hospital, clinic, or at home.
Figure 1.4 shows pressure, temperature, flow and image sensors; accelerometers; biosensors;
superconducting quantum interference devices (SQUIDs); and encoders used in medical applications
(Thusu, 2011). These are discussed in the following sections.
TABLE 1.1
Biosensors and Their Corresponding Biosignals

Type of Sensor                           Type of Biosignal           Measured Data
Temperature                              Body or skin temperature    Ability of body to generate and get rid of heat
Pulse oximeter                           Oxygen saturation           Quantity of oxygen carried by human blood
Accelerometer                            Movements of body           Measurement of acceleration forces in 3D space
Chest or skin electrodes                 Electrocardiogram (ECG)     Waveform showing contraction and relaxation of the cardiac cycle
Piezoresistive or piezoelectric sensor   Respiration rate            Rate of breathing per unit time
Phonocardiograph                         Heart sound                 Measurement of heart sound by use of a stethoscope
[Figure 1.5 layout: biosensors feed a central node (PDA, smartphone, µC, etc.) comprising a graphical user interface, central processing unit, and wireless transmission module; data pass by wireless transmission (GSM, UMTS, WLAN, etc.) to a medical center, database, and ambulance, with local alarm signals raised at the node.]
FIGURE 1.5 Architecture of a wearable health-monitoring system (Pantelopoulos & Bourbakis, 2010).
Wearable systems for health monitoring may comprise various types of miniature sensors, wear-
able or even implantable. These biosensors are capable of measuring significant physiological
parameters like heart rate, blood pressure, body and skin temperature, oxygen saturation, respir-
ation rate, and ECG. The obtained measurements are communicated either via a wireless or a
wired link to a central node, for example, a PDA or a microcontroller board, which may display the
relevant information on a user interface or transmit the aggregated vital signs to a medical center.
A wearable medical system includes a wide variety of components: sensors, wearable materials,
smart textiles, actuators, power supplies, wireless communication modules and links, control and
processing units, interface for the user, software, and advanced algorithms for data extraction and
decision-making.
A general WHMS architecture is depicted in Figure 1.5, including the system’s functionality
and components. However, this should not be perceived as the standard system design, as different
systems may adopt significantly different architectural approaches (e.g., biosignals may be transmitted in analog form and without pre-processing to the central node, and bi-directional communication between sensors and central node may not exist) (Pantelopoulos & Bourbakis, 2010).
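The sensor-to-central-node data flow described above can be sketched as follows; the sensor names and the JSON packet format are assumptions for illustration, not part of any cited system:

```python
import json

def aggregate(readings):
    """Combine individual biosensor readings into one packet for transmission.

    `readings` maps a sensor name to its latest measurement; the central
    node (e.g. a PDA or microcontroller board) would forward the packet to
    the medical center over GSM/UMTS/WLAN.
    """
    packet = {"type": "vitals", "readings": dict(readings)}
    return json.dumps(packet, sort_keys=True)

# Example: three of the physiological parameters named in the text.
packet = aggregate({"heart_rate_bpm": 72, "spo2_pct": 97, "skin_temp_c": 36.6})
```

Aggregating at the central node, rather than having each sensor transmit independently to the medical center, is one way such systems trade local processing for reduced radio use and power consumption.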
Wearable systems for health monitoring need to satisfy strict medical criteria while operating
under several ergonomic constraints and significant hardware resource limitations. More specifically, a wearable health-monitoring system design needs to take wearability criteria into account, for
instance, the weight and the size need to be small, and the system should not hinder any of the user’s
movements or actions. Radiation concerns and possible aesthetic issues also need to be accounted
for. In addition, the security and privacy of the collected personal medical data must be guaranteed
by the system, while power consumption needs to be minimized to increase the system’s operational
lifetime. Finally, such systems need to be affordable to ensure wide public access to low-cost ubiquitous health-monitoring services.
Designing such a system is a very challenging task, as many highly constraining and often
conflicting requirements have to be considered by designers. It is understandable that there is
no single ideal design for such systems; rather, the various “antagonizing” parameters should be
balanced based on the specific area of application (Pantelopoulos & Bourbakis, 2010).
1.3.3 SCADA Systems
SCADA is a software system that works with coded signals over communication channels. These
coded signals are combined with a data acquisition system via communication channels to get
information on the status of remote equipment for display or for recording functions (Boys, 2009).
The SCADA system is mainly used to control the production of power, infrastructure processes, and facility processes, including water treatment and distribution, space stations, energy consumption, and siren systems in civil defense.
Shalini and Kumar (2013) reported on the real-time implementation of SCADA in power distribution networks. The electrical distribution system consists of several substations with multiple controllers, sensors, and operator-interface points, as shown in Figure 1.6 (Arora, 2016). The issues
faced by SCADA include integration, functionality, communication, network technology, security,
and reliability. Having SCADA in the power system network increases the system’s reliability and
stability for integrated grid operation.
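The data-acquisition side of SCADA reduces to polling remote points over a communication channel and flagging abnormal statuses. A minimal sketch, with hypothetical point names and a plain dictionary standing in for the channel to a PLC or remote terminal unit:

```python
def poll_substation(substation, read_point):
    """Poll every monitored point in a substation and flag abnormal readings.

    `read_point(point)` stands in for the real communication channel to a
    PLC or RTU; here it simply returns a status string per point.
    """
    statuses = {p: read_point(p) for p in substation["points"]}
    alarms = [p for p, s in statuses.items() if s != "OK"]
    return statuses, alarms

# Example: one substation with three monitored points; the "channel" is a
# dictionary lookup so the sketch runs stand-alone.
substation = {"name": "SUB-3", "points": ["breaker_1", "xfmr_temp", "feeder_2"]}
fake_channel = {"breaker_1": "OK", "xfmr_temp": "HIGH", "feeder_2": "OK"}
statuses, alarms = poll_substation(substation, fake_channel.get)
```

A real SCADA master repeats this loop over many substations and protocols, which is where the integration, communication, and security issues listed above arise.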
[Figure 1.6 components: programmable logic controllers (PLCs) and industrial equipment.]