

HEALTHCARE ANALYTICS
Quite simply, analytics is the science of analysis (Turban, Sharda, & Delen, 2011). Analytics is akin to
the central nervous system of the body, where information is processed and is interpreted (analyzed)
by the brain. Analytics is the systematic use of data and related business insights developed through
applied analytical disciplines (e.g. statistical, contextual, quantitative, predictive, cognitive, other
[including emerging] models) to drive fact-based decision-making for planning, management,
measurement, and learning. Analytics may be descriptive, predictive, or prescriptive (Cortada, Gordon,
& Lenihan, 2012). When applied to healthcare, analytics incorporates business intelligence (BI)
information from a variety of systems, including electronic health records (EHRs); financial systems (e.g., general ledger and billing systems); stand-alone analytical applications; admission, discharge, and transfer (ADT) systems; enterprise resource planning (ERP); time and attendance; scheduling systems; and stand-alone clinical department applications, such as laboratory, radiology, OB/GYN, and other electronic systems. Analytic applications apply statistics, formulas, and modeling, and present results through visual analytical tools such as scorecards and dashboards with drill-down capabilities.
ANALYTICS IN HEALTHCARE
The use of analytics, clinical and BI, performance management, predictive modeling and simulation and
the use of enterprise data warehouses (EDWs) in healthcare are not new. Health insurance plans (also
called “payers”) have been using EDWs and data marts for 20 years. While seeming new to healthcare,
analytics has been used by hospital chief financial officers (CFOs) for well over 10 years. Hospital finance departments routinely monitor and track a variety of financially oriented scorecards and dashboards. Years ago the payer community realized that having multiple best-of-breed departmental electronic systems, each with its own analytical applications for reporting, caused problems when data from different systems needed to be combined and reported together.
Because each proprietary best-of-breed department system has different ways of describing, entering, formatting, storing, handling, and reporting data, comparing multiple diverse systems is the equivalent of the "Tower of Babel": many different languages and no way to normalize or standardize the data.
An EDW takes data from multiple source systems and extracts, transforms, and loads (ETL) the data into one standardized and normalized format. Once the data are normalized and standardized, they comprise "one version of the truth" (i.e., a reliable and trusted source of data). Descriptions of the data, including the data source, formats, descriptions, and calculations, are stored in the metadata. (Metadata is data about the data.) The middleware, or presentation layer, enables users to access the data. Commercial off-the-shelf (COTS) query tools are listed below. In addition, visual dashboard tools such as QlikView, Tableau, Spotfire, Microsoft Visual Stack, etc., can be used for dashboards.
Common “middleware” or reporting/analytics tools used in hospitals are:
 Crystal Reports
 Business Objects
 Excel: Used for analysis, reporting, and manual integration of data
 SSRS: SQL Server Reporting Services, used as a way of extracting and manipulating data
 SPSS: Used for statistical analysis in social science
 R: A free software programming language and environment for statistical computing and graphics. It is used at many academic medical centers
 RDL: Report Definition Language used for data reporting
 Microsoft Access database: Used for storing and manipulating data obtained from other systems
 PORG: Personal Option Report Generator used by some financial and payroll systems for
reporting and data extraction
 SQL: Structured Query Language used to run reports and extract data from relational databases
 ProModel: Tool for data simulation
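The extract-transform-load process described above can be illustrated with a brief sketch. The following Python example is purely hypothetical, not any vendor's actual ETL pipeline; the field names, formats, and sample records are invented to show how two source systems that describe the same facts differently can be normalized into one schema.

```python
# Minimal ETL sketch: extract records from two hypothetical source systems
# that format the same facts differently, transform them into one normalized
# schema, and load them into a single list (standing in for the EDW).
# All field names and sample records are illustrative assumptions.

def transform_lab(record):
    """The lab system stores names as 'LAST,FIRST' and dates as MM/DD/YYYY."""
    last, first = record["PATIENT"].split(",")
    month, day, year = record["DATE"].split("/")
    return {"name": f"{first.strip().title()} {last.strip().title()}",
            "date": f"{year}-{month}-{day}",   # normalize to ISO 8601
            "source": "lab"}

def transform_adt(record):
    """The ADT system already uses 'First Last' names and ISO dates."""
    return {"name": record["name"], "date": record["admit_date"], "source": "adt"}

def etl(lab_rows, adt_rows):
    warehouse = []                                    # the "load" target
    warehouse += [transform_lab(r) for r in lab_rows]
    warehouse += [transform_adt(r) for r in adt_rows]
    return warehouse

lab = [{"PATIENT": "SMITH,JANE", "DATE": "03/07/2024"}]
adt = [{"name": "Jane Smith", "admit_date": "2024-03-07"}]
print(etl(lab, adt))  # both rows now share one format: "one version of the truth"
```

Real ETL tooling adds staging, error handling, and metadata capture, but the normalization step is conceptually this simple.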

BASIC HEALTHCARE ANALYTICS DEFINITIONS


Analytics falls under the umbrella of decision support systems (DSSs). Clinicians often find this difficult to understand because they are used to limiting their thinking of DSSs to the alerts and reminders built into EHRs.
A broader understanding of DSS is called for: a DSS is an overarching, all-encompassing term used to describe any computerized system that enables the process of decision-making (Turban et al., 2011).

  
THE DIFFERENCE BETWEEN REPORTS AND ANALYTICS
There are often misconceptions associated with analytics and reports. Both reports and analytics are
considered to be forms of business intelligence (BI). (Remember, any data or information generated
from electronic systems is considered to be BI.)
Reports or analytics can be for:

 Quality of care, such as how many pressure ulcers have occurred in the hospital, when they
occurred, and what was the patient’s age, diagnosis, sex, etc., in order to reduce the number
of pressure ulcers
 Service Line analysis, such as for low-birth-weight babies, what was the age of the mother,
ethnic group, the presence or absence of prenatal care, economic status, education, zip
code, etc., to set the stage for outreach to populations at risk for low-birth-weight babies
 Clinical effectiveness, such as evaluating the time taken in a colonoscopy and correlating
the total time of the procedure to incidents of missed diagnoses of colon cancer to lower
the rate of patients whose colon cancer was missed during a colonoscopy procedure
 Variation in practice (cost, quality, length of stay, and post-operative complications) when
comparing bowel resections performed by general surgeons versus gastroenterology
specialists to consult and coach physicians with outcomes that consistently deviate from the
outcomes of the top performers
 Operational effectiveness, such as evaluating the time from arrival in the emergency
department to admission to a patient floor, to reduce patient wait times
 Improve financial performance, such as evaluating the number of and reasons for claims denials to lower the number of denials

The greatest shortcoming of reports is that the ability to analyze them is often limited.

That is, frequently, when a single report is studied, additional questions arise, such as
(1) is this a single incident;
(2) is it isolated or is it happening elsewhere;
(3) how long has it been occurring;
(4) is it occurring all the time or on a specific day of the week or time of day;
(5) is it universal or does it pertain to specific providers;
(6) is it occurring at all locations or only specific locations;
(7) is it seasonal or not; and
(8) has it happened before?
To answer these types of questions, additional reports may need to be created. In contrast, analytics
often offers the ability to drill down to the lowest level of granularity. Additionally, with analytics,
multiple electronic systems can be incorporated to generate a more detailed understanding of the
situation and the ability to uncover a problem that otherwise might not be evident. For example, a
static report may show that cardiac ventriculograms have increased, while an analytics application may show that 60% of these studies were not reimbursed by commercial health insurance payers because the tests were considered medically excessive and unnecessary: an echocardiogram was the recommended test for the diagnosis code listed.
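The ventriculogram example can be sketched in code. The following fragment is illustrative only (the claim records and denial reasons are invented), but it shows the drill-down idea: start from a single aggregate metric and group by finer dimensions to explain it.

```python
# Hedged sketch of analytic "drill-down": begin with a top-level denial rate,
# then group denied claims by reason to localize the problem.
# The claims and reason codes below are invented for illustration.
from collections import Counter

claims = [
    {"test": "ventriculogram", "paid": False, "reason": "echo recommended"},
    {"test": "ventriculogram", "paid": False, "reason": "echo recommended"},
    {"test": "ventriculogram", "paid": False, "reason": "not medically necessary"},
    {"test": "ventriculogram", "paid": True,  "reason": None},
    {"test": "ventriculogram", "paid": True,  "reason": None},
]

denied = [c for c in claims if not c["paid"]]
denial_rate = len(denied) / len(claims)           # top-level metric: 60%
by_reason = Counter(c["reason"] for c in denied)  # drill down one level

print(f"denial rate: {denial_rate:.0%}")
print(by_reason.most_common())
```

A static report would stop at the 60% figure; the drill-down reveals that most denials share one cause, which is the actionable finding.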
Another important element in understanding data and information is the data source and how the
reporting applications determine the values used in the reporting functions. Each analytical application
has its own vocabulary or metadata (data about the data). For example, one system may calculate
length of stay (LOS) as an occupied hospital bed at midnight. Another system may include observation
beds in its LOS calculation or calculate at a different time of day. When a variety of systems have
different lexicons, obviously the results are different as the numerators and denominators differ.
Physicians, nurses, and other clinical providers often do not understand that variations in how data are captured, organized, stored, and reported depend on how the definitions are structured, and that if the same question were asked of 10 different analytical applications with different formats, the reports would have dissimilar results. The fact that a clinician or executive can ask one question and receive many diverse answers tends to confound clinicians, who immediately doubt the veracity and validity of the results. In the world of enterprise data warehouses (EDWs), if clinicians doubt that the results are a true reflection of the data, they quickly discount the value of the EDW. Once doubts about the data's veracity are raised, use of the EDW diminishes.
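The LOS discrepancy is easy to demonstrate. In this hedged sketch, the two definitions (occupied midnights versus elapsed hours rounded up to whole days) are assumptions standing in for real systems' rules, and the timestamps are invented:

```python
# Two plausible LOS definitions applied to the same stay produce different
# numbers. Definitions and datetimes are illustrative assumptions.
from datetime import datetime
from math import ceil

admit = datetime(2024, 3, 1, 1, 0)      # admitted 1 a.m. March 1
discharge = datetime(2024, 3, 1, 23, 0) # discharged 11 p.m. the same day

# Definition A: number of midnights the bed was occupied
los_midnights = (discharge.date() - admit.date()).days

# Definition B: elapsed hours divided by 24, rounded up to whole days
los_hours = ceil((discharge - admit).total_seconds() / 3600 / 24)

print(los_midnights, los_hours)  # prints "0 1": same stay, two different answers
```

Neither answer is "wrong"; the numerators simply follow different lexicons, which is exactly why a shared definition must be agreed upon before systems are compared.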
It is important that clinical informatics professionals constantly educate end users as to how reporting and analytics work and how various conflicting reports can all be legitimate based on how the data were entered and compiled, the date they were reported, the exceptions that were extracted before the report was created, and how often the data were refreshed. It is important to understand
the systems where reports originate.
That is, is the system a real-time transactional system or is it an analytical application that has a cutoff
point for collecting and reporting the data? Most data warehouses and data marts are not real-time
transactional systems. Variation in results reported by different systems (and even in the same system)
can occur because of the time the report is run. It is important to understand the type of reporting. For example, if an ad hoc report is run at eight o'clock in the morning and the data are refreshed at noon, a report run an hour after the refresh could differ.
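The refresh-timing caveat can be made concrete: given when the warehouse was last refreshed, a consumer can compute how stale the data are and whether that latency meets a given decision's currency requirement. This sketch is purely illustrative; the refresh times and requirements are invented.

```python
# Hedged sketch of data latency: how old are the warehouse data right now,
# and does that age satisfy each decision-maker's currency requirement?
from datetime import datetime, timedelta

last_refresh = datetime(2024, 3, 7, 0, 0)   # nightly ETL finished at midnight
now = datetime(2024, 3, 7, 9, 30)           # a report is run at 9:30 a.m.

latency = now - last_refresh                # how stale the data are

requirements = {
    "CFO accounts receivable":    timedelta(days=1),     # daily is fine
    "Reserves calculation":       timedelta(days=31),    # monthly is fine
    "ED minute-to-minute census": timedelta(minutes=5),  # needs a transactional system
}

for decision, max_age in requirements.items():
    ok = latency <= max_age
    print(f"{decision}: {'OK' if ok else 'too stale - use the transactional system'}")
```

The first two decisions tolerate a nightly refresh; the census question does not, which is why it belongs to a real-time transactional system rather than a warehouse.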

  
UNDERSTANDING HOW DECISION SUPPORT, BI, AND PERFORMANCE MANAGEMENT FIT
In hospitals and healthcare organizations there are a variety of systems. It is important to understand
these systems and how they are used.
The major systems are:

 Transactional processing systems (TPSs): A TPS is used for operational management. TPSs tend to be real-time systems, such as ADT; for example, a transactional system would have minute-to-minute census information.
 Decision Support Systems (DSSs): These systems are not transactional, which means they are not real time. They are used for tactical or strategic management decisions. Data
marts and data warehouses receive data feeds from transactional systems or operational
data stores (ODS) at pre-defined intervals. The frequency of data feeds to a data warehouse
or a data mart often depends on the decisions that are being made and how current the
data need to be (data latency). For example, a CFO may need to see accounts receivable
information daily, while an analytics analysis to calculate financial reserves may only need to
be run on a monthly basis. Another example may be a chief quality officer who wants
information on the occurrence of hospital acquired conditions (HACs) on a daily dashboard.
Business Intelligence systems are a subset of DSS.
 Management Information Systems (MIS): An MIS consists of systems that process transactional patient data. It includes document management software, health information management systems (HIS), microfiche viewing systems, and telephone systems.
 Data Visualization Tools: Dashboards and scorecards are examples of how data and
information can be displayed. It is important to determine the type of dashboard, such as
financial, clinical outcomes, departmental specific, patient flow, service line, etc. When
designing dashboards it is important to select the best format.

◦ The format of the dashboard can be pie charts, graphs, tables, bubble charts, scatter diagrams, run
charts (where information is displayed over time), and control charts (similar to run charts with the
addition of upper and lower thresholds). Clinical informatics professionals often need to counsel and
discuss with end users which visual application viewing format would work best. Presenting several
prototypes and discussing the advantages and disadvantages of each can be valuable. Some BI experts dislike using pie charts because of their limitations; indeed, the pie chart is the least effective visual analytical display (Few, 2007). Vendors of data warehouses, data marts, and analytical applications offer many visualization options to choose from.
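As one concrete illustration of a control chart's components, the sketch below computes a center line and upper/lower limits using the common mean ± 3 standard deviations convention. The weekly compliance values are invented sample data, and real dashboards may set limits differently (for example, at fixed policy thresholds).

```python
# Minimal control-chart calculation: a run chart's values plus upper and
# lower control limits at mean +/- 3 standard deviations (the common
# Shewhart convention). The weekly percentages are invented sample data.
from statistics import mean, stdev

weekly_compliance = [82, 85, 80, 84, 83, 86, 81, 79, 84, 85]

center = mean(weekly_compliance)
sigma = stdev(weekly_compliance)
ucl = center + 3 * sigma   # upper control limit
lcl = center - 3 * sigma   # lower control limit

# Points outside the limits signal special-cause variation worth investigating
out_of_control = [v for v in weekly_compliance if v > ucl or v < lcl]
print(f"center={center:.1f}, UCL={ucl:.1f}, LCL={lcl:.1f}, outliers={out_of_control}")
```

Plotted over time, the same values form the run chart; adding the two limit lines turns it into a control chart, matching the distinction drawn above.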
HEALTHCARE ANALYTICS AND THE RAPID-CYCLE IMPROVEMENT PROCESS
Business Performance Management (BPM), also called performance management, focuses more on
strategy and objectives. Business Intelligence (BI) is an important element of performance
management. Think of BI as the technology and performance management as the additional layer that
enables planning, monitoring, and analyzing. Using the core BI applications and tools performance
management extends the monitoring and measurement for tracking a variety of metrics such as core
measures, accountable care organization (ACO) metrics, value-based purchasing (VBP), hospital-
acquired conditions (HACs), and a long list of other quality and outcomes metrics. Each of the metrics
would be described as a key performance indicator (KPI). An example of a KPI would be having 100%
of vital signs recorded in an EHR. The goal of performance management is to change behavior. Changing behavior and culture is aided by rapid-cycle improvement processes, which are a mechanism to promote rapid behavioral changes leading to cultural changes.
The most common rapid-cycle improvement processes used in hospitals today are:

 Plan-Do-Check-Act (PDCA)


 Six Sigma (Define, Measure, Analyze, Improve, and Control)
 Lean (Voice of the customer, value stream mapping, determination of the value-added and
nonvalue-added steps, create or revise processes)
 Root Cause Analysis (RCA), (Retrospective analysis of a problem or event—asks a series of
why questions to determine the root cause)
 Failure Mode Effects Analysis (FMEA), (Process maps current state, identify process failures, estimate effect of identified failures, identify top reasons for failure, and take risk mitigation steps)

Rapid-cycle improvement processes are combined with visual analytical tools, such as dashboards, to allow teams to track progress toward goals over time. Dashboards are an easy means to determine whether an action is working and to spark additional questions, which the team can respond to as they work toward achieving the KPIs. A use case scenario follows, which integrates the concepts of business intelligence and performance management monitoring to ensure that the Meaningful Use smoking status metric is achieved.

USE CASE SCENARIO: PERFORMANCE MANAGEMENT FOR MEANINGFUL USE SMOKING STATUS
There are several steps involved in performance management efforts. This vignette will break down
the process into steps. Remember that performance management is essentially a strategic process that
is intended to achieve results, produce outcomes, and change behavior.
The steps are:

 Strategic planning situation analysis: To begin our Meaningful Use smoking status
performance management project we start by capturing our baseline. At hospital “A,” of a
multi-hospital system, smoking status is recorded on 20% of the patients. We find that
smoking status resides in the EHR under the “social history” category. In order to comply
with Stage 1 of meaningful use (MU) the threshold is to have smoking status recorded on
50% of patients who are 13 years of age or older. The problem with compliance at hospital "A" is that the nurses are charting smoking status in the wrong cell on their EHR data entry screen. The nurses need to enter it under social history or it will not be captured in the vendor's operational data store (ODS), from which the EDW is programmed to read and extract these data.
 Planning horizon: We have one month to the start of the 90-day reporting period. We have
to act quickly to resolve the issue of nurses not entering data in the appropriate field to
capture that smoking status was evaluated and charted.
 Environmental scan: Our environmental scan indicates huge variability in hospital
compliance. Using our dashboards we see that hospital “A” is at 20% and hospital “B” is at
90%. This is a strengths, weaknesses, opportunities, threats (SWOT) analysis. We find huge
variability with nurse recording of smoking status and other MU measures. Naturally one
asks why the nurses at hospital “A” are at 20% compliance while the nurses at hospital “B”
are at 90% for the very same smoking status recording requirement. Are the nurses at
hospital “B” much better nurses? Why the variance? This is where research and observation
come into play. Upon investigation it is found that the nurses at hospital "A" are focused on smoking cessation rather than smoking status. These are two different measures: one records whether the patient smokes, while the other is focused on the outcome of ceasing the smoking behavior. Smoking cessation is an older requirement that patients receive smoking cessation education. The nurses at hospital "A" provide every patient with smoking
cessation education. Since they give all patients (whether they smoke or not) cessation education, they believe it is redundant to record smoking status when they have recorded that they educated the patient on smoking; that is, the nurses felt it was not necessary to complete the smoking status information. The environmental scan indicated that education was required to change behavior.
 Gap Analysis: Since hospital A's recording of smoking status is at 20% and they need to be over 50%, the gap they need to overcome is 30 percentage points.
 Strategic vision: The strategic vision is that all hospitals in the system will have 100%
compliance with the Meaningful Use thresholds and will in fact exceed them.
 Business strategy: The strategies will be education and monitoring to ensure compliance.
 Identify goals and objectives: Create an education plan, meet with hospital CNO, hospital
clinical informatics professionals, nurse managers, and compliance officers and review each
of the MU measures. Inform them of the importance of having nurses enter data
appropriately and show where in the EHR product build the data entry cells are located.
Explain the monitoring and dashboard systems and explain who will monitor and how often.
 Communication: The team agreed that education and communication were the key elements. All nurses needed to be familiar with meaningful use and with how and where they entered data in the EHR. The patient care nurses also needed to see and understand the key performance indicators (the threshold attained for each MU measure) and the drill-down capabilities.
 Rewards and incentives: It is advantageous when pay and performance are linked. At this multi-hospital system the executives and the CNO have a business performance scorecard. Achieving the MU thresholds and receiving the CMS EHR incentive funds is tied to their bonuses.
 Focus: The MU threshold compliance dashboard is viewed at all levels—corporate, region, hospital, compliance office, and individual nursing unit.
 Resources: Each hospital selects one individual who is ultimately responsible. Most students
have experienced situations where a committee is responsible and oftentimes the result is
that no one is responsible. At these hospitals the chief operating officer (COO) assumed
ultimate responsibility.
 Key Performance Indicators (KPIs): The KPIs are the MU 14 Core, 10 Menu, and 15 quality measures. The goal is that the thresholds set by the hospitals, which in most cases exceed the CMS thresholds, are met. For example, CMS has a 50% threshold for smoking status recording, while this hospital system established an internal threshold of 80% that it expects its hospitals to meet.
 Dashboard: The hospital system opted for a dashboard that displays a progress bar in red or green to show whether the hospital is below (red) or above (green) the CMS threshold. The progress bar allows anyone viewing it to see progress toward the goal; the minimum threshold is the line of demarcation between displaying red or green. The dashboard also enables users to click on one of the measures and see a graphical representation below it. The hospital used a control chart as its visual tool. Control charts have upper and lower limits: the upper limit is a line showing the hospital's self-imposed threshold, and the lower line is the CMS minimum threshold. The control chart also serves as a run chart. By the end of the reporting period, hospital "A" had achieved 100% compliance with the hospital system's standard for recording smoking status.
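The red/green progress-bar logic in the dashboard step above is simple enough to sketch. The CMS 50% and internal 80% thresholds come from the text; the function name and hospital values are otherwise hypothetical.

```python
# Sketch of a threshold-based progress bar: a measure displays green when at
# or above the CMS minimum threshold and red when below it. The exact
# treatment of a value equal to the threshold is an assumption here.

CMS_THRESHOLD = 50      # lower line of the control chart (percent)
INTERNAL_TARGET = 80    # hospital system's self-imposed upper line (percent)

def bar_color(compliance_pct, threshold=CMS_THRESHOLD):
    """Red below the minimum threshold, green at or above it."""
    return "green" if compliance_pct >= threshold else "red"

hospitals = {"A": 20, "B": 90}
for name, pct in hospitals.items():
    gap_to_target = max(0, INTERNAL_TARGET - pct)
    print(name, bar_color(pct), f"gap to internal target: {gap_to_target} points")
```

Hospital "A" renders red with a 60-point gap to the internal target, while hospital "B" renders green, mirroring the environmental-scan findings earlier in the use case.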

Use Case 1. Performance Management Example for Meaningful Use Smoking Status Metric
Applying Analytics in the Workplace.
Meaningful use, accountable care organizations (ACOs), and value-based purchasing (VBP) are areas where healthcare has a need to monitor and measure performance against criteria. When the Meaningful Use final rules were published by CMS in July 2010, they put pressure on eligible hospitals and critical access hospitals, as well as eligible professionals (EPs), to have an EHR capable of meeting the 14 Core Measures, five of the 10 Menu Measures, and the 15 quality measures. Eligible hospitals and eligible professionals quickly discovered that they needed computerized support to assist with monitoring and reporting the measures—especially for the 90-day continuous reporting period. Meaningful Use can be seen as a paradigm shift in clinical decision-making. For many years, decision-making was more of an art than a science. Decisions were based on gut reactions, intuition, or experience (Turban et al., 2011).
This type of decision-making was often the best possible option available to executives because the
analytical data upon which to make decisions were often lacking or marginal. Meaningful Use
reporting required strict discipline in entering data correctly and reporting and analyzing data.
Healthcare organizations have recognized that technological tools are necessary to support highly
complex care delivery and ensure adherence to best practices. Knowledge management systems
(KMS) support decision-making by providing access to knowledge assets and sharing them throughout
the organization. Intellectual assets are also known as intellectual capital. The use of KMS contributes
to high efficiency and fosters a culture of collaboration. KMS include groupware systems,
communication tools, collaborative management tools, content management, and document
management. Use Case—Clinical example: Intermountain Health Care's Clinical Knowledge Repository (CKR) uses a validation architecture to ensure documents are valid. The CKR provides a central location to store clinical documents.
The CKR contains over 8000 unique documents, ranging from computerized provider order entry (CPOE) order sets to clinical protocols. Most of the content in the CKR is represented in eXtensible Markup
Language (XML). The authoring tool used to create CKR XML documents is from a third-party XML
editor. The authoring tool allows users to create and edit XML documents without requiring them to
have knowledge of XML. The XML editor is capable of performing standard XML validations, including
verification that the document is well formed and that it is a valid instance of the appropriate XML
Schema. In order for the CKR XML documents to operate within their target clinical systems,
extensive validation of the content and structure of the XML documents becomes mandatory. A
knowledge management validation tool is used. It alerts document authors if they need to fix a
document. The result of having user-friendly error messages is that less than 1% of all documents in Intermountain's knowledge repository are invalid (Hanna, Rocha, & Hulse, 2005).
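A minimal sketch of this kind of validation, using only the Python standard library: it checks well-formedness and the presence of required elements, returning a friendly message when a document fails. The element names and messages are hypothetical, not Intermountain's actual schema, and full XML Schema validation would require an additional library such as lxml.

```python
# Hedged sketch of clinical-document validation: (1) is the document
# well-formed XML, and (2) does it contain the required elements?
# REQUIRED and the sample documents are invented for illustration.
import xml.etree.ElementTree as ET

REQUIRED = ["title", "author", "body"]

def validate(doc_xml):
    try:
        root = ET.fromstring(doc_xml)   # raises ParseError if not well-formed
    except ET.ParseError as exc:
        return f"Invalid: document is not well-formed XML ({exc})"
    missing = [tag for tag in REQUIRED if root.find(tag) is None]
    if missing:
        return "Invalid: missing required element(s): " + ", ".join(missing)
    return "Valid"

good = "<orderset><title>Sepsis bundle</title><author>JS</author><body>...</body></orderset>"
bad = "<orderset><title>Sepsis bundle</title></orderset>"
print(validate(good))  # prints "Valid"
print(validate(bad))   # names the missing author and body elements
```

The point of the friendly message, as in the CKR, is that authors can fix a document themselves without knowing XML internals.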
Use Case 2: Use Case Scenario to Ensure Data Are Valid for Knowledge Management
Analytics and Governance.
The failure rate for enterprise data warehouses (EDWs) is estimated to be as high as 70% to 80%
(Kernochan, 2011). Failure is due to a variety of issues. Most importantly the data need to be reliable,
well organized, and designed with the end users’ needs in mind. Organizations that are embarking on
an EDW for the first time usually require expert assistance. Working with an EDW is iterative. It is
truly a process in which you learn as you progress. One way to greatly reduce the risk of failure is to
use an Agile development approach. Agile is a rigorous, disciplined approach to systems and software
development (Douglass, 2012).
Healthcare EDW Readiness Assessment.
Before starting a readiness assessment it is important to ensure that the following attributes are in
place. The lack of these items can be red flags that can put a project in jeopardy.
Success factors for an EDW are having:

 Clearly stated mission and vision as well as objectives, such as KPIs. That is, everyone
knows and can articulate the need and value of the EDW.
 Highly stable organization with low volatility (turnover) of key leaders.

Because a long-term commitment to an EDW is paramount to success, if the chief executive officer
(CEO), CFO, or other senior executive positions are in a state of flux there is greater risk to the EDW
project’s success. Indeed, when senior leaders change, often the organization’s priorities change with
them. An EDW is a major commitment and requires stable, solid, and consistent support over the
duration of the project.

 Solid, unified, and unwavering support and leadership from the senior executives (i.e., the
“C-suite”—CEO, CNO, CFO, CMO, etc.). If there are only occasional pockets of support, the
nurse informatics professional is advised to work closely with the C-suite stakeholders to
build cohesive support before proceeding. If the organization is prone to “turf wars” where
different departments operate in silos and compete with each other for scarce dollars for
project funding the risk of failure increases. In this situation the nurse informatics
professional may want to create a value-realization chart that identifies how the EDW can
benefit each of the various departments. When each department clearly understands and
agrees with how the EDW is in their best interest the EDW has a greater chance of a high
degree of success.
 Strong and reliable source systems such that data integrity issues are minimized. Most data
sources have some integrity issues. The IT term, “garbage in, garbage out” (GIGO) refers to
introducing messy data sources. When building an EDW, if the data integrity of any one of the core systems is questionable, the builders will identify the data problems and provide data to assist the upstream system administrators in improving the data quality. Oftentimes
data integrity issues are unknown prior to the start of the EDW building process.
 Appropriate short- and long-term budget allocation. An EDW is an investment. Once the
EDW is established the true value is in the ongoing use of the data as they are transformed
to information, knowledge, and intelligence to make decisions and change behavior as part
of the performance management process. This type of work requires teams and the
appropriate funding for staffing teams. If the EDW budget is not protected and allocated over the long term, there will be short-term gains that may not be able to be sustained.
 Appropriately skilled and dedicated staff who are able to remain on the project. When
corners are cut on staffing, and when important resources are pulled off the EDW project and assigned to other work efforts, continuity suffers and the project can be delayed. Too many
missed deadlines and delays contribute to a perception of a troubled project. It is sometimes
hard to recover from perceptions that the project is not going well.
 Clear communication of the expected functionality at each stage of the project. Setting
clear expectations is a critical success factor. Clinicians who have been involved in the rollout of an EHR usually understand this principle. It is important to strive for quick wins that impact the strategic or clinical quality priorities. Once success is declared, chances are end
users see the value and eagerly clamor for additional KPIs that support their specific
departments. It is also important to avoid “scope creep.” Since EDW projects, by their very
nature, are iterative, it is wise to build in a buffer to support a small amount of changes. In
the governance structure it is advisable to have a change control board (CCB) that can
discuss and approve appropriate changes in scope, deny inappropriate ones, and protect the integrity of the project such that major changes to the scope are limited.
 Cooperative vendors that support the project. The lack of vendor cooperation and using
data as a weapon for commercial gain are inappropriate. Oftentimes vendors believe their
products or platform can compete or approximate the capabilities and functionality of an
EDW. Some of these vendors “hold the data hostage” and deny access to data feeds from
source systems. This makes it difficult to obtain the data extracts that feed the data
warehouse or the data marts. It is wise to ensure that legal language is included in contracts stipulating who owns the data—the vendor or the organization. Uncooperative vendors that impose limitations on the organization's
access to their data sources and block interoperability should not be tolerated.
 Strong clinical and technical collaboration approach versus a technical development effort.
Weekly reporting of tasks, activities, deliverables, and milestones to the appropriate
committees and sub-committees is important. An oversight team of stakeholders needs to
attend the weekly team meetings. The project manager and clinical sponsor should be
hosting these meetings. An Agile development approach to EDW development—where end users are involved throughout the project—is more effective than a traditional "waterfall" approach and contributes to meeting expectations.

Business Intelligence and Analytics Data Governance.


It is important to have a stakeholder group responsible for overseeing the EDW. It is advisable to have
an analytics competency center (ACC), which is also often referred to as a BI competency center
(BICC). The role of the ACC is to champion the BI investments, facilitate data standardization, as well
as coordinate and oversee all BI activities, ensuring that they are aligned and prioritized in accordance
with the overall strategic goals of the organization. Establishing an ACC is a complex undertaking; before launching one, an organization must assess both the challenges, including the current BI state, and its readiness to work as a team in addressing the BI initiatives of the organization.
There are various approaches to creating an ACC, such as centralized, consulting, functional, center-of-excellence, and decentralized structures. The various approaches revolve around how the organization's data analysts are arranged as well as their reporting structure. In a centralized model the
analysts are combined into one department. The consulting model also organizes analysts in one unit, and the various departments that have analytical requests "contract" with the analyst group.
Functional models situate the analysts in specific departments that routinely use their services, yet in
this model consulting services are provided to other departments when required.
Center of excellence models are decentralized and analysts reside in their specific department of
expertise, yet have a dotted-line arrangement to a program office to respond to requests from various
other groups or teams as needed. The decentralized model places analysts in their assigned
departments with no connecting structure to other analysts (Davenport, Harris, & Morrison, 2010).
Several committees are recommended for an ACC:

 Executive Oversight Committee: The executive oversight committee comprises senior
executives who meet regularly to set the strategic analytic direction for the organization.
 ACC Steering Committee: The ACC steering committee monitors the EDW development and
ensures that the BI and analytics requirements are being met. The steering committee
members represent their respective departments and prioritize the KPIs. They evaluate the
various electronic and legacy systems to determine which one is to be “the source of truth.”
That is, often data reside in more than one system. In these situations, the committee must
decide which system is the one that will feed the EDW. This group also decides on
definitions and data lexicons, such as determining a consistent definition of how length of
stay is measured and calculated. They also make decisions and document business rules that
govern the data as well as logic, data consistency, quality, accuracy, data utilization, and
data access. Occasionally disputes can arise because of conflicting priorities. This group
works to resolve conflicts and establish a change control process. Another expectation of
the committee is to track issues and monitor progress toward resolution. Team members
consist of a technical director, an analytics director, analysts, a database administrator,
directors, and subject matter experts from across the organization.
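The length-of-stay example above shows why a single agreed definition matters: two equally common conventions give different answers for the same admission. The sketch below is purely illustrative (the function names, rules, and dates are hypothetical, not from the chapter):

```python
from datetime import datetime

def los_elapsed_days(admit, discharge):
    """Length of stay counted as whole elapsed 24-hour periods."""
    return (discharge - admit).days

def los_midnights(admit, discharge):
    """Length of stay counted as the number of midnights crossed."""
    return (discharge.date() - admit.date()).days

admit = datetime(2023, 3, 1, 23, 30)    # admitted late evening
discharge = datetime(2023, 3, 3, 8, 0)  # discharged early morning

print(los_elapsed_days(admit, discharge))  # 1 (about 32.5 elapsed hours)
print(los_midnights(admit, discharge))     # 2 (two midnights crossed)
```

Unless the steering committee designates one rule as the source of truth, reports fed by different systems will disagree on the same metric.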

In closing, analytics and BI can help organizations achieve the Triple Aim and make care safer. The true
beneficiaries of efforts to become an analytics-focused organization are both individual patients and
populations of patients.

A2 Planning, Design, and Implementation of Information Technology in Complex Healthcare Systems
GENERAL SYSTEMS THEORY
A system can be defined as a set of interacting units or elements that form an integrated whole
intended to perform some function (Skyttner, 2001). An example of a system in healthcare might be
the organized network of providers (nurses, physicians, and support staff), equipment, and materials
necessary to accomplish a specific purpose such as the prevention, diagnosis, and treatment of illness.
General systems theory (GST) provides a general language that ties together various areas in
interdisciplinary communication.
It endeavors toward a universal science that joins together multiple fields with common concepts,
principles, and properties. Although healthcare is a discipline in and of itself, it integrates with many
other systems from fields as diverse as biology, economics, and the physical sciences. Systems can be
closed, isolated, or open. A closed system can only exchange energy across its borders. For example, a
greenhouse can exchange heat (energy) but not physical matter with its environment. Isolated systems
cannot exchange any form of heat, energy, or matter across their borders.
Examples of closed and isolated systems are generally restricted to physical sciences such as physics
and chemistry. An open system is always dependent on its environment for the exchange of matter,
energy, and information. Healthcare systems are considered open and are continuously exchanging
information and resources throughout their many integrating sub-systems. Some fundamental
properties collectively comprise a general systems theory of open systems.
Open Systems
All open systems are goal seeking. Goals may be as fundamental as survival and reproduction (living
systems) to optimizing the flow of information across computer networks. To achieve goals, systems
must transform inputs into outputs. An input might include the entry of information into a computer
where it is transformed into a series of binary digits (outputs).
These digits can then be routed across networks to end users more efficiently than other forms of
communication. As simple systems interact, they are synthesized into a hierarchy of increasingly
complex systems, from sub-atomic particles to entire civilizations (Skyttner, 2001). Levels within the
hierarchy have novel characteristics that apply universally upward to more complex levels but not
downward to simpler levels. In other words, at lower or less complex levels, systems from different
fields share some characteristics in common.
For example, both computer and biological systems communicate information by encoding it, either
through computer programs or via the genome, respectively. It is the interrelationships and
interdependencies between and across levels in hierarchies that create the concept of a holistic system
separate and distinct from its individual components. As commonly noted, a system is greater than
simply the sum of its individual parts.
Complex Systems
System behavior can be related to the concept of complexity. As systems transition
from simple to complex it is important for nurse informaticists to understand that system behavior also
changes. This is especially important in the management of data and information for clinical and
administrative decision-making. As healthcare systems become increasingly complex, it becomes
progressively more difficult to predict how provider workflow will be affected by new
information technology. This is especially important for those individuals responsible for planning,
designing, and implementing information systems in the healthcare environment of today. The
evolution of information systems in healthcare is replete with failed implementations, cost overruns,
and dissatisfied providers.
It is important to understand, however, that healthcare systems of today are some of the most
complex in the world. The unpredictable nature of complex healthcare systems results from both
structural and temporal factors. The structure of complex systems is composed of numerous elements
connected through a rich social network. In hospitals, those interactive elements include both internal
entities such as patients, nurses, physicians, technicians, and external entities, such as other hospitals,
insurance payers, and regulatory agencies. The way information flows through this complex web of
channels depends on the technology used, organizational design, and the nature of tasks and
relationships (Clancy & Delaney, 2005). As systems evolve from simple to complex systems a hierarchy
of levels emerges within the organization. From a theoretical standpoint, complex systems form a
“fuzzy,” tiered structure of macro-, intermediary, and microsystems (Clancy & Delaney, 2005). In
hospital environments, for example, nursing units can be understood as micro-systems within the
health system. When nursing units are combined into divisions (e.g., critical care, medical-surgical, or
maternal-child), the nursing units’ internal processes expand beyond their boundaries (for example, the
medication usage process) to create intermediary or meso-systems.
Major clinical and administrative systems emerge at the macro-level through the aggregation of
interactions occurring at lower levels. At the micro-system level, the relationship between cause and
effect is somewhat predictable. However, as the organizational structure expands to include meso-
and macro-system levels, causal relationships between stimulus and response become blurred. With
increasing levels of complexity, the rich network of interactions between nurses, physicians,
pharmacists, and other providers at multiple levels within the organization quickly expands the
variability and range of potential states a system might exhibit. This is the driving force behind the
unpredictability in complex system behavior.
Complex Adaptive Systems
Complex adaptive systems (CAS) are special cases of complex systems; the key difference being that a
CAS can learn and adapt over time (Clancy, Effken, & Pesut, 2008). From biologic to man-made
systems, CASs are ubiquitous. The human body, composed of its many subsystems (cardiovascular,
respiratory, neurologic), is a CAS and continuously adapts to short- and long-term changes in the
environment. The same principles can be applied to social organizations such as clinics or hospitals.
Organizational learning is a form of adaptation and it has the capacity to change culture. New
reporting relationships can alter the structure of a social network and act as the catalyst for adaptation
to environmental change. For example, the movement from hierarchical organizational structures to
quasi-networked reporting relationships can improve communication both vertically and horizontally.
Complex adaptive systems exhibit specific characteristics that differ from those of simple systems,
and the terminology used to define them may be unfamiliar to healthcare providers. Each term in the table
builds on the previous term to demonstrate how system behavior becomes unpredictable. As
healthcare systems become increasingly complex, those individuals challenged with planning, designing,
and implementing information technology must be knowledgeable of complex adaptive system
principles as well as the tools and methods to analyze their behavior.

IT IS ALL ABOUT FLOW: FAST AND LONG, SHORT AND SLOW
A key concept to understand is that all systems, whether they are natural (rivers, trees, weather) or
man-made (stock markets, the Internet, health systems), have currents that flow through them.
Oxygen and blood flow through animate systems such as trees or humans while water and electricity
flow in currents through inanimate systems such as rivers or lightning. Patients flow through
hospitals and clinics while information flows through computer networks and electronic health
records. Without flow, a system cannot exist; to survive, it must evolve toward designs that improve
the flow of the currents passing through it.
In nature, the branching pattern in trees is not accidental. Rather the configuration of branches is the
result of natural selection and the search for optimal designs to transport water and oxygen from the
ground back to the atmosphere. This treelike branching pattern is so efficient that it is also seen in
other structures such as river basins, the human respiratory system, computer networks, and road
systems. Just as water flows fast for long distances through wide river channels, digitized information
flows fast for long distances through large cable networks connecting institutions. As rivers fan out to
create deltas, smaller channels form and flow slows and travels a shorter distance. And as computer
networks branch out to users within organizations, the flow of digital information also slows and
travels shorter distances as it supplies multiple users.
To optimize and maintain flow, fast and long flow must equal short and slow flow. In natural systems
such as rivers, channels that are slow and short make up about 80% of all channels,
while the remaining 20% are fast and long. And this ratio shows up not only in rivers but in tree
branches, the human circulatory system, and computer networks! Although flow is faster in large river
channels than in smaller tributaries, the total flow is maintained by the creation of numerous
tributaries that as a whole are equal to the flow of the larger channels (Bejan & Zane, 2012). This
fundamental concept can also be applied to the flow of electronic data and the design of information
systems. When information is delayed, whether as a result of poorly designed systems or cumbersome
processes and policies, the cause is usually a mismatch between “short and slow” and “fast and long”
information flow.
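The balance described above, where flow through the large, fast channel must be matched by the combined capacity of the many short, slow channels, can be sketched as a toy calculation (all capacities below are illustrative, not measured values):

```python
# Toy flow-conservation check for a branching network.
# A single "fast and long" trunk feeds several "short and slow" branches;
# in a balanced design the branch capacities sum to the trunk capacity.

trunk_capacity = 100.0  # e.g., messages per second on the backbone link

# Many slower branches, each carrying a small share (illustrative values)
branch_capacities = [20.0, 20.0, 15.0, 15.0, 10.0, 10.0, 5.0, 5.0]

total_branch = sum(branch_capacities)
print(total_branch)  # 100.0: the branches together match the trunk

# A mismatch means queues form at the junction
bottlenecked = [10.0, 10.0, 10.0, 10.0]  # branches drain only 40 per second
backlog_rate = trunk_capacity - sum(bottlenecked)
print(backlog_rate)  # 60.0 units per second accumulate as a queue
```

The second case is the "orders stacking up" situation described below: the trunk delivers faster than the branches can drain, so a queue grows at the junction.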
If it is overly time consuming to enter provider orders into an EHR (short and slow), those orders
are delayed from being communicated in a timely manner through the health system's computer network
to the pharmacy (fast and long). As a result, orders stack up (develop queues) and delay the flow of
information. Whenever delays occur, nurse informaticists should look for imbalances between “short
and slow” and “fast and long” information flow.
INFORMATION SYSTEM PLANNING
The inter-relatedness of sub-systems characteristic of complex healthcare processes requires
participation by many different stakeholders in the planning cycle. In complex systems, outputs from
one process often become inputs to many others. A new admission on a nursing unit (input) can
generate orders (output) to pharmacy, radiology, respiratory care, as well as many other departments.
As various levels of care (clinic, hospital, long-term care) become increasingly connected,
interoperability of computer systems becomes crucial. In addition, as clinical and administrative areas
become more specialized, it is essential that domain experts participate in the planning process.
Wicked Problems
Problems encountered in complex systems are commonly described as “wicked.” A
wicked problem is difficult or impossible to solve because of incomplete, contradictory, and changing
requirements that are often difficult to recognize (Rittel & Webber, 1973).
Wicked problems are:

 Very difficult to define or formulate


 Not described as true or false, but as better or worse
 Lacking an enumerable (or exhaustively describable) set of alternative solutions
 Inaccessible to trial and error; solutions are a “one shot” deal
 Unique and often a symptom of another problem

Provider order entry (POE) systems often exhibit wicked problem behavior because stakeholders
have incomplete, contradictory, and changing requirements.

For example, although POE systems have been shown to reduce cycle time for the entire medication
management process on a global level, individual providers may spend more time locally, having to
enter orders via a keyboard. Here global and local benefits contradict each other. In addition,
commercial vendors typically do not allow “test driving” their application before purchase (no trial and error).
Because of the enormous expense associated in implementation of POE applications, once a system
has been installed it is very difficult and costly to remove it (a one shot deal). Finally, because practice
patterns vary, each provider wants the system customized to their workflow (each problem is unique).
But because there are many providers, the number of alternative solutions is innumerable.
Wicked problems cannot be solved by the common rubric of defining the problem, analyzing solutions,
and making a recommendation in sequential steps; the reason being, there is no clear definition of the
problem. By engaging all stakeholders in problem solving, those people most affected participate in the
planning process and a common, agreed approach can be formulated. Of utmost importance is
understanding that different problem solutions require different approaches. For example, bar-coded
medication management is one strategy for reducing medication errors. However, to be effective, this
strategy must be implemented as part of an overall strategy for reducing medical errors. This strategy
might be adopting concepts and principles from high-reliability organizations (HRO). A high-reliability
organization assumes that accidents (or errors) will occur if multiple failures interact within tightly
coupled, complex systems. Much of the research regarding HRO has been fueled by well-known
catastrophes such as the Three Mile Island nuclear incident, the space shuttle Challenger explosion,
and the Cuban Missile Crisis (Weick & Sutcliffe, 2001).
High-Reliability Organizations
Key attributes of HROs include a flexible organizational structure, an emphasis on reliability rather
than efficiency, aligning rewards with appropriate behavior, a perception that risk is always present,
sense making (an understanding of what is happening around you), heedfulness (a mutual
understanding of roles), redundancy (ensuring there is sufficient flex in the system), migrating
decisions (decision-making that migrates to experts), and formal rules and procedures that are explicit.
Information technology such as bar-coded medication administration (BCMA) must be
implemented in the context of an overall strategy to become an HRO. For example, BCMA emphasizes
reliability over efficiency and integrates formal rules and procedures into the medication management
process. Collectively, these features reduce the probability of multiple failures converging
simultaneously.
On the other hand, planning for clinical decision support (CDS) applications may require a different
approach than implementation of BCMA. CDS systems link health observations with health knowledge
to influence health choices by clinicians for improved healthcare (Garg et al., 2005). These applications
cover a broad range of systems from simple allergy alerts to sophisticated algorithms for diagnosing
disease conditions. The cognitive sciences inform and shape the design, development, and assessment
of information systems and CDS technology. The sub-field of medical cognition focuses on
understanding the knowledge structures and mental processes of clinicians during such activities as
decision-making and problem solving (Patel, Arocha, & Kaufman, 2001). Optimizing the capabilities of
CDS systems to allow better decision-making requires an understanding of the structural and
processing patterns in human information processing.
For example, knowledge can be described as conceptual or procedural. Conceptual knowledge refers
to a clinician’s understanding of specific concepts within a domain while procedural knowledge is the
“how to” of an activity. Conceptual knowledge is learned through mindful engagement while
procedural knowledge is developed through deliberate practice. If CDS planners are not careful, they
may inadvertently design a system that transforms a routine task such as checking a lab value
(procedural knowledge) into a cumbersome series of computer entries. If the clinician is simultaneously
processing conceptual knowledge (problem solving and decision-making) and complex procedural
tasks, it will place an unnecessary burden on working memory and create frustration.
In summary, the
frequent occurrence of wicked problems in complex healthcare systems highlights the challenges
faced by information system planners today. Successful planning for the introduction of information
technology requires participation by a diverse group of stakeholders and experts. There is no “cookie
cutter” solution in system planning. Each application must be aligned with an overall organizational
strategy which drives the implementation approach.
INFORMATION SYSTEM DESIGN
The characteristic behavior of complex systems and the emergence of wicked problems have
prompted system planners to search for new methods of system design and implementation. The
introduction of new information technology in healthcare organizations involves the integration of
both new applications and incumbent legacy systems. For example, a new POE application will need to
interface with existing pharmacy, radiology, and lab systems. Although an initial “starter set of orders”
usually accompanies the POE application, the task of interfacing it with appropriate departmental
systems, creating new order sets, and developing CDS generally falls to an internal implementation
team with vendor support. Over time, this development team will design and build a POE system that
is much different than the original application.
Structured Design
Historically, design and implementation of healthcare information systems relied heavily on structured
methods. The most common structured method is the systems development life cycle
(SDLC). The SDLC acts as a framework for software development and for implementation
and testing of the system. Table 37.3 presents the broad steps in the SDLC (Kendall & Kendall, 2006).
An SDLC approach prescribes the entire design, testing, and implementation of new software
applications as one project with multiple sub-projects. Project deadlines can extend over many months
and in some cases years. The method encourages the use of standardization (for example,
programming tools, software languages, data dictionaries, data flow diagrams, and so forth). Extensive
data gathering (interviews, questionnaires, observations, flowcharting) occurs before the start of the
project in an effort to predict overall system behavior after full implementation of the application.
Agile Design
As healthcare systems have become increasingly complex, the SDLC approach has come under fire as
being overly rigid. Behavior, characteristic of complex systems (sensitivity to initial conditions, non-
linearity, and wicked problems), is often unpredictable, especially after the introduction of information
technology in clinical workflow. Equally challenging is the difficulty of trialing the impact of new
information systems before having to actually purchase them. Site visits to observe a successful
application in one facility are no guarantee of success in another. Many healthcare organizations have
spent countless millions in failed system implementations.
To overcome these problems, healthcare organizations are turning to “agile” methods for design and
implementation of information systems (Kendall & Kendall, 2006). Agile design is less prescriptive than
structured methods and allows for frequent trial and error. Rather than mapping the entire project
plan up front, agile methods clearly define future milestones but focus on short-term successes. To do
so the implementation team and software developers collaboratively evaluate and prioritize the
sequence of tasks to implement first. Priority is assigned to tasks that can be accomplished within a
short time frame, with a minimum of cost while maintaining high quality standards.
This “timeboxing” of projects forces the team to search for simple, but elegant solutions. Developers
often work in pairs to cross-check each other’s work and distribute the workload. Communication is
free flowing between developers and the implementation team. Once a solution is developed it is
rapidly tested in the field and then continuously modified and improved. There is a philosophy that the
interval between testing and feedback be as close as possible to sustain momentum in the project.
Agile design is one example of how to manage information technology projects in complex systems.
Rather than trying to progress along a rigid project schedule, simple elegant solutions are rapidly
designed, coded, and tested on a continuous basis. Over time, layer upon layer of elegant solutions
evolve into a highly integrated, complex information system. Ironically, this is the same process that
many natural systems have used to evolve into their current state.
THE SYSTEMS ANALYST TOOLBOX
Whether structured or agile design methods are used during project implementation, systems analysts
rely heavily on various forms of modeling software. Modeling applications visually display the
interaction and flow of entities (patients, providers, information) before and after implementation of
information technology. Models link data dictionaries with clinical and administrative workflow
through logical and physical data flow diagrams. Models can be connected through a hierarchy of
parent and child diagrams to analyze system behavior locally (at the user level) and globally
(management reports). There are many commercial modeling tools available.

Discrete Event Simulation
Two software applications that are well suited for the analysis of complex systems are discrete event
simulation (DES) and network analysis (NA). DES software allows analysts to flowchart
processes on a computer and then simulate entities (people, patients, information) as they move
through individual steps. Simulation allows analysts to quickly communicate the flow of entities within
a process and compare differences in workflow after the introduction of new technology.
DES applications contain statistical packages that allow analysts to fit empirical data (process times,
inter-arrival rates) to theoretical probability distributions to create life-like models of real systems. Pre-
and post-implementation models can be compared for differences in cycle time, queuing, resource
consumption, cost, and complexity. DES is ideal for agile design projects because processes can be
quickly modeled and analyzed prior to testing. Unanticipated bottlenecks, design problems, and bugs
can be solved ahead of time to expedite the agile design process.
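At its core, a discrete event simulation advances a clock and applies a rule for when each entity can start its next step. The sketch below uses purely illustrative arrival and verification times (rather than fitted probability distributions) to simulate orders queuing at a single pharmacist and report the waiting times:

```python
# Minimal single-server discrete event simulation of order verification.
# Arrival and service times are illustrative constants, not empirical data.

arrivals = [0.0, 1.0, 2.0, 8.0]   # minutes at which orders arrive
service = [3.0, 3.0, 3.0, 3.0]    # verification time per order

server_free_at = 0.0
waits = []
for arrive, svc in zip(arrivals, service):
    start = max(arrive, server_free_at)  # wait if the pharmacist is busy
    waits.append(start - arrive)         # queuing delay for this order
    server_free_at = start + svc         # server busy until service ends

print(waits)                    # [0.0, 2.0, 4.0, 1.0]
print(sum(waits) / len(waits))  # 1.75 minutes average wait
```

A pre-implementation model would use the observed process times, and a post-implementation model the projected ones; comparing the resulting waits and queues is the kind of analysis described above.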
Network Analysis
Network analysis applications plot the pattern of relationships that exist between entities and the
information that flows between them. Entities are represented as network nodes and can identify
people or things (computers on nursing units, handheld devices, servers). The information that flows
between nodes is represented as a tie and the resulting network pattern can provide analysts with
insights into how data, information, and knowledge move throughout the organization. Network
analysis tools measure information centrality, density, speed, and connectedness and can provide an
overall method for measuring the accessibility of information to providers. NA graphs can uncover
power laws in the distribution of hubs in the organization's network of computers and servers. This can
be beneficial in investigating the robustness of the computer network in the event a key hub crashes.
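Two of the measures named above, degree (which identifies hubs) and density (observed ties as a fraction of all possible ties), can be computed from a simple tie list. The node names and ties below are hypothetical; real NA tools compute many more measures:

```python
# Hypothetical information-flow network: nodes are unit workstations and a
# server, ties are observed message flows between them.
ties = [
    ("ICU-ws", "server-A"), ("MedSurg-ws", "server-A"),
    ("ED-ws", "server-A"), ("server-A", "pharmacy-ws"),
]

nodes = sorted({n for tie in ties for n in tie})
degree = {n: 0 for n in nodes}
for a, b in ties:
    degree[a] += 1
    degree[b] += 1

# Density: observed ties as a fraction of all possible undirected ties
n = len(nodes)
density = len(ties) / (n * (n - 1) / 2)

print(max(degree, key=degree.get))  # 'server-A' is the hub
print(round(density, 2))            # 0.4
```

The highly skewed degree distribution (one node carries every tie) is exactly the kind of hub concentration that makes a network fragile if that node crashes.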
ORGANIZATIONAL FRAGILITY
The potential impact of growing healthcare complexity is enormous. Beyond a certain level,
organizational complexity can decrease both the quality and financial performance of a health system.
In his book, Antifragile, Taleb describes the concept of fragility in complex systems such as healthcare
(Taleb, 2012). As certain systems become increasingly complex, unexpected events can create an
exponentially negative impact. For example, the death of a patient, as the result of a medication error
propagating across multiple inter-dependent hospital departments, has a much greater impact on a
large hospital than on a smaller one. It takes significantly more resources to investigate and make
changes in larger hospitals than smaller ones because of the number of people, departments, types of
equipment, and so forth that are involved.
Thus increasing size (or complexity) promotes “fragile” behavior in health systems: they break easily.
One can get a sense of the impact of size and complexity on the fragility of organizations by scanning
the Centers for Medicare & Medicaid Services' Hospital Consumer Assessment of Healthcare Providers
and Systems (HCAHPS) survey scores (HCAHPS Online). The HCAHPS survey provides a standardized
instrument and data collection methodology for measuring patients' perspectives on hospital care. It
is administered continuously throughout the year to a random sample of patients in hospitals.
From “Communication with Nurses” to “Pain Management,” of the 10 hospital characteristics
publicly reported, scores decrease with an increase in hospital bed size in nearly every category. Just
as an elephant falling five feet suffers significantly more damage than a mouse, large, complex
organizations suffer exponentially greater harm than smaller ones for similar events. In contrast,
certain systems are “anti-fragile” to negative events and actually become stronger as they are stressed.
A good example, among many, is the human body’s immune system, which creates new antibodies
when it is exposed to antigens. The concepts illustrated from these biologic “learning systems” can be
translated into man-made systems through continuous learning organizations. By learning from
stressors (unanticipated events) and then implementing and revising policies and procedures to
prevent them from occurring again, the health system actually becomes stronger. To illustrate the
concept of fragility, the following actual use case is described.
DESIGNING SYSTEMS WITHIN AN INCREASINGLY COMPLEX WORLD—A CASE SCENARIO
In 2005, a large health system located in the Southwest initiated a plan to implement a single vendor,
enterprise wide EHR across its 10 hospitals. The system contained one large academic medical center
of 800 beds and 9 hospitals ranging in size from 25 to 475 beds. These hospitals were distributed
throughout the state, with 50% in large urban areas and the remainder in rural locations.
Although the network of hospitals acted as an integrated delivery system, each hospital implemented
its EHR according to its own process and timeline.
Thus the success of various EHR implementation strategies used by each hospital could be assessed
by measuring specific performance outcomes post-implementation. Complete implementation of the
EHR occurred over a three-year period with one hospital at a time going “live” according to a master
schedule. One year post implementation each hospital then compared pre- and post-performance
metrics to assess how well strategic objectives were met.
Results of the one-year post-implementation study varied considerably between hospitals, and it is beyond the scope of
this chapter to review all of the results. However, it is important to note that one hospital significantly
outperformed the remaining nine hospitals on nearly all performance metrics. This high-performing,
community hospital had a bed size of 475 and utilized an “agile” design strategy as compared to the
other hospitals that followed a more traditional systems development life cycle (SDLC). As previously
described, traditional EHR implementation strategies have followed the SDLC using a structured
design process.
The SDLC typically maps out the complete planning, design, and implementation of projects in
advance. This prescriptive process attempts to anticipate all possible future scenarios and then design
and build the system to accommodate them. Nursing, pharmacy, radiology, and other clinical areas
then work with commercial vendors to create flowcharts that describe their existing workflows and
revise them to align with the EHR’s new workflow. There are a number of disadvantages of the SDLC
that can lead to an increase in organizational fragility in complex systems such as healthcare.
These include:

 It is virtually impossible to anticipate all of the different scenarios that may arise in advance.
 User needs and technology may change during the implementation phase especially if the
timeline is delayed.
 Workflows are often designed in silos (nursing, pharmacy, radiology) and do not
communicate well when integrated in an overall workflow.

Of the nine hospitals that used the SDLC, many of these issues emerged during implementation of the
EHR. For example, because of its size and complexity, the 800-bed medical center had a protracted
EHR implementation that spanned three years. Because the entire system was fully designed in
advance, it was difficult to accommodate changes in technology and new service requirements that
occurred during the three-year implementation.
The system could not readily adapt to the emergence of wireless mobile devices (tablets, phones, and
so forth), consumer informatics (patient portals), and Web-based services. Although clinical
documentation was now electronically stored, it was not standardized in a way that accommodated
reporting requirements for the Affordable Care Act and “meaningful use.” And because workflow
design had been developed through individual departments (nursing, radiology, pharmacy, and so
forth) communication was slow and cumbersome. Provider order entry systems, clinical
documentation, and medication administration all took longer than previous paper systems. The
aforementioned use case is an example of a fragile organizational design. When the system was
stressed, such as in a sudden increase in admissions, it quickly became overwhelmed and “broke.”
Medical errors, system downtime, resources, and cost all exponentially increased rather than
decreased in response to unanticipated events. In contrast to other hospitals, the high-performing
hospital utilized an agile design process when implementing their EHR. One team of clinicians, vendor
consultants, administrators, and hospital programmers developed a project timeline that “loosely”
guided the system over a one-year period. This allowed flexibility for unanticipated events that
occurred along the way and did not tie the team down to a rigid plan. Rather than implementing the
EHR on all units simultaneously, the same team went from unit to unit and designed the system along
the way. The team designed the system in a modular format rather than building the entire system in
advance. The modular format allowed system designers to break workflow up into natural use cases.
For example, one nursing module might be “New Patient Assessment” while another might be “Patient
Discharge.” By using the modular format, designers could make changes to one part of the EHR
without having to go back and make multiple changes to other areas.
As the design team went from one department to the next it used a rapid design and implementation
strategy. The team would meet with key departmental content experts, review the existing workflow,
and then quickly design a new workflow. The new workflow would be rapidly piloted on the unit over
a one- to two-week period while continuously making changes based on feedback. This process
allowed the team to build upon the success of previous units and make recommendations on best
practices. In other words, the team was continuously learning from their mistakes and correcting them
as they advanced through the organization. The agile design process used by this hospital is an
example of an “anti-fragile” organization. Stressors or unanticipated events actually strengthened the
system as the design evolved. That is because an organization that uses feedback to continuously
learn, just as in biologic systems, adapts more readily to change and becomes stronger. In high-
performing organizations, the design phase never actually ends. It is continuous and always adapting
and learning from unanticipated events.

A3 The Quality Spectrum in Informatics


DRIVERS OF QUALITY MEASUREMENT USING INFORMATICS
The Medicare and Medicaid EHR Incentive Programs provide financial incentives for continuously
measuring and reporting of clinical quality measures to help ensure that the healthcare delivery system
can deliver effective, safe, efficient, patient-centered, equitable, and timely care (Centers for Medicare
& Medicaid Services, n.d.). The financial incentives are based on Meaningful Use of EHRs. This is
guided by the National Quality Strategy (NQS) (AHRQ, n.d.), adopted by the Department of Health and
Human Services, which focuses on the three aims of better care, better health, and lower costs. Achieving the
NQS requires a national infrastructure for electronic quality measurement. Electronic clinical quality
measures use a wide variety of data gathered as a by-product of patient care delivery. When data
necessary for quality measurement are captured as a by-product of care delivery, and when those data
are easily shared between health information technology (IT) systems, care can be better coordinated,
and is safer, more efficient, and of higher quality. Ideally, the nation will achieve the NQS by taking full
advantage of the EHR to capture high-quality clinical data for quality reporting and performance
improvement. For instance, using a diagnosis from an EHR problem list captured during care is a more
valid source for quality measurement than administrative claims (Tang, Ralston, Arrigotti, Qureshi, &
Graham, 2007).
QUALITY MEASUREMENT
The quality of care provided in healthcare is often defined as “the degree to which health care services
for individuals and populations increase the likelihood of desired health outcomes and are consistent
with current professional knowledge” (Institute of Medicine, 1990). One of the best ways to determine
whether the care provided was effective in improving the desired health outcomes in an
evidence-based manner is through measuring or quantifying the performance of an individual or
organization. Performance measurement in healthcare is not a new concept as many organizations
have been developing and using measures to assess the quality of care for several decades. In recent
years, the focus has evolved from improving the quality of care at the local level to determining and
paying for the care provided based on the level of quality and cost. This evolution has led to increased
rigor in the development and specification of performance measures and leveraging data that are
collected at the point of care. These data can be aggregated easily, making the use of electronic data
the obvious option. Performance of the healthcare provider or institution is analyzed through the
following constructs or types: structure, process, and outcomes (Donabedian, 1980). These constructs
serve as a methodology by which clinical care can be examined to show where gaps in care may be.
Structure  is defined as “the relatively stable characteristics of the providers of care, of the tools and
resources they have at their disposal, and of the physical and organizational setting in which they
work” (Donabedian, 1980, p. 81). Structural measures include nurse staffing ratios, the existence of
systems such as the use of a registry, and other aspects that would be considered necessary
components to ensure appropriate and high-quality care is delivered.
Process  is “a set of activities that go on within and between practitioners and patients” (Donabedian,
1980, p. 79). These measures assess whether an action or step that is needed for patient care is
provided. They are reliant on evidence-based guidelines or research and most often show what
percentage of the time a provider or institution is compliant with an assessment or intervention.
Outcomes are “changes in a patient’s current and future health status that can be attributed to
antecedent health care” (Donabedian, 1980, p. 83).
Outcome measures focus on success, or the lack thereof, in improving a patient’s health status.
These may be either short term (typically 12 months or less) or long term. This type of
measure often requires risk adjustment to account for differences in patient characteristics that are
outside of the control of the entity being measured (National Quality Measures Clearing House, n.d.,
Allowance for patient or population factors section, para. 6). These characteristics vary depending on
the intent of the measure and can include items such as severity of illness or co-morbidities.
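One common way such risk adjustment is carried out is indirect standardization via an observed-to-expected (O/E) ratio. The sketch below is a minimal illustration only; the function name, patient outcomes, and risk-model probabilities are all hypothetical values, not drawn from any published measure.

```python
def risk_adjusted_rate(outcomes, expected_probs, overall_rate):
    """Indirect standardization: scale the observed-to-expected (O/E)
    ratio by the overall population rate. The expected probabilities
    would come from a risk model accounting for severity of illness,
    comorbidities, and similar factors (all values here are illustrative).
    """
    observed = sum(outcomes)        # events that actually occurred
    expected = sum(expected_probs)  # events the risk model predicted
    return (observed / expected) * overall_rate

# A sicker-than-average panel: 3 events observed where the risk model
# expected 4.0, against a 10% overall rate, gives an adjusted rate of 7.5%.
adjusted = risk_adjusted_rate(
    outcomes=[1, 1, 1, 0, 0, 0, 0, 0],
    expected_probs=[0.8, 0.7, 0.9, 0.5, 0.4, 0.3, 0.2, 0.2],
    overall_rate=0.10,
)
print(adjusted)  # ≈ 0.075
```

An O/E ratio below 1.0 indicates fewer events than the risk model predicted, so the entity's adjusted rate falls below the overall rate even if its raw event count looks high.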
In order to develop a clearly defined, precisely specified measure, it is critical that the intent of the
measure is well outlined. For example, defining the intent of the measure as the quality of care
provided to patients with a diagnosis of diabetes would be too broad; rather, if the measure intent was
to determine the number of patients with diabetes whose blood pressure is within acceptable targets,
it lends itself to developing and specifying a measure.
Once the intent of the measure is clearly defined, the different components of the measure should
then be specified.
These components include:

 Initial patient population: defines the total population of interest; this may be focused on
patients with a specific diagnosis or on procedures that are performed, for example.
 Denominator: “The lower part of the fraction used to calculate a rate or ratio defining the
total population of interest for a quality measure” (NQMC, n.d., Denominator section, para.
1); often the same as initial patient population but also may be a subset or more narrow
focus.
 Numerator: “The upper part of the fraction used to calculate a rate or ratio defining the
subset of the population of interest that meets a quality measure's criterion of quality”
(NQMC, n.d., Numerator section, para. 1). This focuses on the structure, process, or
outcome of interest.
 Exclusions (also called exceptions): “Characteristics defined during the delivery of care that
would mean that care specified in the numerator was contraindicated, refused by the
patient, or not possible for some other compelling and particular circumstance of this case”
(NQMC, n.d., Exclusions/exceptions section, para. 1).
Within each of these components, specific definitions should be included such as who is being
measured (patient population, diagnoses, age, gender if applicable, measurement period, or
timeframe), what is being measured (the outcome, process, or structure of interest), and what patients,
treatments, or other aspects are not appropriate to include. Performance measures are usually
calculated as counts (“the number of times the unit of analysis for a measure occurs”), ratios (“a score
that may have a value of zero or greater that is derived by dividing a count of one type of data by a
count of another type of data”), or rates/proportions (“A score derived by dividing the number of cases
that meet a criterion for quality (the numerator) by the number of eligible cases within a given time
frame (the denominator) where the numerator cases are a subset of the denominator cases”) (NQMC,
n.d., Scoring of the Measure section, para. 3–9). Within performance measurement, there is an
increasing push to design measures to be broadly applicable with the ability to drill down into specific
age groups, conditions, etc., as desired. This type of measurement is more feasible using electronic
data.
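As an illustration of how these components combine arithmetically, the sketch below computes a rate/proportion measure from an initial population, denominator, numerator, and exclusions. The function, predicates, and patient records are hypothetical, loosely modeled on the diabetes blood-pressure intent statement described above rather than on any endorsed measure specification.

```python
def performance_rate(initial_population, in_denominator, in_numerator, is_excluded):
    """Compute a rate/proportion measure from its components.

    Each argument after the first is a predicate applied to a patient
    record; all names here are illustrative, not from a real measure spec.
    """
    # Denominator: the subset of the initial population the measure applies
    # to, minus documented exclusions (e.g., contraindication, refusal).
    denominator = [p for p in initial_population
                   if in_denominator(p) and not is_excluded(p)]
    # Numerator: denominator cases that meet the quality criterion.
    numerator = [p for p in denominator if in_numerator(p)]
    return len(numerator) / len(denominator) if denominator else None

# Hypothetical records: patients with diabetes whose blood pressure
# is within target (< 140/90), mirroring the intent statement above.
patients = [
    {"diabetes": True,  "sbp": 128, "dbp": 78, "refused_treatment": False},
    {"diabetes": True,  "sbp": 152, "dbp": 94, "refused_treatment": False},
    {"diabetes": True,  "sbp": 145, "dbp": 90, "refused_treatment": True},
    {"diabetes": False, "sbp": 118, "dbp": 72, "refused_treatment": False},
]
rate = performance_rate(
    patients,
    in_denominator=lambda p: p["diabetes"],
    in_numerator=lambda p: p["sbp"] < 140 and p["dbp"] < 90,
    is_excluded=lambda p: p["refused_treatment"],
)
print(rate)  # 0.5: one of two eligible patients is at goal
```

Note that the excluded patient is removed from the denominator before the rate is calculated, which is why the result is 1 of 2 rather than 1 of 3.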
WHAT DO WE MEASURE?
Performance measures typically focus on one of the following areas:

 Patient experience: measures that capture how care was provided through the eyes of the
patient and/ or caregiver
 Quality of care delivered: measures that capture whether care was received in accordance
with clinical evidence and guidelines; these measures typically look at overuse,
inappropriate use/ unnecessary care, or underuse
 Access: measures that examine whether a service or treatment was provided and available
in a timely manner
 Cost: measures that examine the amount (usually in dollars) spent to provide services,
procedures, treatments, etc.

While most measures look at one aspect of care such as overuse of services, more and more measures
are developed that take an increasingly comprehensive view to reflect how patients truly receive care.
These composites are groupings of two or more measures that include processes and/or outcomes of
care into a single score. For example, a composite of the quality of care for a patient with asthma might
look at whether the patient is receiving the appropriate treatments as well as the number of
emergency department visits he or she may have had over the last year. Composites can be weighted,
where the score of one or more measures is given priority over others, or calculated as all-or-none.
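A minimal sketch of the two composite scoring approaches follows. The component scores, weights, and the asthma framing are assumptions for illustration, not drawn from any endorsed composite measure.

```python
def weighted_composite(scores, weights):
    """Weighted composite: each component measure score (0..1) contributes
    in proportion to its weight (names and values are illustrative)."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def all_or_none_composite(patient_results):
    """All-or-none composite: a patient counts in the numerator only if
    every component measure was met. patient_results is a list of
    per-patient lists of booleans, one boolean per component."""
    passing = sum(1 for results in patient_results if all(results))
    return passing / len(patient_results)

# Hypothetical asthma composite: an appropriate-treatment score of 0.9
# weighted twice as heavily as an ED-visit-avoidance score of 0.7.
print(weighted_composite([0.9, 0.7], [2, 1]))  # ≈ 0.833

# All-or-none over three patients: only those meeting both components count.
print(all_or_none_composite([[True, True], [True, False], [True, True]]))  # ≈ 0.667
```

The all-or-none form is the stricter of the two: a patient who misses any single component drops out of the numerator entirely, which is why it is often favored when the components only deliver benefit together.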
In light of performance measurement moving to better reflect the care a patient receives, developers,
payers, and others continue to explore ways to combine more than one aspect of care. For example,
there is a growing desire to measure the efficiency of the provider or entity by combining the quality
of care received (preferably outcomes) with the resources and the associated cost used to provide that
care. Developers, payers, and others have taken this concept a step further to measure value
(Bankowitz, Bechtel, Corrigan, DeVore, Fisher, & Nelson, 2013). By combining quality (outcomes) and
patient experience over the cost or expenditures to provide that care, it then allows patients,
purchasers, employers, and others to make informed decisions on providers or institutions with whom
they might want to partner or from whom they want to receive care.
WHO DO WE MEASURE?
Quality, cost, and other aspects of care are measured at specific levels of analysis and settings of care.
In order to ensure that a measure can be precisely specified, the measure developer will determine
which individual or entity to be measured. This decision is usually made based on the goal of the
program or activity for which the measure will be used. Levels of analysis for measure specification
include the individual level (e.g., nurse, physician), group (e.g., practice, clinic), entity (e.g. hospital,
health plan) or the population level (e.g., measuring across a community or geographic region).
Understanding what settings of care should be included is critical, particularly when one begins to
specify the measure. Setting for which the measure is applicable is determined based on the evidence
supporting the measure and the intended use of the measure. Settings of care vary from ambulatory
(e.g., clinic, pharmacy, laboratory), inpatient (e.g., hospital, inpatient rehabilitation facility), and other
locations including the patient’s home.
HOW DO WE MEASURE?
Performance measures are derived from a variety of sources but are also related to the level of
analysis and setting of care for which the measure is intended. For example, when measures were
created two decades ago, paper medical records were the primary source of information in hospitals
and practices. Most measures developed for those groups were specified using that data source. For
measures that seek to look at the quality of care provided by a home health agency, the data source
may well be the Outcome and Assessment Information Set (OASIS). There are various data sources
including clinical data (i.e., cost, procedures, pharmacy, lab, imaging), medical records (either paper or
electronic), patient-reported data (i.e., surveys, portals), registry, and others.
Medical records (either paper or electronic) have been considered the gold standard when measuring a
provider’s performance but this thinking is evolving, particularly when the patient’s experience or
perspective is desired, leading to the development of patient-reported data, primarily through surveys.
As patient care and quality activities continue to evolve, the emphasis is now on providing access to
data that are real time, can be tracked longitudinally, and can be communicated across systems and
providers to enhance the coordination of care for patients. This emphasis has driven much of the
adoption and implementation of electronic data sources in healthcare. Given the increasing
implementation of electronic health record systems in practices, hospitals, and other entities,
measures must now be specified using this data source, requiring additional granularity in the
specifications, coding, and other aspects, which are outlined further in this chapter.
WHO DEVELOPS MEASURES, WHY, AND IN WHAT ENVIRONMENTS?
Performance measurement has evolved over the last 10 years. Initial efforts to capture and use data to
define the quality of care provided to patients included groups focused on a providers’ quality of care
such as The Joint Commission’s (TJC) ORYX® program, the National Committee for Quality
Assurance’s Healthcare Effectiveness Data and Information Set (HEDIS®) program, the AMA-
convened Physician Consortium for Performance Improvement (PCPI®) or on a specific condition or
disease like the Diabetes Quality Improvement Program. These programs and associated measures
were developed to enable providers and patients to determine the quality of care provided by
hospitals through the TJC program, by physicians through the PCPI, and by health plans through the
HEDIS program.
Within the nursing community, the American Nurses Association (ANA) established the National
Database of Nursing Quality Indicators to enable comparison of valid and reliable nursing-sensitive
indicators at the national, regional, and state level for comparison down to the unit level (Montalvo &
Dunton, 2007). Many of these measures were developed to enable quality improvement activities at
the point of care— meaning real-time data and feedback to facilitate changes to the care provided to
patients. As the focus shifted to use performance measures for accountability purposes (i.e., pay for
reporting and/or performance, public reporting), the healthcare community identified the need for an
entity to vet and endorse national standards, which then became the National Quality Forum (NQF).
This shift created a focus on measures that were more rigorously specified and shown to be reliable
and valid to enable patients, employers, purchasers, and others to make informed choices on which
entity provided the highest level of care. Performance measures are now actively used in federal
and state programs to enable patients to identify who provides high-quality care and purchasers and
employers to select and partner with them. While these developers and programs were focused on
public reporting and paying for performance, other smaller groups continued to develop measures for
use in national, state, and private programs. Today, medical specialty societies or accreditors are no
longer the primary developers of measures; rather, quality improvement collaboratives serve as a key
contributor to measure development and federal agencies such as the Centers for Medicare and
Medicaid Services (CMS) and the Office of the National Coordinator for Health Information
Technology (ONC) are funding and encouraging groups such as regional collaboratives, and health
systems to develop measures that can be derived from electronic sources (i.e., EHRs, registries) and
are patient centered.
At the end of the day, we cannot improve without knowing where care is lacking. Performance
measurement provides a method by which we can assess where we need to improve our processes
that lead to improved patient outcomes. A goal of measurement is that it be a by-product of care
delivery to the greatest extent possible to enable us to track improvement real time and longitudinally.
By integrating these measures into electronic systems, we will be furthering that goal. The integration
of quality measures within electronic systems requires use of data standards.
WHAT ARE THE QUALITY REPORTING PROGRAMS?
Quality reporting programs have been in existence in healthcare for over a century. Collecting and
reporting quality data have become more organized and focused over the past 50 years with a major
growth in the past 10–20 years. Organizations such as the Centers for Medicare and Medicaid
Services (CMS), The Joint Commission, the American College of Surgeons, other professional
organizations, and state and local quality initiatives have been major players in moving healthcare
quality reporting programs to where we are today. Many point to the Institute of Medicine’s sentinel
report To Err is Human as a turning point for the rapid growth of quality reporting programs since
2000.
Hospitals, physician offices, pharmacies, ambulatory centers, and others that provide healthcare
services to consumers (e.g. laboratories) may be required to report quality data. Reporting programs
have evolved over the past decade but the goal of these programs is to improve patient safety through
the use of electronic health records. Nursing informatics professionals can support these efforts
through data analytics, program knowledge, and a keen understanding of the data collection and
storage in an EHR. Historically, claims data have been used to track and monitor patient services. The
data were used to reimburse providers for care. They also provided a means to track quality of care, as the
information reflected the procedures completed for the patient. For example, if a claim was submitted
for a patient for an influenza vaccine the provider would get reimbursed and the insurer could track
patients who received the preventive service.
A standard format for reporting claims data has made it easier to track quality data. However, claims
data do not tell the full story of the patient’s visit, services provided, and outcomes to adequately
describe the quality of clinical care. Using the example above, many consumers receive their influenza
vaccine at the local pharmacy or grocery store, their church or a community-sponsored flu clinic. Most
flu clinics, if not free, require a nominal cash payment and the service may not be reported to a
provider or an insurer through the claims payment process. Although it may be relatively easy to
report the number of doses administered we may not know the ages, gender, race, etc., of the
population receiving the vaccine.
These data could be captured in an EHR by providers during a patient visit.
Informatics nurses (IN) and informatics nurse specialists (INS) must be aware of quality programs used
at their facility/institution, particularly those programs that rely on clinical data that can be collected
and retrieved from the EHR. INs and INSs must understand the intent of the quality programs, data
requirements, reporting timelines, staff education requirements for data input (data collection) or
timing of data input, querying the data, organizing and interpreting the data, and understanding what
the results mean and how to apply it. The INs and INSs must work with organization staff and senior
management to understand the results and to develop a strategy to improve or maintain the quality of
care based on the quality programs results. The sections below will discuss at a high level a few of the
current federal quality initiatives, the role of The Joint Commission in quality, and the National
Database for Nursing Quality Indicators (NDNQI).
Each program has its own criteria. As EHRs mature and evolve and their use becomes more
widespread, quality program criteria will change. However, the common goal of quality of care to
ensure patient safety will remain steadfast.
Federal Programs to Support Quality Reporting
Since the early 2000s, and in conjunction with the growth of electronic health records in the inpatient
and ambulatory care settings, there has been a movement to collect and report measures using clinical
data. The process to convert was slow until incentive programs started by the federal government
boosted the movement. Included in the American Recovery and Reinvestment Act (ARRA) is the
Health Information Technology for Economic and Clinical Health Act (HITECH). HITECH provides for
incentive payments by the Centers for Medicare & Medicaid Services (CMS) for demonstrating
“meaningful use” of certified EHR technology by eligible professionals, hospitals, and critical access
hospitals.
The Medicare and Medicaid EHR Incentive Program, more commonly known as Meaningful Use (MU),
is the most notable of the current quality reporting programs. The MU program is open to eligible
professionals (EPs) (e.g. individual physicians, nurse practitioners), eligible hospitals (e.g. acute care
facilities but not children’s hospitals), and critical access hospitals (CAH).
Providers must demonstrate meaningful use of a certified electronic health record (EHR) system while
providing patient care. The Medicare incentive program is administered by CMS, while the Medicaid
incentive programs are administered voluntarily by states and territories. The Medicare MU program
for EPs started in 2011 and continues through 2016 with 2014 being the last year for initial
participation in the program. The Medicaid program runs through 2021 with 2016 being the final year
to enroll. Providers with a certified EHR system can enroll and participate in either the Medicare or
Medicaid MU program but not both.
Eligible hospitals can participate in the MU hospital incentive programs. Participation started in FY
2011 with payment incentives continuing until FY 2016. CMS requires that providers participating in
the MU program show they are using their EHR meaningfully. In other words, using an EHR is
positively affecting the care they are providing. Providers show meaningful use by meeting all criteria
established by CMS for each stage of the program.
Meaningful Use had three stages:

 Stage 1: Data capture and sharing data with patients or other healthcare providers
 Stage 2: Advance clinical processes
 Stage 3: Improved outcomes

Providers must satisfy Stage 1 requirements to obtain financial incentives and move to Stage 2. Stage 1
of MU for eligible providers (e.g., individual physicians) requires reporting data for a consecutive
90-day period during a calendar year.

Stage 1 serves as the foundation for Stages 2 and 3. Stage 2 requires providers to submit quality data
for one consecutive year for core measures, menu measures, and clinical quality measures (CQM).
These criteria have been defined through federal regulations. As of the writing of this chapter, CMS,
in conjunction with the Office of the National Coordinator (ONC), has proposed extending the end of
Stage 2 to 2016 and delaying the start of Stage 3. Federal rule will determine the revised start date.
Notice of proposed rulemaking for revisions to Meaningful Use Stage 2 and 3 was issued in Fall 2014.
Hospitals enrolled in MU must also submit data on core and menu objectives as well as CQMs. The
core and menu objectives and CQMs for hospitals vary from the EPs.
For more information on the core and menu objectives it is suggested the reader visit the CMS Web
site (www.cms.gov). During each of these stages, providers must meet specific criteria set forth by the
program. INs and INSs working at facilities enrolled in this program must be aware of the requirements
and the timelines for data submission. For example, if data are being submitted for height and weight,
then the IN/INS should know how the data are entered, verify that end users are entering the data
correctly, and periodically query the EHR during the reporting period to ensure the information is
reportable. Any deviation from what is expected should trigger a discussion of a plan to correct how
the end user is entering the data. A second CMS quality program is the
Physician Quality Reporting System, or PQRS, for eligible professionals.
The PQRS program uses both financial incentive payments and payment adjustments to promote
quality reporting. Information reported is for covered Physician Fee Schedule (PFS) services. Claims
data and registry data are used to report quality measures for this program. PQRS participation is
voluntary and open to individual providers and group practices. Participation in PQRS does not satisfy
enrollment in CMS’s MU program. Because program requirements change, INs and INSs working with
providers enrolled in PQRS are encouraged to follow program requirements and changes on the CMS
Web site (www.cms.gov). The Hospital Inpatient Quality Reporting (Hospital IQR, formerly RHQDAPU)
program is also a CMS-sponsored quality program.
RHQDAPU is a Medicare-run program. Measure data required by this program mirror common
diagnoses seen among hospitalized Medicare patients. The program looks at clinical and claims data. It
was developed with the goal of providing consumers with information about the quality of care at
hospitals. With this comparison consumers can then make more informed decisions about where to
seek care. Consumers can view this information on the Hospital Compare Web site. The Hospital IQR
program reports, and information about how the hospital compares to other facilities, provide an
opportunity to improve clinical care. Measure data collected for the Hospital IQR program can also be
used for other quality reporting programs such as those of the Joint Commission.
The National Database of Nursing Quality Indicators (NDNQI) is a quality improvement program of the
American Nurses Association (ANA). The program focuses on nursing measures. It provides nurses
and their leadership
with data about where nursing is meeting standards and where it needs to improve patient outcomes.
It also provides healthcare facilities with the ability to compare themselves to similar size and type
facilities at the local, regional, and national levels (NDNQI, n.d.).
Unlike other quality reporting programs, the NDNQI uses unit-level data. Information collected and
reported quarterly at the unit level includes, but is not limited to, fall and injury fall rates, nursing hours
per patient day, nursing skill mix, peripheral IV infiltration rate, RN education/certification, nurse
turnover rate, central-line-associated bloodstream infection rate, and pain assessment/intervention/
reassessment cycles completed.
The Joint Commission accredits healthcare organizations including, but not limited to, hospitals,
physician offices, nursing homes, ambulatory health centers, and home care. The Joint Commission’s
ORYX® program, according to its Web site, “…integrates outcomes
and other performance measure data into the accreditation process.” The Joint Commission believes
performance measurement data are crucial to accreditation.
The Joint Commission works in concert with CMS to align measures common to both organizations’
quality programs in an effort to decrease the burden and cost of data collection and reporting for
healthcare organizations. ORYX data are published by the Joint Commission on its Web site and allow
comparison at the state and national levels. In addition to using quality measures to accredit
organizations, the Joint Commission develops quality measures and quality tools. The Healthcare
Effectiveness Data and Information Set (HEDIS®) is a tool providing measure performance data on
care and services based on 80 measures across five domains. According to the National Committee for
Quality Assurance (NCQA), which maintains HEDIS, the information is used by 90% of America’s
health plans as it provides a picture of the care provided.
Examples of measure data collected for HEDIS include medication management for asthma, blood
pressure control, breast cancer screening, and childhood and adolescent immunizations. As part of the
HEDIS life cycle, measures are evaluated at least every three years to determine if they continue to
benefit the program. Measures that are no longer of value are retired. HEDIS data also allow
employers and others who provide healthcare plans to analyze performance of the plans so they may
select the plans deemed best for their employees. The Quality Compass tool, available through the
NCQA Web site, allows employers to access information and makes comparisons of health plans.
HEDIS data are available in the State of Health Care Quality report. This report provides a thorough,
in-depth picture of the nation’s healthcare system. HEDIS data are also used in the “report cards” of
health plans found in the news media.
The Agency for Healthcare Research and Quality (AHRQ), part of the Department of Health and
Human Services (DHHS), states on their Web site: “AHRQ’s mission is to produce evidence to make
healthcare safer, higher quality, more accessible, equitable, and affordable, and to work with the U.S.
Department of Health and Human Services (HHS) and other partners to make sure that the evidence is
understood and used.” To this end, one aspect of AHRQ’s work is in the area of quality. AHRQ
supports many programs to improve quality in the inpatient and ambulatory settings. The AHRQ
Quality Indicators (QIs) program uses hospital inpatient administrative data. Currently, AHRQ QI™
modules include Prevention Quality Indicators, Inpatient Quality Indicators, Patient Safety Indicators,
and Pediatric Quality Indicators. AHRQ also has an extensive portfolio of quality tools to assist
physicians, nurses, and other healthcare providers to improve quality of care. For example, through
AHRQ’s Comprehensive Unit-based Safety Programs (CUSP) hospitals can learn how to reduce the
incidence of catheter-associated urinary tract infections (CAUTI) or central-line-associated
bloodstream infections (CABSI). Reduction in the incidence of both of these infections will improve patient
safety and quality of care. It is suggested that the reader refer to AHRQ’s Web site for current
information on quality programs currently in progress at the Agency. AHRQ provides to the public
multiple quality reports. Among those available on its Web site are the National Healthcare Quality
Report (NHQR) and the National Healthcare Disparities Report (NHDR). The reports, published
annually, “…measure trends in effectiveness of care, patient safety, timeliness of care, patient
centeredness, and efficiency of care” per the AHRQ Web site.
The National Quality Measures Clearinghouse (NQMC) is an AHRQ initiative. NQMC’s mission is to “…
provide practitioners, health care providers, health plans, integrated delivery systems, purchasers and
others an accessible mechanism for obtaining detailed information on quality measures, and to further
their dissemination, implementation, and use in order to inform health care decisions.” As quality
reporting programs shift from reporting claims data to reporting clinical data to meet program
requirements, INs and INSs must be knowledgeable about the local, state, and national programs their
organization participates in. Keeping up with the rapid changes of each program can be a challenge.
Working with your organization’s quality department, checking program Web sites, and participating
in publicly available Webinars and seminars will help keep you informed of the most current program
requirements. As INs and INSs, it may be your responsibility to ensure that the required data
elements are in the EHR, that the data are entered in discrete fields in a timely manner, and that the
data are extracted periodically for review; to understand reporting program deadlines; to interpret
the data; and to recommend to management processes to improve quality of care and patient safety.
WHY ARE STANDARDS IMPORTANT?
Data collected in an EHR are often identified by local, idiosyncratic, and sometimes redundant and/or
ambiguous names rather than unique, well-organized codes from standard code systems. Without
standardized dialog boxes, drop-down lists, or other pre-defined menu options, there is no consistency
in the content captured in free-text fields. As such, comparing entries across patients, providers, or
other systems is exceedingly challenging, and even with sophisticated coding tools and analytics the
interpretation of the underlying content may be inaccurate. Hospitals, clinics, and other healthcare
entities must devote significant resources to document, abstract, and report these mandated
performance measures. It is conservatively estimated that hospitals spend 22.2 minutes per heart
failure case to abstract the data, which in aggregate amount to more than 400,000 person-hours spent
each year by U.S. hospitals (Fonarow, Srikanthan, Costanzo, Cintron, & Lopatin, 2007).
Retrieving data for quality measurement is challenging. The process is mostly retrospective: data are
stored in different sources within the EHR, and different kinds of data do not map to one another. In
essence, humans create the data after analyzing different terms and codes in the EHR. Data and
information standards are needed to solve these problems.
Overview of Standards
Standards can be
defined as any norm, convention, or requirement that is universally agreed upon by participating
entities (Business Dictionary, 2013). Many of us are familiar with standards even though we may not
realize it. Many may remember the days prior to the proliferation of information technology in
banking: one had to plan ahead for an international trip and purchase traveler’s checks. Today, one can
hop on a plane on a whim and be able to purchase dinner in Kiev on a credit card and withdraw cash
from an ATM with a debit card—doing so securely and being assured a fair exchange rate. With
standards that are applied across an industry, for instance, the use of SWIFT standards in banking, one
can easily see the advantages to standardization (Infosys, 2013).
The ability to access money in a checking account from India on Tuesday and Italy on Thursday is an
excellent example. Information technology has been a huge catalyst that has both enabled and
enforced this standardization within banking. We are all familiar with areas, beyond healthcare, that
are lacking in standardization. The simple act of purchasing shoes or clothing made outside of the
United States can be a frustrating experience. If one does not know one’s European size or have access
to a chart to convert from United States sizes to other sizes, the purchase may fall through. This simple
example highlights the need for standardization. As we look to healthcare and its current struggle to
standardize, it is easier to understand why this issue is so important. In our clothing example, it is
highly unlikely that someone would be injured or die as a result of a lack of standardization. In
healthcare, the risk is imminent. There are many organizations that seek to help with this
standardization in healthcare. One of the most important standardization organizations in electronic
quality measurement is Health Level 7 (HL7), which will be described in the next section.
Overview of Health Level 7 (HL7)
The Health Level 7 (commonly referred to as HL7) organization came into existence in 1987 to begin
solving standards issues in healthcare.
HL7 is a non-profit organization that includes both corporate and individual members from all domains
of healthcare. It is an American National Standards Institute (ANSI) accredited standards developing
organization that focuses on creating and maintaining a framework of related standards for the
exchange, integration, sharing, and retrieval of electronic health information (Health Level 7, 2014).
HL7 operates through volunteer work groups of professionals and one such group, the Clinical Quality
Information Workgroup, is working closely with ONC and CMS to create and maintain information
technology standards in support of improving healthcare quality, including clinical care, and to foster
collaboration between quality measurement, outcomes, and improvement stakeholders. To this end,
several projects are currently underway to help streamline the quality measurement development
process.
There are three key projects:

1. Harmonization of a common data model for quality (sourced from many of the current models in
use, such as the Virtual Medical Record [vMR] and the Quality Data Model [QDM]) that can be
used for quality measures and clinical decision support.
2. Harmonization of an expression language that will allow measure developers to express certain
mathematical and time-related concepts that are important in measurement in a way that is
easily understood and carried out by implementers.
3. Development of a common set of meta-data (with definitions) that can be used in measure
development.

HL7 Source Data Standards
HL7 has several primary standards that have led to the development of
data standards around quality measurement. The Reference Information Model (RIM) is the
foundation of much of HL7’s development work. The RIM is a large, pictorial representation of the
HL7 clinical data domains and identifies the life cycle of information. HL7 standards pertain to four
key areas of health information exchange: conceptual standards like the RIM, document standards
(e.g., HL7 Consolidated Document Architecture), application standards, and messaging standards (e.g.,
HL7 v2.x and v3.0). This section will focus on document and messaging standards as they pertain to
quality measurement.
From the RIM, many of HL7’s primary standards have been developed. Important among those are the
Version 2 (v2) and Version 3 (v3) Product Suites. The v2-messaging standard is considered the most
widely adopted and used standard in the HL7 product line. This standard is the foundation of
electronic data exchange and has been implemented in 95% of U.S. healthcare organizations to
date (Health Level 7, 2014). With the release of v3, HL7 attempted to move toward a more
standardized programming language with the use of extensible markup language (XML).
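To make the contrast concrete, the snippet below pulls apart a pipe-delimited v2-style observation segment in Python. This is a hedged sketch only: real v2 messages define their delimiters in the MSH segment and carry many more fields, and the sample observation values here are illustrative.

```python
# Hedged sketch only: a simplified HL7 v2 observation (OBX) segment.
# Real v2 messages define their delimiters in the MSH segment and carry
# many more fields; the sample values here are illustrative.
obx = "OBX|1|NM|8480-6^Systolic blood pressure^LN||120|mm[Hg]|||||F"

fields = obx.split("|")                       # v2 fields are pipe-delimited
code, display, system = fields[3].split("^")  # components are caret-delimited

print(code, display, system)  # 8480-6 Systolic blood pressure LN
print(fields[5], fields[6])   # 120 mm[Hg]
```

By contrast, a v3 message would express the same observation as nested XML elements, which is easier for a human to read but, as noted above, not backward compatible with parsers built for the delimited v2 format.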
This use of XML makes v3 more human readable while maintaining machine readability of the same
data set. Unfortunately, there is no backward compatibility between v2 and v3, which has created
challenges as v2 has been widely adopted in the United States and v3 has been adopted
internationally. The Clinical Document Architecture (CDA®) is a document markup standard that
specifies the structure and semantics of clinical documents for the purpose of exchange between
healthcare providers and patients.
CDA states that a clinical document must contain certain key characteristics:
1. Persistence
2. Stewardship
3. Potential for authentication
4. Context
5. Wholeness
6. Human readability

HL7 Data Standards in Quality Measurement


From these source standards within HL7, specifically CDA, two important document standards have
been developed to enable quality measurement. The Health Quality Measure Format or HQMF is the
document standard for coding and representing an eMeasure. The HQMF is used by measure
developers to define a clinical quality measure that is consumable by both a computer and a human
(Health Level 7, n.d.). The Quality Reporting Document Architecture or QRDA is the document
standard for reporting the data that are determined to have met the criteria of the eMeasure. The
QRDA standard has three levels of reporting capability: Category I, II, and III. A QRDA Category I
report is a patient-level report that contains the clinical information specified in the measure for one
patient. A QRDA Category II report contains multi-patient information across a defined population
and may or may not identify individual patient data within the summary. Lastly, a QRDA Category III
report is an aggregate quality report with a result for a given population and period of time. One can think of the QRDA as
the result of a clinical quality measure from the EHR. It is then generally submitted to CMS or any
other requesting or reporting agency. The QRDA standard is referenced in the certification criteria for
EHRs set forth by ONC. As part of the meaningful use legislation, providers and hospitals must use
EHR technology that is certified by the federal government to meet certain standards and criteria. The
QRDA standard is one of many HL7 standards that are referenced in these criteria.
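As a rough illustration of the relationship between the QRDA categories, the sketch below rolls patient-level, Category I-style results up into the aggregate counts a Category III report would carry. The field names are invented for illustration and are not QRDA elements.

```python
# Hypothetical patient-level results, in the spirit of QRDA Category I:
# one record per patient with that patient's measure outcome.
# Field names are invented for illustration; they are not QRDA elements.
patients = [
    {"id": "p1", "in_denominator": True,  "in_numerator": True},
    {"id": "p2", "in_denominator": True,  "in_numerator": False},
    {"id": "p3", "in_denominator": False, "in_numerator": False},
]

# A Category III-style report carries only aggregate results for the
# population and reporting period, with no individually identifiable data.
denominator = sum(p["in_denominator"] for p in patients)
numerator = sum(p["in_numerator"] for p in patients)

print(f"numerator={numerator}, denominator={denominator}")  # numerator=1, denominator=2
```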
Value Sets and Value Set Authority Center
Value sets are defined lists of specific values (terms and their codes) derived from single or multiple
standard vocabularies used to define clinical concepts (e.g., patients with diabetes, clinical visit,
reportable diseases) used in clinical quality measures and to support effective health information
exchange (National Library of Medicine, 2014b). Some of the standard vocabularies (or terminologies)
used to define these value sets will be discussed in the next section.
Value sets are developed by measure developers and are ideally shared across developers. Each value
set is represented by a unique object identifier (OID) that is registered on the HL7 Web site. Until
October 2013, value sets were created and maintained in the Measure Authoring Tool (MAT). Since
late October 2013, the value sets for the 2014 CQMs have resided in the new Value Set Authority
Center (VSAC), which can be accessed at www.vsac.nlm.nih.gov. The VSAC serves as the official
repository of the value sets for the Meaningful Use 2014 Clinical Quality
Measures (CQMs) and is maintained by the National Library of Medicine (NLM), in collaboration with
the ONC and CMS. Search, retrieval, and download capabilities are provided through a Web interface
and APIs. The VSAC also provides authoring tools for users to create value sets related to quality
measures.
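Conceptually, a value set pairs an OID with the list of codes that define a clinical concept. The minimal Python sketch below illustrates the idea; the OID and RxNorm-style codes are invented, and in practice value sets are retrieved from the VSAC rather than hard-coded.

```python
# Minimal sketch of a value set: an OID naming a set of codes drawn from a
# standard vocabulary. The OID and codes below are invented examples.
value_sets = {
    "2.16.840.1.113883.3.EXAMPLE": {
        "name": "Warfarin medications",
        "system": "RxNorm",
        "codes": {"855332", "855334"},
    }
}

def code_in_value_set(code: str, oid: str) -> bool:
    """Return True if the code is a member of the value set named by the OID."""
    vs = value_sets.get(oid)
    return vs is not None and code in vs["codes"]

print(code_in_value_set("855332", "2.16.840.1.113883.3.EXAMPLE"))  # True
print(code_in_value_set("999999", "2.16.840.1.113883.3.EXAMPLE"))  # False
```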
Terminology Standards
There are many terminologies currently in wide use within healthcare to represent clinical aspects or
data types. Given the increasing focus on collecting and sharing data across systems for quality
improvement and accountability purposes, all of these terminologies require a common format or
standard so that electronic communication and transmission of data are as clear as possible. Several
terminologies play key roles in the representation of clinical care within EHRs, and many mappings
exist across the various terminologies to enable data sharing and valid representation of these data
across settings and providers.
The Systematized Nomenclature of Medicine (SNOMED) is a comprehensive, multilingual healthcare
terminology (International Health Terminology Standards Development Organisation, 2014).
SNOMED is important in EHR development and use because it allows clinical data to be recorded in
ways that enable meaning-based retrieval, it is mapped to other code sets that may be used in specific
healthcare settings (e.g., LOINC), and certification and other standard-setting organizations are
encouraging the use of this terminology in performance measures.
Other code systems that are frequently used in performance measures include:

 LOINC® or the Logical Observation Identifiers Names and Codes: a universal code system
for identifying laboratory and clinical observations (Regenstrief Institute, 2014)
 RxNorm: pertains specifically to medications and acts as a normalized naming system for
generic and branded drugs (National Library of Medicine, 2014a)
 International Classification of Diseases (ICD): the classification used to code and classify
diseases and other health problems for health records and vital records; its structure
enables storage and retrieval of diagnostic information for quality, clinical, and other
purposes (World Health Organization, n.d.; Centers for Disease Control, 2013)
 Current Procedural Terminology (CPT): a code system that reports procedures; office,
emergency department, and other visits; and other services provided by healthcare
professionals (American Medical Association, 2014)
Quality Data Model (QDM)
Once a measure developer has determined the clinical concepts in a quality measure and has built the
associated value sets, use of the Quality Data Model or QDM will enable expression of performance
measurement data for electronic measurement. The QDM can be thought of as a backbone for
representation of quality criteria used in clinical guidelines, quality measures, and clinical decision
support (CDS) and is used by stakeholders involved in electronic quality measurement and reporting,
such as measure developers, federal agencies, health IT vendors, standards organizations, informatics
experts, providers, and researchers. The QDM also makes up the logical portion of the Measure
Authoring Tool or MAT. The QDM is currently being harmonized, through efforts in HL7, with other
logical models to enhance its capabilities and expressiveness.
eCQM Life Cycle (From Measurement to Improvement)
Medicare and Medicaid EHR Incentive Programs provide financial incentives for the “Meaningful Use”
of certified EHR technology to improve patient care (Cipriano, Bowles, Dailey, Dykes, Lamb, & Naylor,
2013). Achieving meaningful use of EHRs for quality measurement, reporting, and improvement
requires the use of standards to integrate quality measures within EHRs for real-time quality reporting.
This involves a series of steps as depicted in the Quality Management Life Cycle. The steps in the
Quality Management Life Cycle will be described by applying the data standards presented in this
chapter to an eMeasure from the MU incentive program. The quality measure was developed by the
Joint Commission and assesses “the number of patients diagnosed with confirmed VTE who received
an overlap of parenteral (intravenous [IV] or subcutaneous) anticoagulation and warfarin therapy.”
This measure involves all members of the clinical team and has implications for nursing practice. The
life cycle of a quality measure from standardized representation of measure concepts and logic to
integration within the EHR and generation of quality reports will be described. This description will be
extended to show how the EHR can leverage data standards to support total quality management for
venous thromboembolism care as it pertains to nursing practice.
The eMeasure life cycle includes the following steps:
Measure Title: Venous Thromboembolism Patients with Anti-coagulant Overlap Therapy
Steward: The Joint Commission

1. First, the intent of the measure is clearly and specifically defined as follows:
   “For patients who received less than five days of overlap therapy, they should be discharged
   on both medications or have a reason for discontinuation of overlap therapy. Overlap
   therapy should be administered for at least five days with an international normalized ratio
   (INR) greater than or equal to 2 prior to discontinuation of the parenteral anti-coagulation
   therapy, discharged on both medications or have a reason for discontinuation of overlap
   therapy.” (http://cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/eCQM_Library.html)
2. Once the measure intent is defined, the different components of the measure are then
   specified—“who” is being measured and “what” is being measured. The components of this
   measure include the:
   a. Initial patient population of interest: Patients with a diagnosis code for venous
      thromboembolism (VTE), a patient age greater than or equal to 18 years, and a length
      of stay less than or equal to 120 days
   b. Denominator: Patients with confirmed VTE who received warfarin
   c. Numerator: Patients who received overlap therapy (warfarin and parenteral anti-coagulation):
      ◦ Five or more days, with an INR greater than or equal to 2 prior to discontinuation of parenteral therapy, or
      ◦ Five or more days, with an INR less than 2 and discharged on overlap therapy, or
      ◦ Less than five days and discharged on overlap therapy, or
      ◦ With documentation of a reason for discontinuation of overlap therapy, or
      ◦ With documentation of a reason for no overlap therapy
   d. Denominator Exclusions:
      ◦ Patients less than 18 years of age
      ◦ Patients who have a length of stay greater than 120 days
      ◦ Patients with Comfort Measures Only documented
      ◦ Patients enrolled in clinical trials
      ◦ Patients discharged to a healthcare facility for hospice care
      ◦ Patients discharged to home for hospice care
      ◦ Patients who expired
      ◦ Patients who left against medical advice
      ◦ Patients discharged to another hospital
      ◦ Patients without warfarin therapy during hospitalization
      ◦ Patients without VTE confirmed by diagnostic testing

The level of granularity provides useful information for EHR designers, so they can start mapping
quality measure concepts, such as “documentation of a reason for discontinuation of overlap
therapy,” to EHR functionality such as “addition of reason for discontinuation of overlap therapy.”
Furthermore, some of the measure components relate to nursing documentation, such as
documentation of patients who are receiving comfort measures only.
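To make the “who” and “what” concrete, the population logic described above can be sketched as a set of filters. This is a hedged simplification of the measure’s intent, not the certified measure logic: the dictionary keys are invented stand-ins for coded EHR data, and real eMeasures express this logic in HQMF.

```python
# Simplified, illustrative sketch of the VTE overlap-therapy measure logic.
# Dictionary keys are invented stand-ins for coded EHR data.
def in_initial_population(pt):
    # VTE diagnosis, age >= 18, length of stay <= 120 days
    return pt["has_vte_dx"] and pt["age"] >= 18 and pt["los_days"] <= 120

def in_denominator(pt):
    # Confirmed VTE patients who received warfarin
    return in_initial_population(pt) and pt["confirmed_vte"] and pt["received_warfarin"]

def in_numerator(pt):
    if not in_denominator(pt):
        return False
    if pt["overlap_days"] >= 5 and (pt["inr"] >= 2 or pt["discharged_on_overlap"]):
        return True
    if pt["overlap_days"] < 5 and pt["discharged_on_overlap"]:
        return True
    # Documented reason for discontinuing, or for not giving, overlap therapy
    return pt["reason_documented"]

pt = {"has_vte_dx": True, "age": 67, "los_days": 9, "confirmed_vte": True,
      "received_warfarin": True, "overlap_days": 6, "inr": 2.4,
      "discharged_on_overlap": False, "reason_documented": False}
print(in_numerator(pt))  # True
```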

3. After the measure is specified, indicating “who” is being measured and “what” is being
measured, the measure needs to be further defined using standardized codes (QDM,
SNOMED, LOINC, RxNorm, and CPT) for concepts contained within the measure
(confirmed diagnosis of VTE, comfort measures, etc.), as well as standardized expression
language (HTML and XML) to represent the logic contained within the measure (patients
who received less than five days of overlap therapy should be discharged on both
medications or have a reason for discontinuation of overlap therapy). This is accomplished
with HQMF, which defines a representation of the quality measure that is consumable by
both a human and computer.

This expression can be described in three ways:

1. A human readable format (HTML) allows users to understand how the elements are defined and
the underlying logic used to calculate the measure. This is useful when EHR designers, quality
experts, clinicians, and the leadership team are making decisions about how the measures will be
integrated within electronic systems. The human readable format provides a common language
for all professionals involved in healthcare
2. A computer readable format (XML) enables the creation of automated queries against electronic
systems for quality reporting (QRDA). Ideally electronic systems read the XML and will generate
the QRDA for reporting
3. A list of the specific codes (SNOMED, QDM, RxNorm, ICD-10, CPT) used by developers of
electronic systems to ensure that accurate, valid, and reliable data capture can be supported
(diagnoses, medications). The list of codes is referred to as “value sets.”

These value sets are contained within dictionaries (or concept repositories) of electronic health
systems. The applications contained within electronic systems use the codes to process logic
contained within the quality measures to generate quality reports (QRDA) necessary for MU. With this
infrastructure, a vast array of healthcare data can be reported and analyzed, thereby turning data into
knowledge that can be put to immediate use. This quality measurement infrastructure can help make
advances toward becoming a learning health system. In summary, the eMeasure supports quality
reporting for MU but can also support real-time decision-making, performance improvement, and
overall quality management.
Two examples follow:

1. The eMeasure content can be used to guide clinical decision support through automatic
alerts and reminders. For instance, this measure states that patients who receive less than
five days of overlap therapy should be discharged on both medications or have a reason for
discontinuation of overlap therapy. Electronic systems can use this content to alert the
clinician at time of discharge to act in accordance with the measure.
2. EHR designers can integrate the measure rationale within EHRS to facilitate education and
clinical decision support (CDS), please see box below. Clinicians can access the rationale
from within the EHR.
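As a sketch of the first example, a discharge-time rule derived from the measure content might look like the following. This is a hedged illustration only; the function and parameter names are invented, and production CDS would run against coded EHR data within the clinical system.

```python
# Illustrative discharge-time CDS rule derived from the measure content:
# flag patients with < 5 days of overlap therapy who are neither being
# discharged on both medications nor have a documented reason.
def discharge_alert(overlap_days, discharged_on_both, reason_documented):
    if overlap_days < 5 and not discharged_on_both and not reason_documented:
        return "ALERT: discharge on both medications or document a reason"
    return None  # no alert needed

print(discharge_alert(3, False, False))  # triggers the alert
print(discharge_alert(6, False, False))  # None: 5+ days of overlap given
```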

This measure has implications for nursing practice. Some of the data necessary for calculation of the
measure are derived from documentation performed by nurses. Nurses, in providing care to patients
with VTE, document additional data related to the prevention and management of VTE. The
infrastructure related to MU code sets, such as SNOMED (which includes recognized nursing
terminologies), can be used to support coding of nursing data. The clinical quality measures represent
conditions reflective of nursing practice. For each of the conditions contained within the clinical quality
measures, nursing care is provided and frequently represented within the care plan. The care plan can
be structured within the EHR using standardized terminology such as the Clinical Care Classification
(CCC). This shows how nursing practice and documentation support reporting on conditions represented
within the eCQM for MU. The value of using the data infrastructure as described above includes:

1. Enhanced decision-making (use the above measure to alert a nurse that patients who receive less
than five days of overlap therapy should be discharged on both medications or have a reason for
discontinuation of overlap therapy). Once the medications are ordered, the electronic system can
populate the patient care plan with interventions.
2. Creation of nursing dashboards to answer such questions as: “How many patients received
discharge education?” The electronic dashboards can be used for a total quality management
program to enhance quality and safety.
3. Streamlined measurement and comprehensive exchange between systems so nursing impact on
outcomes can be compared across settings, the nation, and internationally.
4. Generation of new knowledge to advance nursing practice.
5. Effective use of electronic health records to support both care delivery and performance
improvement.
