Chapter 5 - Understanding Evidence-Based Practice


Medical Univ of SC Library

Access Provided by:

Foundations of Clinical Research: Applications to Evidence-Based Practice, 4e

Chapter 5: Understanding Evidence-Based Practice

Introduction
Evidence­based practice (EBP) is about clinical decision­making—how we find and use information and how we integrate knowledge, experience,
and judgment to address clinical problems. EBP requires a mindset that values evidence as an important component of quality care and a skill set in
searching the literature, critical appraisal, synthesis, and reasoning to determine the applicability of evidence to current issues. The ultimate goal is to
create a culture of inquiry and rigorous evaluation that provides a basis for balancing quality with the uncertainty that is found in practice—all in the
effort to improve patient care.

The purpose of this chapter is to clarify how evidence contributes to clinical decision­making, describe the process of EBP and the types of evidence
that are meaningful, and discuss the barriers that often limit successful implementation. This discussion will continue throughout the text in relation
to specific elements of research design and analysis.

Why Is Evidence­Based Practice Important?

In a landmark 2001 report, Crossing the Quality Chasm,1 the Institute of Medicine (IOM) documented a significant gap between what we know and what
we do, between the care people should receive and the care they actually receive, between published evidence and healthcare practice. The IOM
estimates that one­third of healthcare spending is for therapies that do not improve health.2 Another review suggests that 50% of healthcare practices
are of unknown effectiveness and 15% are potentially harmful or unlikely to be beneficial.3,4 Consider, for example, the change in recommendations for infant sleep position: for many years the stomach position was preferred, to reduce the possibility of spitting up and choking. In 1992, the recommendation was changed to sleeping on the back, which improved breathing and drastically reduced the incidence of sudden infant death syndrome (SIDS).5

Clinicians tackle questions every day in practice. They are constantly faced with interpreting the results of diagnostic tests, determining the efficacy of therapeutic and preventive regimens, weighing the potential harm associated with different treatments, judging the course and prognosis of specific disorders, considering the costs of tests or interventions, and deciding whether guidelines are sound. Despite having these questions, however, and even with an emphasis on EBP across healthcare, practitioners in most professional disciplines do not seek answers, or they express a lack of confidence in evidence, placing a higher value on experience, collegial advice, or anecdotal evidence than on research to support clinical decisions.6,7

From an evidence­based standpoint, research has continued to document escalating healthcare costs, disparities in access to healthcare, and
unwarranted variations in accepted practice—with geography, ethnicity, socioeconomic status, and clinical setting often cited as major determinants.8–11 Addressing these issues requires understanding how evidence informs our choices to support quality care.

Illustration courtesy of Sidney Harris. Used with permission from ScienceCartoonsPlus.com.


Where Are the Gaps?

As discussed in Chapter 2, the focus on translational research has highlighted the need for research to provide practical solutions to clinical problems
in a timely way, further fueling the urgency for a more evidence­based framework for healthcare.

Generalizing results from randomized controlled trials (RCTs) is often difficult because patients and circumstances in the real world do not match
experimental conditions. Clinicians often contend with reimbursement policies that can interfere with decisions regarding effective treatment choices
(see Focus on Evidence 5­1). They may also be faced with changes in recommendations, as research uncovers inconsistencies or errors in previous
studies and recommendations are reversed.12

Focus on Evidence 5–1

The Direct Line from Evidence to Practice

Treatment to improve healing of chronic wounds remains a serious healthcare challenge.14 Standard care includes optimization of nutritional
status, débridement, dressings to maintain granulation tissue, and treatment to resolve infections.15 Electrical stimulation (ES) has been used as an
alternative therapy since the 1960s, especially for chronic wounds, with numerous clinical reports showing accelerated healing.16,17

Despite several decades of successful use of the modality and Medicare coverage for such treatment since 1980, in May 1997, the Health Care
Financing Administration (HCFA, now the Centers for Medicare and Medicaid Services, CMS) announced that it would no longer reimburse for the
use of ES for wound healing. The agency claimed that a review of literature determined there was insufficient evidence to support such coverage
and that ES did not appear to be superior to conventional therapies.18

In July 1997, the American Physical Therapy Association (APTA) filed a lawsuit along with six Medicare beneficiaries seeking a temporary injunction
against HCFA from enforcing its decision.15 The APTA included several supporting documents, including a CPG for pressure ulcers issued by the
Agency for Health Care Policy and Research (now the Agency for Healthcare Research and Quality, AHRQ). Although several clinical trials were cited,
supportive evidence was not considered strong. Therapists who opposed HCFA’s decision cited their experience and informal data, insisting that
the treatment made an important difference for many patients.18

This prompted the gathering of more formal evidence and further analysis to show that ES was indeed effective for reducing healing time for
chronic wounds.15 Based on this information, in 2002 CMS reversed the decision and coverage was approved, but only for use with chronic ulcers, defined as ulcers that have not healed within 30 days of occurrence.
One of the continuing issues with research in this area is the lack of consistency across trials in parameters of ES application, including dosage,
waveforms, duration of treatment, and the delivery system. Interpretations are further complicated by varied protocols, types of wounds studied,
generally small samples, and different types of comparisons within the trials.14,19 This type of variability in design still threatens the weight of
evidence, in this and many other areas of clinical inquiry. This is not an isolated experience.

This story emphasizes why the use of evidence in practice is not optional. “Insisting” that a treatment works is not sufficient. Our ability to provide
appropriate care, change policy, and be reimbursed for services rests on how we have justified interventions and diagnostic procedures through
valid research—not just for our own decision­making, but for those who influence how our care is provided and paid for and, of course, ultimately
for the benefit of our patients.

Three primary issues drive this discussion, all related to the quality of care: overuse of procedures without justification, underuse of established
procedures, and misuse of available procedures.13 Table 5­1 provides examples of each of these quality concerns. These issues are exacerbated by a
lack of attention to research findings as well as difficulties in applying results to practice.

Table 5-1
Quality Issues That Can Be Addressed Through EBP

Overuse
Definition: Occurs when treatment or tests are given without medical justification. At best, this can result in wasted resources; at worst, it can result in harm.
Consequences: Overuse contributes to high costs, with some estimates attributing up to 30% of U.S. health spending to overuse.20
Example: Even with evidence that 80% of childhood ear infections will resolve within 3 days without medication, antibiotics are still prescribed most of the time, despite potential risks of side effects and antibiotic resistance.21

Underuse
Definition: Occurs when clinicians fail to provide necessary care or tests, employ preventive strategies, or follow practice guidelines. Underuse may also be a factor of nonadherence on the part of patients, lack of referral to proper services, or implementation barriers.
Consequences: It has been estimated that children and adults in the United States receive less than 50% of recommended healthcare and preventive services.4,22,23
Example: Almost 200,000 people get pneumococcal pneumonia each year, from which 3% die, despite the availability of a vaccine.24

Misuse
Definition: Inappropriate use of interventions or tests, often resulting in medical errors that reduce the benefit of treatment or cause harm.
Consequences: Updated estimates suggest that preventable harm in hospitals is responsible for over 200,000 deaths per year25 and one-third of healthcare spending.26
Example: Despite common use for deep heating therapy and potential risks, several systematic reviews have shown that ultrasound is not effective for that purpose.27,28

How Do We Know Things?


How do we decide which test to perform, which intervention to apply, or which patients have the best chance of responding positively to a given treatment? Since ancient times, healthcare providers have relied on three typical sources of knowledge (see Fig. 5-1).

Tradition

“That’s the way it has always been done.”

Conventional wisdom in each era supports theories of the day as “given.” We inherit knowledge and accept precedent without further validation.
Something is thought to be true simply because people have always known it to be true.

Authority

“That’s what the experts say.”

When an authority states that something is true, we often accept it. Influential leaders can set longtime standards. We may also find ourselves
committed to one approach over others based on what we were taught, relying on that information even years later, without finding out if
knowledge has expanded or changed.

Figure 5–1

Ways of knowing.


HISTORICAL NOTE

Galen of Pergamon (130–200 A.D.), the ancient Greek physician, was the authority on medical practices for centuries. His word carried such weight that centuries later, when scientists performed autopsies and could not corroborate his teachings with their physical findings, his followers commented that, if the new findings did not agree with his teachings, “the discrepancy should be attributed to the fact that nature had changed.”29

Experience

“It’s worked for me before.”

Sometimes a product of trial and error, experience is a powerful teacher. Occasionally, it will be the experience of a colleague that is shared. The
more experienced the clinician, the stronger the belief in that experience!

FUN FACT

Although somewhat tongue­in­cheek, unyielding trust in authority has also been called “eminence­based medicine,” the persistent reliance on the
experience of those considered the experts.30 Such faith has been defined as “making the same mistakes with increasing confidence over an
impressive number of years.”31

These “ways of knowing” can be efficient and they may actually achieve positive outcomes. But over time they are more likely to result in a lack of
progress, limited understanding of current knowledge, or resistance to change—even in the face of evidence. With a greater understanding of the
scientific method, logic, and evidence, however, “knowing” has taken on a more stringent emphasis (see Chapter 4). Seeking evidence can substantiate
or refute experience, authority, and tradition, and can thereby strengthen foundations to consider new, more valid methods.

What Is Evidence-Based Practice?

Evidence-based practice is an approach to decision-making that incorporates scientific information with other sources of knowledge. The widely accepted definition of EBP incorporates several important elements:

Evidence­based practice is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual
patients.32

But it is also …

the integration of best research evidence with clinical expertise and the patient’s unique values and circumstances.33

This evidence may be related to accuracy of diagnostic tests, prognostic factors, or the effectiveness and safety of therapies and preventive strategies.

HISTORICAL NOTE

The idea of using evidence as a basis for clinical decision­making is not new. Healthcare providers have understood the importance of applying
knowledge to practice since the time of Hippocrates, but their sources of knowledge were heavily reliant on ancient theory and observation. Gordon
Guyatt and colleagues from McMaster University first coined the term “evidence­based medicine” (EBM) in 1991, promoting an approach to clinical
decision­making that did not rely solely on authority, tradition, or experience.34 They proposed that the medical community needed to stress the
importance of using published research as the foundation for practice. Although their work was geared toward physicians, the approach has since been widely adopted across all healthcare fields under the terminology of evidence-based practice.

Misconceptions

When the EBP concept was first introduced, there were many misconceptions about what it meant.32 Proponents argued that this was an essential and
responsible way to make clinical decisions, tying practice to scientific standards that would allow metrics of efficacy, greater consistency, and better-informed patients and clinicians. Detractors argued that it would lead to “cookbook” medicine, requiring practitioners to follow certain procedures
for all patients, taking away the “art” of providing care, and removing the ability to make judgments about individual cases. They also argued, with
some validity, that there was not sufficient evidence to make the process viable. But these objections reflect a misunderstanding of the intent of EBP as
part of a larger process that includes experience and judgment, beginning and ending with the patient. Let’s look closely at the definition of EBP to
clarify this process (see Fig. 5­2).

Figure 5–2

The components of EBP as a framework for clinical decision­making.


Components of EBP

… the integration of best research evidence

The bottom line of EBP is the use of published evidence to support clinical decisions whenever possible. This requires three primary things—the ability
to search the literature to find the evidence, the skill to critically appraise the evidence to determine its quality and applicability, and the availability of
such evidence to be found! Stipulating “current” best evidence implies that we understand that knowledge and acceptable treatment methods will
change over time. The best we can do is keep up with today’s knowledge—no small feat given the volume of information being added every day to
professional literature.

There are estimates that more than 2,000 citations are being added to databases every day! In 2018, PubMed included over 28 million citations.

The concept of “best” evidence implies that it might not be a perfect fit to answer our questions. Sometimes we have to refer to basic science research
to establish a theoretical premise, when no other direct evidence is available. It is also important to know that not everything that is published is of
sufficient quality or relevance to warrant application. This means we have to be able to apply skills of critical appraisal to make those judgments.

… with clinical expertise

EBP makes no attempt to stifle the essential elements of judgment, experience, skill, and expertise in the identification of patient problems and the
individual risks and benefits of particular therapies. Clinical skill and exposure to a variety of patients take time to develop and play an important role
in building expertise and the application of published evidence to practice. As much as we know the scientific method is not perfect, clinical decisions
are constantly being made under conditions of uncertainty and variability—that is the “art” of clinical practice. But we cannot dissociate the art from
the science that supports it. Sackett32 implores us to recognize that:

… without clinical expertise, practice risks being tyrannized by evidence, for even excellent external advice may be inapplicable to or inappropriate
for an individual patient. Without current best evidence, practice risks becoming rapidly out of date, to the detriment of patients.

… the patient’s unique values

A key element of the EBP definition is the inclusion of the patient in decision­making. It is not just about finding evidence but finding evidence that
matters to patients. Patients have personal values, cultural traits, belief systems, family norms and expectations, and preferences that influence
choices; these must all be weighed against the evidence. Patients may opt for one treatment over another because of comfort, cost, convenience, or
other beliefs. Patients understand the importance of evidence, but also see value in personalized choices and clinical judgment.35 This is especially
important when evidence is not conclusive and multiple options exist. This implies, however, that the clinician discusses the evidence with the patient
in an understandable way so that decisions are considered collaboratively. Evidence supports better outcomes with patient participation in decision­
making.36–38

… and circumstances

This last element of the EBP definition is a critical one. The final decisions about care must also take into account the organizational context within
which care is delivered, including available resources in the community, costs, clinical culture and constraints, accessibility of the environment, and the
nature of the healthcare system. All of these can have a direct influence on what is possible or preferable. For instance, evidence may clearly support a
particular treatment approach, but your clinic may not be able to afford the necessary equipment, you may not have the skills to perform techniques
adequately, an institutional policy may preclude a certain approach, or the patient’s insurance coverage may be inadequate. Choices must always be
made within pragmatic reality.

It is essential to appreciate all the components of EBP and the fact that evidence does not make a decision. The clinician and patient will
do that together—with all the relevant and available evidence to inform them for optimal shared decision­making.

The Process of Evidence­Based Practice


The process of EBP is generally described in five phases—the 5 A’s—all revolving around the patient’s condition and care needs (see Fig. 5­3). This can
best be illustrated with an example.

Figure 5–3

The five steps in the EBP process: 1) Ask a clinical question, 2) Acquire relevant literature, 3) Appraise the literature, 4) Apply findings to clinical decision-making, 5) Assess success of the process.


➤ CASE IN POINT

Mrs. H. is 67 years old with a 10­year history of type 2 diabetes, which she considers essentially asymptomatic. She has mild peripheral neuropathies
in both feet and poor balance, which has resulted in two noninjurious falls. She is relatively sedentary and overweight, and although she knows she
should be exercising more, she admits she “hates to exercise.”

Her HbA1c has remained around 7.5% for the past 5 years, slightly above the recommended level of less than 7.0%. She is on several oral medications
and just began using a noninsulin injectable once daily. She claims to be “relatively compliant” with her medications and complains of periodic
gastrointestinal problems, which she attributes to side effects.

Three months ago, Mrs. H started complaining of pain and limited range of motion (ROM) in her right shoulder. She had an MRI and was diagnosed
with adhesive capsulitis (frozen shoulder), which has continued to get worse. She considers this her primary health concern right now and is asking
about treatment options.

STEP 1: Ask a Clinical Question

The first step in EBP is asking a clear clinical question that is relevant to a patient’s problem and structured to provide direction for searching for the
answer. This can be a challenging task because it represents an uncertainty about a patient’s care.

Being an evidence­based practitioner does not mean that every patient encounter requires asking a question. Often, clinicians will find themselves in
familiar territory, being knowledgeable about the approaches that are appropriate for their patient’s condition. But there will also be many instances
in which questions arise out of uncertainty about new therapies or technologies or a patient’s curiosity about a new treatment. Sometimes a new
treatment is not as effective as expected and you need to consider alternatives. Perhaps the patient presents with a unique combination of problems,
or has a condition that is rarely seen, or a standard treatment has not been effective. These are the uncertainties that occur in everyday practice and
the reasons for needing to understand how to apply evidence.

There are two general types of questions that are asked as part of the search for evidence—questions that refer to background information and those
that focus on patient management decisions.

Background Questions

A background question is related to etiology or general knowledge about a patient’s condition, referring to the cause of a disease or condition, its
natural history, its signs and symptoms, or the anatomical or physiological mechanisms that relate to pathophysiology. This type of question may also
focus on the physiology of a treatment or test in order to understand how they work. These questions will generally have a “who, what, where, when,
how, or why” root.

Background questions are commonly asked by patients in an effort to understand their own conditions. They may also be more frequent for clinicians
who have less experience with a given condition. For Mrs. H, for example, we may ask:

Is frozen shoulder a common condition in patients with type 2 diabetes?

Basic research studies may help with this type of question, but such questions can often be answered easily using textbooks or Web-based resources—with the proviso that the clinician is wary of the degree to which the content is updated and valid. Over time, clinicians tend to ask fewer background questions as they become more experienced with various types of patient conditions.

➤ Several studies have documented an increased risk of shoulder adhesive capsulitis in individuals with type 2 diabetes.39 A meta­analysis has
demonstrated that patients with diabetes are five times more likely to have the condition than individuals who do not have diabetes, with an
estimated prevalence of diabetes in those with adhesive capsulitis of 30%.40

Another source of questions is the need for continuous professional development, staying current in your area of practice. Clinicians who have
specialized areas of practice, or who see certain types of patients more often, will find keeping up with the literature helpful in making everyday
decisions.

Foreground Questions

The more common type of clinical question for EBP is one that focuses on specific knowledge to inform decisions about patient management, called a
foreground question. Foreground questions are stated using four components:

Population or Problem

Intervention (exposure or test)

Comparison (if relevant)

Outcome

These components are described using the acronym PICO (see Fig. 5­4). The phrasing of a complete question with these four elements is most
important for finding an answer—to identify search terms for the acquisition of literature and to interpret findings to determine whether they help with
the problem under consideration.

Figure 5–4

Four components of a clinical question using the PICO format.


Other formats have been proposed to structure EBP questions, including PICOT (adding Time frame),41 PICOTS (adding Time frame and
Setting),42 PICOS (adding Study design),43 and PESICO (adding Environment and Stakeholders).44 The term PIO has also been used for
observational or descriptive studies that do not include comparisons.

A strategy for qualitative research has been proposed using SPIDER45 (Sample, Phenomenon of Interest, Design, Evaluation, Research type).

The PICO structure was introduced in Chapter 3 as a basis for framing a research question—and as promised, here it is again!

Sources of Clinical Questions

Foreground questions can focus on four main areas of evidence:

Diagnosis and measurement

Prognosis

Intervention

Patient experiences

Questions may include outcomes related to changes in impairments, function, or participation, as well as cost and resource use. The questions may
also incorporate qualitative outcomes, such as understanding the impact of health conditions on a patient’s experience of care.

Applying PICO

Many different questions can arise for a single patient, depending on the clinician’s and patient’s concerns and preferences. Let’s use one as an
example.

➤ Mrs. H is very concerned about her shoulder, which is painful and severely limits her function. She has been told that she needs therapy but has
also read that corticosteroid injections can be helpful and asks about that option.


How can we address this concern using PICO?

P : Type 2 diabetes, frozen shoulder (adhesive capsulitis), age 65, female

I : Physical therapy

C : Corticosteroid injection

O : Reduced shoulder disability, decreased pain, and increased ROM

We can then formulate a question to guide a search for evidence:

In a patient with type 2 diabetes who has adhesive capsulitis, is a corticosteroid injection more effective than physical therapy for reducing
shoulder disability and pain and for increasing range of motion?

Table 5­2 shows how PICO components are used to phrase other questions related to Mrs. H.

Table 5-2
Sources of Foreground Clinical Questions

Diagnosis and Measurement
Clinical issue: Diagnostic tests: Which diagnostic tests should be performed, considering their accuracy and expense? Are there clinical prediction rules that can be applied? History and physical examination: Which tools, assessments, or measurements should be used, and how should they be interpreted? Are measures reliable and valid?
Sample questions (PICO): Are measures of peripheral sensation valid for assessing peripheral neuropathies in patients with type 2 diabetes?46,47 In patients with type 2 diabetes who have peripheral neuropathy, which measure of balance will provide the best assessment to predict falls?48

Prognosis
Clinical issue: What is the likely course of the condition over time or the potential complications? Which factors should help to understand risks and benefits of care?
Sample questions (PICO): Are patients with type 2 diabetes and an HbA1C of 7.5 likely to develop peripheral neuropathies?49

Intervention
Clinical issue: Therapy: Which treatments are likely to be most effective? Are they worth the efforts of using them? What is the balance between potential benefit and harm? Prevention: Which steps can be taken to reduce the chance of a condition developing or worsening? What are potential mediating risk factors and screening opportunities?
Sample questions (PICO): In patients with type 2 diabetes who experience a frozen shoulder, will a corticosteroid injection be more effective than physical therapy to improve shoulder function?50 Can personal leisure activities be developed to encourage exercise in sedentary individuals with diabetes?51

Patient Experiences
Clinical issue: How do patient preferences, values, or goals influence choices and progress?
Sample questions (PICO): In patients with type 2 diabetes, which factors contribute to a lack of adherence to a medication regimen and exercise?52
STEP 2: Acquire Relevant Literature

The question leads to a search for the best evidence that can contribute to a decision about the patient’s care, guided by the PICO terms. Searching is
not always a straightforward process and finding relevant literature can be a challenge. How the question is phrased can make a huge difference in
being able to find an answer.

The search process must be thorough yet focused to be effective and efficient. It is important to use a variety of resources and to consider literature
across disciplines to improve the likelihood of finding relevant information (see Box 5­1). The help of a research librarian can be invaluable in creating
a comprehensive search strategy.

Box 5­1 We Are in This Together

The mandates for interprofessional practice have become a central focus of healthcare today, touting the need to move out of our professional silos
to support quality and safety in the provision of patient­centered care.1,54 And yet, when it comes to EBP, the silos may be perpetuated with
resources being developed for “evidence­based medicine,”33 “evidence­based occupational therapy,”55 “evidence­based physical therapy,”56
“evidence­based nursing,”57 and so on.

By thinking about “disciplinary evidence,” we risk becoming short­sighted about how we ask questions, look for evidence, or use information to
make clinical decisions. Although each discipline surely has its unique knowledge base (albeit with overlaps), we must be sure to think
interprofessionally to phrase questions and gather evidence broadly enough to reflect the best care for a particular patient, including understanding
how such evidence can inform our role as a member of the team responsible for the patient’s total management.

Different types of studies will be explored, depending on the nature of the clinical question (see Table 5­2).

Diagnosis and measurement: These questions will typically be answered using methodological studies for validity and reliability,
epidemiologic methods to assess diagnostic accuracy, or RCTs to answer questions about comparison of diagnostic tests.

Prognosis: These questions will usually involve observational cohort and case­control studies, exploring relationships to determine which
factors are relevant to predicting outcomes.

Intervention: The RCT is considered the standard but pragmatic trials may also be used. Single­subject and observational designs can also be
applied, especially with conditions for which RCTs are less feasible.

Patient experiences: Qualitative studies provide a rich source of information about patient preferences. Descriptive surveys may also be useful
to gather this information.

Finding relevant literature requires knowledge of search parameters and access to appropriate databases. Sometimes searches are not successful in
locating relevant resources to answer a particular question. This may be due to a lack of sophistication in searching or it may be because studies
addressing a particular question have not been done!

Comprehensive information about searching the literature to answer this question will be presented in Chapter 6.

Evidence Reviews

One of the most useful contributions to EBP has been the development of critical reviews that summarize and appraise current literature on a
particular topic. For clinicians who do not have the time or skill to read and assess large numbers of studies, these reviews provide a comprehensive
summary of available evidence. Reviews are not necessarily available for every clinical question but, whenever possible, such reviews should be sought
as a first source of information. Many excellent resources provide evidence­based reviews on particular topics (see Chapter 37).

Systematic reviews are studies in which the authors carry out an extensive and focused search for research on a clinical topic, followed by appraisal
and summaries of findings, usually to answer a specific question.

➤ Blanchard et al53 compiled a systematic review to look at the effectiveness of corticosteroid injections compared to physical therapy intervention
for adhesive capsulitis. They evaluated six randomized trials, with varied quality and sample sizes. They found that the corticosteroid injections had
a greater effect on mobility in the short term (6–7 weeks) compared to physical therapy, but with no difference in pain. At 1 year, there was no
difference in shoulder disability between the two groups, with a small benefit in terms of pain for those who got the injection.

Meta­analyses are systematic reviews that use quantitative methods to summarize the results from multiple studies. Summary reviews are also used
to establish clinical practice guidelines (CPGs) for specific conditions, translating critical analysis of the literature into a set of recommendations.
Another type of knowledge synthesis is called a scoping review, which also includes a review of literature but with a broader focus on a topic of
interest to provide direction for future research and policy considerations.58

STEP 3: Appraise the Literature

Once pertinent articles are found, they need to be critically appraised to determine whether they meet quality standards and whether the findings are
important and relevant to the clinical question. Although different types of studies will have different criteria for determining validity, three major
categories should be addressed for each article:

1. Is the study valid?

2. Are the results meaningful?

3. Are results relevant to my patient?

These criteria are described in Table 5­3. Each question will require evaluations that are specific to the type of study, with an understanding of the
validity issues that are relevant to the design. Clinicians must be able to determine to what extent the results of a study inform their practice, especially
when some studies will have “significant” findings and others will not (see Chapter 36). Having a working knowledge of research design and analysis
approaches is essential for making these judgments.

Table 5­3
Outline of General Criteria for Critical Appraisal

1. Is the study valid?


This is the determination of the quality of the design and analysis, as well as the extent to which you can be confident in the
study’s findings. Various scales can be used to assess the methodological quality of a study.
Is the research question important, and is it based on a theoretical rationale? Is the study design appropriate for the research question?
Was the study sample selected appropriately? Was the sample size large enough to demonstrate meaningful effects? Were subjects followed for
a long enough time to document outcomes?
Was bias sufficiently controlled? For an intervention trial, was randomization and blinding used? For observational studies, was there control of
potential confounders?
What were the outcome measures? Were they valid and reliable? Were they operationally defined? Were they appropriate to the research
question?
Were data analysis procedures applied and interpreted appropriately? Do data support conclusions?

2. Are the results meaningful?


Results must be interpreted in terms of their impact on patient responses and outcomes. They may be related to primary and
secondary outcomes.
Is the sample sufficiently representative of the target population so that results can be generalized?
How large was the effect of intervention, the accuracy of the diagnostic test, or the degree of risk associated with prognostic factors?
Is the effect large enough to be clinically meaningful?

3. Are the results relevant to my patient?


Finally, you must determine whether the findings will be applicable to your patient and clinical decisions.
Were the subjects in the study sufficiently similar to my patient?
Can I apply these results to my patient’s problem? What are the potential benefits or harms?
Is the approach feasible in my setting and will it be acceptable to my patient?
Is this approach worth the effort to incorporate it into my treatment plan?
Appraisal is also dependent on the extent to which authors present clear and complete descriptions of their research question, study design, analysis
procedures, results, and conclusions. Checklists have been developed for various types of studies that list specific content that should be included in a
written report to assure transparency (see Chapter 38).

STEP 4: Apply the Evidence

Once the literature is reviewed, analyzed, and interpreted, the clinician must then determine whether research results can be applied to a given clinical
situation, with integration of expertise, patient values, and the context of the clinical setting. This is where the “rubber meets the road,” so to speak—
putting it all together to inform a clinical decision. This conclusion may not be simple, however. Research studies may not fit your patient’s situation
exactly. For instance, in the case of Mrs. H, many studies and systematic reviews have shown results for patients with adhesive capsulitis but did not
focus specifically on people with diabetes.

And then there is the unhelpful situation where you could not find the evidence—because your search strategy was not sufficient, no one has studied
the problem, or studies have shown no positive effects (see Box 5­2). This is where the decision­making process meets uncertainty and where
judgment is needed to determine what constitutes best evidence for a specific clinical question. For Mrs. H, we can go through the thought process
shown in Figure 5­5, resulting in a decision to explore corticosteroid injection as a supplement to physical therapy.

Figure 5–5

EBP framework, showing the process to decide on appropriate treatment for a patient with diabetes who has adhesive capsulitis. After considering all
relevant information, a decision is made to pursue a particular treatment approach.

Box 5­2 An invisible unicorn is grazing in my office … prove me wrong!

This fantastical challenge carries an important message about evidence.59 Try to prove that the invisible unicorn does not exist. Come visit my office
and you will probably find no evidence that it does, but perhaps you are not looking in the right places, or carefully enough, or for the right kind of
evidence. Proving a negative is difficult, if not impossible.

Consider a hypothetical trial to investigate the effectiveness of a corticosteroid injection to improve shoulder function. Ten patients with adhesive
capsulitis are randomly assigned to two groups, one group getting the active injection and the other a placebo, with subjects and experimenters all
blinded to assignment. In the end, one person in each group improves. What is the conclusion?

The evidence shows that the drug does not work—OR—The study provides no evidence that the drug works.

This is no small distinction. The first conclusion is about the “evidence of absence,” suggesting that there is evidence to show there is no effect. The
second conclusion is about the “absence of evidence,” indicating that we have not found evidence to show that it works—we just do not know yet.

There are many possible reasons why the study did not show an effect. Maybe if the sample were larger we would have seen more people respond
positively; maybe we could have measured function more precisely to show a difference; maybe our sample was biased; maybe the effect is just too
small to be meaningful—or maybe the injection just does not work! Evidence is not black and white. We have to consider many questions when we
evaluate what we find. How big a difference is going to matter? What are the relevant costs, potential adverse outcomes, the patient’s preferences?
Was the analysis of data appropriate? Was the sample large enough to show a difference? Did the design control for potential biases? Are these
results confirmed in more than one study? Can we have confidence in the outcomes? Conclusions of an absent effect can be misleading. It is always
possible that the treatment is not effective, but it is also possible that the study simply was not powerful enough to show a difference. Beware of the
invisible unicorn! Absence of evidence is not evidence of absence.60

We will revisit this issue from a statistical perspective in Chapter 23.

STEP 5: Assess the Effectiveness of the Evidence

The final step in the EBP process is to determine whether the application of the evidence resulted in a positive outcome. Did Mrs. H improve as
expected? If she did not, the clinician must then reflect on why the evidence may not have been valid for the patient’s situation, whether additional
evidence is needed to answer the clinical question, or whether other factors need to be considered to find the best treatment approach for her.

This step is often ignored but in the end may be the most important, as we must continue to learn from the evidence—what works and what does not.
The process shown in Figure 5-3 is circular because, through this assessment, we may generate further questions in the search for the “best evidence.”
These questions may facilitate a new search process or they may be the basis for a research study.

Levels of Evidence
As we search for evidence, we want to find the most accurate and valid information possible. Depending on the kind of question being asked, different
types of studies will provide the strongest evidence. To assist in this effort, studies can be assigned to levels of evidence, which essentially reflect a
rating system. These levels represent the expected rigor of the design and control of bias, thereby indicating the level of confidence that may be placed
in the findings.

Levels of evidence are viewed as a hierarchy, with studies at the top representing stronger evidence and those at the bottom being weaker. This
hierarchy can be used by clinicians, researchers, and patients as a guide to find the likely best evidence.61 These levels are also used as a basis for
selecting studies for incorporation into guidelines and systematic reviews.

Classification Schemes

Several classifications of evidence have been developed, with varying terminology and grading systems.62,63 The most commonly cited system was developed by the Oxford Centre for Evidence-Based Medicine (OCEBM), shown in Table 5-4. This system addresses five types of clinical questions:61

Diagnosis: Is this diagnostic or monitoring test accurate?

Questions about diagnostic tests or measurement reliability and validity are most effectively studied using cross­sectional studies applying a
consistent reference standard and blinding.

Prognosis: What will happen if we do not add a therapy?

Studies of prognosis look for relationships among predictor and outcome variables using observational designs.

Intervention: Does this intervention help?

This question is most effectively studied using an RCT, although strong observational studies may also be used. The latest version of this
classification includes the N­of­1 trial as an acceptable alternative to the standard RCT.

Harm: What are the common or rare harms?

RCTs and observational studies can identify adverse effects. Outcomes from N­of­1 trials can provide evidence specific to a particular patient.

Screening: Is this early detection test worthwhile?

RCTs are also the strongest design for this purpose, determining whether screening tests are effective by comparing outcomes for those who do and do not receive the test.

Table 5-4
Levels of Evidence

For each type of question, levels range from Level 1 (strongest) to Level 5 (weakest).

Diagnosis
Level 1: Systematic review of cross-sectional studies with consistently applied reference standard and blinding
Level 2: Cross-sectional study with consistently applied reference standard and blinding
Level 3: Nonconsecutive studies, or studies without a consistently applied reference standard
Level 4: Case-control studies, or studies with a nonindependent reference standard
Level 5: Mechanistic reasoning

Prognosis
Level 1: Systematic review of inception cohort studies
Level 2: Inception cohort studies
Level 3: Cohort study or control arm of an RCT
Level 4: Case series, case-control studies, or poor-quality cohort study
Level 5: n/a

Intervention
Level 1: Systematic review of RCTs or N-of-1 trials*
Level 2: RCT, or observational study with dramatic effect
Level 3: Nonrandomized controlled cohort or follow-up study
Level 4: Case series, case-control studies, or historical controls
Level 5: Mechanistic reasoning

Harm†
Level 1: Systematic review of RCTs, systematic review of nested case-control studies, N-of-1 trial*, or observational study with dramatic effect
Level 2: RCT, or observational study with dramatic effect
Level 3: Nonrandomized controlled cohort or follow-up study‡
Level 4: Case series, case-control study, or historical controls
Level 5: Mechanistic reasoning

Screening
Level 1: Systematic review of RCTs
Level 2: RCT
Level 3: Nonrandomized controlled cohort or follow-up study
Level 4: Case series, case-control studies, or historical controls
Level 5: Mechanistic reasoning

Adapted from Oxford Centre for Evidence-Based Medicine (OCEBM): http://www.cebm.net/index.aspx?o=5653.

OCEBM Levels of Evidence Working Group: The Oxford 2011 Levels of Evidence. Used with permission.

*N-of-1 trial with a patient who has raised the question.

†Harm relates to adverse effects of treatment and can be described as rare or common.

‡Numbers must be sufficient to rule out common harm; duration of follow-up must be sufficient to show long-term harm.


The Hierarchy of Evidence

Hierarchies are delineated for each type of research question to distinguish the designs that provide the strongest form of evidence for that purpose.
The OCEBM structure is based on five levels of evidence:

Level 1 evidence comes from systematic reviews or meta­analyses that critically summarize several studies. These reviews are the best first pass
when searching for information on a specific question (see Chapter 37).

Level 2 includes individual RCTs or observational studies with strong design and outcomes (see Chapters 14 and 19).

The studies included in levels 1 and 2 are intended to reflect the strongest designs, including RCTs for assessment of interventions, harms, and
screening procedures, as well as prospective observational studies for diagnosis and prognosis.

Level 3 includes studies that do not have strong controls for bias, such as nonrandomized studies, retrospective cohorts, or diagnostic studies that do not apply a consistent reference standard (see Chapters 17 and 19).

Level 4 is a relatively low level of evidence, primarily including descriptive studies such as case series or studies that use historical controls (see
Chapter 20).

Level 5 is based on mechanistic reasoning, whereby evidence­based decisions are founded on logical connections, including pathophysiological
rationale.67 This may be based on biological plausibility or basic research. Case reports that focus on describing such mechanisms in one or more
individuals can be informative at this level of evidence.

As the nature of EBP has matured, hierarchies of evidence have been modified over time. In previous versions of levels of evidence, the “bottom”
level included case reports, expert opinion, anecdotal evidence, and basic research. With the most recent OCEBM modifications in 2011, these
sources have been replaced by mechanistic reasoning, suggesting that such “opinions” must still be based on logical rationale. Also omitted are
questions about economics for which there is less consensus on what counts as good evidence.61

Qualitative and Descriptive Research Evidence

The framework for levels of evidence has been based on designs used in quantitative studies. Studies that produce qualitative or descriptive data have
been excluded from the hierarchy, and are sometimes not seen as true forms of evidence. These research approaches do, however, make significant
contributions to our knowledge base and identification of mechanisms, and will often provide an important perspective for appreciating patients’
concerns. Quantitative studies can tell us which treatment is better or the degree of correlation between variables, but qualitative inquiry can help us
understand the context of care and how the patient experience impacts outcomes—factors that can have substantial influence on clinical decisions
(see Chapter 21). Therefore, rather than thinking of qualitative studies as a low level of evidence in a quantitative paradigm, a different set of criteria needs to be applied that focuses on the various approaches and kinds of questions relevant to qualitative research.64,65

Qualitative research stands apart from other forms of inquiry because of its deep­rooted intent to understand the personal and social experience of
patients, families, and providers. Daly et al66 have proposed a hierarchy with four levels of evidence, emphasizing the capacity for qualitative research
to inform practice and policy.

Level 1, at the top of the hierarchy, represents generalizable studies. These are qualitative studies that are guided by theory, and include sample
selection that extends beyond one specific population to capture diversity of experiences. Findings are put into the context of previous research,
demonstrating broad application. Data collection and analysis procedures are comprehensive and clear.

Level 2 includes conceptual studies that analyze data according to conceptual themes but have narrower application based on limited diversity in
the sample. Further research is needed to confirm concepts.

Level 3 includes descriptive studies that are focused on describing participant views or experiences with narrative summaries and quotations, but with no detailed analysis.

Level 4 includes case studies, exploring insights that have not been addressed before, often leading to recognition of unusual phenomena.


Quality of Evidence

Levels of evidence are intended to serve as a guide to accessing and prioritizing the strongest possible research on a given topic. However, levels are
based on design characteristics, not quality. An RCT may have serious flaws because of bias, restricted samples, inappropriate data analysis, or
inappropriate interpretation of results. At the same time, level 4 studies may provide important relevant information that may influence decision­
making for a particular patient.

Grading Evidence Up or Down

Hierarchies used to classify quality of published research should serve as a guide for searching and identifying higher quality studies, but are not
absolute. By evaluating the quality of the evidence presented, we may find that a higher­quality study at a lower level of evidence may be more helpful
than one at a higher level that has design or analysis flaws (see Chapter 37). For instance, a well­designed cohort study may be more applicable than a
poorly designed RCT. The level of evidence for a strong observational study may be graded up if there is a substantial treatment effect. An RCT can be
graded down based on poor study quality, lack of agreement between studied variables and the PICO question, inconsistency between studies, or
because the absolute effect size is very small.

Levels of evidence are based on the assumption that systematic reviews and meta­analyses provide stronger evidence than individual trials
in all categories. The content of reviews, however, is not always sufficiently detailed to provide applicable information for clinical decision­making.
For example, reviews often lack specifics on patient characteristics, operational definitions about interventions, or adverse effects, which can vary
across studies. Individual references may still need to be consulted to inform clinical decisions.

Implementing Evidence­Based Practice


Although the concepts and elements of EBP were introduced more than 20 years ago, the integration of EBP into everyday practice has continued to be
a challenge in all healthcare disciplines. As we explore evidence, it is important to keep a perspective regarding its translation to practice. The ultimate
purpose of EBP is to improve patient care, but this cannot happen by only using published research. The clinician must be able to make the
connections between study results, sound judgment, patient needs, and clinical resources.

The literature is replete with studies that have examined the degree to which clinicians and administrators have been able to implement EBP as part of
their practice. It is a concern that appears to affect all health professions and practice settings, as well as healthcare cultures across many countries.
For the most part, research has shown that issues do not stem from devaluing evidence­based practice. Although healthcare providers appreciate their
professional responsibility to work in an evidence­based way, they also recognize the difficulties in achieving that goal from personal and
organizational perspectives, particularly related to availability of time and resources, as well as establishing an evidence­based clinical culture.67

Several strategies can be incorporated to build an EBP clinical environment. See the Chapter 5 Supplement for further discussion.

Knowledge Translation

As we come to better appreciate the importance of EBP, we must also appreciate the journey we face in getting from evidence to practice.
Implementation studies can help us see how evidence can be integrated into practice. But then there is still one more hurdle—securing the knowledge
that facilitates actual utilization of evidence.

Knowledge translation (KT) is a relatively new term that has been used to describe the longstanding problem of underutilization of evidence in
healthcare.68 It describes the process of accelerating the application of knowledge to improve outcomes and change behavior for all those involved in
providing care. It is a process that needs interprofessional support, including disciplines such as informatics, public policy, organizational theory, and
educational and social psychology to close the gap between what we know and what we do (see Chapter 2).69

Although the term KT has been widely used, its definition has been ambiguous, often used interchangeably with other terms such as knowledge
transfer, knowledge exchange, research utilization, diffusion, implementation, dissemination, and evidence implementation.70,71 The Canadian
Institutes of Health Research (CIHR) have offered the following definition:

Knowledge translation is a dynamic and iterative process that includes synthesis, dissemination, exchange and ethically-sound application of knowledge to improve health, … provide more effective health services and products and strengthen the health care system.72

Knowledge­to­Action Framework

This definition characterizes KT as a multidimensional concept involving the adaptation of quality research into relevant priorities, recognizing that KT
goes beyond dissemination and continuing education.70 KT includes both creation and application of knowledge—what has been called the
knowledge­to­action framework.71

Knowledge creation addresses specific needs and priorities, with three phases:

1. Inquiry—including completion of primary research.

2. Synthesis—bringing disparate findings together, typically in systematic reviews, to reflect the totality of evidence on a topic.

3. Development of tools—further distilling knowledge into creation of decision­making tools such as clinical guidelines.

The action cycle begins with identification of a problem, reflecting the current state of knowledge. It then leads to application and evaluation of the
process for achieving stable outcomes. Integral to the framework is the input of stakeholders, including patients, clinicians, managers, and policy
makers (see Focus on Evidence 5­2).71

Focus on Evidence 5–2

From Knowledge to Action

Although there is substantial evidence for the importance of physical activity for children, school and community programs often do not achieve
their full potential, either because of poor quality of programming or lack of engaging activities. Based on a review of relevant data and several
theories related to activity and goal achievement, a set of principles was designed to address documented limitations: Supportive, Active,
Autonomous, Fair, Enjoyable (SAAFE).75 The purpose of SAAFE was to guide planning, delivery, and evaluation of organized physical activity. The
process involved teachers, students, community leaders, and after­school staff, and was tested in several different environments to document
changes in behavior and programming, including adoption of teacher training strategies, development of research protocols for evaluation, and
dissemination of findings across settings.

This is an example of the knowledge­to­action cycle, requiring multiple inputs, recognition of barriers, and iterations of contributions to understand
how evidence can be implemented in real­world environments.


Knowledge, Implementation, and Evidence

There is an obvious overlap of knowledge translation, implementation research, and EBP, all representing a process of moving research knowledge to
application. Although the definitions appear to coincide, KT has a broader meaning, encompassing the full scope of knowledge development, starting
with asking the right questions, conducting relevant research, disseminating findings that relate to a clinical context, applying that knowledge, and
studying its impact.

KT is seen as a collaborative engagement between researchers and practitioners, reflecting together on practice needs and consequences of research
choices, with mutual accountability.75 It is an interactive, cyclical, and dynamic process that integrates an interprofessional approach to change. In
contrast, implementation science focuses on testing how interventions work in real settings, and what solutions need to be introduced into a health
system to promote sustainability.76 KT is a complementary process with a stronger emphasis on the development, exchange, and synthesis of
knowledge.

In further contrast, EBP is used more narrowly to refer to the specific part of that process involving the application of information to clinical decision
making for a particular patient, including the influence of the patient’s values and the practitioner’s expertise. In a broader framework, KT formalizes
the relationship between the researcher and the practitioner, including mechanisms to remove barriers to EBP.

COMMENTARY

Don’t accept your dog’s admiration as conclusive evidence that you are wonderful.

— Ann Landers (1918–2002)

Advice columnist

The central message of EBP is the need to establish logical connections that will strengthen our ability to make sound decisions regarding patient
care—but because we base research results on samples, we make many assumptions about how findings represent larger populations or individual patients. We also know that not all patients respond in the same way and, therefore, the causal nature of treatments is not absolute.
Several studies looking at the same questions can present different findings. Unique patient characteristics and circumstances can influence
outcomes. Studies often try to examine subgroups to determine the basis for these differences, but even those data are subject to variance that
cannot be explained or anticipated. This is why clinical decisions should be informed by multiple sources of knowledge, including theory and
experience.

In his classic paper, Things I Have Learned (So Far), Cohen77 cautions us as researchers and practitioners:

Remember that throughout the process, … it is on your informed judgment as a scientist that you must rely, and this holds as much for the
statistical aspects of the work as it does for all the others… . and that informed judgment also governs the conclusions you will draw.

So although the need for EBP seems uncontroversial, it has not yet achieved its full promise. As we shall see in upcoming chapters, uncertainty is
ever­present in practice and clinical research. Even when evidence appears strong and convincing, clinical decisions can vary based on practitioner
and patient values.78 We need to apply good principles of design and statistics with sound clinical judgment to strengthen the likelihood that our
conclusions, and therefore our actions, have merit.

References

1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press, 2001.

2. Institute of Medicine. Best care at lower cost: The path to continuously learning health care in America. Washington, DC: The National Academies Press, 2013.

3. BMJ Clinical Evidence: What conclusions has clinical evidence drawn about what works, what doesn’t based on randomised controlled trial
evidence? Available at http://clinicalevidence.bmj.com.ezproxy­v.musc.edu/x/set/static/cms/efficacy­categorisations.html. Accessed June 25, 2017.

4. Fox JB, Shaw FE. Relationship of income and health care coverage to receipt of recommended clinical preventive services by adults ­ United States,
2011–2012. MMWR Morb Mortal Wkly Rep 2014;63(31):666–670. [PubMed: 25102414]

5. American Academy of Pediatrics Task Force on Infant Positioning and SIDS. Positioning and sudden infant death syndrome (SIDS): update.
Pediatrics 1996;98(6 Pt 1):1216–1218. [PubMed: 8951285]

6. Schreiber J, Stern P. A review of literature on evidence­based practice in physical therapy. Internet J Allied Health Sci and Pract 2005;3(4).

7. Ely JW, Osheroff JA, Maviglia SM, Rosenbaum ME. Patient­care questions that physicians are unable to answer. J Am Med Inform Assoc
2007;14(4):407–414. [PubMed: 17460122]

8. Timmermans S, Mauck A. The promises and pitfalls of evidence­based medicine. Health Aff (Millwood) 2005;24(1):18–28. [PubMed: 15647212]

9. Mercuri M, Gafni A. Medical practice variations: what the literature tells us (or does not) about what are warranted and unwarranted variations. J
Eval Clin Pract 2011;17(4):671–677. [PubMed: 21501341]

10. Lee HY, Ahn HS, Jang JA, Lee YM, Hann HJ, Park MS, Ahn DS. Comparison of evidence­based therapeutic intervention between community­ and
hospital­based primary care clinics. Int J Clin Pract 2005;59(8):975–980. [PubMed: 16033623]

11. Fiscella K, Sanders MR. Racial and ethnic disparities in the quality of health care. Annu Rev Public Health 2016;37:375–394. [PubMed: 26789384]

12. Prasad V, Vandross A, Toomey C, Cheung M, Rho J, Quinn S, Chacko SJ, Borkar D, Gall V, Selvaraj S, et al. A decade of reversal: an analysis of
146 contradicted medical practices. Mayo Clinic Proceedings 2013;88(8):790–798. [PubMed: 23871230]

13. National Partnership for Women and Families: Overuse, underuse and misuse of medical care. Fact Sheet. Available at
http://go.nationalpartnership.org/site/DocServer/Three_Categories_of_Quality.pdf. Accessed February 17, 2017.

14. Isseroff RR, Dahle SE. Electrical stimulation therapy and wound healing: where are we now? Advances in Wound Care 2012;1(6):238–243. [PubMed:
24527312]

15. Centers for Medicare and Medicaid Services: Decision Memo for Electrostimulation for Wounds (CAG­00068N). Available at
https://www.cms.gov/medicare­coverage­database/details/nca­decision­memo.aspx?NCAId=27&fromdb=true. Accessed May 20, 2017.

16. Kloth LC. Electrical stimulation for wound healing: a review of evidence from in vitro studies, animal experiments, and clinical trials. Int J Low
Extrem Wounds 2005;4(1):23–44. [PubMed: 15860450]

17. Ennis WJ, Lee C, Gellada K, Corbiere TF, Koh TJ. Advanced technologies to improve wound healing: electrical stimulation, vibration therapy, and
ultrasound—what is the evidence? Plast Reconstr Surg 2016;138(3 Suppl):94s–104s. [PubMed: 27556780]

18. AHC Media: Association files suit over reimbursement change. Electrical stimulation funding at issue. Available at
https://www.ahcmedia.com/articles/48484­association­files­suit­over­reimbursement­change. Accessed May 20, 2017.

19. Ashrafi M, Alonso­Rasgado T, Baguneid M, Bayat A. The efficacy of electrical stimulation in lower extremity cutaneous wound healing: a systematic
review. Exp Dermatol 2017;26(2):171–178. [PubMed: 27576070]


20. Korenstein D, Falk R, Howell EA, Bishop T, Keyhani S. Overuse of health care services in the United States: an understudied problem. Arch Intern
Med 2012;172(2):171–178. [PubMed: 22271125]

21. Takata GS, Chan LS, Shekelle P, Morton SC, Mason W, Marcy SM. Evidence assessment of management of acute otitis media: I. The role of
antibiotics in treatment of uncomplicated acute otitis media. Pediatrics 2001;108(2):239–247. [PubMed: 11483783]

22. Mangione­Smith R, DeCristofaro AH, Setodji CM, Keesey J, Klein DJ, Adams JL, Schuster MA, McGlynn EA. The quality of ambulatory care
delivered to children in the United States. N Engl J Med 2007;357(15):1515–1523. [PubMed: 17928599]

23. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr EA. The quality of health care delivered to adults in the United States. N
Engl J Med 2003;348(26):2635–2645. [PubMed: 12826639]

24. National Foundation for Infectious Diseases: Pneumococcal disease fact sheet for the media. Available at
http://www.nfid.org/idinfo/pneumococcal/media­factsheet.html. Accessed February 17, 2017.

25. James JT. A new, evidence­based estimate of patient harms associated with hospital care. J Patient Saf 2013;9(3):122–128. [PubMed: 23860193]

26. Pronovost PJ, Bo­Linn GW. Preventing patient harms through systems of care. JAMA 2012;308(8):769–770. [PubMed: 22910751]

27. Desmeules F, Boudreault J, Roy JS, Dionne C, Fremont P, MacDermid JC. The efficacy of therapeutic ultrasound for rotator cuff tendinopathy: a
systematic review and meta­analysis. Phys Ther Sport 2015;16(3):276–284. [PubMed: 25824429]

28. Ebadi S, Henschke N, Nakhostin Ansari N, Fallah E, van Tulder MW. Therapeutic ultrasound for chronic low-back pain. Cochrane Database Syst Rev 2014(3):CD009169.

29. Silverman WA. Human Experimentation: A Guided Step into the Unknown . New York: Oxford University Press, 1985.

30. Isaacs D, Fitzgerald D. Seven alternatives to evidence based medicine. BMJ 1999;319:1618. [PubMed: 10600968]

31. O’Donnell M. A Sceptic’s Medical Dictionary . London: BMJ Books, 1997.

32. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ 1996;312:71–72.
[PubMed: 8555924]

33. Straus SE, Glasziou P, Richardson WS, Haynes RB, Pattani R, Veroniki AA. Evidence­Based Medicine: How to Practice and Teach It . London:
Elsevier, 2019.

34. Guyatt GH. Evidence­based medicine. ACP J Club 1991;114(2):A16.

35. Carman KL, Maurer M, Mangrum R, Yang M, Ginsburg M, Sofaer S, Gold MR, Pathak­Sen E, Gilmore D, Richmond J, et al. Understanding an
informed public’s views on the role of evidence in making health care decisions. Health Affairs 2016;35(4):566–574. [PubMed: 27044953]

36. Epstein RM, Alper BS, Quill TE. Communicating evidence for participatory decision making. JAMA 2004;291(19):2359–2366. [PubMed: 15150208]

37. Stewart M, Meredith L, Brown JB, Galajda J. The influence of older patient­physician communication on health and health­related outcomes. Clin
Geriatr Med 2000;16(1):25–36, vii–viii. [PubMed: 10723615]

38. Bastian H, Glasziou P, Chalmers I. Seventy­five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med
2010;7(9):e1000326. [PubMed: 20877712]


39. Huang YP, Fann CY, Chiu YH, Yen MF, Chen LS, Chen HH, Pan SL. Association of diabetes mellitus with the risk of developing adhesive capsulitis
of the shoulder: a longitudinal population­based followup study. Arthritis Care Res 2013;65(7):1197–1202.

40. Zreik NH, Malik RA, Charalambous CP. Adhesive capsulitis of the shoulder and diabetes: a meta­analysis of prevalence. Muscles Ligaments
Tendons J 2016;6(1):26–34. [PubMed: 27331029]

41. Haynes RB, Sackett DL, Guyatt GH, Tugwell PS. Clinical Epidemiology: How To Do Clinical Practice Research . 3 ed. Philadelphia: Lippincott
Willliams & Wilkins, 2006.

42. Samson D, Schoelles KM. Chapter 2: medical tests guidance (2) developing the topic and structuring systematic reviews of medical tests: utility of
PICOTS, analytic frameworks, decision trees, and other frameworks. J Gen Intern Med 2012;27 Suppl 1:S11–S119. [PubMed: 22648670]

43. Centre for Reviews and Dissemination. Systematic Reviews: CRD’s Guidance for Undertaking Reviews in Health Care. York: University of York, 2006.
Available at https://www.york.ac.uk/media/crd/Systematic_Reviews.pdf. Accessed February 3, 2017.

44. Schlosser RW, Koul R, Costello J. Asking well­built questions for evidence­based practice in augmentative and alternative communication. J
Commun Disord 2007;40(3):225–238. [PubMed: 16876187]

45. Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qual Health Res 2012;22(10):1435–1443. [PubMed:
22829486]

46. Perkins BA, Olaleye D, Zinman B, Bril V. Simple screening tests for peripheral neuropathy in the diabetes clinic. Diabetes Care 2001;24(2):250–256.
[PubMed: 11213874]

47. Donaghy A, DeMott T, Allet L, Kim H, Ashton­Miller J, Richardson JK. Accuracy of clinical techniques for evaluating lower limb sensorimotor
functions associated with increased fall risk. PM R 2016;8(4):331–339. [PubMed: 26409195]

48. Jernigan SD, Pohl PS, Mahnken JD, Kluding PM. Diagnostic accuracy of fall risk assessment tools in people with diabetic peripheral neuropathy.
Phys Ther 2012;92(11):1461–1470. [PubMed: 22836004]

49. Montori VM, Fernandez­Balsells M. Glycemic control in type 2 diabetes: time for an evidence­based about­face? Ann Intern Med 2009;150(11):803–
808. [PubMed: 19380837]

50. Roh YH, Yi SR, Noh JH, Lee SY, Oh JH, Gong HS, Baek GH. Intra­articular corticosteroid injection in diabetic patients with adhesive capsulitis: a
randomized controlled trial. Knee Surg Sports Traumatol Arthrosc 2012;20(10):1947–1952. [PubMed: 22113218]

51. Pai LW, Li TC, Hwu YJ, Chang SC, Chen LL, Chang PY. The effectiveness of regular leisure­time physical activities on long­term glycemic control in
people with type 2 diabetes: A systematic review and meta­analysis. Diabetes Res Clin Pract 2016;113:77–85. [PubMed: 26822261]

52. Berenguera A, Mollo­Inesta A, Mata­Cases M, Franch­Nadal J, Bolibar B, Rubinat E, Mauricio D. Understanding the physical, social, and emotional
experiences of people with uncontrolled type 2 diabetes: a qualitative study. Patient Prefer Adherence 2016;10:2323–2332. [PubMed: 27877024]

53. Blanchard V, Barr S, Cerisola FL. The effectiveness of corticosteroid injections compared with physiotherapeutic interventions for adhesive
capsulitis: a systematic review. Physiotherapy 2010;96(2):95–107.

54. Interprofessional Education Collaborative. Core Competencies for Interprofessional Collaborative Practice: 2016 Update. Washington, DC: Interprofessional Education Collaborative, 2016.

55. Rappolt S. The role of professional expertise in evidence­based occupational therapy. Am J Occup Ther 2003;57(5):589–593. [PubMed: 14527124]

56. Fetters L, Tilson J. Evidence Based Physical Therapy. Philadelphia: FA Davis, 2012.

57. Brown SJ. Evidence­Based Nursing: The Research­Practice Connection. Norwich, VT: Jones & Bartlett, 2018.

58. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol 2005;8:19–32.

59. Burton M. An invisible unicorn has been grazing in my office for a month … Prove me wrong. Evidently Cochrane, October 7, 2016. Available at http://www.evidentlycochrane.net/invisible­unicorn­grazing­office/. Accessed June 16, 2017.

60. Alderson P. Absence of evidence is not evidence of absence. BMJ 2004;328(7438):476–477. [PubMed: 14988165]

61. Centre for Evidence­Based Medicine: OCEBM Levels of Evidence. Available at http://www.cebm.net/ocebm­levels­of­evidence/. Accessed April 28,
2017.

62. Wright JG. A practical guide to assigning levels of evidence. J Bone Joint Surg Am 2007;89(5):1128–1130. [PubMed: 17473152]

63. Guyatt G, Gutterman D, Baumann MH, Addrizzo­Harris D, Hylek EM, Phillips B, Raskob G, Lewis SZ, Schunemann H. Grading strength of
recommendations and quality of evidence in clinical guidelines: report from an American College of Chest Physicians task force. Chest
2006;129(1):174–181. [PubMed: 16424429]

64. Levin RF. Qualitative and quantitative evidence hierarchies: mixing oranges and apples. Res Theory Nurs Pract 2014;28(2):110–112. [PubMed:
25087323]

65. Giacomini MK. The rocky road: qualitative research as evidence. ACP J Club 2001;134(1):A11–13. [PubMed: 11198027]

66. Daly J, Willis K, Small R, Green J, Welch N, Kealy M, Hughes E. A hierarchy of evidence for assessing qualitative health research. J Clin Epidemiol
2007;60(1):43–49. [PubMed: 17161753]

67. Snöljung A, Mattsson K, Gustafsson LK. The diverging perception among physiotherapists of how to work with the concept of evidence: a
phenomenographic analysis. J Eval Clin Pract 2014;20(6):759–766. [PubMed: 24815563]

68. Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients’ care. Lancet 2003;362(9391):1225–1230.
[PubMed: 14568747]

69. Davis D, Evans M, Jadad A, Perrier L, Rath D, Ryan D, Sibbald G, Straus S, Rappolt S, Wowk M, et al. The case for knowledge translation:
shortening the journey from evidence to effect. BMJ 2003;327(7405):33–35. [PubMed: 12842955]

70. Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, Robinson N. Lost in knowledge translation: time for a map? J Contin Educ
Health Prof 2006;26(1):13–24. [PubMed: 16557505]

71. Straus SE, Tetroe J, Graham I. Defining knowledge translation. CMAJ 2009;181(3–4):165–168. [PubMed: 19620273]

72. Canadian Institutes of Health Research: Knowledge translation. Available at http://www.cihr­irsc.gc.ca/e/29418.html#2. Accessed July 14, 2017.

73. National Center for Dissemination of Disability Research: What is knowledge translation? Focus: A Technical Brief from the National Center for the
Dissemination of Disability Research. Number 10. Available at http://ktdrr.org/ktlibrary/articles_pubs/ncddrwork/focus/focus10/Focus10.pdf.
Accessed April 17, 2017.

74. Lubans DR, Lonsdale C, Cohen K, Eather N, Beauchamp MR, Morgan PJ, Sylvester BD, Smith JJ. Framework for the design and delivery of
organized physical activity sessions for children and adolescents: rationale and description of the ‘SAAFE’ teaching principles. Int J Behav Nutr Phys
Act 2017;14(1):24. [PubMed: 28231794]

75. Oborn E, Barrett M, Racko G. Knowledge Translation in Health Care: A Review of the Literature. Cambridge, UK: Cambridge Judge Business School
2010.

76. Khalil H. Knowledge translation and implementation science: what is the difference? Int J Evid Based Healthc 2016;14(2):39–40. [PubMed: 27259003]

77. Cohen J. Things I have learned (so far). Am Psychologist 1990;45(12):1304–1312.

78. Rubenfeld GD. Understanding why we agree on the evidence but disagree on the medicine. Respir Care 2001;46(12):1442–1449. [PubMed:
11728303]

©2023 F.A. Davis Company. All Rights Reserved.