

Journal for Healthcare Quality

Improving Wait Times and Patient Satisfaction in Primary Care
Melanie Michael, Susan D. Schaffer, Patricia L. Egan, Barbara B. Little, Patrick Scott Pritchard
Abstract: A strong and inverse relationship between patient satisfaction and wait times in ambulatory care settings has been
demonstrated. Despite its relevance to key medical practice outcomes, timeliness of care in primary care settings has not been
widely studied. The goal of the quality improvement project described here was to increase patient satisfaction by minimizing
wait times using the Dartmouth Microsystem Improvement Curriculum (DMIC) framework and the Plan-Do-Study-Act (PDSA) improvement process. Following completion of an initial PDSA cycle, significant reductions in mean waiting room and exam room
wait times (p < .001 and p = .047, respectively) were observed
along with a significant increase in patient satisfaction with waiting room wait time (p = .029). The results support the hypothesis
that the DMIC framework and the PDSA method can be applied to
improve wait times and patient satisfaction among primary care
patients. Furthermore, the pretest-posttest preexperimental study
design employed provides a model for sequential repetitive tests
of change that can lead to meaningful improvements in the delivery of care and practice performance in a variety of ambulatory
care settings over time.

Keywords
ambulatory/physician office
community/public health
patient satisfaction
performance improvement models
quality improvement

Ambulatory healthcare is the largest and most widely used segment of the American healthcare system (Schappert & Rechtsteiner, 2008). Office visits in these settings account for 25% of U.S. healthcare expenditures (Centers for Medicare and Medicaid, 2010), fueling pressure for increased accountability from consumers, employers, and payers. In Crossing the Quality Chasm, the Institute of Medicine's (2001) Committee on Quality of Healthcare in America defines six aims for improving healthcare in the United States: safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity. Despite its relevance to practice outcomes and patient satisfaction, timeliness of care in office and other ambulatory care settings is among the least studied (Leddy, Kaldenberg, & Becker, 2003).
A strong and inverse relationship between patient satisfaction and wait times in primary care and specialty care physician offices has been demonstrated (Leddy et al., 2003; Press Ganey Associates, Inc. [Press Ganey], 2009).

Journal for Healthcare Quality, Vol. 35, No. 2, pp. 50–60. © 2013 National Association for Healthcare Quality. The authors have disclosed they have no significant relationships with, or financial interest in, any commercial companies pertaining to this article.


A large proportion of these studies focused on wait time in waiting rooms; however, the amount of time patients spend waiting in an
exam room is also important. A large-scale survey of 2.4 million patients across the United
States conducted by Press Ganey in 2008 revealed that among a list of key variables associated with patient satisfaction, the fifth strongest
correlation was between exam room wait time and
the likelihood of recommending the practice to others (Press Ganey, 2009). Medical practices that
are continually working to minimize wait times
can expect to see significant improvement in
the overall satisfaction of their patients (Press
Ganey, 2009) and associated medical practice
outcomes (Drain, 2007; Press Ganey, 2007a;
Saxton, Finkelstein, Bavin, & Stawiski, 2008;
Stelfox, Gandhi, Orav, & Gustafson, 2005).
Furthermore, reducing wait times can lead to
improved financial performance of the practice (Drain, 2007; Garman, Garcia, & Hargreaves, 2004; Nelson et al., 2007; Press Ganey,
2007b).

Intended Improvement
The purpose of the quality improvement (QI)
pilot project described here was to increase
patient satisfaction by minimizing wait times
in a Florida county health department (CHD)
Adult Primary Care Unit (APCU) practice using
the Dartmouth Microsystem Improvement Curriculum (DMIC) framework (Nelson, Batalden,
& Godfrey, 2007) and the Plan-Do-Study-Act
(PDSA) improvement process (Institute for
Healthcare Improvement, n.d.). Key study objectives included (a) identification of factors
that contribute to long waiting room and exam
room wait times, (b) identification of opportunities for improvement, (c) implementation of
one or more process improvements using the
PDSA model for improvement, and (d) evaluation of the impact on patient wait times, patient
satisfaction with wait times, and overall satisfaction with the care experience. Project approval
was obtained from the Florida Department of
Health and the University of Florida Behavioral/NonMedical Institutional Review Boards.


Methods
Setting
The CHD where the pilot was conducted is the
principal primary care safety net provider in
the community, with three practice sites and an
aggregate practice panel of more than 35,000
patients (Florida Department of Health, 2010).
The study was conducted in the APCU at the
Health Department's central practice location.
In a typical month the practice team in this unit,
consisting of two physicians and two advanced
practice nurses (APN), provides care for approximately 1,500 patients. Approximately 79%
of patients are White, 16% are Black/African
American, 2% are Asian, and 23% are Hispanic. Prevalent health problems include hypertension, diabetes, hyperlipidemia, depression, and chronic pain. Patient satisfaction
survey scores in the wait time category have historically lagged other satisfaction measures by
six to ten percentage points. Clinic managers
reported complicated visit routines involving
too many steps and delays as key obstacles to
timely patient care.

Planning the Intervention


The DMIC framework represents a systems-based approach to clinical QI. It is based on
the premise that the clinical microsystem is the
smallest replicable healthcare unit, which is,
in turn, the essential building block of larger
health systems. Each microsystem consists of "a small group of people who work together on a regular basis to provide care and the subpopulation of patients who receive that care" (Nelson et al., 2007, p. 233). Four key principles
for improving the performance of all microsystems are fundamental to the DMIC framework:
(a) engagement of everyone in the microsystem in continuous process and work improvement, (b) intelligent use of data, (c) establishment of an intimate understanding of the needs
of patients served by the microsystem, and (d)
development and maintenance of positive and
productive connections with other related microsystems (Nelson et al., 2008). The PDSA
model for improvement is the method of choice
within the DMIC for testing ideas for improvement that can lead to higher performance. It
represents a series of structured activities, organized cyclically in four phases, which can be
used to conduct repetitive tests of change (i.e.,
process improvements) in rapid sequence.

Establishing a functional relationship between process change and healthcare outcome variation is fundamental to the PDSA QI
methodology (Speroff & O'Connor, 2004). The
pretest/posttest preexperimental design is consistent with these objectives and was selected
for this project. It is also consistent with iterative learning, which is fundamental to the PDSA
method. An eight-phase implementation plan,
based on principles and concepts of the DMIC
as described by Nelson et al. (2007), was followed. The project unfolded over a period of
6 months. Key objectives, activities, tools, and
methods used in each phase are summarized in
Table 1.
Members of the project study team and
APCU staff members met initially to complete
the tasks associated with project phases one
through three, which include defining, measuring, and analyzing drivers of patient dissatisfaction with wait times. The results are summarized in the Ishikawa diagram shown in
Figure 1. Four main categories of causes
emerged: front-end operations, back-end operations, patient work-up, and ancillary services.
Within this study design, as outlined in
Table 1, the first phase of the PDSA cycle was
launched in project phase four. Key tasks associated with this phase included selection of specific test of change strategies and collection of
baseline data for future comparison. The highly
participative multivoting method enabled the
group to establish a clear set of priorities. Using
this method, APCU team members decided to
focus the intervention on front-end operations.
In addition to tasks associated with patient registration, team members working in this area
were also responsible for performing reception
duties, answering phones, and responding to
inquiries from patients and staff. These additional tasks resulted in a continuous stream
of interruptions and delays in completion of
registration processes. Baseline data collected
during the preintervention data collection period revealed a mean waiting room wait time of
28 min and a mean exam room wait time of
14 min; initial wait time targets for each category were set at 20 and 10 min, respectively.

Implementation
In project phase five, three specific strategies
were implemented with the goal of reducing
interruptions for the front office staff and allowing them to focus on patient registration


Table 1. Project Implementation Plan

Phase 1
  Key objectives: Increase knowledge of APCU clinical microsystem and opportunities for improvement
  Key activities: Assessment of clinical microsystem using the 5Ps (Nelson et al., 2007) framework
  Tools/methods: Primary Care Practice Profile assessment instrument

Phase 2
  Key objectives: Identification and selection of a theme for improvement
  Key activities: Group selection of a theme for improvement; alignment with the IOM (2001) six aims and the APCU mission
  Tools/methods: Brainstorming; multivoting

Phase 3
  Key objectives: Focus and align improvement efforts with improvement theme; connect theme to daily work processes
  Key activities: Document global aim statement (Nelson et al., 2007); flow chart current patient flow process
  Tools/methods: Global Aim Statement template; patient flow analysis diagram

Phase 4
  Key objectives: Plan: Define and focus improvement activities; connect improvement theme and aims to daily work processes
  Key activities: Conduct cause and effect analysis; select and define hypothesis and initial test of change; define improvement/action plan strategies and data management plan; assign roles/responsibilities; collect pretest baseline wait time and patient satisfaction data for future comparison
  Tools/methods: Specific Aim Statement template; Ishikawa diagram template; data collection instruments

Phase 5
  Key objectives: Do: Implement test of change/improvement
  Key activities: Operationalize test of change; collect posttest of change wait time and patient satisfaction data
  Tools/methods: Data collection instruments

Phase 6
  Key objectives: Study: Evaluate impact of test of change/improvement
  Key activities: Conduct data analysis; compare/analyze pre- and postimplementation wait times and patient satisfaction; create summary documents and reports; compare hypothesis to what actually happened
  Tools/methods: Microsoft Access databases; Epi Info; data summary tables and graphs

Phase 7
  Key objectives: Act: Prepare for next PDSA cycle
  Key activities: Determine next steps; modify or abandon process improvement; define next test of change; plan and prepare for next PDSA cycle
  Tools/methods: Based on evaluation and plan for next PDSA cycle

Phase 8
  Key objectives: Follow through on improvement
  Key activities: Document/communicate results; retain gains
  Tools/methods: Project reports; data display tables and graphs; storyboard/data wall

Accessible at www.clinicalmicrosystem.org.

tasks. A temporary reception station was created in the main hallway just outside the entrance to the APCU. A receptionist from the clerical float pool was assigned to greet, assist, and direct patients and to field questions. The single multipatient sign-in sheet was replaced with a simple half-page form that allowed each patient to sign in on a separate sheet. This allowed the receptionist to deliver completed forms to the registration team in real time throughout the day. Calls coming into the patient registration area were redirected to other staff members in the unit who are not responsible for direct patient care or services.

Figure 1. Causes and Effects: Patient Dissatisfaction With Wait Times

Methods of Evaluation
The evaluation plan included two key wait time
process measures: waiting room wait time and
exam room wait time. Waiting room wait time was
defined as the time elapsed between requesting
that the patient be seated in the waiting room
and the time he/she was called to be placed in
an exam room. Exam room wait time was defined as the amount of time elapsed from the
time the patient was seated in an exam room
and the time the physician or APN entered the
room.
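The two process measures reduce to simple arithmetic on the four recorded clock times. A minimal Python sketch of that computation (the dictionary keys are hypothetical stand-ins for the instrument's fields, not names from the study):

```python
from datetime import datetime

def minutes_between(start: str, end: str) -> float:
    """Elapsed minutes between two clock times recorded as HH:MM."""
    fmt = "%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 60

def wait_times(visit: dict) -> tuple:
    """Waiting room and exam room waits (minutes) for one visit record.

    Keys are assumptions: time seated in the waiting room, time seated
    in an exam room, and time the provider entered the exam room.
    """
    waiting_room = minutes_between(visit["seated_waiting"], visit["seated_exam"])
    exam_room = minutes_between(visit["seated_exam"], visit["provider_in"])
    return waiting_room, exam_room

visit = {"seated_waiting": "09:05", "seated_exam": "09:33", "provider_in": "09:47"}
print(wait_times(visit))  # (28.0, 14.0)
```

The same subtraction applied per visit, then averaged over a collection period, yields the mean wait times compared in the analysis.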
Convenience sampling was employed. Waiting room and exam room wait time data were
collected for all APCU patients seen during
the preimplementation and postimplementation wait time data collection periods. An instrument developed by study team members allowed APCU staff to record (a) time of patient
arrival, (b) time the patient was seated in the

waiting room, (c) time the patient was seated


in an exam room, and (d) time the provider
entered the exam room. Staff in the reception and registration areas were responsible for
recording patient arrival and waiting room seating times. Clinical support staff, responsible for
taking each patient's vital signs, recorded the
exam room seating time for each patient. Each
provider was responsible for recording his/her
own exam room entry times. APCU staff members received training in data collection procedures prior to implementation of the first data
collection exercise.
Wait time and patient satisfaction data
were collected before and after implementing
changes to the front-end operations. Each wait
time data collection period lasted 1 week concurrent with the first week of the pre- and
postimplementation patient satisfaction survey
data collection periods that each lasted 2 weeks.
Each of the wait time data collection periods
was limited to 1 week, based on an average
weekly visit volume of approximately 375 encounters and the assumption that data would be
collected on all, or nearly all, visits. Accordingly,
both wait time and patient satisfaction data were
collected for patients visiting the APCU during the first week of each data collection period while only patient satisfaction data were


collected for patients seen during the second week of each data collection period.

Patient Satisfaction Instrument


Patient satisfaction was defined as (a) patient
satisfaction with waiting room wait time, (b)
patient satisfaction with exam room wait time,
and (c) the likelihood of referring friends
and relatives to the practice as a proxy measure associated with overall satisfaction and
the likelihood of returning for care in the
future. A patient satisfaction survey instrument developed and selected by the Health
Resources and Services Administration (n.d.)
for use in Federally Qualified Community
Health Centers was used. The survey is in
the public domain, available in English and
in Spanish, and is accessible at http://bphc.
hrsa.gov/patientsurvey/patients.htm. It is well
suited to this project given the similarities of
the safety net populations served by Community
Health Centers and the CHD's primary care patient population. The instrument provides for
anonymous collection of data and includes a
total of 29 items in three response formats, including items that allow for collection of data
specifically relating to patient satisfaction with
wait times. A summary question asks patients to
rate the likelihood of referring friends and relatives. Three open-ended questions provide an
opportunity for patients to comment on what
they like best about the practice, what they
like least about the practice, and to make suggestions for improvement. Disadvantages associated with the use of the instrument include
lack of historical and comparison information
on the instrument's psychometric properties. Cronbach's alpha was calculated for the Likert-scale items, and the instrument was found to have high internal reliability (25 items; α = .98).
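For reference, Cronbach's alpha can be computed directly from an item-score matrix. A minimal NumPy sketch (the toy scores are purely illustrative, not the study's survey responses):

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Toy data: 4 respondents x 3 Likert items (illustration only)
scores = np.array([
    [4, 5, 4],
    [3, 3, 3],
    [5, 5, 5],
    [2, 2, 3],
])
print(round(cronbach_alpha(scores), 2))  # 0.96
```

Values near 1 indicate that the items move together across respondents, i.e., high internal consistency of the scale.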
Patient satisfaction surveys were distributed
at the point of care by study team members.
Trained medical interpreters were available for
non-English speaking patients. The oral invitation to participate was guided by a standard
script that covered all relevant elements of informed consent. Study team members received
information on the informed consent process
prior to interacting with patients. Written informed consent was not required as no unique
patient identifier information was collected and
participation was voluntary. Patients were able
to return completed surveys via a secure lock

box located in the APCU checkout area or


via the U.S. Postal Service using a stamped,
self-addressed envelope. Approximately 86%
(1,259/1,470) of patients seen in the APCU
during the first and second data collection
periods verbally accepted the initial invitation
to participate. The remaining 14% either declined to participate or to speak with a study
team member regarding participation. Both patient satisfaction survey data collection periods
lasted 2 weeks. A parallel cutoff point of 20 calendar days was established for return of patient
satisfaction surveys for both data collection periods. Surveys returned after the cutoff were
excluded from data analysis.

Analysis
All data were initially entered into two Microsoft
Office Access databases created exclusively for
the purpose of managing patient wait time data
and patient satisfaction data, respectively. The
data were subsequently imported into Microsoft
Excel and analyzed using Excel and Centers for
Disease Control's Epi Info. Two primary analyses were conducted. The t-test was used to compare mean wait times prior to and following
implementation of the test of change intervention. Chi-square was used to examine and compare patient satisfaction with waiting room and
exam room wait times, as well as the likelihood
of referring friends and family, for the pre- and
postimplementation periods. An alpha level of
.05 was used for all statistical tests.
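Both analyses correspond to standard statistical routines. A minimal SciPy sketch, using the waiting room satisfaction frequencies reported in Table 4 and simulated wait time samples (the actual analysis used the recorded visit data, not simulations):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated wait time samples (minutes), sized to match the reported
# means, SDs, and ns; stand-ins for the recorded visit data.
pre_wait = rng.normal(28.38, 18.94, 327)
post_wait = rng.normal(23.05, 16.83, 355)

# t-test comparing mean wait times pre- vs. postimplementation
t_stat, p_t = stats.ttest_ind(pre_wait, post_wait)

# Chi-square test on the 5-point satisfaction frequencies
# (waiting room wait time category, Table 4)
waiting_room = np.array([
    [42, 40, 71, 76, 38],   # preimplementation responses, Poor..Great
    [24, 36, 76, 93, 59],   # postimplementation responses
])
chi2, p_chi, df, expected = stats.chi2_contingency(waiting_room)
print(round(chi2, 2), df)  # 10.77 4, matching the reported result
```

With df > 1, `chi2_contingency` applies no continuity correction, so the statistic reproduces the reported χ2 = 10.77 on 4 df (p = .029).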

Results
Sample Description
A comparison of the age and gender characteristics of the sample population and the entire
APCU population are summarized in Table 2.
The proportion of patients in the 18- to 44-year age group was significantly lower and the
proportion in the 45- to 64-year age group was
significantly higher in the sample population
when compared to the entire APCU patient
population.
Wait time data were captured for 98%
(349/355) of patients seen by APCU providers
during the preimplementation wait time data
collection period and for 97% (365/375) of patients seen during the postimplementation wait
time data collection period. Missing and ambiguous data elements were identified in one
or both data categories for 6% of visits sampled


Table 2. Comparison of Demographic Characteristics of Patient Satisfaction Survey Participants and the APCU Population*

                     Preimplementation  Postimplementation  APCU
                     Participants       Participants        Population
Age groups (years)   n = 262            n = 285             N = 10,057
  18–44              124 (47%)          133 (47%)           6,258 (62%)
  45–64              119 (45%)          135 (47%)           3,240 (32%)
  ≥65                19 (7%)            17 (6%)             559 (6%)
                     χ2 = 52.99, df = 4, p < .001

Gender               n = 263            n = 284             N = 10,057
  Male               111 (42%)          126 (44%)           4,046 (40%)
  Female             152 (58%)          158 (56%)           6,011 (60%)
                     χ2 = 2.33, df = 2, p = .312

*Based on category-specific sample and APCU population data.

during the preintervention period and for 4% of visits sampled in the postintervention period.
All ambiguous data elements were excluded
from analyses of wait time data. Final calculation of the mean waiting room and exam room
wait times in the preimplementation period was
based on 327 and 331 visits, respectively, and in
the postimplementation period on 355 and 352
visits, respectively.
Response rates for the first and second patient satisfaction survey data collection periods
were 42% (270/643) and 47% (290/616), respectively. Of the 560 surveys returned, 75%
were returned via the drop box located inside
the APCU.

Wait Time
The results of a comparison of mean waiting
room wait times for the pre- and postimplementation periods are summarized in Table 3. Mean
waiting room wait time for patients seen during
the postimplementation period was 5.33 min
shorter when compared to the preimplementation period. This difference was significant
(p < .001). Mean exam room wait time following test of change implementation was 1.81
min shorter. This difference was also significant
(p = .047). Although these results were significant, targeted wait time goals (i.e., 20 min in
the waiting room category and 10 min in the
exam room category) were not met.

Patient Satisfaction
Results of the Chi-square analysis of patient satisfaction scores in the waiting room wait time,
exam room wait time, and likelihood of refer-

ring friends/family categories are summarized


in Table 4. The results were significant in the
waiting room wait time category (χ2 = 10.77,
df = 4, p = .029) but were not significant for
the exam room wait time or likelihood of referring measures.
Pareto chart analysis of the fair and poor responses to the 25 questions scored by patients
using the Likert-type scale was performed. The
results, as depicted in Figure 2, indicate that, when rank ordered by magnitude, the key drivers of dissatisfaction among APCU patients include waiting room wait time, exam room wait time, turnaround time for return of phone calls, time spent waiting for laboratory testing, and time spent waiting for laboratory results. The responses in these five categories accounted for nearly 50% of all fair and poor ratings.
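A Pareto ranking of this kind reduces to sorting item-level counts and accumulating each item's share of the total. A small sketch with hypothetical counts chosen only to mirror the reported pattern (the study's Figure 2 was built from the actual survey data):

```python
# Hypothetical per-item counts of "fair"/"poor" responses (illustration only)
fair_poor = {
    "waiting room wait time": 82,
    "exam room wait time": 58,
    "phone call return time": 41,
    "wait for laboratory testing": 30,
    "wait for laboratory results": 24,
}
grand_total = 470  # assumed total of fair/poor responses across all 25 items

# Rank items by count, then accumulate each item's share of the total.
ranked = sorted(fair_poor.items(), key=lambda kv: kv[1], reverse=True)
cumulative = 0
rows = []
for item, count in ranked:
    cumulative += count
    rows.append((item, count, round(100 * cumulative / grand_total, 1)))

for item, count, pct in rows:
    print(f"{item:28s} {count:4d}  cumulative {pct:5.1f}%")
```

With these assumed counts, the top five items together reach 50% of all fair/poor ratings, the same "vital few" concentration the Pareto analysis surfaced.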
A preliminary appraisal of responses to the
open-ended questions included on the patient satisfaction survey during the pre- and
postimplementation survey periods using the
word search function in Microsoft Excel revealed a number of recurring patterns and
themes. Among patient responses to the question, "What do you like best about our center?" the following themes emerged: (a) friendliness and caring of nurses and support staff,
(b) affordability, (c) co-location of laboratory
and pharmacy services, (d) clean and comfortable facilities, (e) quality of care, and (f) location/convenience. In the "What do you like least about our center?" category, emerging
themes included (a) wait times, (b) telephone
call back and response times, (c) crowded and
noisy waiting room, (d) amount of time with
provider, and (e) lack of specialty care and


Table 3. Pre- and Postimplementation Wait Time Comparisons

                          n     Range   M      SD     t     p
Waiting room wait time
  Preimplementation       327   1–133   28.38  18.94  3.89  <.001
  Postimplementation      355   1–153   23.05  16.83
Exam room wait time
  Preimplementation       331   0–63    14.45  12.15  1.99  .047
  Postimplementation      352   0–57    12.64  11.56

Table 4. Pre- and Postintervention Comparison of Wait Time Satisfaction and Likelihood of Referring

                          Response scale frequencies
                          Poor  Fair  OK    Good  Great
                          1     2     3     4     5
Waiting room    Pre(a)    42    40    71    76    38      χ2 = 10.77, df = 4, p = .029
                Post(b)   24    36    76    93    59
Exam room       Pre       31    27    66    86    51      χ2 = 8.06, df = 4, p = .089
                Post      16    31    72    89    71
Likely to refer Pre       6     5     23    85    126     χ2 = 1.69(c), df = 4, p = .793
                Post      8     3     19    91    142

(a) Preintervention period patient responses. (b) Postintervention period patient responses. (c) An expected value is <5; chi-square may not be valid.

mental health referral resources. Themes in


the suggestions for improvement category include (a) reducing wait times, (b) improving
telephone response time, and (c) increasing
the availability of referral resources, including
diagnostic imaging, specialty care, and mental
health services.

Discussion
Summary and Relation to Other Evidence
The APCU's baseline waiting room wait time of 28 min is consistent with those reported in other large-scale studies (Leddy et al., 2003; Press Ganey, 2009).

Figure 2. Drivers of APCU Patient Dissatisfaction

Using the DMIC framework and PDSA improvement process, significant reductions in the mean waiting room and exam
room wait times along with a significant increase in patient satisfaction with waiting room
wait time were achieved. No significant changes
in patient satisfaction with exam room wait time
or the likelihood of referring friends or family
were identified.

Although the wait time data collection instrument proved adequate, APCU and study
team members identified opportunities for instrument improvement in future PDSA cycles.
They also identified the need for improvement
in the accuracy and synchronization of timepieces used to collect wait time data.

Limitations
Key study limitations include use of a preexperimental pretest/posttest design, convenience
sampling strategy, and lack of historical information on the psychometric properties of
the patient satisfaction survey instrument. One
of the defining characteristics of QI relates
to demonstration of the relationship between
process and system improvements, which are
frequently multifaceted, and significant differences in targeted outcome or process measures
that are sustained over time (Ogrinc et al.,
2008). The results of the analyses reported here
provide only preliminary support for the hypothesis that the strategies implemented during this pilot resulted in meaningful improvements for APCU patients and staff. Replication
and consistent movement of key measures in
the desired directions over time using rigorous evaluation methods will provide the level of
evidence required for informed decision making regarding future improvement efforts using
this model.
A comparison of race and ethnicity characteristics of sample participants and the entire
APCU population was precluded by inconsistencies in methods used to categorize and collect data between the patient satisfaction survey
instrument and the CHD's information technology reporting systems. These issues will need to
be addressed prior to implementation of subsequent PDSA cycles.
The relative weights of the three process improvement strategies employed during the pilot cannot be quantified; however, at the conclusion of the study, APCU staff members, managers, and study team members agreed that construction of a separate permanent reception station near the entrance of the unit would be
essential to achieving and sustaining targeted
wait time objectives. This recommendation was
approved by the CHD administration and the
construction was completed within a few weeks
of pilot conclusion.

Interpretation
Although the mean waiting room wait time was reduced by 5.33 min, the 20-min wait target established for this category was not met during
the first PDSA cycle. Likewise, a 1.81-min reduction in the mean exam room wait time was
not sufficient to achieve the 10-min exam room
wait target. Pilot results and previous large-scale
studies provide support for the utility of the
PDSA model for improving wait times and patient satisfaction. Because the focus of the intervention was on front-end practice activities,
it is not surprising that the intervention had a
greater effect on waiting room time than it did
on exam room wait time. It is likely that one
or more subsequent PDSA cycles, focusing on
back-end clinical processes, will be required in
order to achieve meaningful improvement in
exam room wait times.
Pareto chart evaluation of pre- and postimplementation patient satisfaction survey data
supports the APCU team's plan to continue
focusing on wait time improvements in subsequent PDSA cycles. Expansion of the evaluation
plan to include examination of the impact of
any future improvements on patient retention,
practice financial outcomes, and risk management is also warranted.
The results of this pilot provide further support for the hypothesis that reducing waiting
room wait time improves patient satisfaction.
However, gains in the waiting room wait time
category measures were not accompanied by
significant changes in the reported likelihood
that patients will refer friends and relatives
to the practice. Evaluation of the impact of further improvements in satisfaction measures in
this category, and those associated with other
dimensions of care, should be included in the
long-term performance improvement plan for
the APCU.
Qualitative feedback from unit staff suggests
that process improvements may have resulted
in a calmer and less chaotic work environment
in the patient reception and registration areas.
Important upstream and downstream impacts


reported by APCU team members include (a) improved front-end patient flow and fewer delays in relay of charts between the registration
and clinical areas, (b) elimination of congestion in the APCU entrance area, (c) enhanced
patient privacy, (d) improved access to information and reception assistance for patients,
(e) fewer distractions and interruptions for registration staff, and (f) fewer registration process
errors.

Conclusion
The results of this project provide additional
support in favor of the DMIC framework and
PDSA improvement method as viable options
for conducting QI and achieving wait time
process improvements. The pretest/posttest
preexperimental study design employed is consistent with iterative learning, which is fundamental to the PDSA method. It provides a
model for conducting sequential repetitive tests
of change over time that can lead to meaningful and sustained improvement in the delivery
of care and practice performance in a variety of
ambulatory care settings.

Acknowledgments
The authors wish to acknowledge and express
their gratitude to the following individuals who
participated in the planning and implementation of this project: Amanda Daly, Dianne
Nugent, Elizabeth Smith, Linda Flanagan, Marguerite Rappoport, Barbara O'Connor, and
members of the Adult Primary Care Unit team.

References
Centers for Medicare and Medicaid. (2010). National health expenditures fact sheet. Retrieved March 1, 2011, from www.cms.gov/NationalHealthExpendData/25_NHE_Fact_Sheet.asp.
Drain, M. (2007). Will your patients return? The foundation for success. Retrieved June 24, 2010, from www.pressganey.com/galleries/default-file/medical-practice_4.pdf.
Florida Department of Health. (2010). Personal health users report [Data file]. Retrieved December 1, 2010, from http://dohiws.doh.state.fl.us.
Garman, A. N., Garcia, J., & Hargreaves, M. (2004). Patient satisfaction as a predictor of return-to-provider behavior: Analysis and assessment of financial implications. Quality Management in Health Care, 13, 75–80.
Health Resources and Services Administration. (n.d.). Health center patient satisfaction survey. Retrieved July 7, 2010, from http://bphc.hrsa.gov/patientsurvey/default.htm.
Institute for Healthcare Improvement. (n.d.). Plan-Do-Study-Act (PDSA) worksheet (IHI tool). Retrieved July 15, 2010, from www.ihi.org/IHI/Topics/Improvement/ImprovementMethods/Tools/Plan-Do-Study-Act+%28PDSA%29+Worksheet.htm.
Institute of Medicine (IOM). (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.
Leddy, K. M., Kaldenberg, D., & Becker, B. W. (2003). Timeliness of ambulatory care treatment. Journal of Ambulatory Care Management, 26, 138–149.
Nelson, E. C., Batalden, P. B., & Godfrey, M. M. (2007). Quality by design: A clinical microsystems approach. San Francisco, CA: Jossey-Bass.
Nelson, E. C., Godfrey, M. M., Batalden, P. B., Berry, S. A., Bothe, A. E., McKinley, K. E., et al. (2008). Clinical microsystems, Part 1: The building blocks of health systems. Joint Commission Journal on Quality and Patient Safety, 34, 367–378.
Ogrinc, G., Mooney, S. E., Estrada, C., Foster, T., Goldmann, D., Hall, L. W., et al. (2008). The SQUIRE (Standards for Quality Improvement Reporting Excellence) guidelines for quality improvement reporting: Explanation and elaboration. Quality and Safety in Health Care, 17(Supplement 1), i13–i32.
Press Ganey Associates, Inc. (2007a). Return on investment: Patient loyalty pays. Retrieved June 24, 2010, from www.pressganey.com/galleries/lead-generating-acute/Loyalty_12-14-07.pdf.
Press Ganey Associates, Inc. (2007b). Return on investment: Increasing profitability by improving patient satisfaction. Retrieved November 27, 2009, from www.pressganey.com/galleries/default-file/WhitePaper_Profitability.pdf.
Press Ganey Associates, Inc. (2009). Pulse report 2009: Medical practice: Patient perspectives on American health care. Retrieved November 29, 2009, from http://pressganey.com.
Saxton, J. W., Finkelstein, M. M., Bavin, S. A., & Stawiski, S. (2008). Reduce liability risk by improving your patient satisfaction. Retrieved June 24, 2010, from www.pressganey.com/galleries/default-file/MD_White_PaperMalpractice_0808.pdf.
Schappert, S. M., & Rechtsteiner, E. A. (2008). Ambulatory medical care utilization estimates for 2006. Retrieved June 24, 2010, from www.cdc.gov/nchs/data/nhsr/nhsr008.pdf.
Speroff, T., & O'Connor, G. T. (2004). Study designs for PDSA quality improvement research. Quality Management in Health Care, 13, 17–32.
Stelfox, H. T., Gandhi, T. K., Orav, E. J., & Gustafson, M. L. (2005). The relation of patient satisfaction with complaints against physicians and malpractice lawsuits. American Journal of Medicine, 118, 1126–1133.

Authors' Biographies
Melanie J. Michael, DNP, MS, FNP-C, CAPPM, CPHQ,
is an Assistant Professor at the University of South Florida
College of Nursing in Tampa, Florida, where she teaches
nursing leadership, healthcare quality improvement, and
patient safety.
Susan Schaffer, PhD, ARNP, FNP-BC, is Clinical Associate Professor and Chair, Department of Women's, Children's and Family Nursing at University of Florida College of Nursing in Gainesville, Florida, where she also coordinates the Family Nurse Practitioner Track. She is an Editorial Board Member for Clinical Nursing Research: An International Journal and a reviewer for Biological Research for Nursing.
Patricia L. Egan, MSN, RNC, CNL, CAPPM, is a Nursing Program Consultant at the Sarasota County Health Department in Sarasota, Florida. Her responsibilities include data management and quality improvement.

Vol. 35 No. 2 March/April 2013

Barbara B. Little, DNP, MPH, RN, APHN-BC, CNE, NCSN, is a faculty member with Florida State University College of Nursing in Tallahassee, Florida, where she teaches leadership, evidence-based practice, and epidemiology in the DNP program.

Patrick Scott Pritchard, MPH, is an Epidemiologist with the Florida Department of Health in Tampa, Florida. His responsibilities include coordination and improvement of public health surveillance efforts, conducting outbreak investigations, and data management/analysis. Mr. Pritchard is also a doctoral student at the University of South Florida College of Public Health.
For more information on this article, contact Melanie
Michael at mmichael@health.usf.edu.

Journal for Healthcare Quality is pleased to offer the opportunity to earn continuing education (CE) credit to those who read this article and take the online posttest at http://www.nahq.org/education/content/jhq-ce.html. This continuing education offering, JHQ 241, will provide one contact hour to those who complete it appropriately.

Core CPHQ Examination Content Area
III. Performance Measurement & Improvement

Objectives
• Explain the relationship among wait time, patient satisfaction, and medical practice outcomes
• Describe four key principles of the Dartmouth Clinical Microsystem Curriculum
• Design a quality improvement project using the pretest/posttest pre-experimental design and the Plan-Do-Study-Act model for improvement

Questions:
1. An examination of the Pareto chart analysis presented suggests that
a. Staff members should receive privacy and confidentiality training
b. The likelihood of referring friends and relatives increases as wait time decreases
c. Exam room wait times are a major driver of patient dissatisfaction
d. Comfort and safety of the waiting area are a major driver of patient dissatisfaction

2. Physician office practices that keep wait times to a minimum can expect an increase in patient satisfaction because
a. A strong positive correlation between wait times and patient satisfaction has been demonstrated in several large-scale studies
b. A strong inverse relationship between wait times and patient satisfaction has been demonstrated in several large-scale studies
c. Anecdotal reports from patients suggest a strong positive correlation between these variables
d. Anecdotal reports from patients suggest a strong negative correlation between these variables

3. The results of the quality improvement project reported here
a. Demonstrate that reducing the workload of staff in the reception area results in improved patient flow
b. Suggest that patient perceptions regarding staff friendliness are a key driver of satisfaction with care
c. Provide only preliminary support for the original study hypothesis
d. Indicate clearly that the improvement strategies employed resulted in an increase in patient satisfaction

4. Key tasks typically associated with the first phase of the PDSA cycle include
a. Collecting baseline data for future comparison
b. Conducting one or more t tests using baseline data
c. Documenting and evaluating upstream and downstream impacts
d. Implementing process improvement strategies

5. Performance improvement efforts based on the Dartmouth Microsystem Improvement Curriculum include all of the following EXCEPT
a. Promoting positive relationships among individuals in related microsystems
b. Collecting and using data to drive and inform decision making
c. Exploring and understanding the needs of patients served by the microsystem
d. Allocating responsibility for patient satisfaction to members of the care team

6. The PDSA method is consistent with iterative learning and, as such, can be used
a. To estimate the magnitude of measure improvement that can be achieved
b. To implement a series of change tests focused on improving a single process
c. To develop and prioritize quality improvement efforts
d. To increase staff motivation to participate in quality improvement efforts

7. In the pilot study presented, fewer registration process errors represent
a. An important unanticipated downstream impact
b. An important projected upstream impact
c. A coincidental and nonsignificant impact
d. A coincidental impact attributable to the Hawthorne effect

8. The results of this pilot provide additional support for the hypothesis that
a. Improving wait times and patient satisfaction in primary care can improve practice financial indicators
b. Improving exam room wait time increases the likelihood that patients will refer their friends to the practice
c. Improving exam room wait time reduces the likelihood that patients will refer their friends to the practice
d. Improving waiting room wait time results in increased patient satisfaction with the care experience

9. The purpose of the quality improvement pilot study presented was to
a. Measure the proportion of primary care patients referred by other practice patients
b. Increase patient satisfaction among a group of primary care patients by minimizing wait times
c. Increase patient satisfaction among a group of primary care patients by increasing appointment access
d. Create a Pareto chart to examine the impact of process improvements on patient satisfaction

10. Objectives for the study presented include
a. Improving practice financial outcomes
b. Improving utilization of check-in technology
c. Improving staff participation in quality improvement efforts
d. Identifying causes of long wait times

11. Within the Dartmouth Clinical Microsystem Curriculum framework, the method of choice for testing change ideas is the
a. Chi square
b. Pareto chart
c. PDSA model for improvement
d. Delphi technique
