Using Small-Group Instruction to Improve Students' Reading Fluency: An Evaluation of the Existing Research


Journal of Applied School Psychology

ISSN: 1537-7903 (Print) 1537-7911 (Online) Journal homepage: http://www.tandfonline.com/loi/wapp20

Using Small-Group Instruction to Improve Students' Reading Fluency: An Evaluation of the Existing Research

John C. Begeny, Rebecca A. Levy & Stacey A. Field

To cite this article: John C. Begeny, Rebecca A. Levy & Stacey A. Field (2018) Using Small-Group
Instruction to Improve Students' Reading Fluency: An Evaluation of the Existing Research, Journal
of Applied School Psychology, 34:1, 36-64, DOI: 10.1080/15377903.2017.1328628

To link to this article: https://doi.org/10.1080/15377903.2017.1328628

Published online: 28 Jun 2017.


Using Small-Group Instruction to Improve Students' Reading Fluency: An Evaluation of the Existing Research
John C. Begeny, Rebecca A. Levy, and Stacey A. Field
Department of Psychology, North Carolina State University, Raleigh, North Carolina, USA

ABSTRACT

Reading fluency is necessary for reading comprehension, but approximately 40% of U.S. fourth-grade students have inadequate reading fluency skills. Because small-group (SG) instruction is often used as a first line of intervention for struggling readers, SG instruction targeting deficiencies in text reading fluency ought to be part of every school's intervention toolbox. The authors summarize the existing research on instruction and interventions that specifically target reading fluency and are implemented by an adult with 3 or more students at once. Key findings revealed that most studies used a single-case design, nearly all studies were rated positively in terms of methodological quality, and the majority of participants significantly improved as a result of receiving SG intervention. Furthermore, of the five studies examining comparable SG and 1-on-1 interventions, 79% of the students performed equally well in both intervention formats. Implications and several recommendations for future research are discussed.

ARTICLE HISTORY: Received December; Revised April; Accepted May

KEYWORDS: Reading fluency; intervention; small group; literature review; response to intervention

The National Reading Panel (National Institute of Child Health and Human
Development, 2000) and several subsequent U.S. panels and publications (e.g.,
Armbruster, Lehr, & Osborne, 2001; Lonigan & Shanahan, 2010) identified oral
reading fluency as one of the five foundational skills crucial for successful reading
achievement. Reading fluency can be defined as the “ability to read connected text
rapidly, smoothly, effortlessly, and automatically with little conscious attention to
the mechanics of reading, such as decoding,” (Meyer & Felton, 1999, p. 284), and
is considered essential for comprehension, generalization, and maintenance of all
reading skills (Bonfiglio, Daly, Martens, Lin, & Corsaut, 2004; Daane, Campbell,
Grigg, Goodman, & Oranje, 2005; Fuchs, Fuchs, Hosp, & Jenkins, 2001; Torgesen
& Hudson, 2006).
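Throughout this literature, fluency is most often operationalized as words read correctly per minute (WCPM) from a timed oral reading of a passage. A minimal sketch of that scoring, using the common curriculum-based measurement convention (our illustration, not a procedure taken from this article):

```python
def wcpm(words_attempted: int, errors: int, seconds: float) -> float:
    """Words read correctly per minute from a timed oral reading.

    Common curriculum-based measurement convention: subtract errors
    from the words attempted, then scale to a one-minute rate.
    """
    if seconds <= 0:
        raise ValueError("reading time must be positive")
    return (words_attempted - errors) * 60 / seconds

# Example: a student reads 120 words in 90 seconds with 6 errors.
print(wcpm(120, 6, 90))  # 76.0
```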
Unfortunately, far too many students have inadequate reading fluency skills. For
example, Snow, Burns, and Griffin (1998) argued that 25–40% of students’ academic
careers are at risk because they are not reading quickly or easily enough, and nation-
ally representative studies of students’ reading fluency estimate approximately 40%

CONTACT John C. Begeny, john_begeny@ncsu.edu, Department of Psychology, North Carolina State University, Raleigh, NC, USA.
© Taylor & Francis Group, LLC

of U.S. fourth-grade students are nonfluent readers (Daane et al., 2005). Given these
data and the importance of fluency for reading comprehension (Daane et al., 2005;
Fuchs et al., 2001), it is not surprising that 31% of U.S. fourth-grade students read
below a basic level and 64% of them read below a proficient level (National Assess-
ment of Educational Progress, 2015).

Effective instruction for reading fluency

Research focused on identifying the most effective components of reading fluency instruction has suggested that a combination of techniques results in the best
outcomes for improving both typically performing and struggling students’ oral
reading fluency (Chard, Vaughn, & Tyler, 2002). Begeny et al. (2010) identified eight
evidence-based instructional components related to fluency through a review of
several meta-analyses and comprehensive literature reviews (e.g., Chard et al., 2002;
Meyer & Felton, 1999; Morgan & Sideridis, 2006; National Institute of Child Health
and Human Development, 2000; Therrien, 2004). For example, Therrien reported
that the following seven components are related to improved reading fluency: (a)
students’ repeated reading of ability-appropriate text out loud to an adult at least
three times (effect size [ES] = 1.37), (b) model reading by a more skilled reader
(ideally an adult; ES = 0.40), (c) systematic error-correction procedures (ES =
0.51), (d) goal setting (e.g., practicing reading material until a criterion is met; ES =
1.70), (e) performance feedback and showing results visually (e.g., with a graph; ES
= 0.57), (f) verbal cues to read with fluency (ES = 0.72), and (g) verbal cues to read
with comprehension (ES = 0.81). Also, Morgan and Sideridis evidenced strong
support for an additional component, using systematic praise and a structured
reward system for reading behaviors.
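The effect sizes reported above are standardized mean differences. As a generic sketch of the form such an ES takes (the conventional Cohen's-d computation; Therrien's exact procedures may differ), the treatment and comparison means are contrasted in pooled standard deviation units:

```latex
\[
  ES \;=\; \frac{\bar{X}_{\mathrm{T}} - \bar{X}_{\mathrm{C}}}{s_{\mathrm{pooled}}},
  \qquad
  s_{\mathrm{pooled}}
    = \sqrt{\frac{(n_{\mathrm{T}}-1)\,s_{\mathrm{T}}^{2} + (n_{\mathrm{C}}-1)\,s_{\mathrm{C}}^{2}}
                 {n_{\mathrm{T}} + n_{\mathrm{C}} - 2}}
\]
```

Read this way, an ES of 1.37 for repeated reading means outcomes averaged 1.37 pooled standard deviations above the comparison condition.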
Collectively, there is compelling evidence from both individual studies and
meta-analyses that students improve in reading fluency and comprehension when
the aforementioned instructional and motivational strategies are integrated within
a one-to-one (student-teacher) context (Begeny et al., 2010; Chard et al., 2002;
National Institute of Child Health and Human Development, 2000). But a closer
look at the research in this area reveals that nearly all studies involve one-on-one
instruction; therefore, it is relatively unknown to what extent fluency-based instruc-
tion can be effectively used in groups of three or more students.

Small-group instruction

In contrast to one-on-one instruction, small-group (SG) instruction usually offers a more practical approach to serving the needs of all struggling learners because it is more time and resource efficient than serving students individually; not surprisingly, teachers have reported a preference for serving multiple students at
once (Elbaum, Vaughn, Tejero Hughes, & Watson Moody, 2000). Investigators mea-
suring factors related to SG instruction have also found that providing instruction
in a ratio of one teacher to no more than six students allows for similar amounts

of corrective feedback, opportunities for responding, and teacher attention compared with what can be provided in one-on-one instruction (Thurlow, Ysseldyke,
Wotruba, & Algozzine, 1993). Additionally, Foorman and Torgesen (2001) argued
that the most effective interventions for children at risk for reading problems
include strategies that can be applied within SG settings, such as direct instruction
in explicit skills, positive support, and encouragement.
Some research has shown that SG instruction may even be as effective as one-on-
one interventions, at least for some students (e.g., Elbaum et al., 2000). For example,
Vaughn et al. (2003) evaluated the effect of group size on the effectiveness of a
30-min reading intervention for second-grade struggling readers. Participants
received instruction from an adult in one of three instructional contexts:
one-on-one, one-on-three, or one-on-10. Each group received the same intervention
protocol for 30 min each day over 13 weeks. Results indicated there were
no differences between the groups receiving one-on-one and one-on-three instruc-
tion, but both of these groups performed significantly better than the one-on-10
group.
SG instruction is also embedded within the first line of intervention recom-
mendations commonly made as part of instructional problem-solving models,
such as prereferral intervention teams (Buck, Polloway, Smith-Thomas, & Cook,
2003), professional learning communities (DuFour, Eaker, Karhanek, & DuFour,
2004), and multilevel models of responsiveness-to-intervention (RTI; Fuchs &
Vaughn, 2012). Using RTI as an example, multitiered service delivery is an essential
component of this model and is often described in the literature as a three-tier
process (Denton, 2012; National Center on Response to Intervention, 2010).
The primary level (Tier 1) includes high-quality core instruction intended to
meet the needs of most students, the secondary level (Tier 2) includes evidence-
based interventions of moderate intensity that address the needs of struggling
learners, and the tertiary level (Tier 3) includes individualized interventions of
increased intensity for students who show no or minimal response to Tier 2
intervention.
In the present discussion, Tier 2 is especially of interest because most of the
RTI scholarly literature suggests that intervention at this level be implemented in
small groups. Although some have suggested particular group sizes of 3–4 students
(e.g., Gersten et al., 2008) or 4–6 students (e.g., Burns & Gibbons, 2008), these
suggestions are mainly illustrative and the overall recommendation is for educators
to consider SG instruction at Tier 2 because it is more feasible for implementation.
Also relevant to this discussion, many argue that Tier 2 intervention should target
the student’s specific area of reading skill deficit (e.g., Burns & Gibbons, 2008; Fuchs
& Vaughn, 2012; Tilly, 2008). This is important to point out because many students,
especially elementary-aged students, struggle specifically with reading fluency. For
instance, Denton et al. (2011) found that 273 of 680 first-grade students (∼40%)
consistently read within an at-risk range when oral reading fluency was regularly
progress-monitored over an eight-week period. In a follow-up study, Fletcher et al.
(2011) reported that 104 of these students (72.1%) continued to display fluency-only

deficits after receiving a broad-based intervention targeting phonemic awareness, decoding, fluency, vocabulary, and comprehension.

Purpose of the present study


The underlying rationale for our study can be summarized by connecting the main
points made previously: (a) reading fluency is a critical skill during literacy develop-
ment, but a large percentage of students are nonfluent (Daane et al., 2005); (b) a rela-
tively large body of research supports particular strategies to improve students’ read-
ing fluency, but meta-analyses of this research show that most investigators evaluate
the strategies when implemented in a one-on-one (teacher-student) context (e.g.,
Therrien, 2004); (c) SG instruction (generally speaking) is more time and resource
efficient than one-on-one instruction, has evidence of efficacy in reading, and is a
highly recommended instructional context during initial (e.g., Tier 2) stages of inter-
vention for struggling readers; and (d) teachers would logically benefit their students
by using SG interventions that target reading fluency and are research supported.
A search of the research literature on SG interventions designed to target reading
fluency indicates that there has been active research in this area for at least a decade,
but a summative review of this literature is currently unavailable. A comprehensive
and summative review of the existing literature (e.g., methodological characteristics,
patterns of results, and quality indicators [QIs] of the available studies) is
particularly useful in evaluating intervention research, and it is informative for
researchers and practitioners who seek to further investigate or implement the interventions being
evaluated. Such a review gives a clearer picture about the current state of the liter-
ature (e.g., what practices appear to be most effective and with which populations)
and needed directions for future research that could meaningfully expand the
knowledge base in that area.
The purpose of this study was to systematically review the existing research on
SG instruction that is designed to improve students’ text reading fluency. In doing
so, our goal was to answer the following five research questions. First, what are the
methodological characteristics and commonalities of the reported studies in this
area, including but not limited to participant characteristics, experimental design,
and SG instructional components? Second, what findings have the studies reported?
Third, using the indicators of methodological quality proposed by Gersten et al.
(2005) for group-design studies (GDS) and Horner et al. (2005) for single-case
designs (SCDs), to what extent do the studies in our review meet these QIs, and
collectively do the practices reported in the studies meet proposed criteria as an
evidence-based practice? Fourth, of studies that evaluate the relative effectiveness
between comparable SG versus one-on-one reading fluency instruction practices,
what are the findings of those studies (e.g., does evidence suggest that SG instruc-
tion for reading fluency is more, less, or equally effective compared to one-on-one
instruction)? Fifth, based on the findings of the aforementioned questions, what
methodological aspects are currently lacking from this area of research and what
are the most pressing directions for future investigations?1

Method

Inclusion criteria and search procedures


To facilitate a comprehensive search of the available literature, we used searches of
electronic databases, hand searches of relevant articles (i.e., reading the entire article
and titles of all articles listed in the references section), and correspondence with
authors known to publish research in the area. Our goal was to include all relevant
peer-reviewed publications, though an assessment of methodological quality was
considered as part of our third research question.

Inclusion criteria
Given our focus on reviewing the research on SG interventions designed specifically
to target reading fluency, a research report was included if it was published prior to
2016 and met each of the following inclusion criteria.
- The research was reported in English within a peer-reviewed journal or book chapter.
- The report presented previously unpublished findings.
- The study was experimental or quasiexperimental and generated quantitative outcomes.
- The report described a reading intervention that was implemented by an adult with three or more children at the same time.
- The study included at least one dependent variable that measured students' reading fluency.
- The author(s) of the study indicated (explicitly or implicitly) that the primary goal of the intervention was to improve students' reading fluency. If reading fluency was just one area being measured and the authors did not state that the main goal of the intervention was to improve fluency, the study was excluded, because we were interested in reviewing targeted (rather than broad-based) interventions for reading fluency. This criterion is consistent with our study goals and with the fact that many students struggle specifically with reading fluency and would benefit from a targeted intervention (e.g., Denton et al., 2011; Fletcher et al., 2011).
- The SG instruction described in the study included at least two strategies (e.g., repeated readings, modeling, reward procedures, performance feedback) that have been shown in prior research to improve reading fluency. This criterion helped ensure that the goal of the intervention was to improve participants' reading fluency.

Database searches
Using the PsycINFO and ERIC database systems, a computer search of the literature
was conducted using combinations of unique keywords. Specifically, the following
22 separate searches were conducted: small group and fluency, small-group and fluency, instruction and group and fluency, grouping and fluency, assisted reading and fluency and group, RTI and fluency, response to intervention and fluency, response-to-intervention and fluency, Tier 2 and fluency, Tier 3 and fluency, group and repeated reading and fluency, group and model reading and fluency, group and repeated reading and automaticity, fluency and Read Naturally, fluency and Reader's Theater, fluency and Great Leaps, fluency and RAVE-O, fluency and everybody reads, fluency and Fast Start, fluency and HELPS program, fluency and Fast for Word, reading fluency and the fluency assessment system. Results from this search yielded 553 unique studies,
which were reviewed closely by a member of the research team, Field, who has con-
siderable experience with the topic of our investigation and the process of including
or excluding studies according to established literature review criteria. As part of the
process, the full article or abstract was reviewed for potential inclusion, and all articles that clearly did not meet the inclusion criteria were excluded. The articles that
could potentially meet the inclusion criteria (n = 12) were independently reviewed
by the entire research team. With 100% agreement, nine studies met the inclusion
criteria from these search procedures.
Next, we searched Google Scholar with the terms small group AND reading flu-
ency instruction as a means to confirm the comprehensiveness of our original search
and determine if there were additional articles that would meet the inclusion crite-
ria. Because this search resulted in 117,000 hits (which would be expected given the
nature of the Google search engine and our search terms), and because we knew the
vast majority of these hits would either be irrelevant to our study or redundant with prior searches, the author (Field) who handled the initial search of articles from PsycINFO and ERIC carefully reviewed the first 500 Google Scholar hits.
met the inclusion criteria based on the search with Google Scholar.

Hand searches
After the database searches, each member of the research team conducted a hand
search of each study that met the inclusion criteria to identify references to other potentially eligible articles or chapters not already obtained or reviewed. For the same purpose, we also read each
of the six published meta-analyses (Chard et al., 2002; Kuhn & Stahl, 2003; Meyer
& Felton, 1999; Morgan & Sideridis, 2006; National Institute of Child Health and
Human Development, 2000; Therrien, 2004) that were available at the time of the
search and were on the topic of reading fluency instruction. The References sections
of those studies were also carefully reviewed. If potential sources were referenced
in the hand search procedures, we obtained the potentially includable source and
reviewed it for possible inclusion. Using hand-search procedures, and with 100%
agreement among those conducting the search, two additional studies were identi-
fied as meeting the inclusion criteria.

Correspondence with authors


Using the correspondence information available in each of the 11 articles that met
our inclusion criteria, we then contacted each author by email. In this communication
we stated the purpose of our study, listed our inclusion criteria, stated that we had
found one or more previously published articles from the author, referenced the
author's work that we identified for inclusion, and asked if he or she had previously
published or had an article in press that might also meet our inclusion criteria. As
needed, one follow-up email was sent to reiterate our request. Of the five authors contacted,
everyone replied and we identified one additional article that met our inclusion cri-
teria (i.e., an article that was in press at the time of our search). Based on all of our
search procedures, 12 articles met the inclusion criteria (representing 13 separate
studies) and each study was summarized and coded.

Coding and variables

To ensure reliable summaries and coding of the included studies, two authors,
Begeny and Field, independently coded 100% of the articles and a third author,
Levy, reviewed their coding to assess agreements and disagreements. All disagreements were discussed by the entire research team until full consensus was reached about how to accurately code the information; the agreed-on coding (with 100% agreement) is reflected within this report. Coding occurred in two phases. First, to
answer Research Questions 1, 2, and 4, we coded all of the information listed in
Table 1 (with data from this coding summarized in Tables 2–4). Across all categories
and studies, the initial percentage agreement between the two coders was 88.8%, and
then a consensus model among all authors was used to resolve disagreements (i.e.,
all authors discussed each disagreement until full consensus was achieved on the
respective issue).
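The interrater figures above are simple percentage agreement across coded cells. As a small illustration (the coding sheets are not published with the article, so the category values below are invented for the example), agreement can be computed as:

```python
def percent_agreement(coder_a, coder_b):
    """Percentage of coded cells on which two coders agree."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same number of cells")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Hypothetical codes for one study across eight coding categories
coder_1 = ["SCD", "at risk", "Y", "RR", "LPP", "N", "NR", "Y"]
coder_2 = ["SCD", "at risk", "Y", "RR", "LPP", "N", "NR", "N"]

print(percent_agreement(coder_1, coder_2))  # 87.5
```

Disagreeing cells (here, the final one) are then resolved by the consensus process the authors describe.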
To answer Research Question 3, the second phase of coding was completed to
assess the methodological quality of the included studies and determine to what
extent SG instruction practices that target reading fluency can be considered an
evidence-based practice (EBP). Across all categories and studies, the initial percent-
age agreement between the two coders was 97.3%, and then a consensus model was
used to resolve disagreements. Tables 5 and 6 summarize the coding categories and
data related to this phase of coding. The categories listed in the left-hand column
of those tables come directly from the criteria described by Horner et al. (2005) for
SCDs and Gersten et al. (2005) for GDS. We followed the definitions and criteria
from those resources, and used the binary coding scheme of met or did not meet
reported in a recent literature review by Sreckovic, Common, Knowles, and Lane
(2014). Due to space restrictions, we refer the reader to those three resources for
more description of each coding category.
In this study we followed the coding rules described by Sreckovic et al. (2014),
with only two exceptions. First, in determining whether dependent measures were
reliable and valid, we did not require the study to specifically provide reliability and
validity estimates for measures that have been described in numerous past studies
and by national panels as having adequate evidence of reliability and validity. For
example, with words read correctly per minute (WCPM) via curriculum-based mea-
sures of oral reading fluency (CBM-R), nearly all studies reported using this depen-
dent variable; but very few studies specifically discussed its reliability and validity.

Table . Coding categories and corresponding definitions.


Category Definition

Authora,b,c Authors of the journal article


Yeara,b,c Year article was published
na Total sample size
Demographicsa Sample demographics related to sex and ethnicity
Received Ixa Total sample that received intervention
Evaluated : Ix? a Did the study evaluate a comparable : intervention to the SG version?
Small group sizea Number of participants who worked together in the small group(s)
Gradea Grade level of participants
Reading levela Participants’ level of need (general education, at risk, or special education)
Screenersa Measures used to screen and specifically identify appropriate participants for the
study
Rule-out conditionsa Any specified rule-out conditions the researchers used to ensure the participants
were suitable to receive the fluency-based intervention (e.g., participants needed
to perform above a certain level in decoding or within a targeted range for fluency)
Other characteristicsa Information about the sample regarding number of schools and types of school(s)
and setting
ELLa Whether participants were identified as English language learners
ELL na Number of participants who were identified as English language learners
Interventionistsb Adults who provided the intervention to the study participants
Training levelb How the interventionists were trained to provide the intervention
Implementation integrityb Percentage of intervention sessions observed for implementation integrity and mean
integrity reported
Designb Design of the study as reported by the authors
Componentsb Evidence-based strategies/components used as part of the intervention
Number of componentsb Number of evidence-based strategies/components used as part of the intervention
Manualized Ix?b Did the study use a manualized intervention program?
Passages usedb Passages used during intervention
Session durationb Length in minutes of the intervention sessions
Session frequencyb Frequency (in times per week) the participants received an intervention session
Study durationc Length of study as reported by authors
Dependent measuresc Outcome measures used to evaluate participants’ response to intervention
Acceptability measuresc Type of acceptability measure used
Analytic methodsc Type of analytic methods used to evaluate participants’ response to intervention
Overall outcomesc Outcomes described by author(s) and, if reported, percentage of students who,
during the SG Ix, significantly outperformed a control group or control condition
Effects of SG vs. :c Of the number of participants who demonstrated superior performance in either the
: or SG Ix compared to the control condition (see denominator), the number (see
numerator) and percentage of students that responded as well or better during SG
Ix compared to : Ix

Note. : = one-on-one; ELL = English language learner; Ix = intervention; SG = small group.
a Categories presented in Table .
b Categories presented in Table .
c Categories presented in Table .

Instead, most studies referred the reader to resources describing its reliability and
validity. We therefore did not require specific descriptions of reliability or validity
for measures such as these because we know reviewers often request that psychome-
tric information for measures widely used in research be deleted for space purposes.
In fact, one reviewed study (Begeny, Braun, Lynch, Ramsay, & Wendt, 2012) specif-
ically stated that page restrictions prohibited a report of psychometric properties
for the WCPM measure. Second, Gersten et al. (2005) listed one of the “desirable
quality indicators” as: “Did the research report include actual audio or videotape
excerpts that capture the nature of the intervention?” (p. 152). Our interpretation
of that indicator is that the report provides detail or resources specifying the nature
of the intervention. As such, if a study included actual implementation protocols
Table 2. Summary of findings: Sample size and participant characteristics.

- Kuhn: Evaluated 1:1 Ix? N. Reading level: at risk (two grade levels). Screeners (rule-out conditions): NR. Other characteristics: three classrooms in a low-SES, small city in the Southeastern United States. ELL: N.
- Begeny & Martens: Evaluated 1:1 Ix? N. Reading level: at risk. Screeners: WCPM, WJ-III Reading, CTOPP (scaled scores on the WJ-III Reading and CTOPP had to exceed a cutoff). Other characteristics: four classrooms in one urban-school afterschool program in the Northeastern United States. ELL: N.
- Begeny & Silber: Evaluated 1:1 Ix? N. Reading level: at risk. Screeners: WCPM, WJ-III Reading, CTOPP (scaled scores on the WJ-III Reading and CTOPP had to exceed a cutoff). Other characteristics: urban school in the Northeastern United States. ELL: N.
- Bonfiglio*, Daly, Persampieri, & Andersen: Evaluated 1:1 Ix? N. Reading level: at risk. Screeners: NR. Other characteristics: same elementary class. ELL: N.
- McCurdy, Daly, Gortmaker, Bonfiglio*, & Persampieri (Experiment 1): Evaluated 1:1 Ix? N. Reading level: at risk. Screeners: NR. Other characteristics: urban school in the Midwestern United States. ELL: N.
- McCurdy, Daly, Gortmaker, Bonfiglio*, & Persampieri (Experiment 2): Evaluated 1:1 Ix? N. Reading level: special education and general education. Screeners: NR. Other characteristics: suburban school in the Midwestern United States. ELL: N.
- Begeny, Krouse, Ross, & Mitchell: Evaluated 1:1 Ix? N. Reading level: at risk. Screeners: WCPM, TOWRE, WJ-III Reading, CTOPP (targeted percentile range on WCPM; below average or average on all other screeners). Other characteristics: rural school in the Southeastern United States. ELL: N.
- Klubnik & Ardoin: Evaluated 1:1 Ix? Y. Reading level: at risk. Screeners: WCPM (teachers were encouraged to use WCPM scores to identify participants, but no scores reported). Other characteristics: two classrooms from one suburban school in the Southeastern United States. ELL: N.
- Begeny, Hawkins, Krouse, & Laugle: Evaluated 1:1 Ix? Y. Reading level: at risk. Screeners: WCPM, TOWRE (targeted percentile range on WCPM; below average or average on TOWRE). Other characteristics: one classroom in a rural school in the Southeastern United States. ELL: N.
- Ross & Begeny: Evaluated 1:1 Ix? Y. Reading level: at risk. Screeners: WCPM (teacher-identified participants within a targeted percentile range on WCPM). Other characteristics: rural school in the Southeastern United States. ELL: Y.
- Begeny, Braun, Lynch*, Ramsay*, & Wendt: Evaluated 1:1 Ix? N. Reading level: at risk (two grade levels). Screeners: WCPM (just prior to intervention, all participants scored in the "at risk" category on the DIBELS universal screening measure). Other characteristics: urban school in the Southeastern United States. ELL: N.
- Begeny, Yeager, & Martínez: Evaluated 1:1 Ix? Y. Reading level: at risk. Screeners: WCPM (targeted percentile range on Spanish WCPM). Other characteristics: urban school in San Jose, Costa Rica. ELL: N.
- Ross & Begeny: Evaluated 1:1 Ix? Y. Reading level: at risk. Screeners: WCPM (targeted percentile range on WCPM). Other characteristics: three classrooms in a rural school in the Southeastern United States. ELL: N.

Note. CTOPP = subtest(s) from the Comprehensive Test of Phonological Processing; ELL = English language learner; Ix = intervention; N = no; NR = not reported; TOWRE = Test of Word Reading Efficiency; WCPM = words read correctly per minute (almost always as part of a seasonal benchmark assessment); WJ-III Reading = Reading subtest(s) from the Woodcock-Johnson III; Y = yes.
* Author not affiliated with university.
Table . Summary of the findings: Characteristics of interventions, interventionists, and experimental design.
Training Implementation Components Manualized Session Session
Author Year Interventionists level integritya Design (n) Ix? Passagesb duration frequency

Kuhn  Research team NR NR Between subjects LPP, RR, PF, CR, N Trade books – min x/week
(quasiexperimental) VC ()
Begeny & Martens  Research team Trained to mastery by –% of sessions; Multiple baseline WLT, PD, LPP, RR, N Silver, Burdett, and – min NR
researchers mean integrity > M/R () Ginn reading
% series
Begeny & Silber  Research team Trained to mastery by % of sessions; mean Alternating treatments WLT, LPP, RR, N DIBELS – min x/week
researchers integrity = M/R, retell ()
–%
Bonfiglioa , Daly,  Research team & Some formal training –% of sessions; Multiple probe across tasks LPP, CR, PD, M/R, N Houghton Mifflin – min x/week
Persampieri, & School staff mean integrity= GS, PF () reading series
Andersen –%
McCurdy, Daly,  Research team NR % of sessions; mean Multiple probe across tasks LPP, RR, CR, PD, N Silver, Burdett, and – min NR
Gortmaker, integrity= % M/R () Ginn reading
Bonfiglioa , & series, Houghton
Persampieri Mifflin reading
(Experiment ) series, and
researcher
compiled
McCurdy, Daly,  School staff NR % of sessions; mean Adapted alternating CR, PD, LPP, M/R, N Silver, Burdett, and  min x/week
Gortmaker, integrity= .% treatments RR () Ginn reading
Bonfiglioa , & series
Persampieri
(Experiment )
Begeny, Krouse,  Research team Trained to mastery by % of sessions; mean Alternating treatments RR, LPP, LO, M/R N DIBELS – min x/week
Ross, & Mitchell researchers integrity= % ()
Klubnik & Ardoin  Research team Trained to mastery by % of sessions; mean Alternating treatments LPP, RR, PD, M/R N Silver, Burdett, and NR NR
researchers integrity=.% () Ginn reading
series
Begeny, Hawkins,  Research team Trained to mastery by –% of sessions; Alternating treatments RR, LPP, PD, retell, N Silver, Burdett, and – min x/week
Krouse, & researchers mean integrity= M/R () Ginn reading
Laugle –% series
Ross & Begeny  Research team Trained to mastery by –% of sessions; Alternating treatments LPP, RR, retell, PD, N DIBELS  min NR
researchers mean integrity= VI, M/R ()
%
Begeny, Braun,  School staff Trained to mastery by % of sessions; mean One-group RR, LPP, PD, VC, Y HELPS Curriculum – min –x/week
Lyncha , researchers integrity= % pretest–posttest GS, PF, M/R,
Ramsaya , & (quasiexperimental) retell ()
Wendt
Begeny, Yeager, &  Research team Trained to mastery by –% of sessions; Alternating treatments RR, LPP, PD, M/R, N AIMSweb .– min NR
Martínez researchers mean integrity∼ retell ()
%
Ross & Begeny  Research team Trained to mastery by –% of sessions; Alternating treatments LPP, RR, retell, N Silver, Burdett, and – min x/week
researchers mean integrity= M/R, PD, PF () Ginn reading
–% series, and
researcher
compiled

Note. CR = choral reading; DIBELS = Dynamic Indicators of Basic Early Literacy Skills; GS = goal setting; HELPS = Helping Early Literacy with Practice Strategies; Ix = intervention; LO = listening only
(to a passage read aloud); LPP = modeling/listening passage preview; M/R = motivational/reward system; N = no; NR = not reported; PD = phrase-drill error correction; PF = performance feedback;
RR = repeated reading; VC = verbal cueing; VI = vocabulary instruction; WLT = word list training; Y = yes.
a A range is sometimes reported because some studies monitored implementation integrity more/less for some components, conditions, or interventionists (e.g., researcher vs. teacher) and reported the
integrity data specifically by component, condition, or interventionist.
b All selected passages (in each of the 13 studies) were assessed according to students’ instructional level and needs, and in this way they were “leveled” before beginning the study.
Table . Summary of the findings: Study duration, measures, data analytic techniques, and outcomes.
Study Dependent Acceptability Analytic Overall Effects of
Author Year duration measures measures methods outcomes SG vs. :

Kuhn   weeks QRI, QRI-II, TOWRE, NAEP NR Interpretations based Based on visual analysis of mean NA
Oral Reading Fluency only on descriptive scores, students in the Ix conditions
Scale statistics (e.g., outperformed the control group;
means); no statistical wide reading and RR improved word
analyses computed recognition, prosody, and WCPM.
Wide reading improved
comprehension
Begeny & Martens  – weeks WCPM, Maze Student reported Visual analysisa , paired  of  (%) of students performed NA
comprehension, word samples t tests, better with Ix compared to a
lists, WJ-III Reading Wilcoxon signed treatment as usual condition
ranks t-test
Begeny & Silber  NR WCPM NR Visual analysis  of  (%) of students performed NA
better with each Ix compared to
control condition; combination of all
 components was most effective for
all students
Bonfigliob , Daly,   weeks WCPM, Level of Student Teacher reported Visual analysis, effect  of  (%) of students performed NA
Persampieri, & Engagement size calculations better with each Ix compared to
Andersen control condition; the identified
“effective package” was most
effective for three of the four
students
McCurdy, Daly,  NR WCPM NR Visual analysis  of  (%) of students performed NA
Gortmaker, better with Ix compared to baseline
Bonfigliob , & levels of performance
Persampieri
(Experiment )
McCurdy, Daly,   weeks WCPM Teacher reported Visual analysis  of  (%) of students performed NA
Gortmaker, better with Ix compared to the
Bonfigliob , & control condition
Persampieri
(Experiment )
Begeny, Krouse, Ross,  NR WCPM NR Visual analysis  of  (%) of students performed NA
& Mitchell better with each of the Ix conditions
compared to control condition; RR
was most effective, followed by LPP
Klubnik & Ardoin   weeks WCPM NR Visual analysis  of  (%) of students performed / = %
better with at least one Ix condition
compared to control condition


Begeny, Hawkins,  – weeks WCPM, Word-overlap NR Visual analysis, SEM  of  (%) of students performed / = %
Krouse, & Laugle passage gains analysis, PND better in each Ix condition
compared to control condition; for
most students, SG and : conditions
were more effective than PT
Ross & Begeny   weeks WCPM, TOWRE NR Visual analysis, SEM  of  (%) of students performed / = %
analysis, better with : Ix compared to
randomization tests, control condition;  of  (%) of
pre- to postproject students performed better with SG
analyses of standard Ix compared to control condition
scores
Begeny, Braun,  – weeks WCPM NR Wilcoxon signed ranks t  of  (%) of students exceeded NA
Lynchb , Ramsayb , & test, growth expected growth and improved
Wendt comparison using more with Ix compared to business
national norms as usual condition
Begeny, Yeager, &   weeks WCPM Student reported Visual analysis, PND,  of  (%) of students performed / = %
Martínez SEM analysis better in each Ix condition
compared to control condition
Ross & Begeny   weeks WCPM, Word-overlap NR Visual analysis, PND,  of  (%) of students performed / = %
passage gains SEM analysis, better in at least one Ix compared to
randomization test the control condition; the longer
analysis (∼ min) Ix was generally more
effective than the ∼ min Ix

Note. Ix = intervention; NA = not applicable; NAEP = National Assessment of Educational Progress; NR = not reported; QRI = Qualitative Reading Inventory; PND = percentage of nonoverlapping data;
PT = peer tutoring; RR = repeated reading; SEM = standard error of measurement; SG = small group; TOWRE = Test of Word Reading Efficiency; WCPM = Words read correctly per minute; WJ-III
Reading = Reading subtest(s) from the Woodcock- Johnson III.
a Visual analysis refers to visual analysis procedures that are commonly used with single-case design data.
b Author not affiliated with university
Table . Coding of quality indicators of included single-case design studies.
McCurdy et al. McCurdy et al. Begeny, Krouse,
Begeny & Martens Begeny & Silber Bonfiglio et al. (), (), Ross, and Mitchell Klubnik and Ross and Begeny Begeny, Yeager, & Ross and Begeny
Quality indicator () () () Experiment  Experiment  () Ardoin () Begeny et al. () () Martínez () ()

Participants Yes (1.0) Yes (1.0) No (0.33) No (0.33) No (0.33) Yes (1.0) No (0.33) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0)
and setting
Participant Yes (.) Yes (.) No (.) No (.) No (.) Yes (.) No (.) Yes (.) Yes (.) Yes (.) Yes (.)
description
Participant Yes (.) Yes (.) No (.) No (.) No (.) Yes (.) No (.) Yes (.) Yes (.) Yes (.) Yes (.)
selection
Setting Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
description
Dependent Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) No (0.8) No (0.8) Yes (1.0) No (0.8) No (0.8) Yes (1.0) Yes (1.0)
variable
Description Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
Quantifiable Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
Valid and Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
well-described
Measured Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
repeatedly
Interobserver Yes (.) Yes (.) Yes (.) Yes (.) No (.) No (.) Yes (.) No (.) No (.) Yes (.) Yes (.)
agreement
Independent Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0)
variable
Description Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
Systematic Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
manipulation
Fidelity of imple- Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
mentation
Baseline Yes (1.0) No (0.5) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0)
Repeated Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
measurement
Description Yes (.) No (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
Experimental control and internal validity Yes (1.0) No (0.66) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0)
Demonstrations of Yes (.) No (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
experimental
effect ( or
more)
Internal validity Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
Pattern of results Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
External Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0)
validity
Social validity Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0) Yes (1.0)
DV is socially Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
important
Change in DV is Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
socially
important
IV is practical and Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
cost-effective
Used in typical Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.) Yes (.)
contexts
# of indicators 7 5 6 6 5 6 6 6 6 7 7
(absolute
coding)
# of indicators 7.0 6.17 6.33 6.33 6.13 6.8 6.33 6.8 6.8 7.0 7.0
(weighted
coding)

Note. DV = dependent variable; IV = independent variable. Bold text represents a primary quality indicator category.

Table . Coding of quality indicators of group design studies.


Quality Indicator Kuhn () Begeny, Braun, et al. ()

Essential quality indicators


Participant description No (0.33) Yes (1.0)
Participant disability/difficulty Yes (.) Yes (.)
Comparable participants across groups No (.) Yes (.)
Comparable interventionists across groups No (.) Yes (.)
Intervention implementation No (0.66) Yes (1.0)
Intervention described and specific Yes (.) Yes (.)
Fidelity described and assessed No (.) Yes (.)
Description of practices in comparison group Yes (.) Yes (.)
Outcome measures Yes (1.0) No (0.5)
Multiple measures Yes (.) No (.)
Measures completed at appropriate times Yes (.) Yes (.)
Data analysis No (0.0) No (0.5)
Linked to RQ and appropriate unit of analysis No (.) Yes (.)
Effect size calculations No (.) No (.)
Number of indicators (absolute coding) 1 2
Number of indicators (weighted coding) 1.99 3.0
Desirable quality indicators
Attrition Yes (.) Yes (.)
Reliability estimates Yes (.) Yes (.)
Outcomes beyond immediate posttest No (.) No (.)
Validity No (.) Yes (.)
Fidelity assessed beyond basic features No (.) Yes (.)
Comparison conditions well-documented Yes (.) Yes (.)
Audio or video excerpts of interventions No (.) Yes (.)
Clarity of results No (.) Yes (.)
Number of indicators (absolute coding) 3 7

Note. Bold text represents a primary quality indicator category.

or web links to access the protocols and training materials, we felt that sufficed as
detailed documentation to capture the nature of the intervention.

Results and discussion

What are the methodological characteristics of the studies?

Sample size and participant characteristics


As shown in Table 2, the n per study was relatively small, ranging from 4 to 23 (median = 5), and the total number of participants receiving intervention was
similar because all but one study employed a within-group design (range = 4–12;
median = 5). Across all studies, the sex of the participants was roughly equal (female = 51.7%), but some ethnicities were more represented than others (Black = 47.2%; Latino = 21.3%; White = 30.3%). Students of mixed race or other ethnicity made up only 1.1% of the participants. The Latino representation noted above came predominantly from one study by Ross and Begeny (2011), the only one to include (and specifically study) Latino English language learners (ELLs), and another by Begeny, Yeager, and Martínez (2012) that included Spanish-speaking students living in Costa Rica. Table 2 also shows that participants were in Grades 2–4, with second-grade students represented in eight of the 13 studies.
In nearly all of the studies (n = 12), students had been previously identified as
experiencing reading difficulties (i.e., considered at risk for a reading disability
or continued reading difficulties). These findings are not surprising because most
educators recognize the importance of ensuring students are fluent readers by Grade
4 (e.g., Armbruster et al., 2001) and that SG interventions are most often used with
struggling learners prior to a student being considered for special education services
(e.g., Gersten et al., 2008). In terms of identifying students most appropriate to
receive intervention, a variety of screening measures were used, with assessments
of students’ WCPM on seasonal benchmark assessments being most common.
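For readers implementing this kind of screening, the WCPM scoring logic is straightforward to compute. The sketch below is a minimal Python illustration; the function names and the percentile-cutoff value are our own assumptions for the example, not details drawn from any reviewed study.

```python
def wcpm(words_attempted: int, errors: int, seconds: float = 60.0) -> float:
    """Words read correctly per minute from a timed oral reading probe,
    prorated if the probe ran longer or shorter than one minute."""
    return (words_attempted - errors) * 60.0 / seconds

def flag_at_risk(score: float, percentile_cutoff_wcpm: float) -> bool:
    """Flag a student whose WCPM falls at or below a benchmark cutoff
    (e.g., a seasonal norm's 25th percentile, expressed in WCPM)."""
    return score <= percentile_cutoff_wcpm

# A student reading 58 words with 6 errors in a 60-s probe scores 52.0 WCPM.
fall_score = wcpm(words_attempted=58, errors=6)
print(fall_score, flag_at_risk(fall_score, percentile_cutoff_wcpm=60.0))
```

Benchmarking systems differ in their norms and cut points, so in practice the cutoff would come from the specific assessment's seasonal tables.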

Interventionists and experimental design


Table 3 shows that most studies relied on the research team to deliver the intervention(s); school staff were used in only three of the studies. Encouragingly, however, the large majority of the 13 studies (77%) reported specific procedures for training and verifying the competence of the interventionists, and
all but one study reported frequent monitoring of implementation integrity and
high levels of integrity. This point should not go unrecognized because much of the
related intervention research suggests that implementation integrity is not regularly
reported and that researchers believe it is too challenging to accomplish because
of time, cost, and labor demands (McIntyre, Gresham, DiGennaro, & Reed, 2007;
Sanetti & DiGennaro, 2012; Sanetti, Gritter, & Dobey, 2011). As noted previously, of the 13 available studies in this area, all but two used an SCD; the remaining two were quasiexperimental. Alternating-treatments designs were most common (n = 8).

Intervention characteristics and components


Table 3 also shows that although the source of passages for instruction varied,
the researchers in each study aimed to select instructionally appropriate material.
Session duration and frequency varied across studies, but approximately 10–15 min
of intervention three times per week was common. In terms of the intervention
components examined within the studies, the most common components included:
listening passage preview or modeling (LPP; all 13 studies), repeated reading
(RR; 12 studies), motivation and reward contingencies (M/R; 12 studies), phrase-
drill error correction (PD; 10 studies), oral retell of the passage read (six studies),
performance feedback (PF; four studies), and choral reading (CR; four studies).
Other components were also examined, but only in one or two studies. Further-
more, most studies evaluated five or more components (median and mode = 5);
in some cases these were all part of the same intervention package (e.g., Begeny &
Martens, 2006), but in other cases they were evaluated as part of separate interven-
tion conditions (e.g., Kuhn, 2005). This finding about the type and combination
of intervention components should be informative for future researchers and
practitioners interested in developing (or evaluating) SG interventions that target
reading fluency. For example, it is interesting to note that most of these components
are adapted SG versions of the one-on-one components that have evidence of being
most effective (e.g., Morgan & Sideridis, 2006; Therrien, 2004), and combining
multiple strategies seems like a logical approach based on what is known about
one-on-one reading fluency interventions (Begeny et al., 2010). That said, a notable finding and gap in this research is that only one study (Begeny, Braun,
et al., 2012) evaluated a structured, manualized intervention program, rather than a
combination of evidence-based strategies and a researcher-developed instructional
protocol. Later we discuss the implications of this finding.

Study duration, measures, and data analytic techniques


Table 4 shows that the duration of the studies (when reported) ranged from 6 to 18 weeks (median = 8 weeks). A range of dependent measures was used, with measures of WCPM (i.e., CBM-R) by far the most common (n = 12). Other assessments were used, including acceptability measures, but these were much less common. As
might be expected from so many SCD studies, visual analysis was used in 11 of the
studies, but we believe it is a strength of this small literature base that of those 11
SCD studies, 55% supplemented visual analysis with statistical analyses (e.g., ran-
domization tests, standard error of measurement analysis).
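For readers unfamiliar with these supplementary techniques, two of the statistics named most often in Table 4 can be sketched in Python as follows. This is an illustrative sketch of generic PND and permutation-test logic, not a reproduction of any study's analysis code.

```python
import random

def pnd(baseline, intervention):
    """Percentage of nonoverlapping data (PND): the share of intervention
    data points exceeding the highest baseline point, for an improving skill."""
    ceiling = max(baseline)
    return 100.0 * sum(1 for x in intervention if x > ceiling) / len(intervention)

def randomization_test(ix_scores, control_scores, n_perm=10_000, seed=1):
    """Approximate one-sided permutation p-value for mean(ix) - mean(control),
    treating condition labels as exchangeable across sessions."""
    rng = random.Random(seed)
    pooled = list(ix_scores) + list(control_scores)
    n_ix = len(ix_scores)
    observed = sum(ix_scores) / n_ix - sum(control_scores) / len(control_scores)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:n_ix]) / n_ix - sum(pooled[n_ix:]) / len(control_scores)
        if diff >= observed:
            hits += 1
    return observed, hits / n_perm

# WCPM scores by session (hypothetical): the intervention sessions clearly
# exceed the control sessions, so the permutation p-value will be small.
obs, p = randomization_test([60, 65, 70, 68], [40, 42, 45, 41])
```

In published SCD work, these statistics supplement rather than replace visual analysis, which remains the primary evidentiary standard for such designs.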

What findings did the studies report?


As shown in Table 4, the authors consistently reported that the SG intervention procedures resulted in most participants improving their reading performance compared with some type of control condition. In fact, in eight of the 12 studies that reported data at the individual level, 100% of the students receiving SG intervention outperformed the control condition. Not surprisingly, some students were not responsive to the SG intervention; of the five studies that included six or more participants, only two reported a 100% success rate (Begeny, Hawkins, Krouse, & Laugle, 2011; Begeny & Martens, 2006). Response rates across the other three studies were 86% (6 of 7 participants; Begeny, Braun, et al., 2012), 83% (5 of 6 participants; Begeny, Yeager, & Martínez, 2012), and 67% (4 of 6 participants; Klubnik & Ardoin, 2010). As one might expect, as sample size increases, it becomes less likely that all students will respond to the intervention as hoped. This is plausible when considering that not all students are likely
to respond to Tier 2 intervention and some students will require more intense inter-
vention or special education instruction to address their reading deficits (Berkeley,
Bender, Gregg-Peaster, & Saunders, 2009).
But even in the absence of a 100% success rate, only two studies reported less
than 80% of students responding favorably to the SG intervention, regardless of
sample size (Klubnik & Ardoin, 2010; Ross & Begeny, 2011). Unfortunately, with
only two studies reporting this finding and variation in the methodology between
those studies, it would be inappropriate to hypothesize why some students did or did
not respond to the SG intervention used. Collectively, however, the 13 studies suggest that SG interventions targeting reading fluency show promise as effective and time-efficient approaches for improving a reading skill with which many students
struggle.

What QIs were present, and are SG fluency practices evidence-based?

For each SCD study, Table 5 reports the presence or absence of each QI proposed by
Horner et al. (2005). Using the weighted coding scores (with a maximum score of
seven) and an 80% or higher criterion (Sreckovic et al., 2014), each of the SCD stud-
ies can be considered methodologically credible. In fact, all studies had a weighted
coding score above six, and only two studies (Begeny & Silber, 2006; McCurdy,
Daly, Gortmaker, Bonfiglio, & Persampieri, 2007, Experiment 2) had an absolute
coding score below six. Thus, the overall quality of this literature base meets proposed standards, although some studies fell short on particular indicators. For example, the Begeny and Silber study lacked sufficient information about
the number of baseline data points, and as such it was unclear whether there were
three or more demonstrations of experimental effect. Also, four studies (Bonfiglio,
Daly, Persampieri, & Andersen, 2006; Klubnik & Ardoin, 2010; both experiments in
McCurdy et al., 2007) lacked information about the study participants. Specifically, the at-risk participants in those studies were nominated by their teachers as needing reading assistance, but no screening data were reported to more comprehensively describe participant characteristics and selection procedures, thereby limiting the precision with which the studies can be replicated.
Using the Gersten et al. (2005) criteria, Table 6 reports the QIs of the two GDS
that met our inclusion criteria. Gersten et al. suggested that an acceptable quality
study must meet at least three of the four essential QIs (EQIs) and one of the eight
desirable QIs (DQIs); and a high-quality study must meet at least three EQIs and
four DQIs. Although these criteria can be debated, the suggested guidelines reveal
that neither study in our review received a three or higher with the absolute coding
of the EQIs. However, the Begeny, Braun, et al. (2012) study received a weighted EQI
score of three and a DQI score of seven. It is also worth noting that although that study did
not include traditional ES calculations, it did use participant data and national nor-
mative data to systematically examine magnitude of effect and practical significance,
similar to an ES estimate. Although the Kuhn (2005) study lacked several of the QIs,
its publication in a peer-reviewed journal may be partially explained by the scarcity of research on fluency-based SG interventions at the time.
Given the QIs from this literature base, there is at least some evidence that combined use of SG instructional components targeting reading fluency can be considered an EBP. Using the SCD studies, we applied the five criteria suggested by Horner et al. (2005) to determine if this body of research can be considered evidence based: “(a) the practice is operationally defined; (b) the context and outcomes associated with a practice are clearly defined; (c) the practice is implemented with documented fidelity; (d) the practice is functionally related to change in valued outcomes; (e) experimental control is demonstrated across a sufficient range of studies, researchers, and participants to allow confidence in the effect” (p. 176). Aligned with this last criterion, and consistent with the review by Sreckovic et al. (2014), we evaluated the external validity of this research base (across all studies) by determining whether the research was conducted by at least three different researchers, included 20 or more total participants, and represented at least three geographical regions. With each
of the aforementioned criteria, the SCD studies included in our review demonstrate
that SG instructional practices aimed at improving reading fluency (i.e., the operationally defined and combined use of LPP, RR, M/R, and PD practices documented
in this literature) can be considered an EBP. Other practices such as oral retell, PF,
and CR also appear to be valuable when combined with LPP, RR, M/R, and PD, but
they have not yet met the external validity criteria specified previously.
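The external-validity screen applied above reduces to three countable thresholds, sketched here in Python. The record fields and example values below are purely hypothetical, not the actual study records from this review.

```python
def meets_external_validity(studies) -> bool:
    """Check the three external-validity thresholds described above:
    at least 3 distinct research groups, at least 20 total participants,
    and at least 3 geographical regions represented."""
    groups = {s["research_group"] for s in studies}
    regions = {s["region"] for s in studies}
    total_n = sum(s["n"] for s in studies)
    return len(groups) >= 3 and total_n >= 20 and len(regions) >= 3

# Illustrative records only (placeholder values):
example = [
    {"research_group": "A", "region": "Southeast US", "n": 8},
    {"research_group": "B", "region": "Midwest US", "n": 7},
    {"research_group": "C", "region": "Costa Rica", "n": 6},
]
print(meets_external_validity(example))  # True: 3 groups, 3 regions, n = 21
```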
For GDS, Gersten et al. (2005) proposed that a practice is evidence based if there
are “at least four acceptable quality studies, or two high quality studies that support
the practice; and there is a 20% confidence interval for the weighted effect size that
is greater than zero” (p. 162). As will be discussed subsequently regarding directions
for future research, there are too few GDS within this literature base to meet these
suggested criteria, and only one GDS study (Begeny, Braun, et al., 2012) arguably
met standards as an acceptable or high-quality study.
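The Gersten et al. (2005) decision rule summarized above can likewise be expressed as a small classifier. This is our own sketch of the stated thresholds; the function name and the returned labels are ours.

```python
def gersten_quality(eqis_met: int, dqis_met: int) -> str:
    """Classify a group-design study per the Gersten et al. (2005) thresholds:
    at least 3 of 4 essential QIs plus at least 1 desirable QI = acceptable;
    at least 3 essential QIs plus at least 4 desirable QIs = high quality."""
    if eqis_met >= 3 and dqis_met >= 4:
        return "high quality"
    if eqis_met >= 3 and dqis_met >= 1:
        return "acceptable"
    return "does not meet standards"

print(gersten_quality(3, 4))  # high quality
print(gersten_quality(3, 1))  # acceptable
print(gersten_quality(2, 7))  # does not meet standards
```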

Is SG instruction as effective as comparable one-on-one instruction?

As shown in the last column of Table 4, there were five studies that specifically exam-
ined whether comparable SG and one-on-one interventions would result in different
effects (Begeny et al., 2011; Begeny, Yeager, & Martínez, 2012; Klubnik & Ardoin,
2010; Ross & Begeny, 2011, 2015). For each study, we first calculated
the number of participants that responded significantly better with either the one-
on-one or SG intervention, compared with the control condition (as shown in the
denominator reported in that column). We then reported the number (numerator)
and percentage of students who, according to the authors, responded as well or bet-
ter during the SG condition compared to the one-on-one condition. Across the five
studies, 79% (19 of 24 students) responded as well or better during the SG inter-
vention. According to the authors of these studies, in the vast majority of cases the
students responded as well to the SG condition as they did to one-on-one (not usu-
ally better), but this is still an important finding. It suggests that a large percentage
of students should fare just as well with the less resource-intensive SG intervention
that targets fluency as they would with a one-on-one intervention.
This should be useful information for practitioners because the vast majority of
research on targeted reading fluency interventions is conducted in a one-on-one
context (Chard et al., 2002; National Institute of Child Health and Human Development, 2000; Therrien, 2004); the studies reviewed here should give both researchers and practitioners direction for developing and further evaluating SG reading fluency interventions. Also, when considering models of RTI
(Fuchs & Vaughn, 2012; Gersten et al., 2008) some might assume that many students
should respond as well to a less intensive (e.g., Tier 2) type of intervention com-
pared to a more intensive (e.g., Tier 3) intervention. However, as others previously
noted (e.g., Begeny et al., 2011; Ross & Begeny, 2015; Vaughn et al., 2003), differen-
tial responsiveness to interventions of varying intensity is an important empirical
question that cannot be assumed. With regard to reading fluency and other targeted
reading interventions, currently very little research has specifically examined the
percentage of students who respond as well to SG intervention as they do to more
intensive forms of intervention. Our study therefore offers additional insight on the
topic by summarizing research that examined the differential effectiveness of com-
parable SG and one-on-one interventions.

Implications, limitations, and directions for future research

As we footnoted previously, one potential limitation of this study is that we cannot use reliable meta-analytic methods to statistically examine the effectiveness of SG fluency interventions and the variables that may moderate it.
Of course, this is because the current literature base is not conducive to a meta-analysis. As we will discuss subsequently, there are numerous directions for future
research that will not only improve this overall literature base, but also should, even-
tually, allow for a meta-analysis of the available literature. In addition, as is true of
all research reviews (including meta-analyses), there are limitations related to the
inability to report on unpublished findings and the potential bias of journals toward publishing only studies with positive outcomes.
Despite the aforementioned limitations, the findings of this study should offer
several important implications, including directions for future research. One impli-
cation is that practitioners have at least 13 empirical studies allowing them to
consider current best practices for targeting reading fluency with SG instruction.
Similarly, practitioners can use the data presented here to estimate probabilities
of success with the interventions. For example, we would not anticipate that every
student receiving a targeted SG fluency intervention would respond positively, and
this will certainly be influenced by factors such as the specific components used,
outcome measures, and implementation integrity. However, the studies reviewed
here suggest a majority of students should benefit from the types of practices
summarized in this article. This study also presents preliminary evidence to suggest
that approximately 80% of students with poor oral reading fluency should respond
as well to SG intervention as they would to a one-on-one intervention that inte-
grates the same instructional and motivational components. For researchers, our
findings indicate that meaningful progress has been made in this literature base and
there are several methodological features to consider replicating when designing
forthcoming experiments. Still, there is much room for improving this research
base with studies that expand the existing methodology.

Most significant gaps in the existing research


Based on the available data, we think the following represent the most significant
gaps in the existing research. First, although largely consistent with most of the
intervention research in education and school psychology (e.g., Bliss, Skinner,
Hautau, & Carroll, 2008; Bramlett, Cates, Savina, & Lauinger, 2010; Burns,
Klingbeil, Ysseldyke, & Petersen-Brown, 2012; Seethaler & Fuchs, 2005), a small
number of studies in this literature base have used group designs, and with this,
sample sizes have been relatively small for a complete body of literature. Also, there
have been no between-group comparison studies with sufficient power; in fact, the
one between-group study published (Kuhn, 2005) did not even use inferential statis-
tics to test hypotheses due to such a small sample size. Despite a large proportion of
published school-based intervention studies that do not meet suggested standards
for methodological rigor (e.g., Burns et al., 2012), and despite the extensive chal-
lenges (ethical, resource related, and otherwise) that can come with conducting such
research (Bliss et al., 2008; Villarreal, Gonzalez, McCormick, Simek, & Yoon, 2013),
we hope this current gap in this literature inspires rather than dissuades researchers
from attempting to conduct such studies on SG reading fluency interventions.
Second, the relatively limited populations of students studied in this research
area call for additional investigations that explore SG fluency interventions
with other populations, including but not limited to upper-grade students
(e.g., Grades 5–10), ELLs living in the United States, students receiving special edu-
cation services, and students of varying ethnicity (including students living outside
of the United States). Third, the existing research includes only one relatively
small-scale study of a manualized, structured intervention program; thus, more studies on
intervention programs are warranted. Although teachers might be able to improve
students’ academic outcomes by reviewing the research and developing their own
protocols based on operational definitions of evidence-based strategies, there
are many reasons related to intervention effectiveness and efficiency to suggest
that educators would be better equipped with a research-supported intervention
program than with a protocol of strategies developed by an individual educator, school team, or
district—especially at a Tier 2 level when prescriptive problem solving is arguably
most effective (Begeny, Schulte, & Johnson, 2012). For example, intervention pro-
grams often provide structured protocols, scripted instructions, field-tested reading
passages, forms for recording students’ daily receipt of intervention and academic
progress, embedded assessments, and other related features that ultimately help
to make training more efficient, improve teachers’ implementation integrity, and
ensure that students get explicit, systematic instruction (Begeny, Braun, et al., 2012;
Begeny, Schulte, & Johnson, 2012; Reeves, 2010; Simmons et al., 2011; What Works
Clearinghouse, 2015). Additional research on manualized programs may also help
to enhance operational definitions of specific practices and the determination of
whether those practices are evidence based.
Fourth, to increase the probability of successfully merging research with practice
in this area of reading intervention, future researchers should also ensure that
more school-based practitioners deliver the interventions and provide valuable
feedback about the usability, feasibility, sustainability, and acceptability of the
intervention and the methods of teacher training. Ideally, future researchers would
include school-based practitioners as part of the research team because there are
numerous benefits to this approach to research and it is unfortunately uncommon
in education-based scholarship (Carroll, Skinner, McCleary, Hautau von Mizener,
& Bliss, 2009). For example, of all of the publications included in the present study,
only three authors were listed as being unaffiliated with a university and only two of
those individuals (Lynch and Ramsay—both on the same publication) did not have
prior affiliation with one of the lead university researchers.
Overall, our findings suggest that meaningful pathways have been forged in this
area of reading intervention research, and the SCD studies suggest that several SG
instructional practices can be considered evidence-based. However, this body of
empirical literature will undoubtedly benefit from future studies that utilize, for
example, GDS, larger and more diversified samples, evaluations of manualized pro-
grams, and collaborative partnerships with education practitioners.

Note
1. Although we originally set out to complete a meta-analysis of the available research, several reasons precluded using meta-analytic procedures. One reason was that too few studies used group designs, which would be necessary to facilitate the original, traditional, and more common use of meta-analysis (Glass, McGaw, &
Smith, 1981; Rosenthal, 1986). Because 11 of the 13 studies (85%) used an SCD, we then
considered the possibility of including those articles within a meta-analysis for SCDs.
However, a relatively recent special issue within the Journal of School Psychology (April
2014) shed light on this topic and assisted us in concluding that an SCD meta-analysis is not appropriate at this time. The reasons for this decision are as follows. First,
SCD meta-analytic procedures are still emerging, and unlike group-design meta-analytic
procedures, there is no general consensus of a gold standard for SCD meta-analytic pro-
cedures (Kratochwill & Levin, 2014). Second, Maggin and Odom (2014) explained the
necessary requirements to complete a meta-analysis with SCDs and concluded that none
of the proposed analytical methods succeed in meeting all of the criteria; they further indi-
cated that each method requires more development. In addition to the analytical methods
lacking support for their use, they are not readily available to researchers at this time. As
noted by Fisher and Lerman (2014), SCD meta-analytic procedures will require education
researchers to team up with statisticians to correctly use these preliminary and underde-
veloped procedures. Finally, many of those with sophisticated knowledge of meta-analysis
with SCDs (and their limitations) are concerned that the community of SCD scholars will
not accept the proposed meta-analytic procedures as they currently stand (Kratochwill &
Levin, 2014; Shadish, 2014).

References
Armbruster, B., Lehr, F., & Osborn, J. (2001). Put reading first: The research building blocks for
teaching children to read (Kindergarten through Grade 3) (3rd ed.). Washington, DC: National
Institute for Literacy.
Begeny, J. C., Braun, L. M., Lynch, H. L., Ramsay, A. C., & Wendt, J. M. (2012). Initial evidence for
using the HELPS reading fluency program with small instructional groups. School Psychology
Forum: Research in Practice, 6, 50–63.
Begeny, J. C., Hawkins, A. L., Krouse, H. E., & Laugle, K. M. (2011). Altering instruc-
tional delivery options to improve intervention outcomes: Does increased instructional
intensity also increase instructional effectiveness? Psychology in the Schools, 48, 769–785.
https://doi.org/10.1002/pits.20591
Begeny, J. C., Krouse, H. E., Ross, S. G., & Mitchell, R. C. (2009). Increasing elementary-aged
students’ reading fluency with small-group interventions: A comparison of repeated reading,
listening passage preview, and listening only strategies. Journal of Behavioral Education, 18,
211–228. https://doi.org/10.1007/s10864-009-9090-9
Begeny, J. C., Laugle, K. M., Krouse, H. E., Lynn, A. E., Tayrose, M. P., & Stage, S. A. (2010).
A control-group comparison of two reading fluency programs: The Helping Early Literacy
with Practice Strategies (HELPS) program and the Great Leaps K–2 reading program. School
Psychology Review, 39, 137–155.
Begeny, J. C., & Martens, B. K. (2006). Assisting low-performing readers with a group-based read-
ing fluency intervention. School Psychology Review, 35, 91–107.
Begeny, J. C., Schulte, A. C., & Johnson, K. (2012). Enhancing instructional problem solving: An
efficient system for assisting struggling learners. New York, NY: Guilford Press.
Begeny, J. C., & Silber, J. M. (2006). An examination of group-based treatment packages for
increasing elementary-aged students’ reading fluency. Psychology in the Schools, 43, 183–195.
https://doi.org/10.1002/pits.20138
Begeny, J. C., Yeager, A., & Martínez, R. S. (2012). Effects of small-group and one-on-one reading
fluency interventions with second grade, low-performing Spanish readers. Journal of Behav-
ioral Education, 21, 58–79. https://doi.org/10.1007/s10864-011-9141-x
Berkeley, S., Bender, W. N., Gregg-Peaster, L., & Saunders, L. (2009). Implementation of
response to intervention: A snapshot of progress. Journal of Learning Disabilities, 42, 85–95.
https://doi.org/10.1177/0022219408326214
Bliss, S. L., Skinner, C. H., Hautau, B., & Carroll, E. E. (2008). Articles published in four school
psychology journals from 2000 to 2005: An analysis of experimental/intervention research.
Psychology in the Schools, 45, 483–498. https://doi.org/10.1002/pits.20318
Bonfiglio, C. M., Daly, E. J., Martens, B. K., Lin, L. R., & Corsaut, S. (2004). An experimental analy-
sis of reading interventions: Generalization across instructional strategies, passages, and time.
Journal of Applied Behavior Analysis, 37, 111–114. https://doi.org/10.1901/jaba.2004.37-111
Bonfiglio, C. M., Daly, E. J., Persampieri, M., & Andersen, M. (2006). An experimental analysis
of the effects of reading interventions in a small group reading instruction context. Journal of
Behavioral Education, 15, 93–109. https://doi.org/10.1007/s10864-006-9009-7
Bramlett, R., Cates, G. L., Savina, E., & Lauinger, B. (2010). Assessing effectiveness and efficiency of academic interventions in school psychology journals. Psychology in the Schools, 47,
114–125. https://doi.org/10.1002/pits.20457
Buck, G. H., Polloway, E. A., Smith-Thomas, A., & Cook, K. W. (2003). Prereferral intervention
processes: A survey of state practices. Exceptional Children, 69, 349–360.
Burns, M. K., & Gibbons, K. A. (2008). Implementing response-to-intervention in elementary and
secondary schools. New York, NY: Routledge.
Burns, M. K., Klingbeil, D. A., Ysseldyke, J. E., & Petersen-Brown, S. (2012). Trends in method-
ological rigor in intervention research published in school psychology journals. Psychology
in the Schools, 49, 843–851. https://doi.org/10.1002/pits.21637
Carroll, E. E., Skinner, C. H., McCleary, D. F., Hautau von Mizener, B., & Bliss, S. L.
(2009). Analysis of author affiliation across four school psychology journals from 2000
to 2008: Where is the practitioner research? Psychology in the Schools, 46, 627–635.
https://doi.org/10.1002/pits.20403
Chard, D. J., Vaughn, S., & Tyler, B. J. (2002). A synthesis of research on effective interventions
for building reading fluency with elementary students with learning disabilities. Journal of
Learning Disabilities, 35, 386–406.
Daane, M. C., Campbell, J. R., Grigg, W. S., Goodman, M. J., & Oranje, A. (2005). Fourth-grade
students reading aloud: NAEP 2002 Special Study of Oral Reading. U.S. Department of Edu-
cation, National Center for Education Statistics. Washington, DC: U.S. Government Printing
Office. Retrieved from http://nces.ed.gov/nationsreportcard/pdf/studies/2006469.pdf
Denton, C. A. (2012). Response to intervention for reading difficulties in the primary grades:
Some answers and lingering questions. Journal of Learning Disabilities, 45, 232–243.
https://doi.org/10.1177/0022219412442155
Denton, C. A., Cirino, P. T., Barth, A. E., Romain, M., Vaughn, S., Wexler, J., … Fletcher,
J. M. (2011). An experimental study of scheduling and duration of “Tier 2” first-
grade reading intervention. Journal of Research on Educational Effectiveness, 4, 208–230.
https://doi.org/10.1080/19345747.2010.530127
DuFour, R., Eaker, R., Karhanek, G., & DuFour, R. (2004). Whatever it takes: How professional
learning communities respond when kids don’t learn. Bloomington, IN: Solution Tree.
Elbaum, B., Vaughn, S., Tejero Hughes, M., & Watson Moody, S. (2000). How effective are one-
to-one tutoring programs in reading for elementary students at risk for reading failure? A
meta-analysis of the intervention research. Journal of Educational Psychology, 92, 605–619.
https://doi.org/10.1037/0022-0663.92.4.605
Fisher, W. W., & Lerman, D. C. (2014). It has been said that, “There are three degrees of
falsehoods: Lies, damn lies, and statistics”. Journal of School Psychology, 52, 243–248.
https://doi.org/10.1016/j.jsp.2014.01.001
Fletcher, J. M., Stuebing, K. K., Barth, A. E., Denton, C. A., Cirino, P. T., Francis, D. J., & Vaughn,
S. (2011). Cognitive correlates of inadequate response to reading intervention. School Psy-
chology Review, 40, 3–22.
Foorman, B. R., & Torgesen, J. (2001). Critical elements of classroom and small-group instruc-
tion to promote reading success in all children. Learning Disabilities Research & Practice, 16,
203–212. https://doi.org/10.1111/0938-8982.00020
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator
of reading competence: A theoretical, empirical and historical analysis. Scientific Studies of
Reading, 5, 239–256.
Fuchs, L. S., & Vaughn, S. (2012). Responsiveness-to-intervention: A decade later. Journal of
Learning Disabilities, 45, 195–203. https://doi.org/10.1177/0022219412442150
Gersten, R., Compton, D., Connor, C. M., Dimino, J., Santoro, L., Linan-Thompson, S., &
Tilly, W. D. (2008). Assisting students struggling with reading: Response to intervention and
multi-tier intervention for reading in the primary grades. A practice guide (NCEE 2009–
4045). Washington, DC: National Center for Education Evaluation and Regional Assis-
tance, Institute of Education Sciences, U.S. Department of Education. Retrieved from
http://ies.ed.gov/ncee/wwc/publications/practiceguides/
Gersten, R., Fuchs, L. S., Compton, D., Coyne, M. S., Greenwood, C., & Innocenti, M. (2005).
Quality indicators for group experimental and quasi-experimental research in special educa-
tion. Exceptional Children, 71, 149–164. https://doi.org/10.1177/001440290507100202
Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social research. Beverly Hills,
CA: Sage.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-
subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165–179.
Klubnik, C., & Ardoin, S. P. (2010). Examining immediate and maintenance effects of a reading
intervention package on generalization materials: Individual versus group implementation.
Journal of Behavioral Education, 19, 7–29. https://doi.org/10.1007/s10864-009-9096-3
Kratochwill, T. R., & Levin, J. R. (2014). Meta- and statistical analysis of single-case intervention
research data: Quantitative gifts and a wish list. Journal of School Psychology, 52, 231–235.
https://doi.org/10.1016/j.jsp.2014.01.003
Kuhn, M. R. (2005). A comparative study of small group fluency instruction. Reading Psychology,
26, 127–146. https://doi.org/10.1080/02702710590930492
Kuhn, M. R., & Stahl, S. A. (2003). Fluency: A review of developmental and remedial practices.
Journal of Educational Psychology, 95, 3–21. https://doi.org/10.1037/0022-0663.95.1.3
Lonigan, C. J., & Shanahan, T. (2010). Executive summary, Developing early literacy: Report of
the National Early Literacy Panel. Washington, DC: National Institute for Literacy. Retrieved
from https://www.nichd.nih.gov/publications/pubs/documents/NELPSummary.pdf
Maggin, D. M., & Odom, S. L. (2014). Evaluating single-case research data for systematic
review: A commentary for the special issue. Journal of School Psychology, 52, 237–241.
https://doi.org/10.1016/j.jsp.2014.01.002
McCurdy, M., Daly, E., Gortmaker, V., Bonfiglio, C., & Persampieri, M. (2007). Use of brief
instructional trials to identify small group reading strategies: A two experiment study. Journal
of Behavioral Education, 16, 7–26. https://doi.org/10.1007/s10864-006-9021-y
McIntyre, L. L., Gresham, F. M., DiGennaro, F. D., & Reed, D. D. (2007). Treatment
integrity of school-based interventions with children in the Journal of Applied
Behavior Analysis: 1991–2005. Journal of Applied Behavior Analysis, 40, 659–672.
https://doi.org/10.1901/jaba.2007.659-672
Meyer, M. S., & Felton, R. H. (1999). Repeated reading to enhance reading flu-
ency: Old approaches and new directions. Annals of Dyslexia, 49, 283–306.
https://doi.org/10.1007/s11881-999-0027-8
Morgan, P. L., & Sideridis, G. D. (2006). Contrasting the effectiveness of fluency interventions for students with or at risk for learning disabilities: A multilevel random coefficient modeling meta-analysis. Learning Disabilities Research and Practice, 21, 191–210.
https://doi.org/10.1111/j.1540-5826.2006.00218.x
National Assessment of Educational Progress. (2015). Reading assessment. Retrieved from
http://www.nationsreportcard.gov/reading_math_2015/#reading/acl?grade=4
National Center on Response to Intervention. (2010). Essential components of RTI—A closer
look at response to intervention. Washington, DC: U.S. Department of Education, Office of
Special Education Programs, National Center on Response to Intervention. Retrieved from
http://www.rti4success.org/sites/default/files/rtiessentialcomponents_042710.pdf
National Institute of Child Health and Human Development. (2000). Report of the National Read-
ing Panel: Teaching children to read: An evidence based assessment of the scientific research lit-
erature on reading and its implications for reading instruction: Reports of the subgroups (NIH
Publication No. 00-4754). Washington, DC: U.S. Government Printing Office. Retrieved from
https://www.nichd.nih.gov/publications/pubs/nrp/documents/report.pdf
Reeves, J. (2010). Teacher learning by script. Language Teaching Research, 14, 241–258.
https://doi.org/10.1177/1362168810365252
Rosenthal, R. (1986). Meta-analytic procedures for social research. London: Sage.
Ross, S. G., & Begeny, J. C. (2011). Improving Latino, English language learners’ reading flu-
ency: The effects of small-group and one-on-one intervention. Psychology in the Schools, 48,
604–618. https://doi.org/10.1002/pits.20575
Ross, S. G., & Begeny, J. C. (2015). An examination of treatment intensity with an oral
reading fluency intervention: Do intervention duration and student-teacher instructional
ratios impact intervention effectiveness? Journal of Behavioral Education, 24, 11–32.
https://doi.org/10.1007/s10864-014-9202-z
Sanetti, L. M. H., & DiGennaro, F. M. (2012). Barriers to implementing treatment integrity pro-
cedures in school psychology research: Survey of treatment outcome researchers. Assessment
for Effective Intervention, 37, 195–202. https://doi.org/10.1177/1534508411432466
Sanetti, L. M. H., Gritter, K. L., & Dobey, L. M. (2011). Treatment integrity of interventions with
children in the school psychology literature from 1995 to 2008. School Psychology Review, 40,
72–84.
Seethaler, P. M., & Fuchs, L. S. (2005). A drop in the bucket: Randomized controlled trials test-
ing reading and math interventions. Learning Disabilities Research and Practice, 20, 98–102.
https://doi.org/10.1111/j.1540-5826.2005.00125.x
Shadish, W. R. (2014). Analysis and meta-analysis of single-case designs: An introduction. Journal of School Psychology, 52, 109–122. https://doi.org/10.1016/j.jsp.2013.11.009
Simmons, D., Coyne, M., Hagan-Burke, S., Kwok, O., Simmons, L., Johnson, … & Crevecoeur,
Y. (2011). Effects of supplemental reading interventions in authentic contexts: A comparison
of kindergarteners’ response. Exceptional Children, 77, 207–228.
Snow, C. E., Burns, M. S., & Griffin, P. (1998). Preventing reading difficulties in young children.
Washington, DC: National Academy Press.
Sreckovic, M., Common, E. A., Knowles, M., & Lane, K. L. (2014). A review of self-regulated
strategy development for writing for students with EBD. Behavioral Disorder, 39, 56–77.
Therrien, W. J. (2004). Fluency and comprehension gains as a result of repeated reading: A meta-analysis. Remedial and Special Education, 25, 252–261.
Thurlow, M. L., Ysseldyke, J. E., Wotruba, J. W., & Algozzine, B. (1993). Instruction in special education under varying student-teacher ratios. Elementary School Journal, 93, 305–320.
Tilly, W. D. (2008). The evolution of school psychology to science-based practice: Problem solving and the three-tiered model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 17–35). Bethesda, MD: National Association of School Psychologists.
Torgesen, J. K., & Hudson, R. (2006). Reading fluency: Critical issues for struggling readers.
In S. J. Samuels, & A. Farstrup (Eds.), What research has to say about fluency instruction
(pp. 130–158). Newark, DE: International Reading Association.
Vaughn, S., Linan-Thompson, S., Kouzekanani, K., Pedrotty Bryant, D., Dickson, S., & Blozis, S.
A. (2003). Reading instruction grouping for students with reading difficulties. Remedial and
Special Education, 24, 301–315.
Villarreal, V., Gonzalez, J. E., McCormick, A. S., Simek, A., & Yoon, H. (2013). Articles published
in six school psychology journals from 2005–2009: Where’s the intervention research? Psy-
chology in the Schools, 50, 500–519. https://doi.org/10.1002/pits.21687
What Works Clearinghouse. (2015). Reviews of programs intended to increase literacy skills.
Retrieved from http://ies.ed.gov/ncee/wwc/topic.aspx?sid=8
