Preparing Students for College Entrance Exams


Remedial and Special Education
Volume 30, Number 1, January/February 2009, pp. 3-18
© 2009 Hammill Institute on Disabilities
DOI: 10.1177/0741932507314022
http://rase.sagepub.com, hosted at http://online.sagepub.com

Preparing Students for College Entrance Exams: Findings of a Secondary Intervention Conducted Within a Three-Tiered Model of Support


Kathleen Lynne Lane, Jemma Robertson Kalberg, Emily Mofield, Joseph H. Wehby, and Robin J. Parks
Vanderbilt University

This study examined outcomes associated with participation in a program, Preparing for the ACT, designed to enhance
student performance (N = 126) on the ACT college entrance exam. This targeted intervention was implemented as part of a
three-tiered model of positive behavior support. Results of descriptive analyses revealed that only academic performance
in the previous academic year was significant in predicting postintervention practice scores. Furthermore, students’ post-
intervention scores were significant in predicting actual ACT scores. However, only in the case of the English subject area
test were academic and behavioral performance predictive of English ACT scores. Results of a quasiexperimental design
used to compare actual ACT performance for students who did and did not participate in the intervention suggested
improved performance for participating students, as evidenced by positive effect sizes, an increase in the
percentage of students who met the district target scores, and school mean scores that exceeded state mean scores following
intervention participation. Limitations and implications for future research are offered.

Keywords: positive behavior supports; quantitative research methodology; secondary

Authors' Note: This project was funded by an OSEP-funded, directed research grant titled Project PBS: A Three-Tiered Prevention Model to Better Serve All Students (OSEP Award Number H324D020048) and NICHD Grant No. P30HD15052 to Vanderbilt University.

With the call for academic excellence (No Child Left Behind Act, 2001), combined with the goal of serving students with special needs in inclusive settings (Fuchs & Fuchs, 1994; Individuals with Disabilities Education Improvement Act, 2004), there has been a growing emphasis on looking at schools as a context for change, rather than focusing primarily on child concerns (Lane & Beebe-Frankenberger, 2007; Sugai & Horner, 2002). This has been accomplished via response-to-intervention (RTI; Gresham, 2002) and positive behavior support (PBS; Sugai & Horner, 2002) models, both of which involve data-driven, three-tiered models of support that contain primary, secondary, and tertiary levels of prevention.

In both models, interventions of increasing intensity and depth are provided based on students' needs. For example, primary plans, such as violence prevention (Sprague et al., 2001) and district-developed literacy and academic plans (Lane & Menzies, 2003; Luiselli, Putnam, & Sutherland, 2002), are implemented with the entire student body. Primary plans focus on preventing harm by eliminating conditions that set the stage for learning and behavioral concerns. Next, secondary prevention efforts typically lend support for students who are nonresponsive to primary intervention efforts (10%–15% of students). The final level of support is the most intensive, reserved for students with multiple risk factors (3%–5% of students) who require tertiary prevention supports (e.g., function-based

Downloaded from rse.sagepub.com at SIMON FRASER LIBRARY on June 8, 2015



interventions, Kern & Manz, 2004). Data are used to monitor student performance at each level and determine which students require more intensive, evidence-based supports.

Another less traditional approach to secondary interventions is to provide support to subgroups of students who have historically been less than successful in meeting a given objective; namely, secondary programs may also be used to support students with common core areas of concern who may not have yet failed but who have specific instructional or behavioral needs that are not characteristic of the entire student body's needs (Lane & Parks, 2007). For example, if high school freshmen have historically struggled to negotiate the demands of long-term assignments, an intervention could be designed to teach only this group of students the requisite skills associated with such tasks (e.g., planning, incremental objectives, etc.; Lane, Wehby, & Robertson, 2007). In this case, the PBS team does not wait for students to fail on long-term assignments. Instead, a secondary intervention could be implemented to reverse the negative consequences typically experienced by previous freshman classes by addressing the appropriate acquisition or performance deficits (Gresham, 2002).

Many schools, particularly at the elementary level, have made strides implementing primary (Lane, Robertson, & Graham-Bailey, 2006) and tertiary (Lane, Umbreit, & Beebe-Frankenberger, 1999) prevention programs. Yet less attention has been devoted to secondary prevention efforts, particularly in middle and high schools (Robertson & Lane, 2007).

Secondary Prevention Efforts

One challenge facing schools that are developing PBS models, in particular, is how to design, implement, and evaluate secondary interventions during the regular school day. A systematic review of the literature revealed only a handful of studies examining the efficacy of secondary interventions implemented within the context of multitiered PBS models. Most studies identified were conducted at the elementary level (Lane, Wehby, Menzies, Doukas, et al., 2003; Lane, Wehby, Menzies, Gregg, et al., 2002; Walker, Cheney, Stage, & Blum, 2005), with very few secondary interventions implemented within the context of a PBS model at the middle and high school levels. For example, Robertson and Lane (2007) used course grades, behavior screening, and schoolwide information system (May et al., 2000) data to identify middle school students (N = 65) who had both academic and behavioral concerns. Students were assigned randomly to one of two interventions: study skills only or study skills plus conflict resolution skills, both of which were taught as an elective course. Results revealed changes in students' skill knowledge; however, these changes did not generalize to study habits, conflict resolution styles, grade point average, or risk status.

Results of an electronic and hand search did not identify any secondary interventions implemented within the context of a three-tiered PBS model at the high school level. This absence is concerning given that high school is the last point of public education before students either enter the work force or continue on in higher education. We think a particularly concerning void within the three-tiered model is the absence of targeted intervention support to assist high school students in preparing for college entrance exams. Students' performance on college entrance exams, such as the SAT and ACT, serves as an important predictor of which students will pursue the college path (Fuller & Wehman, 2003).

College Entrance Examinations: A Pivotal Determinant

College admissions committees consider a number of important factors when determining which students to admit, one of which is a student's performance on college entrance exams (Ehrenhaft, Lehrman, Obrecht, & Mundsack, 2001; Fuller & Wehman, 2003). For example, the ACT is taken by numerous 11th-grade students across the country, and admissions committees use these scores to compare the academic achievement of applicants and to draw inferences about the probability of successful performance at the university level (Ehrenhaft et al., 2001). The ACT is a time-limited, multiple-choice test that includes English, mathematics, reading, science, and writing subject area tests, with the items assessing content knowledge as well as students' ability to infer, analyze, problem solve, and reason.

Given the important outcomes associated with student performance on the ACT, it is likely that students will need to, or should, spend time preparing to take this exam. Yet a substantial body of literature indicates that secondary school students in the United States have poor study skills (Jones & Slate, 1992; Jones, Slate, Bell, & Sadler, 1991; Jones, Slate, Blake, & Sloas, 1995; Slate, Jones, & Dawson, 1993). Jones and Slate (1992) report that high school students exhibit only 40% to 46% of desired study behaviors as measured by the Study Habits Inventory (SHI). Kovach, Fleming, and Wilgosh (2001) found similar results, with 568 high school students reporting that they performed fewer than 50% of the study habits delineated in the SHI.

In addition, the extent to which students used effective study skills declined between 8th and 11th grade because


of heavy reliance on rote memorization (Jones et al., 1995). Jones and colleagues indicated that the most common deficits characteristic of high school students were poor reading skills, inappropriate use of study skills, and lack of application of organizational strategies. Unfortunately, many high school study skills programs fail to address students' specific acquisition deficits (Jones, Slate, Mahan, et al., 1993/1994) and do not provide explicit instruction to help prepare students to take their college entrance exams. Jones, Slate, Bell, et al. (1991) contend that it is imperative for teachers to become even more involved in improving students' study skills.

This preliminary investigation sought to address this call by establishing a targeted, secondary intervention, implemented within the context of a three-tiered PBS model, to assist in preparing 11th-grade students to take the ACT college entrance exam. Rather than waiting to identify students who were nonresponsive, as in traditional models, we conducted a targeted, secondary intervention for current 11th-grade students given that previous years' 11th-grade students had been less than successful with their performance on the ACT test. Data on ACT test scores from the previous 11th-grade classes indicated that school-site group means fell below the state means. In brief, 11th-grade students needed additional support beyond that which was received as part of the primary plan to perform more successfully on the ACT test. Consequently, we designed a curriculum to supplement the primary intervention program to better prepare students for the ACT test. Furthermore, we wanted to extend this line of inquiry by examining the extent to which students with various academic and behavioral performance levels responded to the support provided.

Purpose

Thus, this preliminary study examined outcomes associated with participation in a program, Preparing for the ACT, designed to enhance student performance on the ACT college entrance exam. First, descriptive procedures were employed to determine the extent to which students' behavioral and academic characteristics, as measured by office disciplinary referrals (ODRs) and grade point average (GPA), respectively, predicted performance on both a practice ACT test and the actual ACT for students who completed the program. Second, a quasiexperimental, two-group, posttest only design was used to compare actual ACT performance for students who did (11th-grade students, 2005–2006 academic year) and did not (11th-grade students, 2004–2005 academic year) participate in the intervention. We hypothesized that students (a) with lower rates of ODRs and higher GPAs from the previous academic year would be more responsive to the intervention, as measured by performance on the practice ACT test; (b) with higher postintervention practice ACT scores, lower rates of ODRs, and higher GPAs would have higher actual ACT scores; and (c) who participated in the intervention program would perform better on the actual ACT test than students who did not participate in the program. Third, we used descriptive procedures to examine teacher and student perspectives regarding the goals, procedures, and outcomes associated with this plan. We hypothesized that teachers would view the program as more socially valid as compared to students.

Method

Participants and Setting

Participants were 126 eleventh-grade students: 66 (52.38%) boys and 60 (47.62%) girls, attending a rural high school located in Middle Tennessee that was participating in a longitudinal study of PBS at the high school level. The majority of students were White (n = 120, 95.24%), with 18 (14.29%) receiving special education services.

All students attended School F, which enrolled a total of 716 students. The school was predominately White (n = 684; 95.53%). School F was above the district mean of economically disadvantaged students. Data were not available as to the number or percentage of graduating seniors who entered 2- and 4-year degree programs. There were 109 students (15.20%) enrolled in special education at School F. School F served a total of 188 eleventh-grade students, all of whom participated in the ACT intervention. Of these students, 136 were (a) enrolled at School F during the 2004 to 2005 and 2005 to 2006 academic years and (b) completed the pre- and postintervention practice ACT test. However, 10 students had patterned responses (e.g., a Christmas tree pattern or marking the same letter for all items) and were excluded from the analyses. The final number of participants thus was 126 (see Table 1).

Procedures

School F was participating in a federally funded, longitudinal study examining the design, implementation, and evaluation of schoolwide positive behavior support (SW-PBS) plans at the high school level. See Lane, Wehby, Robertson, and Rogers (2007) for a detailed description of the SW-PBS program. The secondary intervention reported in this article occurred during School F's second year of implementation of a SW-PBS plan. Fidelity of


Table 1
Participant Characteristics: Students (N = 126)

Characteristic                 n       %
Gender
  Male                        66   52.38
  Female                      60   47.62
Ethnicity
  Caucasian                  120   95.24
  African American             4    3.17
  Hispanic                     2    1.59
Special education             18   14.29
  Specific learning disability 9    7.14
  Gifted                       8    6.35
  Other health impaired        1    0.79

the SW-PBS plan was monitored from the teacher (self-report) and research assistant (RA; direct observations) perspectives using a behavior component checklist. Fidelity was adequate from both perspectives, with respective overall annual implementation percentages of 78.70 (SD = 1.91, teacher perspective) and 74.08 (SD = 15.56, RA perspective).

The PBS team held monthly meetings to discuss logistics for implementation and to review schoolwide data to monitor student progress. At the end of the first year of implementation, the PBS team identified two key areas for targeted, secondary intervention, one of which included designing a program to prepare 11th-grade students for the ACT tests. During the summer, the PBS team collaborated with Vanderbilt University to design a curriculum, which included a pre-post assessment and student opinions about the program, to be implemented during the following school year as part of regular school practices.

The principal and PBS team at School F decided to implement an ACT preparation program for all 11th-grade students as part of their PBS program, to be taught by teachers during homeroom as part of regular school practices. Specifically, teachers would teach an ACT preparation program, Preparing for the ACT (described below), to 11th-grade students once per week during homeroom time and administer ongoing probes to practice for the actual ACT test taken in the spring. One reason for the focus on preparing students for the ACT was to increase the likelihood of students being able to attend college, particularly given that students in Tennessee now had the opportunity to receive college funding through the Tennessee Hope Scholarship program, which awarded college funding based on ACT scores.

Prior to implementation, the principal organized a meeting with all teachers who would be teaching the ACT preparation curriculum to the 11th-graders during homeroom. The project director attended this initial meeting and reviewed the curriculum and implementation calendar with all participating teachers. A sample lesson was provided to introduce the format of the lesson plans, which included both scripted and brief outlined versions. All lessons included procedures to introduce the information, model the skill, lead guided practice, and provide time for independent practice with corrective feedback.

Teachers were informed that the curriculum was going to be taught as part of regular school practices, separate from any research project, and that teacher participation in the evaluation study was optional. Specific teacher duties beyond the regular school day included (a) allowing project staff to observe instruction for 30 min twice per month to assess treatment integrity, (b) completing a similar treatment fidelity form to assess implementation from the teacher perspective, and (c) filling out a social validity form to obtain their opinions about the curriculum. The project director read through the consent letter and answered questions. Teachers were informed that information collected would be kept confidential and that they could withdraw at any time. All teachers consented to participate (N = 10).

Intervention Description: Preparing for the ACT Curriculum

A number of studies indicate the effectiveness of teaching test-taking skills to increase student performance (Bangert-Drowns, Kulik, & Kulik, 1983; Dreisbach & Keogh, 1982; Ritter & Idol-Maestas, 1986; Sampson, 1985). Specifically, research on teaching test-taking skills indicates that when the following guidelines are implemented in test-taking strategy lessons, students achieve higher test performance: (a) students take a pretest to determine the types of questions they need to master; (b) the teacher provides modeling for choosing the correct answer; (c) students practice test items that mirror the actual test they will take; and (d) students apply learned test-taking strategies on a timed posttest (Scruggs & Mastropieri, 1992). The scope and sequence of the Preparing for the ACT curriculum, developed by Vanderbilt University's Project PBS staff with input from two high school PBS teams, included these guidelines to help students achieve maximum test performance on the ACT test (see Table 2). The goal of the curriculum was to enhance test readiness by (a) introducing students to the format of the assessment; (b) providing multiple opportunities to practice exam questions; (c) teaching general test-taking strategies; and (d) teaching additional specific test-taking strategies for the four subject area tests: English, reading, math, and science.
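The guideline sequence above begins with a pretest used to determine the types of questions each student needs to master. A minimal sketch of that scoring step is given below; the per-item response format and the 60% mastery cutoff are illustrative assumptions, not details taken from the study.

```python
# Sketch of guideline (a): score a practice pretest by subject area to flag
# question types a student has not yet mastered. Data format and the 60%
# cutoff are hypothetical, for illustration only.

def mastery_gaps(responses, cutoff=0.60):
    """responses: {subject: list of bools (was each item correct?)}.
    Returns subjects whose proportion correct falls below the cutoff."""
    gaps = {}
    for subject, items in responses.items():
        pct = sum(items) / len(items)
        if pct < cutoff:
            gaps[subject] = round(pct, 2)
    return gaps

pretest = {
    "English": [True, True, False, True, True],
    "Math":    [False, True, False, False, True],
    "Reading": [True, True, True, False, True],
    "Science": [False, False, True, False, True],
}
print(mastery_gaps(pretest))  # Math and Science fall below the 60% cutoff
```

A report like this would then steer guidelines (b) and (c): modeling and practice items concentrate on the flagged subject areas.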


Table 2
Curriculum Overview

Lesson  Topic                                                         CBM Probe
1       Overview: ACT test; the program and curriculum; benefits
        of strong performance on the actual ACT; and CBM probes
2       Curriculum-based probe #1                                     1
3       Test-taking strategies
4       English 1: Mechanics and rhetoric; long-range strategies
5       English 2: Pacing tactics and punctuation
6       Math 1: Overview of potential areas (pre-algebra;
        elementary and intermediate algebra; coordinate and
        plane geometry; and trigonometry)
7       Math 2: Approaches to algebra problems
8       Curriculum-based probe #2                                     2
9       Reading 1: Description of test; types of questions
10      Reading 2: Techniques for skimming; deciding on a technique
11      Science 1: Content areas covered (Earth or space;
        chemistry; and physics) and science skills tested
12      Science 2: Types of data representation
13      Curriculum-based probe #3                                     3
14      English 3: Sentence structure and grammar review
15      Math 3: Overview of content areas
16      Reading 3: Reading speed
17      Science 3: Strategies for reading research summary passages
18      Curriculum-based probe #4                                     4
19      English 4: Writing strategy, organization, transition, and
        main idea statements
20      Math 4: Study strategies for the ACT math test
21      Reading 4: Identifying distracters, question types, and
        shifts in thoughts
22      Curriculum-based probe #5                                     5
23      Science 4: Research summary strategies
24      English 5: English style, word choice, and language
25      Reading 5: Reading review and practice; marking passages
26      Curriculum-based probe #6                                     6

Note: CBM = curriculum-based measurement.
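As a quick check of the schedule in Table 2, the total intervention dosage and the spacing of the curriculum-based probes can be computed directly from the lesson list (lesson numbers are taken from the table; the one-lesson-per-week cadence is described under Intervention Implementation):

```python
# Dosage check for the Table 2 schedule: 26 weekly 30-min lessons, with
# CBM probes delivered at lessons 2, 8, 13, 18, 22, and 26.

LESSONS = 26
MINUTES_PER_LESSON = 30
PROBE_LESSONS = [2, 8, 13, 18, 22, 26]

total_hours = LESSONS * MINUTES_PER_LESSON / 60
probe_gaps = [b - a for a, b in zip(PROBE_LESSONS, PROBE_LESSONS[1:])]

print(total_hours)  # 13.0 hours of intervention, matching the text
print(probe_gaps)   # weeks between successive probes: [6, 5, 5, 4, 4]
```

Under a strictly weekly cadence, the probes recur roughly every four to six weeks, consistent with the "every 5 to 6 weeks" pacing described to students in Lesson 1.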

The curriculum contained 26 lessons, each 30 min in length, for a total of 13 hours of intervention.

Lesson 1 contained an overview of the ACT test (ACT, 2004a), the program and curriculum, benefits associated with successful performance on the actual ACT, and self-monitoring procedures as applied to graphed progress on the four subject area tests as measured by curriculum-based measurement (CBM) probes. Students were informed that they would complete mini tests every 5 to 6 weeks with five questions from each subject area test. Sample test items obtained from the official ACT Web site (ACT, 2004a) were reprinted with permission from Preparing for the ACT Assessment. Results were graphed to monitor student progress and help students make decisions about how to focus their study efforts.

After the first lesson, students were administered a full practice ACT test, also obtained from the official ACT (2004a) Web site, to familiarize the students with the format and content of the ACT test and to obtain a baseline level of performance. The test was administered following the standard procedures of an actual ACT test administration, including all four subject area tests with a break following the second test. Students took the test in classrooms and the library located in one wing of the building. Hall monitors were present to minimize noise from other parts of the building. Immediately following test completion, all materials were collected and scored.

Table 2 contains a scope and sequence of the remaining lessons and CBM probes. The Preparing for the ACT curriculum was not designed to serve as an in-depth review of content-related skills; instead, the curriculum focused on developing students' familiarity with the test form and test questions. As students evaluated their own progress through the use of monthly CBM probes, they were encouraged to seek further assistance from published ACT study guides and various ACT Web sites to further develop content-related skills and maximize their scores. The program closed with a final, full 2 hour, 55 min postintervention practice ACT assessment (obtained from the official ACT Web site and reprinted with permission; ACT, 2004a), which was administered 2 1/2 weeks prior to the administration of the actual ACT.
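The self-monitoring routine described above, in which students graph their probe results and decide where to focus study effort, can be sketched as follows. The subject names match the four subject area tests, but the scores and the lowest-mean decision rule are illustrative assumptions rather than the study's actual procedure:

```python
# Sketch of the self-monitoring step: track per-subject CBM probe scores
# (each probe had five items per subject) and pick a study focus area.
# The score history below is invented for illustration.

def focus_area(probe_history):
    """probe_history: {subject: list of probe scores (items correct of 5)}.
    Returns the subject with the lowest mean score across probes so far."""
    means = {s: sum(scores) / len(scores) for s, scores in probe_history.items()}
    return min(means, key=means.get)

history = {
    "English": [3, 4, 4],
    "Math":    [2, 2, 3],
    "Reading": [4, 4, 5],
    "Science": [3, 3, 3],
}
print(focus_area(history))  # Math has the lowest mean, so study math next
```

In the program itself this decision was made visually from the graphed probe scores; the point of the sketch is only that each probe feeds a concrete, student-level study decision.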


Intervention Implementation

Following consenting and training procedures, 10 11th-grade teachers taught the ACT curriculum during a 30 min period each Tuesday. Fidelity data were collected throughout the program from teacher and RA perspectives (details to follow). Social validity data were obtained at year end.

Measures

Student performance was monitored using the following measures, collected as part of regular school practices. Data were collected and entered by an RA. A second RA assessed reliability of data entry for 25% of the data. The few identified errors were corrected.

Grade point average (GPA). During the 2004 to 2005 academic year, the district GPA scale was based on a 4-point scale, with grades ranging from A+ (4.0) through B (3.0), C (2.0), and D– (1.0) to F (0.00), including pluses and minuses. During the 2005 to 2006 academic year, the C– and D– were eliminated from the grading scale, slightly changing the grade point scale to include a D as a 1.0 grade point. Mean reliability of entry for quarterly GPA scores was 100% for the 2004 to 2005 and 2005 to 2006 academic years. Quarterly grade point averages were averaged into an annual GPA for students' 10th- and 11th-grade years. Tenth-grade GPAs were used as predictor variables in models to predict (a) postintervention practice ACT scores and (b) actual ACT scores. Comparisons were not made across years because of changes in the grading scale.

Office disciplinary referrals (ODR). The ODR data were collected monthly at the student level during students' 10th and 11th grades. Annual ODR rates were computed by dividing the total number of referrals by the number of instructional days during each year. Mean reliabilities for entry of ODR data were 99.8% for the 2004 to 2005 academic year and 99.07% for the 2005 to 2006 academic year. The 10th-grade ODRs were used as predictor variables in models to predict (a) postintervention practice ACT scores and (b) actual ACT scores.

ACT actual test. The ACT is a curriculum-based test consisting of 215 multiple-choice questions. The test is divided into four subject area tests: English (AES), math (AMS), reading (ARS), and science (ASS). Actual testing time is 2 hours and 55 min. The English test measures a student's rhetorical (style, organization, and strategy) and mechanical (sentence structure, basic grammar, and punctuation) skills. This 45-min test contains five prose passages and consists of 75 multiple-choice questions. The math test, which is administered in 60 min, contains 60 multiple-choice questions covering math skills from six areas of study: pre-algebra, elementary algebra, intermediate algebra, coordinate geometry, plane geometry, and trigonometry. The reading test consists of 40 multiple-choice questions based on four reading passages (prose, social studies, natural sciences, and humanities) to be answered within a 35-min limit. This test measures comprehension skills such as making inferences, developing generalizations, and analyzing an author's voice. Finally, the 35-min science test consists of 40 multiple-choice questions designed to measure a student's ability to solve scientific problems using reasoning skills. This test contains seven passages covering the following content areas: biology, Earth and space sciences, chemistry, and physics (ACT, 2004a). Students receive scores for each of the subject area tests (ranging from 0–36) as well as a composite score that is the arithmetic average of the English, mathematics, reading, and science scale scores (AACT; ranging from 0–36; ACT, 2004a). Higher scores indicate higher levels of mastery. Most colleges in the United States require either the ACT or the SAT for admission, and the ACT Assessment is now accepted by virtually all colleges and universities in the United States. Although the ACT now contains a writing component, we did not examine writing outcomes because the intervention curriculum did not address the writing subject area test.

ACT practice test. Participants took a sample ACT test from Preparing for the ACT Assessment: Test Preparation, reprinted with permission from the official ACT Web site (ACT, 2004a). This sample test is a representation of the authentic ACT assessment, consisting of four subject areas (English, PES; math, PMS; reading, PRS; and science, PSS) for a total of 215 multiple-choice questions. Participants took the full practice ACT (preintervention) 1 week prior to the first lesson (PACT-T1) and took the full practice ACT (postintervention) 1 week after the last lesson (PACT-T2). Participants had 2 hours 55 min to complete the pre- and posttests, as they would when taking the actual ACT assessment, adhering to the subject area test time limits specified above. The ACT practice tests were scored and entered with high reliability percentages, specifically 99% for the preintervention test and 100% for the posttest.

Treatment integrity. Treatment integrity was assessed from two perspectives (teacher and RA) using component checklists on which the rater indicated the presence or absence of 12 items constituting the core intervention


components. Teachers completed a treatment integrity form after each lesson. Teachers were observed between 10 (37%) and 13 (48%) lessons.

Social validity. Social validity was assessed postintervention by two rater groups: students and teachers. Each teacher rater used a modified version of the Intervention Rating Profile (IRP-15; Martens, Witt, Elliott, & Darveaux, 1985), whereas students used a modified version of the Children's Intervention Rating Profile (CIRP; Martens et al., 1985) during their final lesson. The wording of each item was slightly modified to reflect the intervention content. The IRP-15 was a 15-item survey containing statements to be rated on a 6-point Likert-type scale, ranging from 1 (strongly disagree) to 6 (strongly agree). Items included statements related to the perceived ability to meet the purpose of the intervention, the appropriateness of the intervention with the student population, and the perceived acceptability of the curriculum. Total scores ranged from 15 to 90, with higher scores indicating higher rates of acceptability. The CIRP contained 7 items rated using a 6-point Likert-type scale (1 = I do not agree, 6 = I agree). Total scores ranged from 7 to 42, with higher scores suggesting higher acceptability. Both social validity ratings were entered with 100% reliability. Alpha coefficients were computed for the IRP-15 and CIRP for the current sample, with respective values of .95 and .66.

Experimental Designs

Two designs, one descriptive and one quasiexperimental, were used to address the research objectives. First, descriptive procedures were employed to examine the extent to which characteristics of students who participated in the program (e.g., disciplinary contacts and GPA) predicted performance on (a) the ACT practice test and (b) the actual ACT test. Second, a quasiexperimental design was used in which actual ACT test performance for 11th-grade students who did (2005–2006 academic year) and did not participate (2004–2005 academic year) was compared using a two-group, posttest only design.

Results

Treatment Integrity

Statistical analysis. Treatment integrity was assessed from the RA and teacher perspectives using the procedures described above. Average session integrity scores were computed for each class as well as the overall percentage of implementation for the ACT program. Averages for each rater were computed by summing the daily session integrity percentage scores (0%–100%) for each lesson and dividing the quantity by the number of lessons taught (teacher scores) or observed (RA scores). An effect size was computed using the pooled standard deviation in the denominator to determine the magnitude of the differences in implementation between RA and teacher perspectives (Busk & Serlin, 1992).

Findings. Average session integrity scores from the RA perspective ranged from 62.33 to 95.83 (M = 82.96, SD = 11.91). Teacher treatment integrity ratings were moderately higher (M = 89.44, SD = 9.30; range, 69.90–98.45; effect size = 0.61).

Factors Predicting Practice ACT Postintervention Test Results

Statistical analysis. Results were analyzed using bivariate correlation and multiple regression procedures. Multiple regression procedures were used to examine the extent to which behavioral performance (as measured by ODR) and academic performance (as measured by GPA) predicted five outcome variables: practice English (PES-T2), practice math (PMS-T2), practice reading (PRS-T2), practice science (PSS-T2), and practice ACT total scores (PACTS-T2). F values were inspected to determine the significance of the overall model. Univariate analyses were conducted if the model was significant to ascertain the unique contribution of each variable constituting the model. T tests were conducted to determine the individual contribution of each predictor variable, controlling for the remaining variables. Standardized multiple regression coefficients (beta weights) and unique indices were inspected to ascertain the relative contribution of each predictor variable in explaining the criterion variable. Specifically, the unique index of a predictor variable is the percentage of variance in the criterion variable accounted for by the given variable over and above the variance explained by the remaining variables in the model. Bivariate correlations explored overall relationships between variables, whereas semipartial correlations determined the relation between the predictor and criterion variable, controlling for remaining variables (see Tables 3 and 4).

Y = β0 + βODR(ODR) + βGPA(GPA) + ε

This model was based on the hypothesis that students with higher levels of problem behavior, as measured by the rate of ODRs earned during the previous academic year, and lower academic performance, as measured by
Downloaded from rse.sagepub.com at SIMON FRASER LIBRARY on June 8, 2015


10 Remedial and Special Education

Table 3
Study 1: Correlation Matrix for All Variables
Variable 1 2 3 4 5 6 7 8 9 10

1. ODR-10 1.00
2. ODR-11 0.31*** 1.00
3. GPA-10 –0.39**** –0.23* 1.00
4. GPA-11 –0.31*** –0.67**** 0.40**** 1.00
5. PES-T1 –0.23* –0.36**** 0.43**** 0.59**** 1.00
6. PES-T2 –0.16 –0.40**** 0.26** 0.61**** 0.79**** 1.00
7. PMS-T1 –0.14 –0.36**** 0.45**** 0.62**** 0.78**** 0.71**** 1.00
8. PMS-T2 –0.18 –0.39**** 0.36**** 0.64**** 0.69**** 0.76**** 0.83**** 1.00
9. PRS-T1 –0.25** –0.27** 0.36**** 0.56**** 0.79**** 0.70**** 0.70**** 0.60**** 1.00
10. PRS-T2 –0.21** –0.42** 0.24** 0.60**** 0.70**** 0.84**** 0.65**** 0.72**** 0.74**** 1.00
11. PSS-T1 –0.25** –0.46**** 0.44**** 0.63**** 0.80**** 0.72**** 0.81**** 0.75**** 0.76**** 0.69****
12. PSS-T2 –0.13 –0.31*** 0.32*** 0.56**** 0.62**** 0.72**** 0.71**** 0.80**** 0.70**** 0.76****
13. PACT-T1 –0.24** –0.37**** 0.46**** 0.62**** 0.94**** 0.78**** 0.88**** 0.76**** 0.89**** 0.74****
14. PACT-T2 –0.21* –0.46**** 0.34**** 0.66**** 0.77**** 0.89**** 0.77**** 0.86**** 0.72**** 0.94****
15. AES –0.28** –0.39**** 0.42**** 0.66**** 0.85**** 0.82**** 0.77**** 0.66**** 0.71**** 0.74****
16. AMS –0.11 –0.38*** 0.43**** 0.62**** 0.77**** 0.67**** 0.91**** 0.83**** 0.63**** 0.62****
17. ARS –0.25* –0.33** 0.35*** 0.64**** 0.81**** 0.78**** 0.70**** 0.60**** 0.82**** 0.80****
18. ASS –0.17 –0.31** 0.39*** 0.59**** 0.80**** 0.74**** 0.78**** 0.70**** 0.72**** 0.71****
19. AACT –0.23* –0.38*** 0.42*** 0.68**** 0.88**** 0.83**** 0.84**** 0.75**** 0.79**** 0.79****

Note: N = X; ODR-10 = office discipline referrals from the 10th grade; ODR-11 = office discipline referrals from the 11th grade; GPA-10 = grade point average from the 10th grade; GPA-11 = grade point average from the 11th grade; PES-T1 = preintervention practice English subtest; PES-T2 = postintervention practice English subtest; PMS-T1 = preintervention practice math subtest; PMS-T2 = postintervention practice math subtest; PRS-T1 = preintervention practice reading subtest; PRS-T2 = postintervention practice reading subtest; PSS-T1 = preintervention practice science subtest; PSS-T2 = postintervention practice science subtest; PACT-T1 = preintervention practice ACT total score; PACT-T2 = postintervention practice ACT total score; AES = actual English subtest; AMS = actual math subtest; ARS = actual reading subtest; ASS = actual science subtest; AACT = actual ACT total score.
*p < .05. **p < .01. ***p < .001. ****p < .0001.

Table 4
Mean Score Comparisons of Practice ACT Scores

          Preintervention   Postintervention   Difference Scores
          (Time 1)          (Time 2)           (Time 2 – Time 1)   Effect
Variable  M (SD)            M (SD)             M (SD)              Size

English   16.98 (5.98)      18.42 (6.84)       1.44 (4.28)         0.22
Math      18.47 (4.48)      19.38 (5.30)       0.91 (2.93)         0.19
Reading   18.17 (6.12)      18.52 (7.72)       0.36 (5.22)         0.05
Science   17.22 (5.13)      18.19 (5.91)       0.98 (4.11)         0.18
Total     17.87 (5.03)      18.64 (6.06)       0.77 (3.52)         0.14
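The effect sizes in Table 4 are consistent with a standardized mean difference whose denominator is the pooled standard deviation of the two time points (the computation the authors describe later for the cross-year comparison). A minimal sketch, assuming the equal-n form of the pooled standard deviation, reproduces the English row:

```python
import math

def pooled_sd(sd1: float, sd2: float) -> float:
    """Pooled standard deviation for two groups of equal size."""
    return math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)

def effect_size(m1: float, sd1: float, m2: float, sd2: float) -> float:
    """Standardized mean difference: (Time 2 mean - Time 1 mean) / pooled SD."""
    return (m2 - m1) / pooled_sd(sd1, sd2)

# English subtest (Table 4): Time 1 M = 16.98 (SD = 5.98), Time 2 M = 18.42 (SD = 6.84)
print(round(effect_size(16.98, 5.98, 18.42, 6.84), 2))  # 0.22, matching Table 4
```

Applying the same function to the math row (18.47, SD = 4.48 vs. 19.38, SD = 5.30) returns 0.19, also matching the table.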

GPA from the previous academic year, would be less responsive.

Results. In predicting PES-T2, the two-variable model accounted for 7% of the variance in students' postintervention scores, R² = 0.07, F(2, 123) = 4.80, p = .0099. Inspection of semipartial correlations indicated that one variable, GPA-10 (t = 2.53, p = .0128), was a significant predictor of PES-T2, accounting for 5% of unique variance after controlling for the other variable (ODR-10) in the model. There was a significant, positive correlation between GPA-10 and PES-T2, indicating that students with higher GPAs in 10th grade had higher postintervention practice English ACT scores (β = .24; r = .26; p = .0029).

In predicting PMS-T2, the two-variable model accounted for 13% of the variance in students' postintervention scores, R² = 0.13, F(2, 123) = 9.17, p = .0002. Inspection of semipartial correlations indicated that one variable, GPA-10 (t = 3.73, p = .0003), was a significant predictor of PMS-T2, accounting for 10% of unique variance after controlling for the other variable (ODR-10) in the model. There was a positive correlation between GPA-10 and PMS-T2, indicating that students with higher



Lane et al. / College Entrance Exams 11

GPAs in 10th grade had higher postintervention practice math ACT scores (β = .34; r = .36; p < .0001).

In predicting PRS-T2, the two-variable model accounted for 8% of the variance in students' postintervention scores, R² = 0.08, F(2, 123) = 4.98, p = .008. Inspection of semipartial correlations indicated that one variable, GPA-10 (t = 2.00, p = .048), was a significant predictor of PRS-T2, accounting for 3% of unique variance after controlling for the other variable (ODR-10) in the model. There was a positive correlation between GPA-10 and PRS-T2, indicating that students with higher GPAs in 10th grade had higher postintervention practice reading ACT scores (β = .19; r = .24; p = .006).

In predicting PSS-T2, the two-variable model accounted for 10% of the variance in students' postintervention scores, R² = 0.10, F(2, 123) = 6.82, p = .0016. Inspection of semipartial correlations indicated that one variable, GPA-10 (t = 3.36, p = .001), was a significant predictor of PSS-T2, accounting for 8% of unique variance after controlling for the other variable (ODR-10) in the model. There was a positive correlation between GPA-10 and PSS-T2, indicating that students with higher GPAs in 10th grade had higher postintervention practice science ACT scores (β = .31; r = .32; p = .0003).

In predicting PACT-T2, the two-variable model accounted for 13% of the variance in students' postintervention scores, R² = 0.13, F(2, 123) = 8.79, p = .0003. Inspection of semipartial correlations indicated that one variable, GPA-10 (t = 3.36, p = .001), was a significant predictor of PACT-T2, accounting for 8% of unique variance after controlling for the other variable (ODR-10) in the model. There was a positive correlation between GPA-10 and PACT-T2, indicating that students with higher GPAs in 10th grade had higher postintervention practice total ACT scores (β = .31; r = .34; p < .0001).

As originally hypothesized, for each subject area test as well as the overall ACT practice test score, academic performance in the previous academic year was significant in predicting postintervention practice scores. Yet, contrary to the original hypothesis, behavioral performance from the previous academic year was not significantly associated with practice test scores. Although the correlations between ODRs and postintervention scores were negative, the amount of unique variance associated with ODRs was not significant in the model (see Table 5).

Factors Predicting Actual ACT Results

Statistical analysis. Results were analyzed using bivariate correlation and multiple regression procedures as described above. In this model, postintervention practice scores were included in addition to ODR and GPA scores. We hypothesized that students' performance on the practice test would be highly predictive of performance on the actual ACT test, with a positive correlation between the two tests. In addition, we hypothesized that students with higher GPAs and lower rates of ODRs would earn higher actual ACT scores.

Y = β0 + βPost(postintervention score) + βODR(ODR) + βGPA(GPA) + ε

Results. In predicting AES, the three-variable model accounted for 71% of the variance in students' ACT English test scores, R² = 0.71, F(3, 86) = 71.16, p < .0001. Inspection of semipartial correlations indicated that all three variables were significant predictors of AES: PES-T2 (t = 12.47, p < .0001), ODR-10 (t = –2.20, p = .03), and GPA-10 (t = 2.17, p = .03). Respective contributions to unique variance are as follows: PES-T2 = 52%, ODR-10 = 2%, and GPA-10 = 2%. There were positive correlations between PES-T2 and AES scores (β = 0.76; r = 0.82; p < .0001) as well as between GPA-10 and AES scores (β = 0.14; r = 0.43; p < .0001), indicating that students with higher postintervention practice English scores and with higher GPAs during 10th grade had higher scores on the English test. Also, there was a negative correlation between ODR-10 and AES scores (β = –0.14; r = –0.28; p = .008), suggesting that students with higher rates of office referrals during 10th grade had lower English test scores.

In predicting AMS, the three-variable model accounted for 69% of the variance in students' actual math scores, R² = 0.69, F(3, 86) = 65.06, p < .0001. Inspection of semipartial correlations indicated that one variable, PMS-T2 (t = 11.98, p < .0001), was a significant predictor of AMS, accounting for 51% of unique variance after controlling for the other variables (ODR-10 and GPA-10) in the model. GPA-10 approached significance, t = 1.70, p = .09. There was a positive correlation between PMS-T2 and AMS, indicating that students with higher postintervention practice math scores had higher math ACT scores (β = 0.78; r = 0.83; p < .0001).

In predicting ARS, the three-variable model accounted for 66% of the variance in students' reading scores, R² = 0.66, F(3, 86) = 54.99, p < .0001. Inspection of semipartial correlations indicated that one variable, PRS-T2 (t = 11.33, p < .0001), was a significant predictor of ARS, accounting for 51% of unique variance after controlling for the other variables (ODR-10 and GPA-10) in the model. There was a positive correlation between PRS-T2 and ARS, indicating that students with higher postintervention practice reading ACT scores had higher actual reading scores on the ACT (β = 0.76; r = 0.80; p < .0001).


Table 5
Results of Multiple Regression Analyses on Practice ACT Scores
PES-T2 PMS-T2 PRS-T2 PSS-T2 PACT-T2

Beta Unique Beta Unique Beta Unique Beta Unique Beta Unique
Predictor Weights Indices Weights Indices Weights Indices Weights Indices Weights Indices
Variables Value (ta) Value (Fb) Value (ta) Value (Fb) Value (ta) Value (Fb) Value (ta) Value (Fb) Value (ta) Value (Fb)

ODR-10 –0.06 0.003 –0.04 0.002 –0.14 0.016 –0.008 0 –0.09 0.007
(–0.66) (0.44) (–0.46) (0.21) (–1.47) (2.03) (–0.09) (0.00) (–0.99) (0.98)
GPA-10 0.24 0.048 0.341 0.099 0.19 0.03 0.313 0.083 0.31 0.0803
(2.53*) (6.39*) (3.73***) (14.09***) (2.00*) (3.75) (3.36**) (11.80***) (3.36***) (11.29**)

Note: Standardized beta weights and unique indices shown reflect all variables in each model. ODR-10 = office discipline referrals from the
10th grade; GPA-10 = grade point average from the 10th grade; PES-T2 = postintervention practice English subtest; PMS-T2 = postintervention
practice math subtest; PRS-T2 = postintervention practice reading subtest; PSS-T2 = postintervention practice science subtest; PACT-T2 =
postintervention practice ACT total score.
a. For t tests that tested the significance of the beta weights df = 123.
b. For F tests that tested the significance of the uniqueness indices df = 1, 123.
*p < .05. **p < .01. ***p < .001. ****p < .0001.
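The quantities reported in Table 5 (standardized beta weights and unique indices, i.e., squared semipartial correlations: the drop in R² when a predictor is removed from the model) can be computed with ordinary least squares on z-scored variables. The sketch below is illustrative only: the data are invented, not the study's records, and the variable names simply echo the ODR-10 and GPA-10 predictors.

```python
import numpy as np

def standardized_fit(X: np.ndarray, y: np.ndarray):
    """Return beta weights and R^2 from an OLS fit on z-scored predictors and outcome."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    z_y = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Z, z_y, rcond=None)
    r2 = 1 - np.sum((z_y - Z @ beta) ** 2) / np.sum(z_y ** 2)
    return beta, r2

def unique_index(X: np.ndarray, y: np.ndarray, drop: int) -> float:
    """Squared semipartial correlation: R^2 lost when one predictor is dropped."""
    _, r2_full = standardized_fit(X, y)
    _, r2_reduced = standardized_fit(np.delete(X, drop, axis=1), y)
    return r2_full - r2_reduced

# Hypothetical data in the spirit of Y = b0 + b_ODR(ODR) + b_GPA(GPA) + e (n = 126)
rng = np.random.default_rng(0)
odr = rng.poisson(2.0, size=126).astype(float)
gpa = np.clip(3.0 - 0.1 * odr + rng.normal(0, 0.6, size=126), 0, 4)
score = 12 + 2.2 * gpa - 0.1 * odr + rng.normal(0, 4, size=126)

X = np.column_stack([odr, gpa])
betas, r2 = standardized_fit(X, score)
print(r2, unique_index(X, score, drop=0))  # model R^2 and ODR's unique variance
```

The unique index is the increment in R² the full model gains over the model without that predictor, which is exactly the "over and above" interpretation given in the statistical analysis section.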

In predicting ASS, the three-variable model accounted for 54% of the variance in students' actual science scores, R² = 0.54, F(3, 86) = 33.41, p < .0001. Inspection of semipartial correlations indicated that one variable, PSS-T2 (t = 8.49, p < .0001), was a significant predictor of ASS, accounting for 39% of unique variance after controlling for the other variables (ODR-10 and GPA-10) in the model. There was a positive correlation between PSS-T2 and ASS, indicating that students with higher postintervention practice science ACT scores had higher actual science ACT scores (β = 0.67; r = 0.72; p < .0001).

In predicting AACT, the three-variable model accounted for 73% of the variance in students' ACT total scores, R² = 0.73, F(3, 86) = 78.73, p < .0001. Inspection of semipartial correlations indicated that one variable, PACT-T2 (t = 13.25, p < .0001), was a significant predictor of AACT, accounting for 55% of unique variance after controlling for the other variables (ODR-10 and GPA-10) in the model. There was a positive correlation between AACT and PACT-T2, indicating that students with higher postintervention scores on the practice test had higher actual ACT scores (β = 0.81; r = 0.85; p < .0001).

As hypothesized, for each subject area test as well as the overall ACT practice test score, students' postintervention scores were significant in predicting actual ACT scores. However, only in the case of the English test were GPA-10 and ODR-10 predictive of actual English ACT scores. As expected, there was a positive relationship between GPA during 10th grade and ACT scores. Furthermore, there was an inverse relationship between ODRs and AES scores: students who had higher rates of ODRs had lower English scores on the ACT. GPA-10 and ODR-10 were not significant predictor variables in the remaining models (see Table 6).

Comparison of ACT Scores: Students Who Did and Did Not Receive the Intervention

Statistical analysis. Descriptive statistics (frequency tables, mean score comparisons, and measures of dispersion) were examined from the 2004 to 2005 (nonintervention year) and 2005 to 2006 (intervention year) school years to determine (a) potential differences between the academic achievement of the 11th-grade classes (i.e., whether the two groups were comparable), (b) the percentage of enrolled 11th-grade students who took the actual ACT, (c) mean scores on each subject area test and the overall ACT test and the extent to which these mean scores differed from annual state mean scores, and (d) the percentage of students who scored at or above a 22—the district target and the minimum score necessary to obtain state monies from Tennessee's Hope Scholarship fund. Effect sizes were computed using the pooled standard deviation in the denominator to determine the magnitude of changes in mean scores between years. Effect sizes were interpreted according to recommendations offered by Bloom, Hill, Black, and Lipsey (2006). Rather than relying solely on fixed numerical interpretations (small = 0.15 σ, medium = 0.45 σ, large = 0.90 σ), we interpreted effect sizes by comparing computed effect sizes with (a) effect size distributions from comparable studies, (b) student attainment of the performance criterion without intervention, and (c) normative expectations for change.

Findings. Results of an independent t test revealed nonsignificant differences in GPA, indicating that both 11th-grade classes had comparable GPAs, with respective mean scores of 2.83 (SD = 0.85, for the 2004–2005 class) and 2.87 (SD = 0.87, for the 2005–2006 class).
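The cohort-comparability checks described here (an independent-samples t test on GPA, plus chi-square tests on ethnicity and special education participation) take standard forms. A sketch with scipy on invented data follows; the means and SDs echo the reported cohort GPAs, but the samples and the 2 × 2 contingency table are hypothetical, not the study's records.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical GPA samples standing in for the two 11th-grade cohorts (n = 90 each)
gpa_2004 = rng.normal(2.83, 0.85, size=90)
gpa_2005 = rng.normal(2.87, 0.87, size=90)

# Independent-samples t test: are the cohort mean GPAs distinguishable?
t_stat, p_value = stats.ttest_ind(gpa_2004, gpa_2005)

# Chi-square test of independence on a hypothetical cohort-by-category count table
table = np.array([[60, 30],   # cohort 1: counts in two categories
                  [58, 32]])  # cohort 2
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}; chi-square = {chi2:.2f}, p = {p_chi:.3f}")
```

A nonsignificant p value in both checks is what licenses treating the two cohorts as comparable before contrasting their ACT outcomes.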

Table 6
Results of Multiple Regression Analyses on Actual ACT Scores
AES AMS ARS ASS AACT

Beta Unique Beta Unique Beta Unique Beta Unique Beta Unique
Weights Indices Weights Indices Weights Indices Weights Indices Weights Indices
Variable Value (ta) Value (Fb) Value (ta) Value (Fb) Value (ta) Value (Fb) Value (ta) Value (Fb) Value (ta) Value (Fb)

PIPS 0.76 0.52 0.78 0.51 0.76 0.51 0.67 0.39 0.81 0.55
(12.47****) (173.17****) (11.98****) (127.55****) (11.33****) (127.93****) (8.49****) (77.42****) (13.25****) (181.73****)
ODR-10 –0.14 0.02 0.01 0.0001 –0.05 0.00 –0.07 0.003 –0.02 0.0005
(–2.20*) (0.37*) (0.21) (0.25) (–0.71) (0.00) (–0.78) (0.64) (–0.38) (0.167)
GPA-10 0.14 0.02 0.12 0.01 0.09 0.006 0.13 0.013 0.08 0.005
(2.17*) (5.23*) (1.70) (2.55) (1.26) (1.58) (1.54) (2.54) (1.24) (1.60)

Note: Standardized beta weights and unique indices shown reflect all variables in each model. PIPS = postintervention practice score for each
subtest predicted; ODR-10 = office discipline referrals from the 10th grade; GPA-10 = grade point average from the 10th grade; AES = actual
English subtest; AMS = actual math subtest; ARS = actual reading subtest; ASS = actual science subtest; AACT = actual ACT total score.
a. For t tests that tested the significance of the beta weights df = 86.
b. For F tests that tested the significance of the uniqueness indices df = 1, 86.
*p < .05. **p < .01. ***p < .001. ****p < .0001.

Furthermore, chi-square analyses indicated that both 11th-grade classes were similar with respect to student ethnicity and special education participation.

Results indicated that a similar percentage of enrolled students took the ACT during the 2004 to 2005 academic year and the 2005 to 2006 academic year, when the secondary intervention was implemented. During 2004 to 2005, 166 students were enrolled in 11th grade, 90 of whom took the actual ACT exam (54.22%, n = 90). During the intervention year, 188 students were enrolled in the 11th grade, of whom 99 took the actual ACT exam (52.66%, n = 99). Nine students were excluded from the analysis because they either (a) were not present during the previous school year or (b) had a patterned response on the practice ACT. Thus, for subsequent analyses, the 90 students who completed the ACT during the 2004 to 2005 school year were compared to the 90 students who completed the intervention and took the ACT during the 2005 to 2006 school year.

Mean score comparisons indicated slight increases in mean scores between the 2 years, with the greatest improvement on the English test (ES = 0.09) and the least improvement on the science test (ES = 0.02). Overall ACT scores showed a small increase between the 2004 to 2005 (M = 19.96, SD = 4.23) and 2005 to 2006 (M = 20.65, SD = 5.37) school years (ES = 0.06). Although the magnitude of the increases may appear small, given the age and grade level of the students, one would not expect high-magnitude improvements (Bloom et al., 2006).

Despite the minimal increases in mean scores, the percentage of students who attained the district target objective (total score = 22) increased by 10%. Furthermore, whereas school means fell below the state mean scores on all subject area tests as well as the overall score in the year prior to intervention, school means during the intervention year (a) were equivalent to the state mean scores on the science test and total score and (b) exceeded state mean scores on the English and math tests. School mean scores fell below the state means on only the reading test (see Table 7).

Social Validity

Statistical analysis. Social validity data were analyzed using descriptive procedures, such as inspection of means and standard deviations of item-level and total scores.

Findings. Students rated the intervention slightly favorably, with an overall mean of 25.43 (SD = 6.33) and individual scores varying greatly (11 to 42). Inspection of individual items revealed that students felt the intervention would not negatively affect their reputation with their friends at school (M = 4.59, SD = 1.66) and that the lessons were not too difficult (M = 4.56, SD = 1.48). Narrative comments also suggested that students viewed the program as beneficial (e.g., "being able to see what the format is"). However, some students viewed the intervention as less than helpful (e.g., "make it more hands on and less talking for the teacher").

Teachers rated the intervention positively overall, with an average rating of 54.50 (SD = 15.96). There was substantial variability amongst the teachers, with total scores ranging from 29 to 76. Comments also suggested that teachers found this intervention helpful: "It was very beneficial for the students to see and discuss the various types of problems and questions they will see on

Table 7
Actual ACT Scores Over Time

               Preintervention  Intervention   Effect
Subtest        Year (N = 90)    Year (N = 90)  Size

English
  M (SD)       19.66 (5.97)     20.91 (6.50)   0.09
  Range        7–33             7–35
  Scores: 22+  29 (32.22)       40 (44.44)
  State        20.6             20.8
Math
  M (SD)       19.59 (4.29)     20.13 (4.86)   0.04
  Range        13–30            12–36
  Scores: 22+  29 (32.22)       32 (35.56)
  State        19.7             19.9
Reading
  M (SD)       19.72 (5.52)     20.76 (6.54)   0.08
  Range        9–32             8–36
  Scores: 22+  34 (37.78)       36 (40.00)
  State        20.8             21.1
Science
  M (SD)       20.10 (3.59)     20.34 (5.29)   0.02
  Range        12–29            9–36
  Scores: 22+  27 (30.00)       29 (32.22)
  State        20.2             20.3
Total
  M (SD)       19.96 (4.23)     20.65 (5.37)   0.06
  Range        12–30            11–35
  Scores: 22+  29 (32.22)       38 (42.22)
  State        20.5             20.7

Note: Scores: 22+ values are n (%). During the 2004 to 2005 administration, 90 students took the ACT; however, 1 student did not take the science subtest and, consequently, did not have a total ACT score.

the ACT." However, some teachers provided criticism of the intervention that primarily focused on being asked to prepare students for all aspects of the exam: "My knowledge was limited to Reading/English sections; therefore I typically read straight from the packet, which caused my students to zone out." Overall, the teachers and students rated the intervention as beneficial.

Discussion

As RTI and PBS models are implemented across the country, researchers and practitioners are faced with the formidable task of identifying and supporting students using targeted interventions. To date, the only published studies of secondary prevention efforts implemented within the context of three-tiered PBS models have been conducted with elementary-age students who were nonresponsive to primary prevention efforts (Lane, Wehby, Menzies, Doukas, et al., 2003; Lane, Wehby, Menzies, Gregg, et al., 2002; Walker et al., 2005). One similar study has been conducted at the elementary level (Robertson & Lane, 2007), and none have been attempted at the high school level, when learning and behavioral concerns are particularly resistant to intervention efforts (Kazdin, 1987). The absence of targeted interventions in high schools is particularly concerning given that high school is the final point of public education before students either enter the work force or pursue collegiate opportunities.

Performance on college entrance exams such as the ACT serves as an important predictor of which students will be admitted to colleges and universities (Ehrehnhaft et al., 2001; Fuller & Wehman, 2003). Yet a strong body of literature indicates that secondary students in the United States of America lack strong study skills (Jones, Slate, Bell, et al., 1991; Jones, Slate, Blake, et al., 1995; Kovach et al., 2001; Slate et al., 1993), and many schools do not offer explicit instruction to assist students in preparing for college entrance exams.

This preliminary study sought to address this void of secondary interventions at the high school level and address the limited study skills characteristic of many high school students by examining the efficacy of a targeted, secondary intervention to assist in preparing 11th-grade students to take the ACT college entrance exam. Furthermore, we extended this line of inquiry by examining the degree to which students with various academic and behavioral performance levels responded to the support provided.

Student Characteristics Associated
With Performance on the ACT

Based on the notion that (a) students with behavior problems exhibit behaviors that prompt office referrals and impede instruction and (b) students with high GPAs are apt to have strong academic and study skills, we hypothesized that students with lower rates of ODRs earned during the previous academic year and high GPAs from the previous academic year would be more responsive to the intervention, as measured by performance on the practice ACT test. Results partially confirmed our hypotheses. Although the overall models were significant, only students' academic performance in the previous year (GPA-10) was a significant predictor of postintervention English, math, reading, science, and overall test scores, after controlling for variance accounted for by ODRs earned during 10th grade. Behavioral performance during the prior year (ODR-10) was not a significant predictor of postintervention practice test scores after accounting for GPA, although the relationships between ODRs and postintervention practice test scores were all negative.


A similar pattern was observed when predicting student performance on the actual ACT. As originally hypothesized, students' postintervention scores were significant in predicting subject area test and overall ACT scores, accounting for up to 52% unique variance in the models (English subject area test). Namely, student performance on the postintervention practice test was indicative of performance on the actual ACT test. However, students' academic (GPA-10) and behavioral (ODR-10) performance were only significant in predicting performance on the English test. As expected, students with higher GPAs and fewer ODRs performed better on the English test. Yet neither of these variables explained a significant amount of unique variance over and above the postintervention practice test scores.

In terms of practical implications, there are two key points. First, given that postintervention practice test scores are highly predictive of actual ACT test performance, it is important that students develop the skill sets (e.g., study skills and content knowledge) and receive the practice opportunities needed to bolster practice test scores. Second, given that students' academic performance during 10th grade is highly predictive of postintervention practice test scores, intervention efforts should ideally begin prior to 11th grade. If students' study skills could be developed earlier (i.e., in 10th or even 9th grade), resulting in improved academic performance in 10th grade, it is likely that student performance on both the practice and, ultimately, the actual ACT test would improve as well.

Comparison of ACT Over Time

The next objective was to use a quasiexperimental design to determine the relative, practical benefit of this intervention by comparing actual ACT performance for 11th-grade students who did (2005–2006 academic year) and did not (2004–2005 academic year) participate in the intervention. We hypothesized that students who participated in the intervention program would perform better on the actual ACT test relative to the previous year's 11th-grade students, who did not participate in the program. Findings support this expectation in three ways. First, mean score comparisons across the 2 years revealed nominal improvements as measured by effect sizes. Second, the percentage of students who met the district objective of earning a total score of 22 or greater increased by 10%. Third, school mean scores met (science and total score) or exceeded (English and math) state mean scores during the intervention year, whereas during the year prior to intervention, school mean scores were below the state mean on all four subject area tests as well as the total ACT score.

Results of this quasiexperimental study have important practical implications for schools. This program was highly feasible in the sense that it demanded few instructional resources—limited teacher and student time—and no additional time beyond the school day (Lane & Beebe-Frankenberger, 2004). Thus, one could argue that the cost–benefit ratio of implementing such a program is positive in the sense that it enabled an additional 10% of 11th-grade students to meet the district objective, which also made these students eligible for monies from the Tennessee Hope Scholarship (www.collegepaystn.com/mon_college/hope_scholar.htm). This scholarship offers $3,800 for 4-year institutions and $1,900 for 2-year institutions, with monies applicable to the cost of attendance. This is an enormous incentive for students who do not have the financial resources to attend college.

Two other indices of feasibility are the high levels of treatment fidelity from teacher (89.44%) and research assistant (82.96%) perspectives as well as moderate-to-high social validity ratings from teachers and students. As expected, teacher ratings were more favorable than student ratings. Yet the intervention goals, procedures, and outcomes were rated as moderately favorable by students and more so by teachers. Gresham and colleagues have argued that treatment integrity can actually serve as a behavioral marker for social validity: if teachers implement the program as designed, then, by definition, they accept it (Gresham & Lopez, 1996). Lane and Beebe-Frankenberger (2004) contend that if an empirically sound, feasible intervention is implemented with fidelity and produces lasting, desirable changes in student behavior, the intervention is apt to be viewed as socially valid. If socially valid, the intervention is likely to be used in the future, producing a circular set of events.

Although the magnitude of the differences between years may seem low, students become increasingly less responsive to intervention efforts over time (Bullis & Walker, 1994). Therefore, the magnitude of treatment outcomes is not expected to be consistent across the K-12 grade span. For example, evidence from national norming samples for several state achievement tests suggests decreasing effect sizes for annual reading and math growth as students progress through their educational experiences (Bloom et al., 2006). Reading growth effect sizes, for instance, were 1.59 for students transitioning between kindergarten and 1st grade, 0.35 for students transitioning between 5th and 6th grades, 0.21 for students transitioning between 10th and 11th grades, and 0.03 for students transitioning between 11th and 12th grades. Effect sizes were slightly lower for math growth, with effect sizes of 1.13, 0.41, 0.15, and 0.00 for the above


named transition periods. Thus, the effect sizes produced in this study are comparable to the growth demonstrated by normative samples for reading and math (Bloom et al., 2006).

Limitations and Directions for Future Research

Results of this study should be interpreted in light of the following limitations and considerations pertaining to experimental design and outcome measures. First, this is a preliminary investigation, and additional replications are necessary to establish the validity of these findings. Specifically, because of the experimental designs employed, the inclusion of one school, and the lack of replication in alternative settings (e.g., urban schools), results must be examined cautiously. Causal conclusions cannot be drawn in the absence of a true experimental design. To draw definitive conclusions regarding outcomes associated with the intervention, future research involving more rigorous designs, such as true experimental studies with random assignment of students to intervention conditions (Gersten et al., 2005) or single-case studies with multiple baselines across students (Horner et al., 2005), is necessary. Generalization of these findings cannot be established until they are replicated in subsequent studies.

Second, because the intervention was taking place as part of regular school practices, only a limited number of

alternate versions of the practice tests to determine how the program influences students' performance on the practice ACT. However, the primary objectives of this study were to (a) determine the extent to which students' behavioral and academic characteristics (e.g., GPA and ODR) predicted performance on the actual ACT and (b) compare actual ACT performance for two cohorts of 11th-grade students, one that did and one that did not participate in the intervention. Using the same version of the practice test would not impact either of these latter objectives.

Fourth, social validity data were collected anonymously to increase both the quantity and candor of student and teacher responses. Future research could be enhanced by examining student responsiveness in relation to social validity to determine the extent to which social validity predicted intervention outcomes, as described above (Lane & Beebe-Frankenberger, 2004). Also, subsequent research could benefit from collecting social validity data prior to intervention onset to identify any potential procedural concerns. For example, some of the teachers indicated that they felt uncomfortable teaching lessons outside their specific area of expertise. Had this concern been identified prior to intervention onset, intervention delivery could have been restructured. One option would have been to alter the delivery of instruction so that students rotated through different homerooms, so that math teachers taught all math lessons, science teachers taught all science lessons, and so on.

Fifth, neither teachers nor students participated in the
outcome measures were collected. Future investigations intervention development phase. Moving forward, partici-
could be enhanced by including additional measures to pation of these stakeholders in designing both the con-
determine if (a) students’ knowledge of study skills or tent and instructional strategies and activities may lead to
study skills habits (Jones & Slate, 1992) actually increased higher levels of treatment fidelity, improved social validity
and (b) these variables (knowledge and habits) actually ratings, and ultimately even stronger student outcomes.
predicted performance on the actual ACT. In addition,
it would be interesting to see if other variables, such as
treatment integrity and teacher perceptions of social valid- Conclusion
ity, mediated intervention outcomes (Lane & Beebe-
Frankenberger, 2004). Larger scale investigations with Despite the limitations mentioned above, this prelimi-
larger sample sizes could use hierarchal linear modeling nary study addresses two voids in the literature by (a) illus-
to determine if such class level variables predicted changes trating one method of providing targeted, secondary
in student performance. interventions at the high school level and (b) examining the
Third, permission was obtained for the administration efficacy of a feasible approach to preparing 11th-grade
of only one version of the practice ACT test. Thus the pre- students for the ACT college entrance exam. Furthermore,
and postintervention practice test was the same version; this study extends this line of inquiry by using multivariate
alternate versions were not used at the two time points. procedures to examine the degree to which students’ acad-
Although 8 months lapsed between test administrations, emic and behavioral performance predicts intervention
limitations associated with using the same version should outcomes. We encourage other researchers and educators
be considered when examining outcomes of analyses pre- to continue to develop this area of interest with an overall
dicting postintervention practice scores. Future studies call to develop the methodological rigor used to support
would be improved by obtaining permission to administer high school students with three-tiered models of support.
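To make the effect-size metric used in the cohort comparison concrete, here is a minimal sketch of a pooled standardized mean difference (Cohen's d), one common way to express the difference between an intervention cohort and a comparison cohort in standard deviation units. The numbers below are hypothetical illustrations, not the study's data, and the study's exact computation may have differed (e.g., Busk & Serlin, 1992, describe variants that divide by the comparison-phase standard deviation only).

```python
import math

def cohens_d(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    """Standardized mean difference (Cohen's d) using a pooled SD.

    A positive value indicates the intervention cohort outscored
    the comparison cohort.
    """
    pooled_sd = math.sqrt(
        ((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2)
        / (n_tx + n_ctrl - 2)
    )
    return (mean_tx - mean_ctrl) / pooled_sd

# Hypothetical ACT composite statistics for two 11th-grade cohorts
# (illustrative numbers only, not taken from the study):
d = cohens_d(mean_tx=21.4, sd_tx=4.2, n_tx=126,
             mean_ctrl=20.8, sd_ctrl=4.3, n_ctrl=120)
print(round(d, 2))  # → 0.14, a small positive effect
```

Against the normative benchmarks above (e.g., 0.21 for reading growth between 10th and 11th grades), an effect of this size would be read as a modest but educationally meaningful change for students at this grade level.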


For those interested in improving student performance on college entrance exams specifically, we also encourage a focus on a more comprehensive intervention program. For example, research conducted by ACT (2005) suggests that short-term test preparation interventions may yield small, positive impacts. However, long-term interventions that included taking the recommended core college courses (ACT, 2004b) or even taking advanced courses beyond the core requirements (e.g., additional mathematics courses) produced greater increases in composite ACT scores after controlling for students' prior achievement level (ACT, 2005). Consequently, future interventions would be wise to also examine the role of the courses students completed as a predictor of ACT performance.

References

ACT. (2004a). ACT assessment. Test preparation. Retrieved June 11, 2004, from http://www.act.org/aap/testprep
ACT. (2004b). ACT high school profile report: HS graduating class 2004. Iowa City: Author.
ACT. (2005). Issues in college readiness: What kind of test preparation is best? Iowa City: Author.
Bangert-Drowns, R. L., Kulik, J. A., & Kulik, C. (1983). Effects of coaching programs on achievement test performance. Review of Educational Research, 53, 571–585.
Bloom, H., Hill, C., Black, A. R., & Lipsey, M. (2006, June). Effect sizes in educational research: What they are, what they mean, and why they're important. Paper presented at the Institute for Educational Sciences 2006 Research Conference, Washington, DC.
Bullis, M., & Walker, H. M. (1994). Comprehensive school-based systems for troubled youth. Eugene: University of Oregon, Center on Human Development.
Busk, P. L., & Serlin, R. C. (1992). Meta-analysis for single-case research. In T. Kratochwill & J. Levin (Eds.), Single case research design and analysis (pp. 187–212). Hillsdale, NJ: Lawrence Erlbaum.
Dreisbach, M., & Keogh, B. K. (1982). Testwiseness as a factor in readiness test performance of young Mexican-American children. Journal of Educational Psychology, 74, 224–229.
Ehrenhaft, G., Lehrman, R. L., Obrecht, F., & Mundsack, A. (2001). Barron's how to prepare for the ACT assessment. Hauppauge, NY: Barron's Educational Series.
Fuchs, D., & Fuchs, L. (1994). Inclusive schools movement and the radicalization of special education reform. Exceptional Children, 60, 294–309.
Fuller, W. E., & Wehman, P. (2003). College entrance exams for students with disabilities: Accommodations and testing guidelines. Journal of Vocational Rehabilitation, 18, 191–197.
Gersten, R., Fuchs, L. S., Compton, D., Coyne, M., Greenwood, C., & Innocenti, M. S. (2005). Quality indicators for group experimental and quasi-experimental research in special education. Exceptional Children, 71, 149–164.
Gresham, F. M. (2002). Responsiveness to intervention: An alternative approach to learning disabilities. In R. Bradley, L. Danielson, & D. Hallahan (Eds.), Identification of learning disabilities: Research to practice. Mahwah, NJ: Lawrence Erlbaum.
Gresham, F. M., & Lopez, M. F. (1996). Social validation: A unifying construct for school-based consultation research and practice. School Psychology Quarterly, 11, 204–227.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S. L., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practices in special education. Exceptional Children, 71, 165–179.
Jones, C. H., & Slate, J. R. (1992). Technical manual for the Study Habits Inventory. Unpublished manuscript, Arkansas State University.
Jones, C. H., Slate, J. R., Bell, S., & Saddler, C. (1991). Helping high school students improve their academic skills: A necessary role for teachers. High School Journal, 74, 198–202.
Jones, C. H., Slate, J. R., Blake, P. C., & Sloas, S. (1995). Relationship of study skills, conceptions of intelligence, and grade level in secondary school students. High School Journal, 79, 25–32.
Jones, C. H., Slate, J. R., Mahan, K., Green, A., Marini, I., & DeWater, B. (1993/1994). Study skills of college students as a function of gender, age, class and grade point average. Louisiana Education Research Journal, 19, 60–74.
Kazdin, A. E. (1987). Problem solving skills training and relationship therapy in the treatment of antisocial child behavior. Journal of Consulting and Clinical Psychology, 55, 76–85.
Kern, L., & Manz, P. (2004). A look at current validity issues of school-wide behavior support. Behavioral Disorders, 30, 47–59.
Kovach, K., Fleming, D., & Wilgosh, L. (2001). The relationship between study skills and conceptions of intelligence for high school students. Korean Journal of Thinking and Problem Solving, 11, 39–49.
Lane, K. L., & Beebe-Frankenberger, M. (2004). School-based interventions: The tools you need to succeed. Boston: Pearson Education.
Lane, K. L., & Beebe-Frankenberger, M. (2007). An integrated intervention model for school-based research. Manuscript in preparation.
Lane, K. L., & Menzies, H. M. (2003). A school-wide intervention with primary and secondary levels of support for elementary students: Outcomes and considerations. Education and Treatment of Children, 26, 431–451.
Lane, K. L., & Parks, R. J. (2007, March). Using school-wide data to identify students for targeted interventions across the K-12 grade span: Illustrations and recommendations. Paper presented at the Fourth International Conference on Positive Behavior Support, Boston, MA.
Lane, K. L., Robertson, E. J., & Graham-Bailey, M. A. L. (2006). An examination of school-wide interventions with primary level efforts conducted in secondary schools: Methodological considerations. In T. E. Scruggs & M. A. Mastropieri (Eds.), Applications of research methodology: Advances in learning and behavioral disabilities (Vol. 19). Oxford, UK: Elsevier.
Lane, K. L., Umbreit, J., & Beebe-Frankenberger, M. (1999). A review of functional assessment research with students with or at-risk for emotional and behavioral disorders. Journal of Positive Behavior Interventions, 1, 101–111.
Lane, K. L., Wehby, J., Menzies, H. M., Doukas, G. L., Munton, S. M., & Gregg, R. M. (2003). Social skills instruction for students at risk for antisocial behavior: The effects of small-group instruction. Behavioral Disorders, 28, 229–248.


Lane, K. L., Wehby, J. H., Menzies, H. M., Gregg, R. M., Doukas, G. L., & Munton, S. M. (2002). Early literacy instruction for first-grade students at-risk for antisocial behavior. Education and Treatment of Children, 25, 438–458.
Lane, K. L., Wehby, J., Robertson, E. J., & Rogers, L. (2007). How do different types of high school students respond to positive behavior support programs? Characteristics and responsiveness of teacher-identified students. Journal of Emotional and Behavioral Disorders, 15, 3–20.
Luiselli, J. K., Putnam, R. F., & Sunderland, M. (2002). Longitudinal evaluation of behavior support intervention in a public middle school. Journal of Positive Behavior Interventions, 4, 182–188.
Martens, B. K., Witt, J. C., Elliott, S. N., & Darveaux, D. X. (1985). Teacher judgments concerning the acceptability of school-based interventions. Professional Psychology: Research and Practice, 16, 191–198.
May, S., Ard, W., III, Todd, A. W., Horner, R. H., Glasgow, A., Sugai, G., et al. (2000). School-wide Information System. Eugene: University of Oregon, Educational and Community Supports.
No Child Left Behind Act, 20 U.S.C. 70 § 6301 (2001).
Ritter, S., & Idol-Maestas, L. (1986). Teaching middle school students to use a test-taking strategy. Journal of Educational Research, 79, 350–357.
Robertson, E. J., & Lane, K. L. (2007). Supporting middle school students with academic and behavioral concerns within the context of a three-tiered model of support: Findings of a secondary intervention. Manuscript submitted for publication.
Sampson, G. E. (1985). Effects of training in test-taking skills on achievement test performance: A quantitative synthesis. Journal of Educational Research, 78, 261–266.
Scruggs, T. E., & Mastropieri, M. A. (1992). Teaching test-taking skills. Brookline, MA: Brookline Books.
Slate, J., Jones, C., & Dawson, P. (1993). Academic skills of high school students as a function of grade, gender, and academic track. High School Journal, 76, 245–251.
Sprague, J., Walker, H., Golly, A., White, K., Myers, D. R., & Shannon, T. (2001). Translating research into effective practice: The effects of a universal staff and student intervention on indicators of discipline and school safety. Education and Treatment of Children, 24, 495–511.
Sugai, G., & Horner, R. (2002). The evolution of discipline practices: School-wide positive behavior supports. Child and Family Behavior Therapy, 24, 23–50.
Walker, B., Cheney, D., Stage, S., & Blum, C. (2005). Schoolwide screening and positive behavior supports: Identifying and supporting students at risk for school failure. Journal of Positive Behavior Interventions, 7, 194–204.

Kathleen Lynne Lane is an assistant professor in the Department of Special Education of Peabody College at Vanderbilt University. Her research program primarily focuses on designing, implementing, and evaluating multilevel, school-based interventions to prevent the development of learning and behavior problems for students at-risk for EBD and remediate the deleterious effects of existing problems exhibited by students with EBD.

Jemma Robertson Kalberg graduated from the Special Education Department of Peabody College at Vanderbilt University in 2006. Her interests include schoolwide screeners to identify students with or at-risk for emotional and behavioral disorders, positive behavior support interventions, and secondary interventions supporting both academic and behavioral deficits.

Emily Mofield graduated from the Department of Special Education at Peabody College of Vanderbilt University in 2004. She is a doctoral candidate in the Department of Teaching and Learning at Tennessee State University and a special education teacher for Sumner County Schools in Middle Tennessee.

Joseph H. Wehby is an associate professor in the Department of Special Education at Peabody College of Vanderbilt University. His research interests include teacher–student interactions as well as academic and behavioral interventions for students with emotional or behavioral disorders.

Robin J. Parks is a special education teacher at the Hoover High School Freshman Campus in Hoover, Alabama, and is a recent graduate from the Department of Special Education of Peabody College at Vanderbilt University.
