Background Information

The 2016-2017 school year has seen the implementation of procedures addressing
components of Nevada Senate Bill 391, which is commonly referred to as Read by Three.
The overarching idea is that students who are not proficient by third grade, according to the
statewide criterion-referenced test in place in 2019, will be retained. Implementation this
year requires that all students, even my fifth graders, be tested using assessments deemed
appropriate by the bill so that interventions can be provided that would, hopefully, prevent the
necessity of retention by the time that part of the bill is enacted. As such, several of the
assessments included in this directed learning experience are ones that I would actually be
required to use per administrative and legislative directives.
Both the supplementary and complementary assessments and those I have no choice in
giving were chosen with the National Reading Panel's five components of reading in mind.
Those five components are phonemic awareness, phonics, vocabulary, fluency, and
comprehension (NICHD, n.d.). Of those components, I chose assessments for vocabulary,
fluency, and comprehension. Though spelling and writing are not included as key components,
they do employ elements of phonics and phonemic awareness, and they are necessary for a more
well-rounded picture of a student as a whole.
Introduction
My principal will be escorting a new student, Alex, into my already overcrowded
classroom sometime next week. Beyond the fact that he is a 10-year-old fifth-grade student who
has recently moved to Las Vegas from out of state, no other records are available to me at this
time. Not only will I want to get to know and welcome Alex as a new student to our classroom,
but school policy and recent state legislation also demand that I get to know Alex by means of
assessment.
The following assessments would allow me to address the requirements of my
administration and the new Read by Three law while also determining how best to address the
interests and needs of the newest addition to my classroom.
Student Interest Assessments
One of the first assessments that I would give to Alex would be a student inventory to
gauge his feelings toward school, and reading in particular. There are two such assessments that
I used with my students at the beginning of this school year. The first one was taken from the
Scholastic website. Its questions asked students about a variety of topics: who they live with,
what they would buy with a million dollars, and what subjects they feel strongest and weakest in
at school. From this assessment, I was able to begin assembling a picture of my students as
individuals as well as glimpse how they viewed themselves. In my experience, most students
tend to be honest on the Scholastic inventory about their in-class behavior and about what might
help them to be successful during the upcoming school year.
New in my classroom this year was a second inventory that focused primarily on
students' reading habits. The reason for a second, more specific, student interest assessment was
to aid in building a classroom community of readers. Based on readings about motivating
students to read (Guthrie, 2015), I decided to build reading relationships between myself and my
students. Guthrie discusses how relevance relates to motivation in the classroom: "Appealing to
students' interest is a popular motivational approach. . . . If their curiosities can be identified
through interest inventories, they may become engrossed in a book or a topic and learn to find
satisfaction through literacy" (p. 66). Reading students' responses about what influences their
selection of books, their favorite books, and their recent reading habits allowed me to have
personal, meaningful individual discussions with them about reading. This led to book
suggestions tailored to student interests and a classroom climate that encouraged and supported
reading. Whether reading for leisure or for academic purposes, our classroom understands and
values its importance.
Although the aforementioned interest assessments are not on the Read by Three approved
list of assessments for informing instructional practices, I would give them to Alex anyway in
order to get to know him. His responses would also help him acclimate to our literary classroom
environment by making connections with students who share his interests. Once further
assessments show his skills in fluency, vocabulary, and comprehension, knowing his personal
interests and academic strengths would guide my choice of teaching strategies and content for
both the whole- and small-group activities involving Alex.
Fluency Assessment
Kitty Ward Elementary School requires that all students' oral reading fluency be assessed
using the AimsWeb Reading Curriculum-Based Measurement (R-CBM) assessment. The
purpose of evaluating in this manner is to determine a child's fluency in terms of how many
words he or she is able to read correctly under a one-minute time constraint. According to the
AimsWeb resources provided on the website:

More than 30 years of research has shown that listening to a child read graded passages
aloud for 1 minute and calculating the number of words read correct per minute provides
a highly reliable and valid measure of general reading achievement, including
comprehension, for most students. (2014)
An examination of the research alluded to in the above quote does shed some light on the
purported significance of using CBM assessments. Lynn Fuchs offers: "For this reason, passage
reading fluency produces a broad dispersion of scores across individuals of the same age, which
results in strong correlations with measures of reading comprehension, decoding, word
identification, and vocabulary" (2004, p. 189).
When the R-CBM is given as a benchmark assessment to initially determine a student's
level, three grade-level passages (as deemed appropriate by AimsWeb) are read aloud for one
minute each. Typically, Alex would be given a printed version of each passage to read from
while I followed along using the web-based version, which allows me to select and highlight in
red the words that Alex omits or otherwise reads incorrectly. Scores for each passage are given
in terms of words read correctly and errors, with the latter including omissions. Alex's final
score would be the median of the three passage scores.
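To make that arithmetic concrete, here is a minimal sketch of the benchmark calculation in Python. The function name and all of the word counts are hypothetical illustrations of mine, not part of the AimsWeb software.

```python
from statistics import median

def score_rcbm(passages):
    """Score an R-CBM benchmark. Each passage is a (words_read, errors)
    tuple from one timed minute; words read correctly (WRC) is words
    attempted minus errors, and the benchmark score is the median WRC
    across the three passages."""
    wrc_scores = [words_read - errors for words_read, errors in passages]
    return median(wrc_scores)

# Hypothetical results from three one-minute readings:
# (total words attempted, errors including omissions)
alex_passages = [(142, 6), (128, 4), (150, 9)]
print(score_rcbm(alex_passages))  # median of [136, 124, 141] -> 136
```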
A drawback of administering this assessment as prescribed is that it relies solely on
reading rate without giving any credence to prosody, which can have a bearing on a student's
success in understanding what was read. Just as literature exists to support the R-CBM, so does
literature exist to question its validity as the only measure of fluency. Theresa Deeney (2010)
notes that accuracy and speed are aptly measured using assessments such as the R-CBM;
however:

One-minute measures help us identify students who cannot read accurately and quickly.
This is useful information. Yet to focus instruction, we need additional information. Most
important, we need to understand why students are dysfluent and the effects of time on
students' reading. (p. 443)
Though the R-CBM test is meant to assess oral reading fluency, by the company's own
admission, as quoted above from the AimsWeb website, it may also be an indication of a
student's comprehension. I usually encounter a few students each school year whose word
counts meet or exceed grade-level expectations but who, it is later revealed, struggle with
comprehending what they have read. For that reason, when assessing Alex, I would complement
the prescribed scores with notes of my own.
Rather than using the web-based electronic version, I would print physical copies for
myself to write on. This way I would be able to annotate Alex's responses as if the R-CBM were
a running record. In my experience, this provides more useful information than the word count
does. Research also provides that "Through careful examination of error patterns, a teacher can
determine which strategies the student is using and which strategies the student is failing to use.
These observations can provide information about areas in need of further instruction . . ."
(Hudson, Lane, & Pullen, 2005, p. 705). Instead of only marking whether Alex read a word
incorrectly, I would code each error more specifically. When considering what Alex's strengths
and weaknesses might be, knowing whether Alex self-corrected, transposed letters, or replaced
words with or without loss of meaning would be more beneficial than just knowing how many
words he was able to read in one minute. The results from the adjusted scoring would allow me
to determine more strategically how to instruct him in fluency. It might turn out that he is strong
in automaticity but could benefit from working with a partner strong in intonation and
inflection.
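As a sketch of what that adjusted scoring might look like, the short Python snippet below tallies coded miscues from one reading. The category labels and the sample annotations are my own shorthand for illustration, not a running-record or AimsWeb standard.

```python
from collections import Counter

# Hypothetical miscue codes I might jot on the printed passage.
SELF_CORRECT = "self-correction"
TRANSPOSITION = "letter transposition"
MEANING_KEPT = "substitution (meaning kept)"
MEANING_LOST = "substitution (meaning lost)"
OMISSION = "omission"

def summarize_miscues(annotations):
    """Tally coded miscues so patterns stand out: e.g., many
    self-corrections with few meaning-lost substitutions would suggest
    strong self-monitoring despite a modest one-minute word count."""
    return Counter(annotations)

sample = [SELF_CORRECT, MEANING_KEPT, OMISSION, SELF_CORRECT, TRANSPOSITION]
for code, count in summarize_miscues(sample).items():
    print(f"{code}: {count}")
```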
Comprehension Assessments
AimsWeb also provides the method by which Alex must be assessed for comprehension.
A description of the assessment is provided on the website:
Maze is a multiple-choice cloze task that students complete while reading silently. The
first sentence of a 150-400 word passage is left intact. Thereafter, every 7th word is
replaced with three words inside parenthesis. One of the words is the exact one from the
original passage. (2014)
Students are given three minutes to read and circle as many of the correct parenthetical words as
they can. Alex's score would be determined by how many words he identified correctly, as
compared to criterion-referenced scores established by AimsWeb.
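To illustrate the mechanics of the task itself, here is a rough Python sketch of how a Maze-style passage could be constructed. The distractor-selection rule and the sample sentence are my own simplifications; AimsWeb's actual generation rules are not described at this level of detail.

```python
import random

def build_maze_items(words, distractor_pool, interval=7, seed=0):
    """Rough sketch of Maze construction: every `interval`-th word is
    replaced with three parenthesized choices, one of which is the
    original word. (The real task also leaves the first sentence intact;
    the distractor rule here is my own guess.)"""
    rng = random.Random(seed)
    out = []
    for i, word in enumerate(words, start=1):
        if i % interval == 0:
            # Pick two distractors that differ from the original word.
            distractors = rng.sample([d for d in distractor_pool if d != word], 2)
            choices = [word] + distractors
            rng.shuffle(choices)
            out.append("(" + " / ".join(choices) + ")")
        else:
            out.append(word)
    return " ".join(out)

text = "The hikers followed the narrow trail until it reached a quiet lake near the tall ridge".split()
print(build_maze_items(text, ["pencil", "jumped", "seven", "blue", "quickly"]))
```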
I have long contested the ability of the Maze assessment to accurately depict students'
comprehension, depending as it does upon how quickly and accurately a student is able to
identify words in a passage. Students know that scores depend upon responses given in a fixed
amount of time, which may not be conducive to spending time closely examining a text. No
questions or discourse are required of students; in fact, a student need only look at the words
immediately preceding and following the parenthetical word choices, and it is often possible to
discern which of the three words makes the most sense. In the case of those students whose
scores fall far above or below what is expected, it may be possible to accurately assume that
those students are truly adept or struggling (respectively) at comprehending text. However, as is
the case with the R-CBM assessment for fluency, no specific areas for improvement or
enrichment can be identified as a result of the test as it is intended to be given and scored. The
well-known literacy researcher Timothy Shanahan conducted a study to determine "the
sensitivity of cloze passages as measures of the ability to use information across sentence
boundaries" (1982, p. 229). At the study's end, Shanahan concluded that
What they [studies] do suggest is that cloze, as now recommended for classroom use,
does not usually measure intersentential information integration. It might be possible to
design cloze tests to measure this ability, as is already possible with question and answer
tests (Bormuth, Carr, Manning, & Pearson, 1970; Pearson, 1974-75). However, until
valid guidelines are developed which indicate how this is to be done, it seems to be
unreasonable to use and interpret cloze in classroom practice as a global measure of
reading comprehension. (p. 250)
Though I am obliged by school policies to use the Maze assessment, to better suit my
needs as a teacher assessing a new student, I would also use the Morris Informal Reading
Inventory, which assesses a student's fluency, comprehension, and reading habits and strategies
during the oral reading of a passage. More than one passage is typically given in order to find
the student's instructional and frustration levels. Because I would have already modified the
R-CBM assessment into a running record for information about Alex's oral fluency, I would
likely use the Morris assessment for comprehension only. That second assessment would be
supplemental, so I would base its administration upon the scores from Alex's R-CBM and Maze.
If I knew from the oral fluency assessment that Alex struggles with reading aloud, I would read
the Morris passage aloud to him. That would provide a much clearer picture of his
comprehension as a product of understanding as opposed to decoding.

Writing Assessment
The last assessment provided by AimsWeb is the Written Expression Curriculum-Based
Measurement (WE-CBM). Specific research for this particular test is provided in the assessment
manual (Aimsweb, 2004):
School-based research (Deno, Marston, & Mirkin, 1982; Deno et al., 1982; Marston,
1989; Marston & Deno, 1981; Marston, Lowry, Deno, & Mirkin, 1981; Videen, Deno, &
Marston, 1982) has shown that having students write a story for 3 minutes given an
age-appropriate story starter is a reliable and valid general outcome measure of general
written expression for typically achieving students through Grade 6 and for students with
severe problems. (p. 4)
The WE-CBM is administered by providing students with a grade-appropriate writing
prompt and giving the student three minutes to write the best story they can. Student responses
are scored on three criteria. The first of these is Total Words Written: each word, regardless of
spelling and subject to rules outlined in the assessment manual, is tallied toward the Total Words
Written. Next, a student's response is assessed for Correct Writing Sequences, which are defined
as "Two adjacent writing units (words and punctuation) that are correct within the context of
what is written" (Aimsweb, 2004, p. 11). Finally, the written expression response is combed
through for Words Spelled Correctly. As is the case with the R-CBM and Maze assessments,
criterion-referenced scoring tables exist to determine whether a student's scores are well below,
below, at, or above average.
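The three tallies are simple enough to sketch in Python. This is only an approximation of mine: the real Correct Writing Sequences rules also weigh punctuation, capitalization, and syntax per the scoring manual, and the word list and sample response below are hypothetical.

```python
import re

def wecbm_scores(response, word_list):
    """Sketch of the three WE-CBM tallies: Total Words Written (TWW)
    counts every word regardless of spelling; Words Spelled Correctly
    (WSC) checks each word against a word list; Correct Writing
    Sequences (CWS) is simplified here to adjacent pairs of correctly
    spelled words."""
    words = re.findall(r"[A-Za-z']+", response)
    tww = len(words)
    correct = [w.lower() in word_list for w in words]
    wsc = sum(correct)
    cws = sum(1 for a, b in zip(correct, correct[1:]) if a and b)
    return tww, wsc, cws

lexicon = {"the", "dog", "ran", "to", "park", "and", "barked"}
print(wecbm_scores("The dog rann to the park and barked", lexicon))
# -> (8, 7, 5): "rann" costs one WSC and breaks two adjacent sequences
```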
I do think that the scoring methods have some merit, but they would give me little
direction for how to provide instruction to students. Provided within that same manual (2004)
are checklists that address qualitative features of student writing such as structure, mechanics,
and semantics (p. 15). That information would be more beneficial for teaching Alex. The
features detailed on the checklists echo the elements of the writing rubrics created and used by
teachers at Kitty Ward in all genres of writing. To align Alex's scores more closely with how he
would be graded in class, I would use the school-generated rubrics instead. In response to a
narrative writing prompt, I would be able to tell whether Alex was proficient or deficient in
specific areas such as establishing a narrator and using dialogue. There would be one adjustment
in the administration of the assessment for Alex: instead of asking him to stop writing once the
timed three minutes are up, I would ask him to draw an asterisk after the final word written
under the time constraint and then continue writing until he felt the story was finished. I feel
that allowing Alex a chance to conclude his writing is important, as writing conclusions is a
feature assessed on all of the genre rubrics (narrative, informative, and opinion) used at my
school.
Spelling Assessment
Interestingly enough, my school's curriculum does not include a focus on spelling,
insisting instead that we focus upon word meaning. Even so, another of the required benchmark
assessments at my school is the Words Their Way Elementary Qualitative Spelling Inventory
(QSI). Essentially a spelling test, this inventory is designed to assess students' "knowledge of
key spelling features that relate to the different spelling stages" (Bear, Invernizzi, Templeton, &
Johnston, 2008, p. 28).
In addition to providing information about spelling abilities, results of the QSI might also
have implications for Alex's phonemic awareness, phonics, and general reading achievement.

Linnea Ehri, a distinguished professor of educational psychology, offered the following when
considering the role of spelling in classroom instruction:
Withholding spelling instruction or subordinating it to the job of computer spell checkers
is indefensible because of the important role that spelling plays in learning to read. Poor
spellers do not develop into skilled readers. Spelling instruction must remain an
important goal of teachers and schools. (2000)
So, though I am not provided support, curriculum, or materials for teaching spelling, and results
of the QSI are not recorded for student grades, I would be mindful of the implications of Alex's
scores anyway.
The Elementary Spelling Inventory (ESI) is composed of 25 words that increase in
difficulty from the beginning of the assessment to the end. Each of the 25 words would be read
aloud, with Alex recording his responses on a numbered piece of paper. The scoring of the ESI
differs from typical spelling assessments in that it is the features of the words (e.g., short vowels,
inflected endings) that earn the points rather than whether the word as a whole is spelled
correctly. In this way, I would be able to determine where Alex is most successful in spelling
and what features, if any, he may need to focus on.
School and district mandates dictate that Alex's proficiency in spelling be established
based upon the last feature of the QSI in which he made one or fewer errors. However, in order
to provide instruction to Alex, I would look more closely. If Alex were successful in spelling 20
or more of the words on the ESI, I would then administer the Upper-Level Spelling Inventory
(USI), which assesses the same spelling features but employs words that are more difficult than
those of the ESI. Conversely, if Alex were unsuccessful in spelling 8 or more words on the ESI,
there is a Primary Spelling Inventory (PSI) with lower-leveled words. Whichever of the
assessments proved most appropriate would then be examined to find the first feature in which
Alex made two or more errors, which would indicate where to provide instruction. To this point,
I would note that if Alex's results from the oral reading assessment, the R-CBM, appeared to be
lower than is expected of a fifth-grade student at that point in the school year, it would be
worthwhile to see whether he struggled with early spelling features such as digraphs or blends
on the spelling inventories. That may provide additional information for instruction in fluency.
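The decision logic just described can be summarized in a short Python sketch. The thresholds come from the paragraph above; the feature names and error counts are hypothetical examples of mine, not values from the Words Their Way materials.

```python
def next_spelling_steps(words_correct, feature_errors, total_words=25):
    """Decision sketch for the ESI: 20+ words correct -> administer the
    USI; 8+ words misspelled -> administer the PSI; instruction targets
    the first feature (in the inventory's developmental order) with two
    or more errors. `feature_errors` is an ordered list of
    (feature_name, errors) pairs."""
    if words_correct >= 20:
        follow_up = "administer Upper-Level Spelling Inventory (USI)"
    elif total_words - words_correct >= 8:
        follow_up = "administer Primary Spelling Inventory (PSI)"
    else:
        follow_up = "ESI results sufficient"
    focus = next((name for name, errs in feature_errors if errs >= 2), None)
    return follow_up, focus

features = [("short vowels", 0), ("digraphs", 1), ("blends", 2), ("inflected endings", 3)]
print(next_spelling_steps(words_correct=18, feature_errors=features))
# -> ('ESI results sufficient', 'blends'): instruction would start at blends
```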
Vocabulary Assessment
As a final assessment component to complete a profile using all the elements of reading, I
would give Alex a vocabulary assessment. This particular test is the CORE Vocabulary
Screening.
In order for students to comprehend the entirety of a text, it is necessary for them to
understand the meanings of the words that comprise it. The CORE assessment notebook (Honig,
Diamond, & Gutlohn, 2008) describes vocabulary as "critical to understanding grade-appropriate
text. Even students who are good decoders will have difficulty comprehending what they read if
they do not have adequate vocabulary knowledge" (p. 120). Given the test at the fifth-grade
level, Alex would read a word in a box and then choose which of three provided answer choices
has about the same meaning as the initial word.
At the completion of the assessment, I would be able to see how many grade-appropriate
words Alex is already familiar with. However, there are limitations to the assessment. Because it
is multiple-choice, the possibility exists that answers were selected by luck; with three choices
per item, guessing alone would yield roughly one-third of the items correct. Another issue is that
the assessment relies solely on a student's background knowledge of the words. It does not
account for skills, such as using context clues, that would aid in determining a word's meaning.
A workaround for this particular drawback would be to provide a few of the vocabulary words in
sentences with open-ended questions that would require Alex to supply a written (or oral)
answer. As written, the assessment is best used to determine whether very low results signal that
Alex is likely to struggle with understanding grade-level material because of insufficient
vocabulary knowledge. However, if Alex were to perform well on each kind of question, that
would indicate that his vocabulary skills were proficient, and I would provide instruction to
further enrich his word knowledge.
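The guessing concern can be quantified with a quick binomial calculation. The item count below is hypothetical; the point is simply that only scores well above the chance level carry much information.

```python
from math import comb

def p_at_least(k, n, p=1/3):
    """Chance of getting at least k of n three-choice items right by
    guessing alone (binomial upper tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# On a hypothetical 30-item screen, pure guessing averages ~10 correct,
# so a score of 15 is already unlikely (~4%) to arise from luck alone.
print(round(p_at_least(15, 30), 3))
```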
Conclusion
There is no way to avoid giving the multitude of assessments required and insisted upon
by my school administration and Clark County School District policies. What is possible,
though, is looking at those assessments with a critical lens, reading and understanding some of
the research that supports such measures. For most of the assessments that I detailed throughout
this learning experience, research was available to justify the necessity of administration.
However, I was also able to find research to the contrary. Examining multiple perspectives on
each measure allowed me to think critically about the results, benefits, and drawbacks of each
test. As an educator trying to meet students' specific needs, I feel it would be beneficial,
necessary even, to consider how best to use the results of any assessment to instruct students in a
purposeful way. In supplementing or altering the administration of mandated assessments, I
would be able to construct a plan for assessing Alex that shows a more complete profile of him
as a student of literacy.

References
Aimsweb. (2004). Administration and scoring of written expression. Retrieved October 21,
2016, from http://www.aimsweb.com/wp-content/uploads/Written-Expression-CBM-Manual.pdf
Aimsweb. (2014). Cloze tasks from aimsweb Maze CBM. Retrieved October 21, 2016, from
http://www.aimsweb.com/assessments/features/assessments/maze
Aimsweb. (2014). Reading assessment resource for educators. Retrieved October 21, 2016, from
http://www.aimsweb.com/assessments/features/assessments/reading-cbm
Bear, D. R., Invernizzi, M., Templeton, S., & Johnston, F. (2008). Words their way: Word study
for phonics, vocabulary, and spelling instruction (4th ed.). Upper Saddle River, NJ:
Pearson Prentice Hall.
Deeney, T. (2010). One-minute fluency measures: Mixed messages in assessment and
instruction. The Reading Teacher, 63(6), 440-450. Retrieved from
http://www.jstor.org.ezproxy.library.unlv.edu/stable/25615834
Ehri, L. C. (2000). Learning to read and learning to spell: Two sides of a coin. Topics in
Language Disorders, 20(3), 19-36. doi:10.1097/00011363-200020030-00005
Fuchs, L. S. (2004). The past, present, and future of curriculum-based measurement research.
School Psychology Review, 33(2), 188-192.
Guthrie, J. T. (2015). Best practices for motivating students to read. In L. B. Gambrell (Ed.),
Best practices in literacy instruction. New York, NY: Guilford Press.
Honig, B., Diamond, L., & Gutlohn, L. (2008). Assessing reading multiple measures: For all
educators working to improve reading achievement (2nd ed.). Novato, CA: Arena Press.
Hudson, R., Lane, H., & Pullen, P. (2005). Reading fluency assessment and instruction: What,
why, and how? The Reading Teacher, 58(8), 702-714. Retrieved from
http://www.jstor.org.ezproxy.library.unlv.edu/stable/20204298
National Institute of Child Health and Human Development. (n.d.). Report of the National
Reading Panel. Retrieved from
https://www.nichd.nih.gov/publications/pubs/nrp/pages/findings.aspx
Shanahan, T., Kamil, M., & Tobin, A. (1982). Cloze as a measure of intersentential
comprehension. Reading Research Quarterly, 17(2), 229-255. Retrieved from
http://www.jstor.org.ezproxy.library.unlv.edu/stable/747485
