Comprehension Questions - Differences Among Standardized Tests - Doris
Your use of the JSTOR archive indicates your acceptance of the Terms & Conditions of Use, available at
http://about.jstor.org/terms
JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted
digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about
JSTOR, please contact support@jstor.org.
Wiley, International Reading Association are collaborating with JSTOR to digitize, preserve and extend
access to Journal of Reading
This content downloaded from 142.66.3.42 on Wed, 10 Aug 2016 23:45:18 UTC
All use subject to http://about.jstor.org/terms
Crowell is a curriculum researcher,
Au is an educational psychologist,
and Blake is a research assistant at
the Kamehameha Early Education
Program in Honolulu, Hawaii.
tests consist of short passages followed by multiple choice questions. The student reads each passage and then chooses the best answer to each question. In some cases the student must select the word or phrase which best completes a sentence.

What is comprehension?
The term reading comprehension has many possible meanings. Difficulties in arriving at a precise definition are discussed in Pearson and Johnson (1978). For the purposes of this study, we assumed that there are three aspects to defining and understanding reading comprehension. First, it is the ability to understand text, demonstrated by answering comprehension subtest questions or items correctly. Second, we assumed that reading comprehension involves a number of different kinds of skills, some easy and others more difficult to master. Finally, a related assumption was that students' comprehension abilities develop gradually over time.

With these aspects in mind, we raised two issues. First, do the tests assess different kinds of reading comprehension skills? Second, are the various grade level forms designed to detect qualitative development in comprehension abilities? That is, it is important to know whether students are improving in their use of more difficult comprehension skills. We feel that comprehension tests intended for students in higher grades should show a greater proportion of questions involving "reading between" and "reading beyond" the lines, since this represents a more sophisticated type of reading comprehension.

To address these issues we used two systems that emphasize different sets of factors for categorizing questions. One, developed by Pearson and Johnson (1978), focuses on the sources of information required to answer comprehension questions correctly and shows the relationship between information found in the text and information that the child already has. The second system, by Crowell and Au (1981a, 1981b), classifies questions according to the level of complexity of the thinking processes that are required to produce a satisfactory answer. The following sections describe each category system and the results obtained.

Explicit/implicit
Pearson and Johnson proposed a simple taxonomy of questions to deal with the relationships between information presented in the text and information from the reader's previous knowledge. The three kinds of question-answer relationships they propose are text explicit, text implicit, and script implicit. Definitions of the categories were refined by Raphael (in press), and the descriptions given here draw on her work as well. Interrater reliability in categorizing test questions by this taxonomy ranged from 94% to 97%.

In a text explicit relationship, both question and answer can be derived from the text, and the answer is explicitly cued, either logically or grammatically. The answer is "right there," usually in the same sentence as the question stem.

In a text implicit relationship, both the question and answer can be derived from the text, but one or more steps of inference are required to get from the question to the answer. To identify the correct answer, the student must "think and search."

In a script implicit relationship, the answer cannot be derived from the text, but can only be determined with
reference to background knowledge. For the student, this is an "on my own" situation, where the answer must come from information in his or her own memory.

The first set of analyses looked at whether the test items tapped all three question-answer relationships. The results indicated that the tests attended to just two kinds of relationships, text explicit and text implicit. Script implicit items were so few in number that they will not be discussed further.

The six tests included many more text explicit than text implicit questions. Only the sixth grade level of the Gates contained no text implicit questions. The percentage of these questions varied considerably, however, with averages (the three grade levels combined) as follows: MAT 39%, CTBS 30%, CAT 28%, SAT 19%, ASAT 11%, and Gates 6% (see Figure 1).

How well can we assess children's growth in comprehension skills from year to year, using the various levels of a test? If text implicit questions really are more difficult than text explicit questions, an answer appears

the story or passage that the child can recall. (2) Categorization questions require the child to classify story characters (e.g., as good or bad), and to justify the response with information from the story. (3) Seriation questions deal with relationships among details, including cause and effect or the sequence of events. (4) Integration questions require the child to combine elements of the story into a coherent structure not necessarily given by the story itself (e.g., summarizing the story or giving the main idea). (5) Extension questions require the child to apply an understanding of the story (e.g., relating the story to other stories or suggesting a plausible alternate ending). Questions requiring the use of higher level thinking skills (levels 2-5) should be the more difficult to answer.

Interrater reliability for the coding of test questions with this system was 90%.

Because categorization and seriation questions (levels 2 and 3) are quite similar in difficulty (Crowell and Au, 1981b), these two levels were treated as one. No level 5 items were
Figure 1
Percentage of text implicit questions across grades in the six tests
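Percentages like those reported here and plotted in Figure 1 are simple tallies: the count of questions coded into each category, divided by the total number of questions. A minimal sketch of that bookkeeping (the category names follow Pearson and Johnson; the item data below are hypothetical, not from the study):

```python
from collections import Counter

# Pearson and Johnson's three question-answer relationships.
CATEGORIES = ("text explicit", "text implicit", "script implicit")

def category_percentages(codings):
    """Given one category code per test question, return the
    percentage of questions falling in each category."""
    counts = Counter(codings)
    total = len(codings)
    return {cat: round(100 * counts[cat] / total) for cat in CATEGORIES}

# Hypothetical coding of a 20-item comprehension subtest.
codings = ["text explicit"] * 13 + ["text implicit"] * 6 + ["script implicit"]
print(category_percentages(codings))
# {'text explicit': 65, 'text implicit': 30, 'script implicit': 5}
```

The per-test averages in the text would then be the mean of these percentages over a test's three grade level forms.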
tests was as follows: CTBS 52%, MAT 49%, SAT 32%, CAT 29%, Gates 12%, and ASAT 11%.

Our second question, on assessing qualitative development of comprehension skills, was addressed by considering whether the tests for higher grades showed an increasing percentage of questions requiring higher level thinking skills. In Figure 2 the bar graph shows the percentage of questions in level 2/3 and level 4. None of the tests showed a systematic increase in the percentage of questions at level 2/3, but the CAT, MAT, and SAT all showed a systematic increase in the percentage of level 4 questions. Only the SAT followed a staircase pattern for all higher level questions combined.

Combined results
Although the two coding systems explored different features of test questions, there was still considerable overlap. Most text explicit questions also tended to be level 1 questions (84%), with a few falling at level 2 (13%) and a very small number at levels 3 and 4. Most text implicit questions were at level 2 (63%), with the remainder divided almost equally between levels 1 and 4, and a very small number at level 3.

Again, all tests contained questions assessing different aspects of reading comprehension. Taking just the two largest categories, text explicit questions at level 1 and text implicit questions at level 2, the following results were obtained: ASAT 86% and 9%, CAT 61% and 14%, CTBS 44% and 18%, Gates 87% and 5%, MAT 48% and 31%, and SAT 63% and 9%. The MAT contained by far the highest percentage of text implicit questions at level 2. The CTBS and the SAT included a substantial number of text explicit questions at level 2, 22% and 13% respectively.

Considering an increase only in the percentage of text implicit questions at level 2, it appears possible to look at the development of comprehension
Figure 2
Percentage of higher level questions across grades in the six tests
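The "staircase pattern" discussed above amounts to a strictly increasing percentage of higher level questions at each successive grade level form of a test. On that reading, checking for the pattern is a simple monotonic-increase test over the per-grade percentages (the figures below are hypothetical, not the study's data):

```python
def is_staircase(percentages):
    """True if each grade level form shows a strictly higher
    percentage of higher level questions than the one below it."""
    return all(a < b for a, b in zip(percentages, percentages[1:]))

# Hypothetical percentages for grades 4, 5, and 6.
print(is_staircase([19, 26, 34]))  # rises at every step -> True
print(is_staircase([22, 18, 30]))  # dips at grade 5 -> False
```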
some tests showed the expected staircase pattern, but most did not. The factors considered in the two coding systems evidently were not uppermost in the minds of the test developers. There are, of course, many other factors to be considered in assessing standardized tests, but the results presented here should be useful for educators who consider both different question-answer relationships and thinking skills to be important elements in the evaluation

References
American School Achievement Test. Indianapolis, Ind.: Bobbs-Merrill, 1975.
California Achievement Tests. Monterey, Calif.: CTB/McGraw-Hill, 1970.
Comprehensive Test of Basic Skills. Monterey, Calif.: CTB/McGraw-Hill, 1969.
Crowell, Doris C., and Kathryn H. Au. "A Scale of Questions to Guide Comprehension Instruction." The Reading Teacher, vol. 34 (January 1981a), pp. 389-93.
Crowell, Doris C., and Kathryn H. Au. "Developing Children's Comprehension in Listening, Reading and Television Viewing." Elementary School Journal, vol. 82 (November 1981b), pp. 51-57.
Gates-MacGinitie Reading Tests. New York, N.Y.: Teachers College Press, 1972.
Metropolitan Achievement Tests. New York, N.Y.: Harcourt Brace Jovanovich, 1970.
Pearson, P. David, and Dale Johnson. Teaching Reading Comprehension. New York, N.Y.: Holt, Rinehart and Winston, 1978.