Comprehension Questions: Differences Among Standardized Tests

Author(s): Doris C. Crowell, Kathryn Hu-pei Au and Karen M. Blake


Source: Journal of Reading, Vol. 26, No. 4 (Jan., 1983), pp. 314-319
Published by: Wiley on behalf of the International Reading Association
Stable URL: http://www.jstor.org/stable/40031737
Crowell is a curriculum researcher, Au is an educational psychologist, and Blake is a research assistant at the Kamehameha Early Education Program in Honolulu, Hawaii.

The types of questions in standardized reading tests reflect assumptions that test developers make about reading comprehension. Information about these assumptions is important to educators and researchers. To interpret comprehension subtest scores accurately, both for the purposes of program evaluation and for the assessment of individual achievement, we must understand what those scores actually reflect.
To discover how six of the major standardized tests measured comprehension, we designed a study to identify various types of questions in these tests, examining the forms for grades three, six, and nine. We analyzed the reading comprehension subtests from the American School Achievement Test (ASAT), Revised Edition, Form X (Primary Battery 2, Intermediate Battery, and Advanced Battery); the California Achievement Test (CAT), Form A (Levels 2, 3, and 4); the Comprehensive Test of Basic Skills (CTBS), Form R (Levels 1, 2, and 4); the Gates-MacGinitie Reading Test (Gates), Form 1 (Levels C, D, and E); the Metropolitan Achievement Test (MAT), Form F (Elementary, Intermediate, and Advanced); and the Stanford Achievement Test (SAT), Form A (Primary Level III, Intermediate Level II, and Advanced).

Typically, the comprehension subtests consist of short passages followed by multiple choice questions. The student reads each passage and then chooses the best answer to each question. In some cases the student must select the word or phrase which best completes a sentence.

What is comprehension?

The term reading comprehension has many possible meanings. Difficulties in arriving at a precise definition are discussed in Pearson and Johnson (1978). For the purposes of this study, we assumed that there are three aspects to defining and understanding reading comprehension. First, it is the ability to understand text, demonstrated by answering comprehension subtest questions or items correctly. Second, we assumed that reading comprehension involves a number of different kinds of skills, some easy and others more difficult to master. Finally, a related assumption was that students' comprehension abilities develop gradually over time.

With these aspects in mind, we raised two issues. First, do the tests assess different kinds of reading comprehension skills? Second, are the various grade level forms designed to detect qualitative development in comprehension abilities? That is, it is important to know whether students are improving in their use of more difficult comprehension skills. We feel that comprehension tests intended for students in higher grades should show a greater proportion of questions involving "reading between" and "reading beyond" the lines, since this represents a more sophisticated type of reading comprehension.

To address these issues we used two systems that emphasize different sets of factors for categorizing questions. One, developed by Pearson and Johnson (1978), focuses on the sources of information required to answer comprehension questions correctly and shows the relationship between information found in the text and information that the child already has. The second system, by Crowell and Au (1981a, 1981b), classifies questions according to the level of complexity of the thinking processes that are required to produce a satisfactory answer. The following sections describe each category system and the results obtained.

Explicit/implicit

Pearson and Johnson proposed a simple taxonomy of questions to deal with the relationships between information presented in the text and information from the reader's previous knowledge. The three kinds of question-answer relationships they propose are text explicit, text implicit, and script implicit. Definitions of the categories were refined by Raphael (in press), and the descriptions given here draw on her work as well. Interrater reliability in categorizing test questions by this taxonomy ranged from 94% to 97%.
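The article reports this reliability only as a percentage of agreement. As a purely illustrative sketch of how such a figure can be computed (an assumption about the arithmetic, not a description of the authors' actual procedure; the function name and sample codings are invented), in Python:

    # Hypothetical sketch: percent agreement between two raters who each
    # assigned one Pearson and Johnson category to every test question.
    # The sample codings below are invented for illustration.

    def percent_agreement(rater_a, rater_b):
        """Share of questions on which both raters chose the same category."""
        assert len(rater_a) == len(rater_b), "raters must code the same questions"
        matches = sum(a == b for a, b in zip(rater_a, rater_b))
        return 100.0 * matches / len(rater_a)

    a = ["text_explicit"] * 6 + ["text_implicit"] * 3 + ["script_implicit"]
    b = ["text_explicit"] * 7 + ["text_implicit"] * 2 + ["script_implicit"]
    print(f"{percent_agreement(a, b):.0f}% agreement")  # 9 of 10 match: 90%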

In a text explicit relationship, both question and answer can be derived from the text, and the answer is explicitly cued, either logically or grammatically. The answer is "right there," usually in the same sentence as the question stem.

In a text implicit relationship, both the question and answer can be derived from the text, but one or more steps of inference are required to get from the question to the answer. To identify the correct answer, the student must "think and search."

In a script implicit relationship, the answer cannot be derived from the text, but can only be determined with reference to background knowledge. For the student, this is an "on my own" situation, where the answer must come from information in his or her own memory.

The first set of analyses looked at whether the test items tapped all three question-answer relationships. The results indicated that the tests attended to just two kinds of relationships, text explicit and text implicit. Script implicit items were so few in number that they will not be discussed further.

The six tests included many more text explicit than text implicit questions. Only the sixth grade level of the Gates contained no text implicit questions. The percentage of text implicit questions varied considerably, however, with averages (the three grade levels combined) as follows: MAT 39%, CTBS 30%, CAT 28%, SAT 19%, ASAT 11%, and Gates 6% (see Figure 1).
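These combined averages are straightforward tallies across the three grade-level forms of each test. As a hedged illustration of that arithmetic only (the counts below are invented, not the study's data), a Python sketch might look like this:

    # Hypothetical sketch of the combined-grade tally behind the averages
    # reported above. The counts are invented for illustration.
    from collections import Counter

    def text_implicit_share(codings_by_grade):
        """Percent of questions coded text implicit, all grade forms combined."""
        totals = Counter()
        for codes in codings_by_grade.values():
            totals.update(codes)
        return 100.0 * totals["text_implicit"] / sum(totals.values())

    example = {
        "grade3": ["text_explicit"] * 30 + ["text_implicit"] * 10,
        "grade6": ["text_explicit"] * 28 + ["text_implicit"] * 12,
        "grade9": ["text_explicit"] * 24 + ["text_implicit"] * 16,
    }
    print(f"{text_implicit_share(example):.0f}% text implicit")  # 38 of 120: 32%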
How well can we assess children's growth in comprehension skills from year to year, using the various levels of a test? If text implicit questions really are more difficult than text explicit questions, an answer appears in Figure 1, where bar graphs show the percentage of text implicit questions in the grade-level forms of all six tests. Only in two tests, the CAT and the SAT, does the percentage of text implicit questions increase gradually over the grades. This staircase pattern is not seen in the other tests.

Cognitive levels

The categories developed by Crowell and Au (1981a) look at the kinds of thinking skills or cognitive processes required to answer questions correctly. Their scale of questions contains five levels: (1) Association questions, the lowest level, elicit any detail of the story or passage that the child can recall. (2) Categorization questions require the child to classify story characters (e.g., as good or bad) and to justify the response with information from the story. (3) Seriation questions deal with relationships among details, including cause and effect or the sequence of events. (4) Integration questions require the child to combine elements of the story into a coherent structure not necessarily given by the story itself (e.g., summarizing the story or giving the main idea). (5) Extension questions require the child to apply an understanding of the story (e.g., relating the story to other stories or suggesting a plausible alternate ending). Questions requiring the use of higher level thinking skills (levels 2-5) should be the more difficult to answer.

Interrater reliability for the coding of test questions with this system was 90%.

Because categorization and seriation questions (levels 2 and 3) are quite similar in difficulty (Crowell and Au, 1981b), these two levels were treated as one. No level 5 items were found in any of the tests.

With respect to whether the test questions tapped different aspects of reading comprehension, we found that all tests included questions at level 1 (association) and at levels 2 and 3 combined (categorization/seriation). The sixth and ninth grade forms of the Gates consisted exclusively of level 1 items, although the third grade form did not. Four of the tests (CAT, CTBS, MAT, and SAT) included level 4 (integration) questions.

Figure 1. Percentage of text implicit questions across grades in the six tests

When the results for the three grade-level forms were combined, the percentage of questions requiring higher level thinking skills (questions at levels 2, 3, and 4) in each of the tests was as follows: CTBS 52%, MAT 49%, SAT 32%, CAT 29%, Gates 12%, and ASAT 11%.

Our second question, on assessing qualitative development of comprehension skills, was addressed by considering whether the tests for higher grades showed an increasing percentage of questions requiring higher level thinking skills. In Figure 2 the bar graph shows the percentage of questions at level 2/3 and level 4. None of the tests showed a systematic increase in the percentage of questions at level 2/3, but the CAT, MAT, and SAT all showed a systematic increase in the percentage of level 4 questions. Only the SAT followed a staircase pattern for all higher level questions combined.

Combined results

Although the two coding systems explored different features of test questions, there was still considerable overlap. Most text explicit questions also tended to be level 1 questions (84%), with a few falling at level 2 (13%) and a very small number at levels 3 and 4. Most text implicit questions were at level 2 (63%), with the remainder divided almost equally between levels 1 and 4, and a very small number at level 3.

Again, all tests contained questions assessing different aspects of reading comprehension. Taking just the two largest categories, text explicit questions at level 1 and text implicit questions at level 2, the following results were obtained: ASAT 86% and 9%, CAT 61% and 14%, CTBS 44% and 18%, Gates 87% and 5%, MAT 48% and 31%, and SAT 63% and 9%. The MAT contained by far the highest percentage of text implicit questions at level 2. The CTBS and the SAT included a substantial number of text explicit questions at level 2, 22% and 13% respectively.
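The overlap described here is, in effect, a cross-tabulation of the two coding systems. The following sketch shows one way such a table of percentages might be built, collapsing levels 2 and 3 as the study does; the function and the sample codings are hypothetical, not the authors' materials.

    # Hypothetical sketch: cross-tabulating the two coding systems.
    # Each question carries a (relationship, level) pair; levels 2 and 3
    # are collapsed into "2/3" as in the study. Sample data are invented.
    from collections import Counter

    def cross_tab(codings):
        """Percent of all questions falling in each relationship-by-level cell."""
        cells = Counter()
        for relationship, level in codings:
            label = "2/3" if level in (2, 3) else str(level)
            cells[(relationship, label)] += 1
        return {cell: 100.0 * n / len(codings) for cell, n in cells.items()}

    sample = ([("text_explicit", 1)] * 12 + [("text_explicit", 2)] * 2
              + [("text_implicit", 2)] * 5 + [("text_implicit", 4)])
    for (relationship, label), pct in sorted(cross_tab(sample).items()):
        print(f"{relationship} level {label}: {pct:.0f}%")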

Figure 2. Percentage of higher level questions across grades in the six tests

Considering an increase only in the percentage of text implicit questions at level 2, it appears possible to look at the development of comprehension abilities with one, or perhaps two, of the tests. The CAT showed a progressive but very small increase in these questions across grade levels (from 13% to 14% to 16%). The CTBS showed 16% in both the third and sixth grade forms, with an increase to 22% in the ninth grade form.

Summary and conclusions

The two coding systems used in this study provided different kinds of information about questions in reading comprehension tests. They have, however, suggested a convenient perspective for describing and contrasting tests of reading achievement. The Pearson and Johnson coding system focused on the sources of information required to answer a question correctly, and the Crowell and Au system on the types of thinking skills required. Taken together, the sets of results were consistent and complemented one another.

Basically, the results indicated that all of the standardized tests do, in fact, include questions involving different aspects of reading comprehension. The questions covered different kinds of question-answer relationships and required different kinds of thinking skills. However, the percentage of text implicit questions and of questions requiring higher level thinking skills varied considerably from test to test.

The results also showed that these six tests differed greatly in the extent to which they reflected a developmental progression in reading comprehension, at least along those dimensions important in the two coding systems.

When the test questions were analyzed to see if the percentage of text implicit questions and of questions requiring higher level thinking skills increased across the forms for the different grades, some tests showed the expected staircase pattern, but most did not. The factors considered in the two coding systems evidently were not uppermost in the minds of the test developers. There are, of course, many other factors to be considered in assessing standardized tests, but the results presented here should be useful for educators who consider both different question-answer relationships and thinking skills to be important elements in the evaluation of students' reading comprehension.

References

American School Achievement Test. Indianapolis, Ind.: Bobbs-Merrill, 1975.
California Achievement Tests. Monterey, Calif.: CTB/McGraw-Hill, 1970.
Comprehensive Test of Basic Skills. Monterey, Calif.: CTB/McGraw-Hill, 1969.
Crowell, Doris C., and Kathryn H. Au. "A Scale of Questions to Guide Comprehension Instruction." The Reading Teacher, vol. 34 (January 1981a), pp. 389-93.
Crowell, Doris C., and Kathryn H. Au. "Developing Children's Comprehension in Listening, Reading and Television Viewing." Elementary School Journal, vol. 92 (November 1981b), pp. 51-57.
Gates-MacGinitie Reading Tests. New York, N.Y.: Teachers College Press, 1972.
Metropolitan Achievement Tests. New York, N.Y.: Harcourt Brace Jovanovich, 1970.
Pearson, P. David, and Dale Johnson. Teaching Reading Comprehension. New York, N.Y.: Holt, Rinehart and Winston, 1978.
Raphael, Taffy E. "Teaching Children Question-answering Strategies." The Reading Teacher, in press.
Stanford Achievement Test. New York, N.Y.: Harcourt Brace Jovanovich, 1973.

