On The Relationship Between Test Anxiety and Test Performance



Title:

On the relationship between test anxiety and test performance.

Authors:

Birenbaum, Menucha & Nasser, Fadia

Source:

Measurement & Evaluation in Counseling & Development. Apr94, Vol. 27 Issue 1, p293. 9p. 2
Charts.

Document Type:

Article

Subject Terms:

*TEST anxiety
*ACADEMIC achievement
*HIGH school students

Geographic Terms:

ISRAEL

Abstract:

Investigates the relationship between test anxiety and test performance in a sample of tenth graders from two Arab high schools in Israel. Identifies two types of test-anxious students: those lacking test-taking skills and those lacking study skills.

Full Text Word Count:

4619

ISSN:

0748-1756

Accession Number:

9503280856


ON THE RELATIONSHIP BETWEEN TEST ANXIETY AND TEST PERFORMANCE

Contents

METHOD

RESULTS

DISCUSSION
REFERENCES

An investigation of the relationships between test anxiety and test performance indicated
two types of test-anxious students: those who lack test-taking skills and those who lack
study skills.

Tests have become a powerful tool for decision making in our competitive society, with
individuals of all ages being frequently evaluated with respect to their achievement and
abilities. Consequently, test anxiety (TA) has become one of the most disruptive factors in
school and other instructional settings. It is one of the variables most commonly associated
with underachievement and has been shown to affect students' ability to profit from
instruction (Tobias, 1980). Empirical studies show that TA is a major debilitating factor on all
academic levels, from elementary to higher education (Sarason, 1984).

Effects of TA on performance in situations involving evaluative stress have been thoroughly investigated (for a review, see Hembree, 1988; Seipp, 1991). The most common finding is of
a negative relationship between the two: High test-anxious students tend to score lower
than low test-anxious students. This pattern is more pronounced with respect to the
cognitive aspect of TA than to the emotional one, and is more evident in complex than in
simple tasks (e.g., Eysenck, 1982, 1985).

Two contradictory models have been suggested to explain the effect of TA on performance. One refers to TA as an interfering agent (e.g., Wine, 1971, 1980; Eysenck, 1988), whereas the other relates TA to a deficit in study skills (e.g., Paulman & Kennelly, 1984; Wittmaier, 1972). According to the interference model, under evaluative stress students with high levels of TA tend to divide their attention between task demands and negative self-preoccupation, whereas those with low levels of TA tend to devote a greater
proportion of their attention to task demands. According to the deficit model, the low
performance of high test-anxious students results from their deficient knowledge of the
material and from their awareness that they are poorly prepared for the test. The difference
between the two models lies in the knowledge acquisition stage. According to the deficit
explanation, high test-anxious students have deficiencies in the acquisition stage because of
ineffective study skills, and therefore have less knowledge of the relevant material. On the
other hand, according to the interference explanation, high test-anxious students
successfully acquire the material, but they have difficulty in retrieving it during the
examination.

Others have suggested that the two models complement, rather than contradict, each other.
Benjamin, McKeachie, Lin and Holinger (1981) claimed that the problems of high test-
anxious students are dual in nature--their first difficulty involves learning (encoding) the
information and their second problem is retrieving it in the test situation. Benjamin et al.
further suggested a causal sequence in which "ability lower than that of one's peers may
lead to anxiety about achievement. This anxiety in turn results in the use of less effective
study habits such as repetitive reading and rote memorization. This in turn results in less
effective processing of information and poor test performance, which is further damaged by
anxiety and worry during the examination" (p. 823).

The information processing model advanced by Tobias (1985) suggests that a limited cognitive processing capacity accounts for the effects of both interference and study or test-taking skills deficit. According to Tobias, students who are high in both TA and skills deficit make "maximum
demands on cognitive capacity, possibly exceeding available capacity for dealing with the
task" (p. 139). In a more recent study Naveh-Benjamin and his associates (Naveh-Benjamin,
McKeachie, & Lin, 1987) have modified their earlier information processing explanation by
relaxing the claim for a universal deficit in all stages of processing. They suggest a
differentiation between two types of test-anxious students: (a) those with poor study habits, who have problems in encoding, organizing, and retrieving the information, and (b) those with good study habits, whose major problem lies only in retrieving the information during the examination.

In most studies, performance has been measured by the test score, which is commonly
indexed by the number of correct responses. Such an index quantifies the product but
provides little information regarding its quality. To study the qualitative aspects of
performance, diagnostic tests need to be used (e.g., Birenbaum & Tatsuoka, 1987; Brown &
Burton, 1978, Matz, 1982; Sleeman, 1984; VanLehn, 1990). The current study investigates
the relationships between the cognitive component of TA and the quality of performance on
a diagnostic test in mathematics. Such a test allows the mal-rules applied by students to be tracked and the consistency with which those rules are applied to be studied (e.g., Tatsuoka, Birenbaum, & Arnold, 1989; Payne & Squibb, 1990). Thus, the effects predicted by
the interference model, which are attributed to problems in retrieval, and those predicted
by the deficit model that are because of problems in information organization and encoding
(occurring at the learning stage), may be inferred from an analysis of students' responses on
a diagnostic test. The deficit model will gain support if the higher test-anxious students
exhibit more serious errors and more consistent mal-rule application, whereas the interference model will gain support if the more anxious students exhibit less serious errors,
have lower consistency rates, and achieve lower scores on complex items. To test the
hypothesis that each of the two models refers to a different group of students (i.e., the
deficit model refers to a lower achievement group than does the interference model), we
examined the relationships between TA and the measures of performance within three
levels of achievement.

METHOD

Participants

The sample consisted of 431 tenth graders (ages 15 to 16) from 15 classes of two Arab high
schools in the central district of Israel. Of the participants, 51% were girls. The two schools
are among the largest in the Arab sector in Israel (800 students in one school and 900 in the
other). All students in the two schools are Muslims. The mean socioeconomic status of the
students' families in each school is very close to the national SES mean (Nasser, 1989).

Instruments and Procedure

The TA measure. The Test Irrelevant Thoughts scale (TIT; Sarason, 1984) of the Reactions To Tests Questionnaire (RTT; Sarason, 1984) was used as a measure of the cognitive component
of TA. According to previous analyses, this scale is the least related to the emotional
components of test anxiety (Birenbaum, 1990). The Arabic version of the RTT questionnaire
was a translation of the Hebrew version developed by Birenbaum and Montag (1986). The
translation was done by the second author who is a bilingual high school teacher. The back
translation was done by a university professor of Arabic. The Cronbach Alpha coefficient for
the scale in the current study was 0.83. No significant sex differences were observed in the mean responses (mean score for boys = 1.53, for girls = 1.57). The entire RTT
questionnaire was administered to the subjects before taking the mathematics test.
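For readers who want to reproduce this kind of reliability estimate, here is a minimal Python sketch (not part of the original study) of Cronbach's alpha computed from a respondents-by-items score matrix. The `responses` array, its 10-item length, and its 1-4 response format are hypothetical stand-ins.

    import numpy as np

    def cronbach_alpha(item_scores: np.ndarray) -> float:
        """Cronbach's alpha for a (respondents x items) score matrix:
        alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)."""
        k = item_scores.shape[1]                         # number of items
        item_vars = item_scores.var(axis=0, ddof=1)      # per-item variances
        total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of the summed scale score
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Hypothetical example: 431 respondents on a 10-item, 1-4 format scale.
    # The random data are uncorrelated, so alpha here will be near zero;
    # real scale data with correlated items would give a value like the 0.83 reported.
    rng = np.random.default_rng(0)
    responses = rng.integers(1, 5, size=(431, 10)).astype(float)
    print(round(cronbach_alpha(responses), 2))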

Diagnostic test in exponents. A 38-item diagnostic test in multiplication and division of quantities with exponents was developed by Nasser (1989) based on a detailed task analysis.
The test comprised two task-wise parallel subtests of 19 items each. Seventeen items in
each subtest were open-ended and two were multiple-choice items with four options each.
Students were asked to show all of the steps of their work toward the solution of the test
items.

The internal consistency of the 38-item test as measured by Cronbach's Alpha coefficient
was 0.96. The item difficulty indices (percent correct) ranged from 0.11 to 0.74 with an
average of 0.48. The item discrimination indices (item-total correlations) ranged from 0.34
to 0.74 with an average of 0.60. The zero order correlation between the test scores (number
correct) and the students' previous semester mathematics course grades was 0.71.
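A similar sketch, with simulated 0/1 data standing in for the actual scored responses, shows how the item difficulty (proportion correct) and item discrimination (item-total correlation) indices reported above can be computed; the `scored` matrix is hypothetical.

    import numpy as np

    def item_statistics(scored: np.ndarray):
        """Difficulty (proportion correct) and item-total correlation for each
        item of a (students x items) 0/1 correctness matrix."""
        difficulty = scored.mean(axis=0)            # proportion correct per item
        total = scored.sum(axis=1)                  # number-correct test score
        discrimination = np.array([
            np.corrcoef(scored[:, j], total)[0, 1]  # item-total correlation
            for j in range(scored.shape[1])
        ])
        return difficulty, discrimination

    # Hypothetical 0/1 matrix for 431 students on a 38-item test
    rng = np.random.default_rng(1)
    scored = (rng.random((431, 38)) < 0.48).astype(float)
    difficulty, discrimination = item_statistics(scored)
    print(difficulty.mean().round(2), discrimination.mean().round(2))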

Defining complex and simple items. Complex items are those involving nonequivalent bases,
which by means of the task analysis and the item analysis, were shown to be the more
difficult ones. The complex-item category included 10 items with a mean item difficulty level
of 0.20. The simple-item category consisted of 28 items with equivalent bases. The mean
item difficulty level for this category was 0.58.

The error analysis. Based on a detailed examination of the procedures followed by the students in solving the test items, 42 mal-rules (bugs, or systematic errors resulting "from faulty rules of operation invented by students as they assimilate school mathematics into their mental framework" [Tatsuoka, Birenbaum, & Arnold, 1989, p. 351]) were identified. Six of them were second-order bugs (i.e., a combination of two mal-rules); the rest were first-order bugs. The following are examples of first-order bugs:

a^n · a^m → (a + a)^(n + m)        a^n / a^m → (a + a)^(n - m)

a^n · a^m → a^(nm)                 a^n / a^m → a^(m/n)

a^n · a^m → a·n + a·m              a^n / a^m → a·n - a·m

The test items were then systematically answered by the test developer using those
erroneous rules. A bug matrix of order b x n (b being the number of bugs and n being the
number of test items) was constructed. The entries of this matrix were the answers to the
test items produced by the mal-rules. The students' actual answers were then matched to the entries in the bug matrix and coded accordingly. Of the actual responses, 94.1% were
matched to identified bugs or to the correct rule. The coded responses included 44 different
codes: one indicating the correct answer, one indicating unidentified errors and the rest
indicating the various identified bugs.
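The bug-matrix coding just described can be sketched as follows. The mal-rules, items, and answer codes below are hypothetical illustrations, not the 42 rules identified in the study; the sketch only shows the matching logic of answering every item with every rule and then looking up a student's answer in the resulting matrix.

    # Answers are coded here by the exponent they produce for items of the form a^n * a^m.
    CORRECT, UNIDENTIFIED = "OK", "UNID"

    def correct_rule(n, m):             # correct rule: a^n * a^m -> a^(n + m)
        return n + m

    def bug_multiply_exponents(n, m):   # hypothetical mal-rule: a^n * a^m -> a^(n * m)
        return n * m

    def bug_double_sum(n, m):           # second hypothetical mal-rule with a distinct answer
        return (n + m) * 2

    ITEMS = [(2, 3), (4, 1), (5, 2)]    # (n, m) exponent pairs, one per item
    RULES = {"B1": bug_multiply_exponents, "B2": bug_double_sum}

    # Bug matrix of order b x n: rows are mal-rules, columns are items,
    # entries are the answers each rule produces on each item.
    bug_matrix = {code: [fn(n, m) for n, m in ITEMS] for code, fn in RULES.items()}
    key = [correct_rule(n, m) for n, m in ITEMS]

    def code_response(item_index: int, answer) -> str:
        """Match a student's answer to the correct rule or to an identified bug."""
        if answer == key[item_index]:
            return CORRECT
        for code, row in bug_matrix.items():
            if answer == row[item_index]:
                return code
        return UNIDENTIFIED

    # A hypothetical student's answers to the three items
    print([code_response(i, a) for i, a in enumerate([5, 4, 14])])   # ['OK', 'B1', 'B2']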

Measuring the rule application consistency rate. Response-codes for parallel items were
compared. Matches and mismatches were counted across the 19 pairs of parallel items for
each test taker, and classified according to the following primary categories:

A = matched correct (1,1).

B = one correct and one error (1,0; 0,1).

C = matched bug.
D = unmatched errors (unequal bugs; unidentified errors).
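A small sketch of how pairs of response codes from parallel items might be sorted into these categories, reusing the hypothetical codes ("OK", "UNID", bug labels) from the previous sketch; the "percentage consistent bugs" measure reported later would presumably be derived from counts in category C.

    def classify_pair(code_1: str, code_2: str) -> str:
        """Classify the response codes of a pair of parallel items into the
        A-D categories defined above."""
        if code_1 == "OK" and code_2 == "OK":
            return "A"                          # matched correct
        if "OK" in (code_1, code_2):
            return "B"                          # one correct and one error
        if code_1 == code_2 and code_1 != "UNID":
            return "C"                          # matched bug
        return "D"                              # unmatched or unidentified errors

    # Hypothetical codes on four of the 19 parallel-item pairs
    pairs = [("OK", "OK"), ("OK", "B1"), ("B1", "B1"), ("B1", "B2")]
    counts = {}
    for first, second in pairs:
        category = classify_pair(first, second)
        counts[category] = counts.get(category, 0) + 1
    print(counts)                               # {'A': 1, 'B': 1, 'C': 1, 'D': 1}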

Defining error seriousness. The 42 bugs were classified into three categories of error seriousness. The most serious error category included bugs representing a profound misunderstanding (e.g., 5^3 · 5^4 = 5·3·5·4). The next category represented less serious errors (e.g., X^5 · X^2 = (X · X)^(5-2)). The nonserious error category included errors that occur at the final stages of the solution according to the task specification chart (e.g., X^5 · X^2 = X^(5-2)). Two judges, both qualified math teachers,
independently classified the 42 identified bugs into the three categories of error seriousness.
The two classifications matched on 36 bugs (85.7%). After discussing the remaining 6 bugs with an expert in math education, a consensus was reached regarding their classification.
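The reported agreement is a simple proportion of matching classifications; the short sketch below reproduces the 36-of-42 figure with hypothetical category labels arranged to match that split.

    def percent_agreement(ratings_1, ratings_2) -> float:
        """Share of items on which two raters assigned the same category."""
        matches = sum(a == b for a, b in zip(ratings_1, ratings_2))
        return 100 * matches / len(ratings_1)

    # Hypothetical labels for the 42 bugs, built so that exactly 36 classifications match
    judge_1 = ["serious"] * 20 + ["medium"] * 12 + ["nonserious"] * 10
    judge_2 = ["serious"] * 20 + ["medium"] * 6 + ["serious"] * 6 + ["nonserious"] * 10
    print(round(percent_agreement(judge_1, judge_2), 1))   # 85.7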

Achievement grouping. The mathematics course grade for the previous semester was used
to classify students into three achievement levels. Students were graded by their teachers
on a seven-point scale in which 4 = not sufficient; 5 = almost sufficient; 6 = sufficient; 7 =
almost good; 8 = good; 9 = very good; 10 = excellent. Students with grades 4-5, 6-7, and 8-10 were grouped into the low, medium, and high achievement levels, respectively.
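The grouping rule can be written down directly; a minimal sketch:

    def achievement_level(grade: int) -> str:
        """Map the 4-10 mathematics course grade to the three achievement levels."""
        if 4 <= grade <= 5:
            return "low"
        if 6 <= grade <= 7:
            return "medium"
        if 8 <= grade <= 10:
            return "high"
        raise ValueError(f"grade {grade} is outside the 4-10 scale")

    print([achievement_level(g) for g in (5, 7, 9)])   # ['low', 'medium', 'high']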

Procedure

The students were tested during their regular class sessions. The mathematics test was a
scheduled test for the topic, and students prepared for it the way they did for other
mathematics tests. Parents' consent was obtained before the administration of the TA
questionnaire. The subjects and their parents were told that the purpose of the study was to
gain a better understanding of the relationship between test anxiety and test performance,
in order to design intervention programs to benefit those who need help. The research
instruments were administered to students who attended school on the day their class was
tested. The TA questionnaire was administered first, followed by the mathematics test. The
participants were assured that their responses to the TA questionnaire would not be
released to the school authorities without their consent, and that they would only be used
for research purposes.

RESULTS

The correlation coefficient between TIT and academic achievement in the entire sample was -0.34 (p < .01). The coefficient for boys was -0.41 and for girls -0.29 (p < .01); the two coefficients did not differ significantly (Z = 1.42). The mean TIT scores for the low, medium, and high achievement levels were 1.75, 1.52, and 1.33, respectively (the respective standard deviations were 0.58, 0.49, and 0.41). All groups differed significantly from one another according to Scheffe's procedure (at the 0.05 level). The percentages of boys and girls in the three achievement groups did not differ significantly.
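The comparison of the boys' and girls' coefficients is presumably a test of two independent correlations via Fisher's r-to-z transformation; a sketch follows. The group sizes are approximations inferred from the 51%-girls figure (an assumption, since the exact numbers by sex are not reported).

    import math

    def fisher_z_compare(r1: float, n1: int, r2: float, n2: int) -> float:
        """Z statistic for the difference between two independent correlations,
        using Fisher's r-to-z transformation."""
        z1, z2 = math.atanh(r1), math.atanh(r2)
        se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
        return (z1 - z2) / se

    # Correlations as reported above; group sizes approximated (211 boys, 220 girls)
    print(round(fisher_z_compare(-0.41, 211, -0.29, 220), 2))   # about -1.41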

A two-way ANOVA with TIT as the dependent variable and achievement level and sex as the
independent variables yielded a significant main effect for achievement level only (F =
23.80, p < .0001). The main effect for sex and the interaction of sex by achievement level
yielded nonsignificant F values: 0.077 (p = .78) and 0.479 (p = .62), respectively. Two-way ANOVA results for the six
performance measures yielded a similar pattern of significant main effect for achievement
level and nonsignificant main effects for sex and for its interaction with achievement level. In all
these analyses, the nonorthogonality of the design, resulting from the unequal number of
subjects in each cell, was treated using the conservative approach for decomposing the
sums of squares. According to this approach, which is referred to as the "regression"
approach in the SPSSX (1986) ANOVA program, the effect being tested is required to add
unique information with respect to all other parameters in the model. Because of the
insignificant effect for sex in our study, further analyses were carried out on the entire
sample.
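As an illustration of this "regression" (Type III) approach to an unbalanced two-way design, the following sketch runs the analysis in Python with statsmodels on simulated data; it is a stand-in for, not a reproduction of, the SPSSX analysis, and the data frame is hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Simulated stand-ins for the study's variables (unequal cell sizes arise by chance)
    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "tit": rng.normal(1.5, 0.5, size=431),
        "achievement": rng.choice(["low", "medium", "high"], size=431),
        "sex": rng.choice(["boy", "girl"], size=431),
    })

    # Type III sums of squares require sum-to-zero contrasts to be meaningful;
    # each effect is then tested for the unique information it adds to the full model.
    model = ols("tit ~ C(achievement, Sum) * C(sex, Sum)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=3))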

To test the significance of the differences among the three achievement groups with respect
to the various performance variables, one-way ANOVAs followed by Scheffe's test were
employed. Table 1 presents the means and standard deviations of the test performance
measures along with the ANOVA F tests and the Scheffe contrasts. As can be seen in the
table, the three achievement levels significantly differ in all performance measures. The low
achievers scored the lowest on complex as well as on simple items; they had more errors of all types of seriousness, and they were the most consistent in their mal-rule applications.
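A sketch of this procedure, with the omnibus one-way ANOVA from scipy and the Scheffe pairwise criterion implemented directly (it is not bundled with scipy); the data are simulated, not the study's.

    import numpy as np
    from scipy import stats

    def scheffe_pairwise(groups, alpha=0.05):
        """One-way layout: a pair (i, j) differs by Scheffe's criterion when
        (mean_i - mean_j)^2 / (MSW * (1/n_i + 1/n_j)) > (k - 1) * F_crit."""
        k = len(groups)
        sizes = np.array([len(g) for g in groups])
        means = np.array([np.mean(g) for g in groups])
        total_n = sizes.sum()
        msw = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups) / (total_n - k)
        f_crit = stats.f.ppf(1 - alpha, k - 1, total_n - k)
        results = {}
        for i in range(k):
            for j in range(i + 1, k):
                statistic = (means[i] - means[j]) ** 2 / (msw * (1 / sizes[i] + 1 / sizes[j]))
                results[(i, j)] = statistic > (k - 1) * f_crit
        return results

    # Simulated scores for low, medium, and high achievement groups
    rng = np.random.default_rng(3)
    low, med, high = (rng.normal(m, 3, size=150) for m in (9, 18, 23))
    print(stats.f_oneway(low, med, high))       # omnibus F test
    print(scheffe_pairwise([low, med, high]))   # which pairs differ at alpha = .05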

Table 2 presents the correlations between TIT and the performance measures in the three
achievement levels. Table 2 shows that in the low achievement group, TIT correlated
significantly negatively with nonserious errors and significantly positively with serious errors,
and with the percentage of consistent bugs. This indicated that the students with greater
test anxiety tended to commit fewer nonserious but more serious errors, and to be more
consistent in their rule application than were the students with lower test anxiety on that
achievement level. On the intermediate achievement level, TIT correlated significantly
negatively with scores on complex items and on errors of medium seriousness, and
significantly positively with nonserious and serious errors. This indicated that the more test-
anxious students tended to solve fewer complex items correctly and to commit fewer errors
of medium seriousness. They did, however, make more nonserious and serious errors
compared with students with lower test anxiety on that achievement level. On the high
achievement level, TIT correlated significantly negatively with scores on complex items and
significantly positively with scores on nonserious errors. This indicated that the students
with high test anxiety tended to solve fewer complex items correctly and to commit more
nonserious errors compared with lower test anxious students on that achievement level.

Note, however, that the magnitude of the correlations is relatively low. The previously mentioned findings resulted from tests of the null hypothesis r = 0 within each achievement level for each independent variable. If inferences were to be made for the null hypotheses for each dependent variable across achievement levels (i.e., r1 = r2; r1 = r3; r2 = r3), one would have rejected these hypotheses only with respect to seriousness of errors.

DISCUSSION

The results of this study indicated that within the high achievement group, students who
scored high on the cognitive component of TA tended to perform worse and to commit
more nonserious errors on complex items when compared with less anxious students. In the
low achievement group, high test-anxious students committed fewer nonserious errors but
more serious ones, and they were more consistent in their mal-rule application when
compared with low test-anxious students.

The results of this study with respect to the high achievement group seem to support the
interference model of test anxiety. In the high achievement group, high test-anxious
students tend to be more occupied by test-irrelevant thoughts, leaving a smaller portion of
the mental processor for handling complex tasks. The results for the low achievement group
seem to support the deficit model, however. The results may indicate that attentional
interference had a weaker effect during the test but a stronger effect during study, affecting
the encoding and organization of the material before the test-taking situation. Therefore,
one could expect the low achievement, high test-anxious group to perform the same under
evaluative and non-evaluative conditions, whereas the high achievement, high test-anxious
group will perform better in a non-evaluative condition than they will in an evaluative one.

Taken together, the results of this study lead to the conclusion that the two models--deficit
and interference--complement rather than contradict each other. Unlike the claim made by
Benjamin et al. (1981), however, the results do not characterize one type of student. Our
results are more in accordance with the more recent contention of Naveh-Benjamin et al.
(1987) who proposed the notion of two types of test-anxious students. The results of the
current study seem to identify those types more precisely on the basis of the quality of their
performance.

The outcome of this study calls for the differential treatment of high and low achievement
students who suffer from high levels of test anxiety. Whereas the former group could benefit from training in test-taking skills, the latter group would seem to benefit more from
the acquisition of more effective learning skills. An intervention for improving test-taking
skills should focus on practicing effective strategies for responding to different test formats
(i.e., constructed vs. choice response). Empirical studies have shown that such interventions
improve test performance (e.g., Callenbach, 1973; Wahlstrom & Boersma, 1968; Kirkland &
Hollandsworth, 1980). The intervention for improving learning skills should include training
in effective strategies for reading comprehension (e.g., Brown, 1980) and for encoding and
organizing the instructional material (e.g., Mueller, 1980; Naveh-Benjamin et al., 1987;
Naveh-Benjamin, Lin & McKeachie, 1989). In addition to the differential treatment, both
high and low achievement students who suffer from test anxiety could benefit from therapy
modes that focus specifically on cognitive change, including cognitive coping techniques
such as cognitive restructuring. Such treatment approaches proved to be more effective in
improving test performance and in lowering test anxiety than applied relaxation techniques
(e.g., systematic desensitization and its variants) that focus on emotionality reduction (Allen,
Elias & Zlotlow, 1980; Denney, 1980; Wine, 1980). Moreover, because of the pervasive
atmosphere of evaluation in school, it is recommended that the total school ecology be
taken into consideration for a test anxiety reduction intervention to be successful (Phillips,
Pitcher, Worsham & Miller, 1980). A step in this direction could involve holding workshops
for teachers that would address their attitudes and behaviors toward testing and evaluation,
and introduce the notion of test anxiety and its treatment, along with advice about how to
identify test-anxious students so that they can be helped by the teacher or by the school
counselor.

From a measurement perspective, the results of this study once again demonstrated the
advantage and usefulness of a diagnostic assessment that is based on the identification of
the underlying rules of operation. It was demonstrated that students at the same
achievement level performed differently with respect to error seriousness and consistency
rate, as a function of their test-anxiety level. Therefore, it seems that introducing test-
anxiety measures in the context of achievement assessment may help validate a student's
test performance, as well as provide valuable information for designing effective
individualized remediation programs.

The fact that the magnitude of the correlations between TA and the performance measures
in this study was relatively low may be due in part to an impression management tendency
of some low achievers to report higher levels of TA than they really experienced. Such a
tendency can be attributed to a self-defense mechanism according to which test anxiety is
viewed by the student as a better excuse for failure than low ability (i.e., a less ego-
threatening one--the lesser of the two evils). Further research incorporating a measure of
dissimulation motivation, such as the Lie scale (Eysenck & Eysenck, 1975; Eysenck, Eysenck &
Shaw, 1975), would be needed to test this speculation.

Finally, the ability to generalize the results of this study to other populations does not seem
to be seriously affected by the fact that the data were collected from a minority group.
According to Crocker, Schmitt and Tang (1988), the relationship between test anxiety and
test performance does not seem to be affected by race. Further investigation is
recommended regarding the effects of test anxiety on performance with respect to other
subject matter areas, and to various item formats. Moreover, for a better understanding of
the relationship between TA and test performance, an in-depth study of the intervening
variables (e.g., study habits, test-taking behavior, personality, social and school related
factors) is needed.

TABLE 1

Means and Standard Deviations of the Performance Measures in Three Achievement Levels

                               Low (N = 162)     Medium (N = 147)    High (N = 122)
Measure                        M       SD        M       SD          M       SD        ANOVA F      Scheffe contrast
Complex items                   .41     1.09      1.45    2.02        4.63    3.76     111.32[*]    L < M < H
Simple items                   9.10     6.79     17.93    8.18       23.30    5.89     147.01[*]    L < M < H
Percentage consistent bugs    40.55    24.19     26.10   18.98       13.72   15.92      61.41[*]    L > M > H
Nonserious errors              6.14     4.84      4.11    4.87        2.14   28.19      28.19[*]    H > M > L
Intermediate errors           12.11     7.29     10.88    7.57        6.24   25.10      25.10[*]    M > L; H > L
Serious errors                 8.15    11.24      2.77    5.33        1.22   33.52      33.52[*]    H > M; H > L

Note: L = Low achievement level; M = Medium achievement level; H = High achievement level.

[*] p < .0001.

TABLE 2

Pearson Product Moment Correlations Between Test Anxiety (TIT) and the Performance Measures in Three Achievement Levels

                               Achievement Level
Measure                        Low         Medium      High
Complex items                  -.12        -.17[*]     -.20[*]
Simple items                   -.14         .16        -.10
Percentage consistent bugs     -.20[*]      .14         .17
Nonserious errors              -.15[*]      .19[*]      .18[*]
Intermediate errors            -.10        -.17[*]      .11
Serious errors                  .20[**]     .27[**]     .02

[*] p < .05. [**] p < .01, two-tailed.

REFERENCES
Allen, G. J., Elias, M. J., & Zlotlow, S. F. (1980). Behavioral interventions for alleviating test anxiety: A methodological overview of current therapeutic practices. In I. G. Sarason (Ed.), Test anxiety: Theory, research, and applications (pp. 155-186). Hillsdale, NJ: Erlbaum.

Benjamin, M., McKeachie, W. J., Lin, Y. G., & Holinger, D. P. (1981). Test anxiety: Deficits in information processing. Journal of Educational Psychology, 73, 816-824.

Birenbaum, M. (1990). Test anxiety components: Comparisons of different measures. Anxiety Research, 3, 149-159.

Birenbaum, M., & Montag, I. (1986). Reactions to tests--a Hebrew version of Sarason's RTT questionnaire. Unpublished manuscript, Tel Aviv University, School of Education, Unit for Measurement and Evaluation Research, Israel (in Hebrew).

Birenbaum, M., & Tatsuoka, K. K. (1987). Open-ended versus multiple-choice response formats--it does make a difference. Applied Psychological Measurement, 11, 385-395.

Brown, A. L. (1980). Metacognitive development and reading. In R. J. Spiro, B. C. Bruce, & W. F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 453-483). Hillsdale, NJ: Erlbaum.

Brown, J. S., & Burton, R. B. (1978). Diagnostic models for procedural bugs in basic mathematical skills. Cognitive Science, 2, 155-192.

Callenbach, C. (1973). The effects of instruction and practice in content-independent test-taking techniques upon the standardized reading test scores of selected second grade students. Journal of Educational Measurement, 10, 25-30.

Crocker, L., Schmitt, A., & Tang, L. (1988). Test anxiety and standardized achievement test performance in the middle school years. Measurement and Evaluation in Counseling and Development, 20, 149-157.

Denney, D. R. (1980). Self-control approaches to the treatment of test anxiety. In I. G. Sarason (Ed.), Test anxiety: Theory, research, and applications (pp. 209-244). Hillsdale, NJ: Erlbaum.

Eysenck, M. W. (1982). Attention and arousal: Cognition and performance. Berlin, Germany:
Springer.

Eysenck, M. W. (1985). Anxiety and cognitive-task performance. Personality and Individual Differences, 6, 579-586.

Eysenck, M. W. (1988). Anxiety and attention. Anxiety Research, 1, 9-15.

Eysenck, S. B. G., & Eysenck, H. J. (1975). Manual of the EPQ (Eysenck Personality Questionnaire). London, England: Hodder & Stoughton Educational and Instructional Testing Services.

Eysenck, S. B. G., Eysenck, H. J., & Shaw, L. (1975). The modification of personality and lie scale scores by special "honesty" instructions. British Journal of Social and Clinical Psychology, 13, 41-50.

Hembree, R. (1988). Correlates, causes, effects, and treatment of test anxiety. Review of Educational Research, 58, 47-77.
Kirkland, K., & Hollandsworth, J. C. (1980). Effective test taking: Skills-acquisition versus anxiety-reduction techniques. Journal of Consulting and Clinical Psychology, 48, 431-439.

Matz, M. (1982). Toward a process model for high school algebra errors. In D. Sleeman & J.
S. Brown (Eds.), Intelligent tutoring systems. New York: Academic Press.

Mueller, J. H. (1980). Test anxiety and the encoding and retrieval of information. In I. G. Sarason (Ed.), Test anxiety: Theory, research, and applications (pp. 63-86). Hillsdale, NJ: Erlbaum.

Nasser, F. (1989). Effects of sex, test anxiety, and item sequence on performance on a diagnostic test in exponents. Unpublished master's thesis, Tel Aviv University, Israel (in Hebrew).

Naveh-Benjamin, M., Lin, Y.-G., & McKeachie, W. J. (1989). Development of cognitive structures in three academic disciplines and their relations to students' study skills, anxiety, and motivation: Further use of the ordered-tree technique. Journal of Higher Education Studies, 4, 10-15.

Naveh-Benjamin, M., McKeachie, W. J., & Lin, Y.-G. (1987). Two types of test-anxious students: Support for an information processing model. Journal of Educational Psychology, 79, 131-136.

Paulman, R. G., & Kennelly, K. J. (1984). Test anxiety and ineffective test taking: Different names, same construct. Journal of Educational Psychology, 76, 279-288.

Payne, S. J., & Squibb, H. R. (1990). Algebra mal-rules and cognitive accounts of error. Cognitive Science, 14, 445-481.

Phillips, B. N., Pitcher, G. D., Worsham, M. E., & Miller, S. C. (1980). In I. G. Sarason (Ed.), Test anxiety: Theory, research, and applications (pp. 327-348). Hillsdale, NJ: Erlbaum.

Sarason, I. G. (1984). Stress, anxiety, and cognitive interference: Reactions to tests. Journal
of Personality and Social Psychology, 46, 929-938.

Seipp, B. (1991). Anxiety and academic performance: A meta-analysis of findings. Anxiety Research, 4, 27-41.

Sleeman, D. (1984). An attempt to understand students' understanding of basic algebra. Cognitive Science, 8, 387-412.

SPSSX Users Guide. (1986). Des Moines, IA: Prentice-Hall.

Tatsuoka, K. K., Birenbaum, M., & Arnold, J. (1989). On the stability of students' rules of operation for solving arithmetic problems. Journal of Educational Measurement, 26, 351-361.

Tobias, S. (1980). Anxiety and instruction. In I. G. Sarason (Ed.), Test anxiety: Theory, research, and applications (pp. 289-310). Hillsdale, NJ: Erlbaum.

Tobias, S. (1985). Test anxiety: Interference, defective skills, and cognitive capacity.
Educational Psychologist, 20, 135-142.

VanLehn, K. (1990). Mind bugs: The origins of procedural misconceptions. Cambridge, MA: The MIT Press.

Wahlstrom, M., & Boersma, F. J. (1968). The influence of test-wiseness upon achievement. Educational and Psychological Measurement, 28, 413-420.

Wine, J. D. (1971). Test anxiety and direction of attention. Psychological Bulletin, 76, 92-104.

Wine, J. D. (1980). Overview. In I. G. Sarason (Ed.), Test anxiety: Theory, research, and applications (pp. 349-385). Hillsdale, NJ: Erlbaum.

Wittmaier, B. (1972). Test anxiety and study habits. Journal of Educational Research, 65,
852-854.

~~~~~~~~

By Menucha Birenbaum and Fadia Nasser

Menucha Birenbaum is a senior lecturer at the School of Education, Tel Aviv University, Israel. Fadia Nasser is a math and science teacher at Tira High School in Tira, Israel.

