BRACHA KRAMARSKI
VERED DUDAI
Bar-Ilan University
ABSTRACT
This exploratory study investigated 100 Israeli 9th graders who used two
different group-metacognitive support methods in online mathematical
inquiry—group feedback guidance (GFG) and self-explanation guidance
(SEG)—compared to a control group (CONT). The study evaluated each
method’s effects on students’: (a) mathematical inquiry ability: problem
solving, explanations, mathematical feedback in online forum discussions,
and transfer ability; and (b) self-regulated learning (SRL) measures (self-
report questionnaires and metacognitive feedback in online forum discus-
sions). Metacognitive support methods, based on IMPROVE self-questioning
strategies, appeared in pop-up screens and provided the two experimental
groups with differential cues for problem-solving processes. Mixed quanti-
tative and qualitative analyses showed that GFG students outperformed
SEG students in most mathematical and SRL measures and the CONT
students in all measures. In addition, SEG students outperformed the CONT
students in mathematical problem-solving but not on mathematical transfer
ability or SRL.
In recent years, the role of self-regulated learning (SRL) in education has elicited
much interest. Research has focused on students’ SRL and subject-matter knowl-
edge as means to attain successful learning. Students are considered self-regulated
SRL refers to a cyclical and recursive process that utilizes feedback mech-
anisms for students to understand, control, and adjust their learning accordingly
(e.g., Butler & Winne, 1995; Zimmerman, 2000). In terms of cognitive and
metacognitive processes, self-regulated students are good strategy users. They
plan, set goals, select strategies, organize, monitor, and evaluate at various
points during the acquisition process (Pintrich, 2000; Zimmerman, 2000).
Research has indicated that self-explanation and feedback are both important
metacognitive strategies in the SRL process. Several studies have shown that
students learn better when they explain instructional materials to themselves
(e.g., Aleven & Koedinger, 2002; Renkl, 2002), or when students explain their
own problem-solving steps (e.g., Kramarski & Mevarech, 2003). Chi (2000)
believes that explanation helps participants organize knowledge, thus augment-
ing incomplete “mental models.” She suggests that children’s provision of
explanations enables them to combine new information with existing knowl-
edge to gain more complete representations. Also, some evidence indicates that,
depending on their explanations’ quality, students learn more when they provide
explanations than when they receive explanations (Webb, 1989).
According to Butler and Winne (1995), feedback serves a multidimensional
role in aiding knowledge construction. Feedback may encourage students to
engage in reflection on why their solution or explanation is wrong, and thereby
to update their solution strategies. Feedback stimulates students to adopt more
metacognitive strategies in their learning tasks. Research on feedback in
computer-based learning environments has shown differential effects for feedback
strategies on students’ learning. Corrective feedback helps immediate learning,
whereas guided and metacognitive feedback helps ensure deep understanding
and the ability to transfer knowledge (Aleven & Koedinger, 2002; Azevedo
& Bernard, 1995; Kramarski & Zeichner, 2001; Moreno, 2004). However,
most studies examined effects of metacognitive feedback provided by external
agents like computers, whereas little research was conducted on effects of
student exchanges of group-metacognitive feedback in online mathematical
inquiry discussion.
Furthermore, the literature indicates that learning either through self-
explanations or feedback is difficult. Not all students use these strategies spon-
taneously. It remains an open question as to how guidance methods that scaffold
or emphasize self-explanation or feedback compare to guidance methods that do
aspects. Such aspects include practice of tasks and group discussion between
peers of comparable expertise, thus making monitoring and regulation processes
overt (Brown & Campione, 1994).
In general, research reported that metacognitive support with IMPROVE self-
questioning demonstrated positive effects on students’ learning outcomes and
SRL processes in different learning environments (e.g., Kramarski & Mevarech,
2003; Kramarski & Mizrachi, 2006). Most studies regarding self-questioning
(e.g., Ge & Land, 2003; Kauffman, Ge, Xie, & Chen, 2008; King, 1991;
Kramarski & Zoltan, 2008; Schoenfeld, 1992) examined this strategy’s effects
while directing self-questions throughout the solution process (e.g., “What is the
problem?”). However, no research on supportive group metacognition has explored two differential self-questioning guidance strategies—self-explanation and group feedback—in online mathematical inquiry, compared with a control group.
METHOD
Participants
Participants were 100 ninth-grade students (47 boys, 53 girls) attending three
classes within one junior high school in central Israel. We assigned each class
to one of the three instructional methods: SEG, GFG, or CONT. The three classes
were heterogeneous in terms of math ability. No statistical differences in
mathematical knowledge were found between the three groups at pretest (see
Results section).
Instruction Methods
The training procedure for both metacognitive strategies was applied over 5
weeks with small groups of four students each within each class, for a total of six
lessons per group. The procedure comprised two stages, both implemented in the
computer lab: pre-study preparation and study guidance.
Stage 1: Pre-study preparation—A week before the study, each of the two
groups (SEG and GFG) was separately exposed to a 1-day, 90-minute preparation
for online inquiry in mathematics (comprising two class lessons) that addressed
problem solving, explanations, and forum discussions.
In the first lesson (45 min.), focusing on problem solving, each group was
trained with the IMPROVE metacognitive self-questioning method to enhance
the problem-solving process within small groups (Kramarski & Gutman, 2006;
Kramarski & Mevarech, 2003; Kramarski & Mizrachi, 2006). The teacher
verbally explained the importance of IMPROVE metacognitive self-questioning
by using pop-up comprehension, connection, strategy, and reflection questions
(see Figure 1). She modeled the use of such questions during problem solving
of authentic tasks. Students in both groups then practiced these questions with computerized materials during online discussion in small four-member groups throughout the inquiry process.
In the second lesson (45 min.), which focused either on constructing explan-
ations or on providing/receiving online feedback, the two groups’ training
differed. In the SEG group, the teacher verbally explained the importance of
constructing mathematical knowledge with explanations. The SEG students
practiced self-questioning techniques related to explanations and justifications
by using pop-ups (see Figure 2), for example: (1) “Is my use of mathematical expressions correct?”; (2) “Would another argument be appropriate?”; (3) “What is my conclusion?”; and (4) “Is my explanation clear?”
Similarly, the GFG group was exposed verbally to the importance of group
feedback for constructing mathematical knowledge, and the teacher explicitly
modeled how to provide such feedback. Using pop-ups (see Figure 3), students
practiced self-questioning techniques related to the provision/receipt of feedback,
for example:
1. “Did I read my friend’s solution?”;
2. “Did I check if my friend’s explanation is correct and clear?”;
3. “How can I respond to my friend regarding the correctness of his/her
solution/explanation?”; and
4. “How can I modify my friend’s solution and explanations?”
In addition, both groups received computerized examples. The SEG students
received examples of explanations, and the GFG students received feedback
downloaded mathematical tasks from the school website. Tasks were based on
the Programme for International Student Assessment’s (PISA, 2003) con-
ceptual framework for solving authentic tasks that required different levels of
algebraic abilities.
Each student in both groups was asked to solve the weekly task, send his/her
solution to the forum, provide feedback for the forum partners’ solutions, receive
feedback for his/her own solution, and adjust his/her solution (if needed) accord-
ing to the peers’ suggestions. During the solution process, students received
the metacognitive self-questions (focusing either on the self-explanation or the
feedback strategy) as an electronic message by a pop-up screen at certain times.
Students were asked to use these prompts during the solution.
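To make the contrast between the two guidance conditions concrete, the sketch below encodes the two prompt sets (quoted from this section) and returns one prompt at a given point in the solution process. It is only an illustration of the design, not the authors' actual system; all names (PROMPTS, show_popup) and the scheduling logic are hypothetical.

```python
# Illustrative sketch (not the authors' system): the two experimental
# conditions differ only in the content of the metacognitive pop-up
# prompts delivered during the online solution process.

PROMPTS = {
    "SEG": [  # self-explanation guidance (prompts quoted from the article)
        "Is my use of mathematical expressions correct?",
        "Would another argument be appropriate?",
        "What is my conclusion?",
        "Is my explanation clear?",
    ],
    "GFG": [  # group feedback guidance (prompts quoted from the article)
        "Did I read my friend's solution?",
        "Did I check if my friend's explanation is correct and clear?",
        "How can I respond to my friend regarding the correctness of "
        "his/her solution/explanation?",
        "How can I modify my friend's solution and explanations?",
    ],
}

def show_popup(guidance: str, step: int) -> str:
    """Return the prompt shown at a given point in the solution process."""
    prompts = PROMPTS[guidance]
    return prompts[step % len(prompts)]

if __name__ == "__main__":
    for step in range(4):
        print("SEG pop-up:", show_popup("SEG", step))
        print("GFG pop-up:", show_popup("GFG", step))
```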
The teacher encouraged each student to provide feedback to the other three
students in the forum about their inquiry process. At the end of each weekly
meeting, each group sent their solutions and feedback exchanges to the teacher,
who then gave online feedback individually within 24 hours regarding the
whole inquiry process: accuracy of student’s solution, problem-solving process,
explanations, and feedback provision in the forum.
Control Group
The CONT group was also exposed to a pre-study preparation that referred
to mathematical problem solving using a socio-cognitive perspective. Before the
study, the teacher held in-class discussions that referred to the importance of
mathematical inquiry, authentic problem solving, mathematical explanations, and
discussing solutions with peers, but students were not explicitly exposed to any
metacognitive guidance methods. Their study guidance stage began in the first
week of the 5-week study period, with two class lessons on the first day, for a
total of six lessons. The CONT students practiced the same tasks as the two
metacognitive online groups in the computer lab. Students downloaded their tasks
from the school website, and then each task was presented frontally in the class
and solved using paper and pencil.
Students were encouraged to solve the tasks cooperatively, and they were
allowed to ask for peers’ help online (e-mail, forum). They submitted each
solution individually to the teacher electronically by e-mail or online forum.
The teacher then gave online feedback individually within 24 hours regarding
the accuracy of each student’s solution and problem-solving process.
Measures
Five kinds of measures were administered. All three groups completed
a paper-and-pencil mathematical pre-knowledge test (pretest), an authentic
mathematical problem-solving task (posttest), a paper-and-pencil transfer test
(at posttest), and a pre/post SRL self-report measure. For GFG and SEG students
only, data were collected on two types of student feedback (mathematical and
metacognitive) given during the posttest of the online inquiry discussion of the
authentic mathematical problem-solving task.
Mathematical Measures
either 1 (correct answer) or 0 (incorrect answer), and a total score ranging from
0 to 10. We translated the scores into percentages. Cronbach alpha reliability
coefficient was .81.
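The reported reliability can in principle be recomputed from a students × items score matrix. The following is a minimal sketch of Cronbach's alpha on synthetic dichotomous item scores, assuming NumPy is available; the data and function name are illustrative, not the study's data.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (students x items) matrix of scores."""
    k = item_scores.shape[1]                         # number of items
    item_var = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var.sum() / total_var)

# Synthetic example: 100 students x 10 dichotomously scored items.
rng = np.random.default_rng(0)
ability = rng.normal(size=(100, 1))
scores = (rng.normal(size=(100, 10)) + ability > 0).astype(float)
print(round(cronbach_alpha(scores), 2))
```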
knowledge and skills learned in the instructional unit to the more formal, less
authentic tasks typifying their usual math tasks in school. In addition, the test
examined high-order skills (problem posing) that students did not practice. Thus, for all groups, the test can be considered far transfer in both context (formal tasks) and skills (posing a problem). The test addressed three kinds of skills:
procedural (9 items), problem-solving (8 items), and constructing a mathematical
problem using a given formula (1 item). In addition, students’ ability to provide
mathematical explanations was assessed (9 items). Cronbach alpha reliability
for the entire test was .86.
For each item, students received 1 point for providing the correct solution (or
0 for an incorrect solution). Thus, total scores ranged from 0 to 9 for solution
accuracy of procedural skills, 0 to 8 for problem-solving skills, and 0 to 1 for
posing a mathematical problem. Students also received an explanation grade
of either 1 (entirely correct argument) or 0 (incorrect argument) for each item.
Thus, a total score ranged from 0 to 9 for explanations’ accuracy. We converted
the scores into percentages.
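As a concrete illustration of this scoring scheme, the sketch below sums the per-item 0/1 scores for each skill and converts each subscale total into a percentage of its maximum. The subscale names, the example student, and the helper function are hypothetical; only the subscale maxima (9, 8, 1, and 9 items) come from the text.

```python
# Minimal sketch of the scoring scheme described above (hypothetical names;
# not the authors' scoring code). Each item is scored 1/0, subscale totals
# are summed, and totals are converted to percentages of the subscale maximum.

SUBSCALE_MAX = {"procedural": 9, "problem_solving": 8, "problem_posing": 1,
                "explanations": 9}

def percentage_scores(item_scores: dict[str, list[int]]) -> dict[str, float]:
    """Convert per-item 0/1 scores to a percentage for each subscale."""
    return {skill: 100.0 * sum(scores) / SUBSCALE_MAX[skill]
            for skill, scores in item_scores.items()}

# Example: one student's (made-up) item scores.
student = {
    "procedural":      [1, 1, 1, 0, 1, 1, 1, 1, 0],
    "problem_solving": [1, 0, 1, 1, 0, 1, 1, 0],
    "problem_posing":  [1],
    "explanations":    [1, 1, 0, 0, 1, 0, 1, 0, 0],
}
print(percentage_scores(student))
```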
SRL Measures
was 0.87. The Appendix (right-hand column) exemplifies scoring for meta-
cognitive feedback categories based on excerpts from online discussion.
Procedure
Instruction began in classrooms in the second academic semester after obtaining
consent from teachers and the school principal, and continued for 5 weeks. At
pretest, all students completed the paper-and-pencil pre-knowledge mathematics
test. The SRL self-report was administered to all students at pretest and posttest by
the teachers in their classrooms. GFG and SEG students’ feedback (mathematical
and metacognitive) during the online forum discussion of the authentic task was
analyzed. At posttest, a near-transfer authentic mathematical inquiry task (PISA,
2003) was administered to all students: online for GFG and SEG students and
as a paper-and-pencil task for CONT students. Also at posttest, all students
completed a paper-and-pencil test of more formal mathematical problem-solving (far) transfer tasks. Students were told that the questionnaire’s purpose was to
determine the effectiveness of learning materials in mathematics.
RESULTS
Mathematical Measures
Pre-Knowledge Mathematical Test
To examine the GFG, SEG, and CONT students’ mathematical problem solv-
ing and mathematical explanation skills, we performed one-way ANOVAs on
students’ solution accuracy, explanation’s mathematical accuracy, and explan-
ation quality scores for the authentic mathematical inquiry task. Table 1 presents
means and standard deviations of task scores by guidance method.
Results indicated significant differences between the groups (GFG, SEG, and
CONT) in accuracy of mathematical problem solving, F(2, 97) = 7.96, p < .001,
η² = 0.25. Effect-size analysis indicated that the GFG students significantly
outperformed both the SEG students (Cohen’s d = 0.58) and the CONT students
(Cohen’s d = 0.92) in their accuracy of mathematical problem-solving. The SEG
students also outperformed the CONT students (Cohen’s d = 0.35). Significant
differences also emerged between the groups in providing accurate mathematical
explanations, F(2, 97) = 9.73, p < .001, η² = 0.28. The GFG students significantly
outperformed both the SEG students (Cohen’s d = 0.63) and the CONT students
(Cohen’s d = 1.33). Also, the SEG students outperformed the CONT students
(Cohen’s d = 0.70).
Table 1. Means and Standard Deviations of Authentic Task Scores, by Guidance Method

                                                Group feedback   Self-explanations   Control
Task                                               (n = 32)          (n = 32)        (n = 36)
Solution accuracy
  M                                                  86.71             74.48           67.43
  SD                                                 19.90             23.87           19.30
Explanation accuracy (mathematical > non-mathematical)
  M                                                  49.88             37.79           24.33
  SD                                                 21.09             20.17           16.54
Explanation quality (conceptual > procedural)
  M                                                  68.3              60.4            50.2
  SD                                                 14.3              13.8            15.2

Note: Range 0-100.
Further analysis of only those explanations that were correct indicated sig-
nificant differences between groups on argument quality, F(2, 97) = 3.97, p < .05,
η² = 0.16. The GFG students significantly outperformed both the SEG students
(Cohen’s d = 0.53) and the CONT students (Cohen’s d = 1.26), providing
more conceptual and fewer procedural arguments. Also, SEG students outper-
formed CONT students (Cohen’s d = 0.71).
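For readers who want to reproduce this kind of analysis, the sketch below runs a one-way ANOVA and computes eta squared and Cohen's d (pooled-SD form) on synthetic scores drawn from the solution-accuracy means and standard deviations in Table 1. It assumes NumPy and SciPy are available and is illustrative only; it uses generated data, not the study's data, so the exact statistics will differ.

```python
import numpy as np
from scipy import stats

def eta_squared(*groups):
    """Eta squared = SS_between / SS_total for a one-way design."""
    all_scores = np.concatenate(groups)
    grand_mean = all_scores.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = ((all_scores - grand_mean) ** 2).sum()
    return ss_between / ss_total

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Synthetic solution-accuracy scores (illustrative only, n = 32/32/36),
# generated from the group means/SDs reported in Table 1.
rng = np.random.default_rng(1)
gfg = rng.normal(86.7, 19.9, 32)
seg = rng.normal(74.5, 23.9, 32)
cont = rng.normal(67.4, 19.3, 36)

f_value, p_value = stats.f_oneway(gfg, seg, cont)
print(f"F = {f_value:.2f}, p = {p_value:.4f}, "
      f"eta^2 = {eta_squared(gfg, seg, cont):.2f}")
print(f"d(GFG vs SEG) = {cohens_d(gfg, seg):.2f}, "
      f"d(GFG vs CONT) = {cohens_d(gfg, cont):.2f}")
```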
Mathematical Feedback
To compare the GFG and SEG groups’ ability to provide effective mathematical
feedback during online inquiry discussion, we performed a multivariate analysis
of variance (MANOVA), followed by an ANOVA for each feedback category.
Table 2 presents means and standard deviations of online mathematical feedback
by guidance method.
Table 2. Means and Standard Deviations of Online Mathematical Feedback, by Guidance Method

                                  Group feedback   Self-explanations
Feedback category                    (n = 32)          (n = 32)
Mathematical terms
  M                                    7.08             14.09
  SD                                  15.02             19.2
Mathematical representations
  M                                   26.04             18.93
  SD                                  12.71              7.60
Mathematical explanations
  M                                   25.06             16.15
  SD                                  18.19             13.48
Non-mathematical statements
  M                                   16.61             19.80
  SD                                  10.90             15.47

Note: Scores are percentages, calculated as the total references provided for each category divided by the total statements in the forum, multiplied by 100.

The MANOVA on providing mathematical feedback yielded significant differences between groups, F(4, 59) = 7.68, p < .01, η² = 0.24. Further ANOVA analysis indicated significant differences between groups in four of the five feedback categories: mathematical terms, F(1, 62) = 5.03, p < .05, η² = 0.16; mathematical representations, F(1, 62) = 11.15, p < .001, η² = 0.32; mathematical explanations, F(1, 62) = 7.46, p < .001, η² = 0.28; and final solution accuracy, F(1, 62) = 4.90, p < .05, η² = 0.14. The GFG students significantly outperformed the SEG students in giving feedback (see Table 2) that referred to mathematical representations (Cohen’s d = 0.70) and to mathematical explanations (Cohen’s d = 0.56). However, the SEG group outperformed the GFG group in referring to mathematical terms (Cohen’s d = 0.40) and in accuracy of the final solution (Cohen’s d = 0.32). No differences between groups emerged regarding non-mathematical statements, F(1, 62) = 2.58, p > .05, Cohen’s d = 0.24, η² = 0.09.
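The percentage scores described in the note to Table 2 can be computed directly from coded forum statements: each category's count of references is divided by the total number of statements and multiplied by 100. The sketch below illustrates this with made-up coded data; the category labels follow the table, but the function and data structure are hypothetical.

```python
from collections import Counter

def category_percentages(coded_statements: list[list[str]]) -> dict[str, float]:
    """coded_statements[i] holds the category codes assigned to statement i.
    Returns each category's references as a percentage of all statements."""
    total_statements = len(coded_statements)
    counts = Counter(code for codes in coded_statements for code in codes)
    return {cat: 100.0 * n / total_statements for cat, n in counts.items()}

# Example: four coded statements from one (made-up) forum discussion.
forum = [
    ["mathematical_terms", "mathematical_representation"],
    ["mathematical_explanation"],
    ["non_mathematical_statement"],
    ["mathematical_representation"],
]
print(category_percentages(forum))
```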
To examine the GFG, SEG, and CONT students’ ability to transfer mathe-
matical learning to a formal (far) transfer test, we performed a MANOVA,
Means and Standard Deviations of (Far) Transfer Test Scores, by Guidance Method

                             Group feedback   Self-explanations   Control
Skill                           (n = 32)          (n = 32)       (n = 36)
Procedural
  M                              89.17             88.78           86.72
  SD                             14.30             15.52           14.38
Problem solving
  M                              86.43             71.39           68.53
  SD                             19.90             26.20           22.45
Problem posing
  M                              72.03             52.03           48.34
  SD                             14.62             15.13           16.71
Mathematical explanations
  M                              56.10             41.53           32.72
  SD                             14.70             15.00           16.32

Note: Range 0-100.
SRL Measures
Self-Report Questionnaire
Means and Standard Deviations of SRL Self-Report Scores at Pretest and Posttest, by Guidance Method

                      Group feedback        Self-explanations        Control
                         (n = 32)               (n = 32)            (n = 36)
                      Pre      Post          Pre      Post        Pre      Post
Cognitive
Metacognitive
  Monitoring
    M                 3.81     4.13          3.90     4.03        3.87     4.00
    SD                0.28     0.32          0.31     0.30        0.38     0.35
  Evaluation
    M                 4.06     4.21          4.01     4.11        3.98     4.02
    SD                0.36     0.33          0.34     0.35        0.37     0.38
Total
students’ perceptions of each SRL component: cognitive strategies and the two
metacognitive strategies—monitoring and evaluation, F(2, 97) = 14.64, 7.56, and 5.65, respectively, p < .01; η² = 0.23, 0.21, and 0.16, respectively. However, significant interactions emerged between method and time: for monitoring, F(2, 97) = 4.78, p < .05, η² = 0.19, and for evaluation, F(2, 97) = 3.97, p < .05, η² = 0.16. Findings indicated that at posttest, GFG students reported signifi-
cantly more improvement in their use of self-monitoring and evaluation strategies
(Cohen’s d = 0.86, 0.40, respectively) compared to SEG students (Cohen’s
d = 0.45, 0.29, respectively) and CONT students (Cohen’s d = 0.34, 0.11,
respectively). However, no significant differences emerged between the SEG
and CONT students in their self-perceived monitoring and evaluation (Cohen’s
d = 0.09, 0.26, respectively). No significant interactions emerged between method
and time in improving the use of self-regulated cognitive strategies while problem
solving, F(2, 97) = 0.35, p > .05.
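The pre/post comparisons above rest on within-group gain effect sizes. The sketch below computes one common form of such an effect size (mean gain divided by the pooled SD of pretest and posttest scores) on synthetic Likert-type monitoring scores generated from the group means and standard deviations reported in the self-report table. The article does not state which effect-size formula was used, so that choice is an assumption, and the generated data are illustrative only.

```python
import numpy as np

def prepost_d(pre: np.ndarray, post: np.ndarray) -> float:
    """Within-group pre/post effect size: mean gain divided by the pooled SD
    of pretest and posttest scores (one common convention; assumed here)."""
    pooled = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2.0)
    return (post.mean() - pre.mean()) / pooled

# Synthetic self-reported monitoring scores (illustrative only), generated
# from the group means/SDs in the self-report table (n = 32 per group).
rng = np.random.default_rng(2)
gfg_pre, gfg_post = rng.normal(3.81, 0.28, 32), rng.normal(4.13, 0.32, 32)
seg_pre, seg_post = rng.normal(3.90, 0.31, 32), rng.normal(4.03, 0.30, 32)

print("GFG monitoring gain d:", round(prepost_d(gfg_pre, gfg_post), 2))
print("SEG monitoring gain d:", round(prepost_d(seg_pre, seg_post), 2))
```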
Metacognitive Feedback
DISCUSSION
The purpose of this exploratory study was to comprehensively investigate
online mathematical inquiry (problem solving, explanations, and feedback using
forum discussions) and transfer ability and SRL processes among students
exposed to one of two metacognitive support methods (SEG or GFG), as com-
pared to a control group. We found that students exposed to online guidance
in providing and receiving group feedback (GFG), based on the IMPROVE
self-questioning strategy, significantly outperformed students of the other two
groups: SEG students who also studied online but focused on metacognitive
guidance regarding explanation skills, and CONT students who received no
special metacognitive guidance. Better performance was notable in the GFG
group for mathematical problem-solving, and for the ability to provide higher-order conceptual explanations based on logical-formal conclusions. Furthermore, the SEG students outperformed the CONT students on the same measures.

Means and Standard Deviations of Online Metacognitive Feedback, by Guidance Method

                      Group feedback   Self-explanations
Feedback category        (n = 32)          (n = 32)
Planning
  M                       32.84             39.48
  SD                      19.02             18.97
Monitoring
  M                       25.72             22.96
  SD                       9.12              8.61
Debugging
  M                       16.36             14.83
  SD                       4.11              5.14
Evaluation
  M                       25.08             22.73
  SD                       5.17              6.68

Note: Scores are percentages, calculated as the total references provided for each category divided by the total statements in the forum, multiplied by 100.
Students’ online mutual mathematical feedback while attempting to solve an
authentic mathematical task indicated a significant difference between the GFG
and SEG groups’ ability to articulate their thoughts in mathematical terms. The
GFG students displayed a higher tendency to refer to mathematical representations
and explanations. The SEG students referred more often to mathematical terms,
and to the solution’s accuracy.
Moreover, GFG students outperformed SEG students and CONT students
in their ability to (far) transfer their learning to various tasks that required
high-order skills in a paper-and-pencil test, such as the ability to pose a mathe-
matical problem. No significant differences between the SEG and CONT students
emerged in transferring their problem-solving skills to new tasks. However,
the SEG students did outperform the CONT students in one measure, the quality
of their explanations for the new tasks, providing conceptual rather than merely
procedural justifications for their solutions to transfer tasks.
support is removed (e.g., Biswas, Schwartz, Leelawong, & Vye, 2005; Davis &
Linn, 2000; King, 1991; Kramarski & Gutman, 2006; Kramarski & Mizrachi,
2006; Moreno, 2004). Our outcomes indicated positive results for both meta-
cognitive support methods (GFG and SEG) on the paper-and-pencil transfer
measures. The GFG students succeeded more in problem-solving processes and
conceptual mathematical explanations compared to the other groups; however,
the SEG students succeeded more in conceptual mathematical explanations
compared to the CONT students. The SEG group’s mathematical explanations
advantage can be explained by the fact that providing explanations can be con-
sidered near transfer as it resembles the type of training SEG students received.
In contrast, the CONT students exhibited the lowest gain in transfer ability
despite the fact that the transfer test was provided in the same paper-and-pencil
format in which they trained (near-transfer).
As conceptualized by Cooper and Sweller (1987), three variables contribute
to transfer ability: learners must master strategies for problem solving, develop
categories for sorting tasks that lead to similar solutions, and know how to
relate to previous knowledge for solving novel tasks. It seems that the SRL and
mathematical conceptual understandings that were empowered by the GFG meta-
cognitive support were a springboard for transferring students’ learning ability
to various tasks that required high-order skills (posing a problem) in a new
situation (paper-and-pencil) and in another context (formal task).
APPENDIX
Example of Online Discussion Feedback on Mathematical and Metacognitive Aspects

Feedback excerpt 1: “. . . with regard to the pattern, you wrote that each line of apple trees increases by 8 trees (i.e., n + 8). However, the pattern should be n*8. Please correct the mistake.”
  Mathematical feedback type: mathematical terms; mathematical representation
  Metacognitive feedback type: debugging

Feedback excerpt 2: “You are great. You modified your solution. Now this is a good piece of work. The table you added helped to understand the algebraic pattern, the wording is excellent. I really enjoyed your work.”
  Mathematical feedback type: mathematical representation; mathematical explanation; non-mathematical statement
  Metacognitive feedback type: evaluation

Note: Feedback refers to the question: Is it possible that the apple trees’ number equals the conifer trees’ number, for any number (n) of rows of apple trees? Explain how you found your answer.
REFERENCES
Aleven, V., & Koedinger, K. R. (2002). An effective metacognitive strategy: Learning by
doing and explaining with a computer-based cognitive tutor. Cognitive Science, 26,
147-179.
Artz, A., & Yaloz-Femia, S. (1999). Mathematical reasoning during small-group problem
solving. In L. Stiff & F. Curio (Eds.), Developing mathematical reasoning in grades
K-12: 1999 yearbook of National Council of Teachers of Mathematics (pp. 115-126).
Reston, VA: National Council of Teachers of Mathematics.
Azevedo, R. (2005). Computer environments as metacognitive tools for enhancing learn-
ing. Educational Psychologist, 40(4), 193-197.
Azevedo, R., & Bernard, R. M. (1995). A meta-analysis of the effects of feedback
in computer-based instruction. Journal of Educational Computing Research, 13(2),
111-127.
Azevedo, R., & Cromley, J. G. (2004). Does training of self-regulated learning facili-
tate student’s learning with hypermedia? Journal of Educational Psychology, 96(3),
523-535.
Biswas, G., Schwartz, D., Leelawong, K., & Vye, N. (2005). Learning by teaching: A
new agent paradigm for educational software. Applied Artificial Intelligence, 19(3),
363-392.
Brown, A. L. (1987). Metacognition, executive control, self-regulation and other mysteri-
ous mechanisms. In F. Weinert & R. Kluwe (Eds.), Metacognition, motivation and
understanding (pp. 65-115). Hillsdale, NJ: Erlbaum.
Brown, A. L., & Campione, J. C. (1994). Guided discovery in a community of learners.
In K. McGilly (Ed.), Classroom lessons: Integrating cognitive theory with classroom
practice (pp. 229-270). Cambridge, MA: MIT Press.
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical
synthesis. Review of Educational Research, 65(3), 245-281.
Chi, M. T. H. (2000). Self-explaining expository texts: The dual processes of generating
inferences and repairing mental models. In R. Glaser (Ed.), Advances in instructional
psychology (pp. 161-237). Mahwah, NJ: Erlbaum.
Cooper, G., & Sweller, J. (1987). Effects of schema acquisition and rule automation
on mathematical problem-solving transfer. Journal of Educational Psychology, 79,
347-362.
Davis, E. A., & Linn, M. C. (2000). Scaffolding students’ knowledge integration: Prompt
for reflection in KIE. International Journal of Science Education, 22(8), 819-837.
Flavell, J. (1979). Metacognition and cognitive monitoring: A new area of cognitive-
developmental inquiry. American Psychologist, 34, 906-911.
Ge, X., Chen, C-H., & Davis, K. A. (2005). Scaffolding novice instructional designers’
problem-solving processes using question prompts in a web-based learning environ-
ment. Journal of Educational Research, 33(2), 219-248.
Ge, X., & Land, S. M. (2003). Scaffolding students’ problem-solving processes in
an ill-structured task using question prompts and peer interactions. Educational
Technology Research and Development, 51(1), 21-38.
Goos, M., Galbraith, P., & Renshaw, P. (2002). Socially mediated metacognition:
Creating collaborative zones of proximal development in small group problem solving.
Educational Studies in Mathematics, 49, 193-223.
SUPPORT FOR ONLINE MATHEMATICAL INQUIRY / 403
Kauffman, D. F., Ge, X., Xie, K., & Chen, C. H. (2008). Prompting in web-based environ-
ments: Supporting self-monitoring and problem solving in college students. Journal
of Educational Computing Research, 38(2), 115-137.
King, A. (1991). Effects of training in strategic questioning on children’s problem-solving
performance. Journal of Educational Psychology, 83(3), 307-317.
King, A. (1992). Facilitating elaborative learning through guided student-generated ques-
tioning. Educational Psychologist, 27(1), 111-126.
Kramarski, B., & Gutman, M. (2006). How can self-regulated learning be supported in
mathematical e-learning environments? Journal of Computer Assisted Learning, 22,
24-33.
Kramarski, B., & Mevarech, Z. R. (2003). Enhancing mathematical reasoning in the
classroom: Effects of cooperative learning and metacognitive training. American
Educational Research Journal, 40(1), 281-310.
Kramarski, B., & Mizrachi, N. (2006). Online discussion and self-regulated learning:
Effects of instructional methods on mathematical literacy. Journal of Educational
Research, 99(4), 218-230.
Kramarski, B., & Zeichner, O. (2001). Using technology to enhance mathematical reason-
ing: Effects of feedback and self-regulation learning. Educational Media International,
38(2/3), 77-82.
Kramarski, B., & Zoltan, S. (2008). Using errors as springboards for enhancing mathe-
matical reasoning with three metacognitive approaches. The Journal of Educational
Research, 102(2), 137-151.
Lampert, M. (1990). When the problem is not the question and the solution is not the
answer: Mathematical knowing and teaching. American Educational Research
Journal, 27(1), 29-63.
Lin, X. (2001). Designing metacognitive activities. Educational Technology Research
and Development, 49, 23-40.
McClain, K., & Cobb, P. (2001). An analysis of development of sociomathematical norms
in one first-grade classroom. Journal for Research in Mathematics Education, 32,
236-266.
Mevarech, Z. R., & Kramarski, B. (1997). IMPROVE: A multidimensional method for
teaching mathematics in heterogeneous classrooms. American Educational Research
Journal, 34, 365-394.
Montague, M., & Bos, C. S. (1990). Cognitive and metacognitive characteristics of
eighth-grade students’ mathematical problem solving. Learning and Individual
Differences, 2, 371-388.
Moreno, R. (2004). Decreasing cognitive load for novice students: Effects of explanatory
versus corrective feedback in discovery-based multimedia. Instructional Science, 32,
99-113.
National Council of Teachers of Mathematics (NCTM). (2000). Principles and standards
for school mathematics. Reston, VA: Author.
Nussbaum, E. M., & Sinatra, G. M. (2002). On the opposite side: Argument and conceptual
engagement in physics. Paper presented at the meeting of the American Educational
Research Association, New Orleans, LA.
Oh, S., & Jonassen, D. H. (2007). Scaffolding online argumentation during problem
solving. Journal of Computer Assisted Learning, 23(2), 95-110.