Jonassen - Designing Effective Supports For Causal Reasoning
DOI 10.1007/s11423-006-9021-6
DEVELOPMENT ARTICLE
Abstract Causal reasoning represents one of the most basic and important
cognitive processes that underpin all higher-order activities, such as concep-
tual understanding and problem solving. Hume called causality the ‘‘cement
of the universe’’ (Hume, 1739/2000). Causal reasoning is required for making
predictions, drawing implications and inferences, and explaining phenomena.
Causal relations are usually more complex than learners understand. In order
to be able to understand and apply causal relationships, learners must be able
to articulate numerous covariational attributes of causal relationships,
including direction, valency, probability, duration, responsiveness, as well as
mechanistic attributes, including process, conjunctions/disjunctions, and
necessity/sufficiency. We describe different methods for supporting causal
learning, including influence diagrams, simulations, questions, and different
causal modeling tools, including expert systems, systems dynamics tools, and
causal modeling tools. Extensive research is needed to validate and contrast
these methods for supporting causal reasoning.
D. H. Jonassen (&)
University of Missouri, 221C Townsend Hall, Columbia, MO 65211, USA
e-mail: Jonassen@missouri.edu
I. G. Ionas
University of Missouri, 111 London Hall, Columbia, MO 65211, USA
e-mail: IGIonas@mizzou.edu
123
288 D. H. Jonassen, I. G. Ionas
Introduction
Concepts (also known as schemas) are the bases of human understanding and
reasoning. Concepts represent human interpretations of things in the world and
of ideas they construct. Concept categorization is the most pervasive cognitive
process in everyday life (Rehder, 2003) as we access terms to describe what we
are thinking or interpret what others say to us.
Concepts are rarely accessed or applied individually. Rather, most forms of
understanding require humans to learn and apply dynamic combinations of
concepts (Jonassen, 2006a). Those combinations of concepts define various
propositions. Propositions are sets of concepts that are connected by asso-
ciative and dynamic relationships. The most common type of conceptual
proposition that underlies all thinking is causal (Keil, 1989; Carey, 2002).
Causal propositions include a set of conditions and a set of effects connected
by a causal relationship. For example, we are all aware that applying force to
the accelerator in our cars (causal condition or agent) results in the car
accelerating (causal event or effect). If our goal is to learn how automobiles
function, however, that proposition is oversimplified. So we must learn the
complex causal propositions that describe the causal relationships among the
accelerator cable, fuel injectors, pistons, transmission, wheel rotation, and so
on. The ability to reason from any set of concepts depends on the viability of
the propositions that are formed by combinations of concepts (Jonassen,
2006a). In this paper, we describe the components and complexities of causal
reasoning as well as methods for designing instruction to support causal rea-
soning.
Causal reasoning, based on causal propositions, provides the foundation for
most complex forms of reasoning and conceptual development in all domains.
In this paper, we first describe the cognitive purposes for learning to reason
causally. As shown in Fig. 1, causal reasoning enables us to make predictions,
implications, inferences, and explanations, all of which are necessary for
conceptual change and problem solving. Next, we explicate the complexity of
causal reasoning by describing the covariational and mechanistic attributes of
causal relationships (see Fig. 1). Finally, we describe various methods for
helping learners to articulate causal relationships in the domains they are
studying.
Prediction
Implications
Inference
When an outcome or state exists for which the causal agent is unknown, then
an inference is required. That is, reasoning backwards from effect to cause
requires the process of inference. A primary function of inferences is diag-
nosis. Diagnostic causal reasoning is predominantly deterministic because
only a determinable number of causes can be inferred to produce the effect.
That is, the effect is already known with a fair amount of certainty and
therefore only a limited number of causes can be inferred for the specific
conditions in which it occurred. Diagnosis is the identification of a cause, an
origin, or a reason for something that has occurred. In medicine, diagnosis
seeks to identify the cause or origin of a disease or disorder. For example,
based on a patient’s symptoms, historical factors, and test results that are
thought to be abnormal, a physician attempts to infer the cause(s) of that
illness state. Medical specialties are based in part on
an understanding of different subsystems in the human body and the cause–
effect relationships that determine various states. Likewise, automobile
mechanics specialize in certain makes of cars because of the continuously
increasing complexity of the causal (functional) connections in different
makes of automobiles. Inferences are required to
associate effects with possible causes.
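The backward, effect-to-cause pattern of diagnostic inference described above can be sketched as a lookup-and-intersect procedure; the effect and cause names below are hypothetical illustrations, not drawn from the article:

```python
# Map each observable effect to the causes known to produce it
# (hypothetical automotive examples for illustration).
CAUSES_OF = {
    "engine_wont_start": {"dead_battery", "bad_starter", "empty_tank"},
    "lights_dim": {"dead_battery", "bad_alternator"},
    "clicking_noise": {"bad_starter", "dead_battery"},
}

def infer_causes(observed_effects):
    """Diagnostic inference: intersect the candidate-cause sets of all
    observed effects, leaving only causes consistent with every effect."""
    candidate_sets = [CAUSES_OF[e] for e in observed_effects]
    return set.intersection(*candidate_sets)

print(infer_causes(["engine_wont_start", "lights_dim"]))  # {'dead_battery'}
```

Each additional observed effect narrows the candidate set, which mirrors the article's point that only a determinable number of causes can be inferred for a known effect.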
Why is it important to learn how to reason causally? That is, what kinds of
higher-order activities require that learners make predictions, implications,
and inferences?
Explanation
(Klahr, 2000; Zimmerman, 2000; Kuhn, 2002; Kuhn & Dean, 2004). Scientific
explanations make very heavy use of what Aristotle called formal and efficient
causes, which describe the essences of things and the forces that made those
things. Reasoning in the social sciences and humanities addresses human
goals, intentions, motives or purposes and is also subject to formal (teleo-
logical) causes that describe the goals or intentions of causal agents.
Explaining any entity requires more than an awareness of the parts of that
entity. Explanations require functional knowledge of the entity or system
being explained. Functional knowledge includes the comprehension of the
function and structure of the interrelationships among the components in any
system and the causal relationships between them (Sembugmoorthy &
Chandrasekaran, 1986). You cannot fully explain any entity or event without
understanding it causally. For example, the process of diagnosing disease
states requires that physicians’ explanations of diseases include causal net-
works that depict the combination of inferences needed to reach a diagnosis
(Thagard, 2000b). Explanation of phenomena cannot occur without the abil-
ities to predict, implicate, and infer.
Conceptual change
Problem solving
and hypothetical behavior (Yoon & Hammer, 1988). Amsel, Langer, and
Loutzenhiser (1991) found that causal reasoning was essential for all
forms of problem solving, although lawyers’ organization of causal inference
rules differs from that of psychologists and novice adults. Different kinds of
problems in different domains call on different forms of causal reasoning.
In the next section, we describe the nature of causality and the attributes of
causal relationships that enable predictions, implications, inferences, and
explanations (see Fig. 1). These attributes of causal relationships are
covariational (quantitative) and mechanistic (qualitative). We set
the stage by providing background descriptions of causal relationships and of
covariational and mechanistic views.
Causality is the relationship that is ascribed between two or more entities
where one incident, action, or the presence of certain conditions determines
the occurrence or nonoccurrence of another entity, action, or condition. Hume
was one of the first modern philosophers to explore causality. He identified
the important attributes of causation (Hume, 1739/2000, p. 116):
1. ‘‘The cause and effect must be contiguous in space and time.
2. The cause must be prior to the effect.
3. There must be a constant union betwixt the cause and effect.’’
That is, causation arises from the empirical relations of contiguity, temporal
succession, and constant conjunction. Spatiotemporal contiguity refers to the
closeness of causes and effects in space and time. If kicking a ball always
results in ball movement, humans induce a causal
relationship. Temporal succession (also known as temporal priority) claims that causes
always precede effects, not the other way around (Waldmann & Hagmeyer,
2001). Constant conjunction means that they always appear together, that is,
there is a subjectively perceived ‘‘necessary connection’’ between the cause
and the effect. Balls do not move without being kicked.
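Hume's three criteria lend themselves to a simple operational check over event data. A minimal sketch, assuming events arrive as (time, label) pairs, using the ball-kicking example from above:

```python
def satisfies_hume(events, cause, effect, max_gap=1.0):
    """Check Hume's criteria over (time, label) pairs: every occurrence
    of `cause` must be followed (temporal priority) by `effect` within
    `max_gap` time units (contiguity), with no exceptions (constant
    conjunction)."""
    cause_times = [t for t, lbl in events if lbl == cause]
    effect_times = [t for t, lbl in events if lbl == effect]
    return all(
        any(t < te <= t + max_gap for te in effect_times)
        for t in cause_times
    )

events = [(0.0, "kick"), (0.2, "ball_moves"), (5.0, "kick"), (5.3, "ball_moves")]
print(satisfies_hume(events, "kick", "ball_moves"))  # True
```

A single kick with no subsequent movement would break the constant conjunction and make the check fail.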
Although causality is usually induced empirically, empirical descriptions
are insufficient for understanding causality. Contemporary accounts of cau-
sality emphasize three main principles that validate a causal relationship,
including the covariation (co-occurrence) principle, the priority principle, and the
mechanism principle (Bullock, Gelman, & Baillargeon, 1982). Covariation is
the degree or extent to which one element consistently affects another, which
describes the empirical relationships between cause and effect. Covariation is
expressed quantitatively in terms of probabilities and strength of relationship.
The mechanism principle describes causal relationships qualitatively, in terms
of the conceptual mechanisms that describe why a cause results in an effect.
The covariational and the mechanism principles are the two most common
conceptual frameworks for studying causal reasoning (Ahn, Kalish, Medin, &
Gelman, 1995; Thagard, 2000a). While significant differences can be seen
between the two main theoretical directions related to scientific inquiry, recent
work (Bunge, 1979; Ahn & Kalish, 2000; Glennan, 2002) shows that instead of
being separate descriptions of causal relationships, covariational and mecha-
nistic explanations are reciprocal. Both are necessary for understanding; neither
is sufficient. Although learners can induce a correlation (covariation) between
two variables using statistical methods, without an explanatory mechanism
that shows how and why the covariation occurs, the relationship will
not be understood (Hedstrom & Swedberg, 1998; Mahoney, 2001).
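The covariational side of this reciprocity can be made concrete. A common quantitative index of covariation is the contrast ΔP = P(effect | cause) − P(effect | no cause) (cf. Cheng & Novick, 1992); a minimal sketch over invented trial data:

```python
def delta_p(observations):
    """Covariation index dP = P(effect | cause) - P(effect | no cause),
    computed from (cause_present, effect_present) boolean pairs.
    dP near +1 suggests a strong generative relationship; near 0,
    no covariation."""
    with_cause = [e for c, e in observations if c]
    without_cause = [e for c, e in observations if not c]
    p_e_given_c = sum(with_cause) / len(with_cause)
    p_e_given_not_c = sum(without_cause) / len(without_cause)
    return p_e_given_c - p_e_given_not_c

# 8 invented trials: the effect follows the cause 3 of 4 times, and
# appears once in 4 trials without it.
obs = [(True, True), (True, True), (True, True), (True, False),
       (False, False), (False, True), (False, False), (False, False)]
print(delta_p(obs))  # 0.75 - 0.25 = 0.5
```

As the passage argues, such a number establishes covariation but says nothing about the mechanism that produces it.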
In order to construct each type of explanation, several dimensions of
covariation and mechanism are required. A major goal of this paper is to
explicate the complexity of causal reasoning. In order to be able to reason
causally, learners must be able to explain the multiple facets of causality,
including covariational and mechanistic attributes of any causal proposition.
These attributes are shown in Fig. 1. In order to manifest deep level learning,
students must learn how to explain and apply each of the covariational and
mechanistic attributes shown in Fig. 1 for any of the causal relationships they
are studying. Being able to explain these attributes will enable them to apply
those attributes in order to make predictions, implications, and inferences.
These attributes will be described next. Later, we will describe different
instructional supports for learning and applying causal relationships (influence
diagrams, simulations, questions, and modeling tools), as indicated in Fig. 1.
Covariation
Temporal succession
Direction
Valency
How large is the effect of the cause on the effect? An increase/decrease in the cause
C will have a slight/small/significant/or great increase/decrease on effect E.
Valency describes the strength or amount of effect of the causal relationship.
The strength of that relationship may be expressed semantically (as above) or
quantitatively in terms of actual amounts, percentage increase/decrease, or
changes in variance expressed in ANOVAs or structural equation modeling,
the most common quantitative representation of valency. In order to under-
stand and apply causal relationships, learners must also be able to describe the
valency of any causal relationship.
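One way to express valency quantitatively is Cheng's (1997) causal power, which rescales the raw covariational contrast by how often the effect occurs without the cause. A minimal sketch with invented probabilities:

```python
def causal_power(p_e_given_c, p_e_given_not_c):
    """Generative causal power (Cheng, 1997): the valency of cause C on
    effect E, correcting the raw contrast dP for how often E occurs
    anyway. Ranges from 0 (no effect) to 1 (C always produces E)."""
    delta_p = p_e_given_c - p_e_given_not_c
    return delta_p / (1.0 - p_e_given_not_c)

# Invented example: the effect occurs 90% of the time with the cause
# present and 60% of the time without it.
print(causal_power(0.9, 0.6))  # 0.3 / 0.4 = 0.75
```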
Probability of causality
Duration
Immediacy/delay
How readily does a cause produce the effect? Another covariational indicator
of causality includes the assumptions about temporal delays between causes
and effects. Effects may be immediate or delayed. Different temporal
assumptions about causal delays may lead to dramatically different causal
judgments. Hagmayer and Waldmann (2002) showed that temporal assump-
tions guide the choice of appropriate statistical indicators of causality, by
selecting the potential causes among a set of competing candidates, and by
influencing the level of aggregation of events. Learners’ prior assumptions
about temporal lags guide their selection and interpretation of probable
causes and so must also be described by learners.
Cyclical causality
Mechanisms
Causal process
Conjunction/disjunction
Necessity/sufficiency of causes
Resolving explanations
We have explicated the processes of causal reasoning and have concluded that
in order to understand the causal relationships that support explanations and
problem solving, learners must be able to completely describe those rela-
tionships covariationally in terms of direction, probability, valency, duration,
and responsiveness and mechanistically in terms of causal explication, con-
junctions/disjunctions, and necessity/sufficiency. In this section, we describe
instructional methods for supporting the learning of those causal attributes.
There are three classes of methods that may be used to enhance causal
learning: direct instruction that conveys causal relationships, exploring causal
relationships in simulations, and learner modeling of causal relationships (see
Fig. 1). No direct comparisons of these methods have been made.
Exploring causality
Students may also explore causal relationships through the use of simulations
in microworlds. Simulations are environments where values for components of
a system are manipulated by learners in order to test the effects of one or
more variables on another. The manipulations that are available are deter-
mined by some underlying model of the system, ‘‘the main task of the learner
being to infer, through experimentation, characteristics of the model under-
lying the simulation’’ (de Jong & van Joolingen, 1998, p. 179). When learners
interact with the simulation, changing the values of (input) variables and
observing the results on the values of other (output) variables, they are testing the
covariational effects of factors. That is, they are exploring the extent of effects
of causal agents on other factors. Because learners seldom have access to the
underlying model, they must infer parts of it by manipulating the
environment. Such simulations are known as black-box systems. Because of the
limitations on learner interaction with the model, simulations can support
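A black-box simulation of this kind can be sketched in a few lines: the learner sets inputs and observes outputs, and must induce the hidden rule from the resulting pairs (the linear model here is an invented stand-in):

```python
def black_box(x):
    """Hidden simulation model: in a real simulation the learner never
    sees this code, only input-output pairs (an invented stand-in)."""
    return 3 * x + 2

# The learner experiments: vary the input, record the output, and
# infer the covariational rule from the observations.
trials = [(x, black_box(x)) for x in [0, 1, 2, 5]]
print(trials)  # [(0, 2), (1, 5), (2, 8), (5, 17)]
# From these pairs a learner can induce that each unit increase in the
# input raises the output by 3 (direction: positive; valency: 3).
```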
Modeling causality
Rather than using direct instruction to convey the meaning of causal rela-
tionships or questions to coach understanding, a number of tools and envi-
ronments may be used by students to construct models of content or problems.
These models convey the student’s understanding of causal relationships.
Expert systems
Systems modeling tools (e.g., Stella, Vensim, and PowerSim) are the only class
of tools that integrates the covariational and mechanistic attributes of
causal relationships. They also enable learners to convey cyclical
relationships as loops, in which a cause changes an effect, which in turn
changes (regulates) the causal state (see Fig. 5).
When using aggregate modeling tools such as these, learners use a simple set
of building block icons to construct a map of a process: stocks, flows, con-
verters, and connectors (see Fig. 5). Stocks illustrate the level of causal agents
in the simulation. In Fig. 5, moisture, fumes, pollutants, and gas are mechanistic
descriptions of causes and/or effects. Flows convey the effects of these agents
on the others. Emitting after combustion, evaporating, absorbing, and
demanding are flows that represent the mechanisms of effects. The causal
agent, emitting after combustion, has a positive influence on fumes, which
causes a positive influence on pollutants. Converters are coefficients or ratios
that influence flows. Efficiency rate of gas is a converter that controls both
emitting after combustion and burning to run car. Converters are used to add
complexity to the models to better represent the complexity of the real world.
Finally, connectors are the lines that show the directional effect of factors on
each other by the use of arrows. Students produce simple equations in the
stocks and flows to convey the amount of covariation among the elements.
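The stock-and-flow machinery described above can be sketched numerically as a simple time-stepped update; the names and rates below are invented, loosely echoing the pollution example in Fig. 5:

```python
def simulate(steps, dt=1.0):
    """Minimal stock-and-flow simulation in the Stella style:
    `fumes` is a stock; `emitting` is an inflow scaled by a converter
    (`efficiency_rate`); `absorbing` is an outflow proportional to the
    current stock level."""
    fumes = 0.0                # stock
    gas_burned = 10.0          # exogenous input per step (invented)
    efficiency_rate = 0.8      # converter: cleaner burn -> fewer fumes
    absorption_rate = 0.1      # fraction of fumes absorbed per step
    history = []
    for _ in range(steps):
        emitting = gas_burned * (1 - efficiency_rate)  # inflow
        absorbing = absorption_rate * fumes            # outflow
        fumes += (emitting - absorbing) * dt           # Euler update
        history.append(round(fumes, 2))
    return history

print(simulate(3))  # [2.0, 3.8, 5.42]
```

The outflow's dependence on the stock it drains is exactly the kind of regulatory loop the passage describes.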
Once a model has been built, Stella enables learners to run the model they
have created and to observe the output in graphs, tables, or animations in
order to test its assumptions and relationships. This iterative testing and
revision of the model, to ensure that it predicts theoretically viable
outcomes, is to date one of the most complete methods available for modeling
causality. Hogan and Thomas (2001) found that the best systems modelers
focused on the whole picture of the model, attending to outputs and
interactions rather than inputs and individual relationships while building
and testing models. No empirical
research has examined the effects of systems modeling on causal reasoning.
Most research has employed case studies. For example, Steed (1995), in an
unpublished dissertation, showed how Stella modeling portrayed diverse dimensions
of information and helped high-school students shift their thinking by allowing
them to compare different representations (different models).
Systems modeling tools, we believe, are the most powerful and effective
means for modeling complex causal relationships. Temporal succession and
direction are conveyed as connectors. Valency and probability are represented
as simple formulae in the flows, and durations and immediacy are conveyed by
different kinds of stocks that regulate the inflow or outflow. Using loops,
learners can easily convey reciprocity in their models. The model in Fig. 5 also
shows the complexity of conjunctive causes that produce smog and pollution.
Necessity and sufficiency may also be conveyed using logic formulae (if-then-
else) to describe the flows. Although systems models are the most powerful
way to represent complex causal relationships, the learning curve required for
these tools is steep.
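The if-then-else treatment of necessity and sufficiency can be sketched as a classification over observed (cause, effect) pairs, under a simplified deterministic reading (necessary: the effect never occurs without the cause; sufficient: the cause always produces the effect):

```python
def classify_cause(observations):
    """Classify a cause against observed (cause, effect) boolean pairs.
    Necessary: no occurrence of the effect lacks the cause.
    Sufficient: every occurrence of the cause yields the effect."""
    necessary = all(c for c, e in observations if e)
    sufficient = all(e for c, e in observations if c)
    return {"necessary": necessary, "sufficient": sufficient}

# Invented data: the effect sometimes occurs without the cause
# (not necessary), but the cause always produces it (sufficient).
obs = [(True, True), (True, True), (False, True), (False, False)]
print(classify_cause(obs))  # {'necessary': False, 'sufficient': True}
```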
Summary
References
Ahn, W., & Kalish, C. W. (2000). The role of mechanism beliefs in causal reasoning. In F. C. Keil,
& R. A. Wilson (Eds.), Explanation and cognition (pp. 199–225). Cambridge, MA: MIT Press.
Ahn, W., Kalish, C. W., Medin, D. L., & Gelman, S. (1995). The role of covariation versus
mechanism information in causal attribution. Cognition, 54, 299–352.
Amsel, E., Langer, R., & Loutzenhiser, L. (1991). Do lawyers reason differently from psycholo-
gists? A comparative design for studying expertise. In R. J. Sternberg, & P. A. Frensch (Eds.),
Complex problem solving: Principles and mechanisms (pp. 223–250). Hillsdale, NJ, England:
Lawrence Erlbaum Associates Inc.
Anderson, J. (1990). The adaptive character of thought. Hillsdale, NJ: Erlbaum.
Biswas, G., Schwartz, D., Bransford, J. D., & The Teachable Agents Group at Vanderbilt (2001).
Technology support for complex problem solving: From SAD environment to AI. In K. D.
Forbus, & P. J. Feltovich (Eds.), Smart machines in education: The learning revolution in
educational technology. Menlo Park, CA: AAAI/MIT Press.
Brewer, W. F., Chinn, C. A., & Samarapungavan, A. (2000). Explanation in scientists and children. In
F. C. Keil, & R. A. Wilson (Eds.), Explanation and cognition (pp. 279–298). Cambridge, MA:
MIT Press.
Bullock, M., Gelman, R., & Baillargeon, R. (1982). The development of causal reasoning. In W.
Friedman (Ed.), The developmental psychology of time (pp. 209–254). New York: Academic
Press.
Bunge, M. (1979). Causality and modern science (3rd ed.). New York: Dover Publications.
Carey, S. (1995). On the origin of causal understanding. In D. Sperber, D. Premack, & A. J.
Premack (Eds.), Causal cognition: A multidisciplinary debate (pp. 268–302). Oxford, England:
Clarendon Press.
Carey, S. (2002). The origin of concepts: Continuing the conversation. In N. L. Stein, P. J. Bauer,
& M. Rabinowitz (Eds.), Representation, memory, and development: Essays in honor of Jean
Mandler (pp. 43–52). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
Cheng, P. W. (1997). From covariation to causation: A causal power theory. Psychological Review,
104(2), 367–405.
Cheng, P. W., & Nisbett, R. E. (1993). Pragmatic constraints on causal deduction. In R. E. Nisbett
(Ed.), Rules for reasoning (pp. 207–227). Hillsdale, NJ: Lawrence Erlbaum Associates.
Cheng, P. W., & Novick, L. R. (1992). Covariation in natural causal induction. Psychological
Review, 99(2), 365–382.
Corrigan, R., & Denton, P. (1996). Causal understanding as a developmental primitive. Devel-
opmental Review, 16, 162–202.
de Jong, T., & van Joolingen, W. R. (1998). Scientific discovery learning with computer
simulations of conceptual domains. Review of Educational Research, 68(2), 179–201.
Fugelsang, J. A., & Thompson, V. A. (2003). A dual-process model of belief and evidence
interactions in causal reasoning. Memory & Cognition, 31(5), 800–815.
Glennan, S. (2002). Rethinking mechanistic explanation. Philosophy of Science, 69, S342–S353.
Graesser, A. C., Baggett, W., & Williams, K. (1996). Question-driven explanatory reasoning.
Applied Cognitive Psychology, 10, S17–S31.
Graesser, A. C., Langston, M. C., & Lang, K. L. (1992). Designing educational software around
questioning. Journal of Artificial Intelligence in Education, 3, 235–241.
Graesser, A. C., Swamer, S. S., Baggett, W. B., & Sell, M. A. (1996). New models of deep
comprehension. In B. K. Britton, & A. C. Graesser (Eds.), Models of understanding text.
Hillsdale, NJ: Lawrence Erlbaum Associates.
Guenther, R. K. (1998). Human cognition. Upper Saddle River, NJ: Prentice Hall.
Hagmayer, Y., & Waldmann, M. R. (2002). How temporal assumptions influence causal judg-
ments. Memory and Cognition, 30(7), 1128–1137.
Hedstrom, P., & Swedberg, R. (Eds.). (1998). Social mechanisms: An analytical approach to social
theory. Cambridge, MA: Cambridge University Press.
Hogan, K., & Thomas, D. (2001). Cognitive comparisons of students’ systems modeling in ecology.
Journal of Science Education and Technology, 10(4), 319–345.
Howard, R. A., & Matheson, J. E. (1989). Influence diagrams. In R. A. Howard, & J. E. Matheson
(Eds.), Readings on the principles and applications of decision analysis (pp. 721–762). Menlo
Park, CA: Strategic Decisions Group.
Hume, D. (1739/2000). A treatise of human nature. Oxford, UK: Oxford University Press.
Hung, W., & Jonassen, D. H. (2006). Conceptual understanding of causal reasoning in physics.
International Journal of Science Education, 28(5), 1–21.
Jonassen, D. H. (2004). Learning to solve problems: An instructional design guide. San Francisco,
CA: Pfeiffer/Jossey-Bass.
Jonassen, D. H. (2006a). On the role of concepts in learning and instructional design. Educational
Technology: Research and Development, 54(2), 177–196.
Jonassen, D. H. (2006b). Facilitating case reuse during problem solving. Technology, Instruction,
Cognition, and Learning. 3, 51–62.
Kant, I. (1996). Critique of pure reason. Indianapolis, IN: Hackett Publishing Company (Trans-
lated by Werner S. Pluhar; Original work published in 1781).
Keil, F. C. (1989). Concepts, kinds, and cognitive development. Cambridge, MA: MIT Press.
Kelley, H. H. (1973). The process of causal attribution. American Psychologist, 28, 107–128.
Klahr, D. (2000). Exploring science: The cognition and development of discovery processes.
Cambridge, MA: MIT Press.
Koslowski, B., Okagaki, L., Lorenz, C., & Umbach, D. (1989). When covariation is not enough:
The role of causal mechanism, sampling method, and sample size in causal reasoning. Child
Development, 60, 1316–1327.
Kuhn, T. (1977). The essential tension. Chicago: University of Chicago Press.
Kuhn, D. (2002). What is scientific reasoning and how does it develop? In U. Goswami (Ed.),
Handbook of childhood cognitive development (pp. 371–393). Oxford, UK: Blackwell.
Kuhn, D., & Dean, D. (2004). Connecting scientific reasoning and causal inference. Journal of
Cognition and Development, 5(2), 261–288.
Mahoney, J. (2001). Beyond correlational analysis: Recent innovations in theory and method.
Sociological Forum, 16(3), 575–593.
Marini, M. M., & Singer, B. (1988). Causality in the social sciences. Sociological Methodology, 18,
347–409.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
Patel, V. L., Arocha, J. F., & Zhang, J. (2005). Thinking and reasoning in medicine. In K. J.
Holyoak, & R. G. Morrison (Eds.), The Cambridge handbook of thinking and reasoning (Vol
xiv, pp. 727–750, 858). New York, NY: Cambridge University Press.
Rapus, T. L. (2004). Integrating information about mechanism and covariation in causal reason-
ing. Dissertation Abstracts International, 65(2-B), 1047.
Rehder, B. (2003). Categorization as causal reasoning. Cognitive Science, 27(5), 709–748.
Salmon, W. C. (1984). Scientific explanation and the causal structure of the world. Princeton, NJ:
Princeton University Press.
Salmon, W. C. (1989). Four decades of scientific explanation. In P. P. Kitcher, & W. C. Salmon,
(Eds.), Scientific explanation: Minnesota studies in the philosophy of science (Vol. 13, pp. 3–
219). Minneapolis, MN: University of Minnesota Press.
Schlottmann, A. (2001). Perception versus knowledge of cause and effect in children: When seeing
is believing. Current Directions in Psychological Science, 10(4), 111–115.
Sembugmoorthy, V., & Chandrasekaran, B. (1986). Functional representations of devices and
compilation of diagnostic problem-solving systems. In J. Kolodner, & C. K. Riesbeck (Eds.),
Experience, memory, and reasoning (pp. 47–53). Hillsdale, NJ: Lawrence Erlbaum Associates.
Shapiro, B. R., van den Broek, P., & Fletcher, C. R. (1995). Using story-based causal diagrams to
analyze disagreements about complex events. Discourse Processes, 20, 51–77.
Steyvers, M., Tenenbaum, J. B., Wagenmakers, E. J., & Blum, B. (2003). Inferring causal networks
from observations and interventions. Cognitive Science, 27(3), 453–489.
Thagard, P. (2000a). Probabilistic networks and explanatory coherence. Cognitive Science Quar-
terly, 1(1), 91–114.
Thagard, P. (2000b). Explaining disease: Correlations, causes, and mechanisms. In F. C. Keil, & R.
A. Wilson (Eds.), Explanation and cognition (pp. 254–276). Cambridge, MA: MIT Press.
Verschueren, N., Schroyens, W., & d’Ydewalle, G. (2004). The interpretation of the concepts
‘necessity’ and ‘sufficiency’ in forward unicausal relations. Current Psychology Letters:
Behaviour, Brain and Cognition, 14(3), 1–28.
Waldmann, M. R., & Hagmayer, Y. (2001). Estimating causal strength: The role of structural
knowledge and processing effort. Cognition, 82(1), 27–58.
Waldmann, M. R., & Holyoak, K. J. (1992). Predictive and diagnostic learning within causal
models: Asymmetries in cue competition. In Proceedings of the twelfth annual cognitive science
society (pp. 190–197). Hillsdale, NJ: Lawrence Erlbaum Associates.
Waldmann, M. R., Holyoak, K. J., & Fratianne, A. (1995). Causal models and the acquisition of
category structure. Journal of Experimental Psychology: General, 124, 181–206.
Wellman, H. M., & Gelman, S. A. (1998). Knowledge acquisition in foundational domains. In W.
Damon, D. Kuhn, & R. S. Siegler (Eds.), Handbook of child psychology: Cognition, percep-
tion and language (pp. 523–573). New York: Wiley.
White, P. A. (1989). A theory of causal processing. British Journal of Psychology, 80, 161–188.
Yoon, W. C., & Hammer, J. M. (1988). Deep-reasoning fault diagnosis: An aid and a model. IEEE
Transactions on Systems, Man, and Cybernetics, 18(4), 659–676.
Zimmerman, C. (2000). The development of scientific reasoning skills. Developmental Review, 20,
99–149.
David Jonassen is Distinguished Professor of Education at the University of Missouri in the areas
of Learning Technologies and Educational Psychology.
Ioan Gelu Ionas is a PhD candidate in Learning Technologies at the University of Missouri.