
JOHN DEWEY

John Dewey (1859–1952) was one of American pragmatism’s early founders, along with Charles Sanders
Peirce and William James, and arguably the most prominent American intellectual for the first half of
the twentieth century. Dewey’s educational theories and experiments had a global reach, his
psychological theories had a sizable influence in that growing science, and his writings about democratic
theory and practice deeply influenced debates in academic and practical quarters for decades. In addition,
Dewey developed extensive and often systematic views in ethics, epistemology, logic, metaphysics,
aesthetics, and philosophy of religion. Because Dewey typically took a genealogical approach that
couched his own view within the larger history of philosophy, one may also find a fully developed
metaphilosophy in his work.
Dewey’s pragmatism—or “cultural naturalism”, which he favored over “pragmatism” and
“instrumentalism”—may be understood as a critique and reconstruction of philosophy within the larger
ambit of a Darwinian worldview (Lamont 1961; MW4: 3). Following James’ lead, Dewey argued that
philosophy had become an overly technical and intellectualistic discipline, divorced from assessing the
social conditions and values dominating everyday life (FAE, LW5: 157–58). He sought to reconnect
philosophy with the mission of education-for-living (philosophy as “the general theory of education”), a
form of social criticism at the most general level, or “criticism of criticisms” (EN, LW1: 298; see also DE,
MW9: 338).
Set within the larger picture of Darwinian evolutionary theory, philosophy should be seen as an activity
undertaken by interdependent organisms-in-environments. This standpoint, of active adaptation, led
Dewey to criticize the tendency of traditional philosophies to abstract and reify concepts derived from
living contexts. As did other classical pragmatists, Dewey focused criticism upon traditional dualisms of
metaphysics and epistemology (e.g., mind/body, nature/culture, self/society, and reason/emotion) and
then reconstructed their elements as parts of larger continuities. For example, human thinking is not a
phenomenon which is radically outside of (or external to) the world it seeks to know; knowing is not a
purely rational attempt to escape illusion in order to discover what is ultimately “real” or “true”. Rather,
human knowing is among the ways organisms with evolved capacities for thought and language cope
with problems. Minds, then, are not passively observing the world; rather, they are actively adapting,
experimenting, and innovating; ideas and theories are not rational fulcrums to get us beyond culture, but
rather function experimentally within culture and are evaluated on situated, pragmatic bases.
Knowing is not the mortal’s exercise of a “divine spark”, either; for while knowing (or inquiry, to use
Dewey’s term) includes calculative or rational elements, it is ultimately informed by the body and
emotions of the animal using it to cope.
In addition to academic life, Dewey comfortably wore the mantle of public intellectual, infusing public
issues with lessons found through philosophy. He spoke on topics of broad moral significance, such as
human freedom, economic alienation, race relations, women’s suffrage, war and peace,
and educational goals and methods. Typically, discoveries made via public inquiries were integrated back
into his academic theories and aided their revision. This practice-theory-practice rhythm
powered every area of Dewey’s intellectual enterprise, and perhaps explains why his
philosophical theories are still discussed, criticized, adapted, and deployed in many academic and
practical arenas. Use of Dewey’s ideas continues apace in aesthetics and art criticism, education,
environmental policy, information theory, journalism, medicine, political theory, psychiatry, public
administration, sociology, and of course in the philosophical areas to which Dewey contributed.

• 1. Biographical Sketch
o Short Chronology of the Life and Work of John Dewey
• 2. Psychology
o 2.1 Associationism, Introspectionism, and Physiological Psychology
o 2.2 The “Reflex Arc” and Dewey’s Reconstruction of Psychology
o 2.3 Instincts/Impulses
o 2.4 Perception/Sensation
o 2.5 Acts and Habits
o 2.6 Emotion
o 2.7 Sentiency, Mind, and Consciousness
▪ Sentiency
▪ Mind
▪ Consciousness
• 3. Experience and Metaphysics
o 3.1 The Development of “Experience”
o 3.2 Traditional Views of Experience and Dewey’s Critique
o 3.3 Dewey’s Positive Account of Experience
o 3.4 Metaphysics
o 3.5 The Development of “Metaphysics”
o 3.6 The Project of Experience and Nature
o 3.7 Empirical Metaphysics and Wisdom
o 3.8 Criticisms of Dewey’s Metaphysics
• 4. Inquiry and Knowledge
o 4.1 The Organic Roots of Instrumentalism
o 4.2 Beyond Empiricism, Rationalism, and Kant
o 4.3 Inquiry, Knowledge, and Truth
• 5. Philosophy of Education
o 5.1 Experiential Learning and Teaching
o 5.2 Traditionalists, Romantics, and Dewey
o 5.3 Democracy Through Education
• 6. Ethics
• 7. Political Philosophy
• 8. Art and Aesthetic Experience
• 9. Religion, Religious Experience and A Common Faith
o 9.1 Dewey’s Religious Background
o 9.2 Aligning Naturalism and Religion
o 9.3 “Religion” vs. “Religious”
o 9.4 Faith and God
o 9.5 Religion as Social Intelligence—a Common Faith
• Bibliography
o Dewey
▪ Collections
▪ Abbreviations of Dewey works frequently cited
▪ Individual works
o Other sources
• Academic Tools
• Other Internet Resources
• Related Entries

1. Biographical Sketch
John Dewey led an active and multifarious life. He is the subject of numerous biographies and an
enormous literature interpreting and evaluating his extraordinary body of work: forty books and
approximately seven hundred articles in over one hundred and forty journals.
Dewey was born in Burlington, Vermont on October 20, 1859, to Archibald Dewey, a merchant, and
Lucina Rich Dewey. Dewey was the third of four sons; the first, Dewey’s namesake, died in infancy. He
grew up in Burlington, was raised in the Congregationalist Church, and attended public schools. After
studying Latin and Greek in high school, Dewey entered the University of Vermont at fifteen and
graduated in 1879 at nineteen. After college, Dewey taught high school for two years in Oil City,
Pennsylvania. Subsequent time spent in Vermont studying philosophy with former professor H.A.P.
Torrey, along with the encouragement of the editor of the Journal of Speculative Philosophy, W.T. Harris,
helped Dewey decide to attend graduate school in philosophy at Johns Hopkins University in 1882.
There, his study included logic with Charles S. Peirce (which Dewey found too “mathematical”, and did
not pursue), the history of philosophy (especially with George Sylvester Morris), and physiological and
experimental psychology with Granville Stanley Hall (who trained with Wilhelm Wundt in Leipzig and
William James at Harvard).[1]
Though many years later Dewey credited Peirce’s pragmatism as important to his mature views, during
graduate school Peirce had no sizable impact on him. Dewey’s main graduate school influences—Neo-
Hegelian idealism, Darwinian biology, and Wundtian experimental psychology—created a tension, which
he sought to resolve. Was the world fundamentally biological, functional, and material or was it, rather,
inherently creative and spiritual? In no small part, Dewey’s career was launched by his attempt to
mediate and harmonize these views. While they shared the idea of “organism”, Dewey also saw in both
—and rejected—any aspects deemed overly abstract, atomizing, or reductionistic. His earliest attempts
to create a “new psychology” (aimed at merging experimental psychology with idealism) sought a method
by which experience could be understood as integrated and whole. As a result, Dewey’s early approach
was a modified, English absolute idealism. Two years after matriculating, Dewey completed graduate
school in 1884 with a dissertation criticizing Kant from an Idealist position (“The Psychology of Kant”);
it remains lost.
While scholars still debate the degree to which Dewey’s mature philosophy retained early Hegelian
influences, it is clear that the personal influence on Dewey was profound. New England’s religious
culture, Dewey recalled, imparted an “isolation of self from the world, of soul from body, [and] of nature
from God”, and he reacted with “an inward laceration” and “a painful oppression”. His study (with
George Sylvester Morris) of British Idealist T.H. Green and G.W.F. Hegel afforded Dewey personal and
intellectual healing:
Hegel’s synthesis of subject and object, matter and spirit, the divine and the human, was, however,
no mere intellectual formula; it operated as an immense release, a liberation. Hegel’s treatment of human
culture, of institutions and the arts, involved the same dissolution of hard-and-fast dividing walls, and had
a special attraction for me. (FAE, LW5: 153)
Philosophically, Dewey’s early encounters with Hegelianism informed his career-long quest to integrate,
as dynamic wholes, the various dimensions of experience (practical, imaginative, bodily, psychical) that
philosophy and psychology had defined as discrete.
Dewey’s family and reputation as a philosopher and psychologist grew while at various universities,
including the University of Michigan (1886–88, 1889–1894) and the University of Minnesota (1888–89).
At Michigan, Dewey developed long-term professional relationships with James Hayden Tufts and
George Herbert Mead. In 1886, Dewey married Harriet Alice Chipman; they had six children and adopted
one. Two of the boys died tragically young, at ages two and eight. Chipman had a significant influence on
Dewey’s advocacy for women and his shift away from religious orthodoxy. During this period, Dewey
wrote articles critical of British idealists from a Hegelian perspective; he read and taught
James’ Principles of Psychology (1890), and called his own view “experimental idealism”
(1894a, The Study of Ethics, EW4: 264).
In 1894, at Tufts’s urging, President William Rainey Harper offered Dewey the position of head of the
Philosophy Department at the University of Chicago, which at that time included both Psychology and
Pedagogy. Attracted by the prospect of putting these disciplines into active collaboration, Dewey
accepted the offer, and began to build the department by hiring G.H. Mead from Michigan and J.R.
Angell, a former student at Michigan (who also studied with James at Harvard). Dubbed by James the
“Chicago School”, Dewey, along with Tufts, Angell, Mead, and several others, developed
“psychological functionalism”. He also published the seminal “Reflex Arc Concept in Psychology”
(1896, EW5; hereafter RAC), and broke from transcendental idealism and from his church.
At Chicago, Dewey founded The Laboratory School, which provided a site to test his psychological and
educational theories. Dewey’s wife Alice was the school’s principal from 1896 to 1904. Dewey became
active in Chicago’s social and political causes, including Jane Addams’ Hull House, and Addams became
a close personal friend of the Deweys. Dewey and his daughter and biographer Jane Dewey
credited Addams with helping him develop his views on democracy, education, and philosophy;
nevertheless, the significance of Dewey’s intellectual debt to Addams is still being uncovered
(“Biography of John Dewey”, Dewey 1939a; see also Seigfried 1999, Fischer 2013).
In 1904, conflicts related to the Laboratory School led Dewey to resign his Chicago positions and move
to the philosophy department at Columbia University in New York City; there, he also established an
affiliation with Columbia’s Teachers College. Among Dewey’s important influences at Columbia were
F.J.E. Woodbridge, Wendell T. Bush, W.P. Montague, Charles A. Beard (political theory) and Franz
Boas (anthropology). Dewey remained at Columbia until his retirement in 1930, going on to produce eleven
more books.
In addition to a raft of important academic publications, Dewey wrote for many non-academic audiences,
notably via the New Republic; he was active in leading, supporting, or founding a number of important
organizations including the American Civil Liberties Union, the American Association of University
Professors, the American Philosophical Association, the American Psychological Association, and the
New School for Social Research. Dewey spoke out in support of both progressive politics and social
change during the first part of the twentieth century. His renown as a philosopher and educator led to
numerous invitations; he inaugurated the Paul Carus Lectures (revised and published as Experience
and Nature, 1925), gave the 1928 Gifford Lectures (revised and published as The Quest for Certainty,
1929), and gave the 1933–34 Terry Lectures at Yale (published as A Common Faith, 1934a). He traveled
for two years in Japan and China, and made notable trips to Turkey, Mexico, the Soviet Union,
and South Africa.
Soon, Dewey began developing his own psychological theories; extant accounts of behavior, he argued,
were flawed because they were premised upon outdated and false philosophical assumptions. (He
eventually judged that larger questions about the meaning of human existence
reached deep into cultural practices and exceeded the resources of psychology; such questions required
philosophical investigations of experience in the fields of art, politics, ethics, and religion, etc.) Dewey’s
psychological work reconstructed the components of human conduct (instincts, perceptions, habits, acts,
emotions, and conscious thought) and these proved integral to later, mature statements about experience.
They also informed his lifelong contention that mind, contrary to long tradition, is not fundamentally
subjective and isolated, but social and interactive, made through natural and cultural environments.
2.1 Associationism, Introspectionism, and Physiological Psychology
Dewey entered the field of psychology while it was dominated by introspectionism (arising
from associationism, a.k.a., “mentalism”) and the newer physiological psychology (imported from
Germany). Earlier British empiricists, such as John Locke and David Hume, accounted for intelligent
behavior with (1) internally inspected (“introspected”) entities, including perceptual experiences (e.g.,
“impressions”), and (2) thoughts or ideas (e.g., “images”). These accrue toward intelligence by way of an
elaborate process of associative learning. Discovery-by-introspection was indispensable for many
empiricists, and for many physiological and experimental psychologists (e.g., Wundt) as well.
2.2 The “Reflex Arc” and Dewey’s Reconstruction of Psychology
Thus, Dewey sought an account of psychological experience mindful both of experimental
limits and culture’s pervasive influences. William James’s tour de force, The Principles of
Psychology (1890), modeled how he might explain the conscious and intelligent self without appeals to a
transcendental Absolute. As Dewey recalled, Principles’ emphatically biological conception of mind gave
his thinking “a new direction and quality” and “worked its way more and more into all my ideas and acted
as a ferment to transform old beliefs” (FAE, LW5: 157). Rather than measuring psychic phenomena
against preexisting abstractions, James showed how one might employ a “radical empiricism” that starts
from the phases and elements of actual, lived experience. The goal would be to understand experience’s
functional origins from a perspective that was, typically, coherent and whole.
2.4 Perception/Sensation
Dewey’s methodological lesson regarding instincts was twofold; first, one cannot premise an empirical
science on unquestioned, metaphysical posits; even basic terms must be open to revision or deletion;
second, strictly analytical methods using simple elements to build up complex behavior are often
inadequate to explain the meaning of psychological phenomena. This lesson also applies to perception
and sensation. Dewey attacked the view, common in his day, that a perception (1) was simply and
externally caused, (2) completely occupied a mental state, and (3) was passively received into an empty
mental space.
2.5 Acts and Habits
Later writings develop the argument of “Reflex Arc”, namely that complex behavior cannot be
explained by building up simpler constituents. “Acts” provide a better starting point, of organisms in
environments (HNC, MW14: 105). Acts are transactional: we act with and on things, in contexts, amidst
conditions. Acts are fundamental to understanding behavior because they are selective. By directing
movement and organizing situations, they manifest interest. This combination—of selectivity and
interest—makes activities meaningful. For example, our ancestors acted selectively regarding how to
satisfy instinctive hunger; such selectivity created the conditions for a more elaborate interest in the taste
of food, and, much later, in dining customs and cuisine.
2.6 Emotion
Like “habit”, Dewey redescribed “emotion” as a basic form of involvement present in “coordinated
circuits” of activity. Where habits are controlled responses to problematic situations, emotion, by contrast,
is not predominantly controlled or organized; rather, it is an organism’s resonance with a situation, a
“perturbation from clash or failure of habit” (HNC, MW14: 54). As with the other aspects of
psychological life, Dewey’s account reconstructed emotion as fundamentally transactional with other
experiences typically analyzed as discrete (the “rational” or “physical” dimensions, e.g.).
2.7 Sentiency, Mind, and Consciousness
Dewey’s accounts of sentiency, mind, and consciousness build upon those of impulse, perception, act,
habit, and emotion. These are complex topics, but a cursory view can complete this sketch of Dewey’s
psychology.
Mind
Dewey rejected both traditional accounts of mind-as-substance (or container) and more contemporary
schemes reducing mind to brain states (EN, LW1: 224–225). Rather, mind is activity, a range of dynamic
processes of interaction between organism and world. Consider the range connoted by mind: as memory (I
am reminded of X); attention (I keep her in mind, I mind my manners); purpose (I have an aim
in mind); care or solicitude (I mind the child); paying heed (I mind the traffic stop). “Mind”, then, ranges
over many activities: intellectual, affectional, volitional, or purposeful.
Consciousness
Like mind, consciousness is also a verb—the brisk transitioning of felt, qualitative events. Dewey was
profoundly influenced by James’s metaphor of consciousness as a constantly moving “stream of thought”
(FAE, LW5: 157). In the end, however, Dewey did not believe a fully adequate account of consciousness
could be captured in words. Talk about consciousness is elliptical—it is “vivid” or “conspicuous” or
“dull”—and such characterizations are never quite adequate. Because the experience of consciousness is
ever-evanescent, we cannot fix it as we do for the objects of our attention—as, for example, “powers”,
“things”, or “causes”. Dewey, then, did not define consciousness, but evoked it using contrasts and
instances. Consider these contrasts in Experience and Nature (EN, LW1: 230):
Mind is a whole system of meanings as embodied in organic life; consciousness is awareness or perception of meanings (of actual events in their meaning).
Mind is contextual and persistent, a constant background; consciousness is focal and transitive.
Mind is structural and substantial, a constant foreground; consciousness is a punctuated series of heres and nows.
Mind is an enduring luminosity; consciousness is intermittent flashes of varying intensities.
Mind is a continuous transmission of messages; consciousness is the occasional interception and singling out of a message that makes it audible.
Given the processual, active nature of our psychology, Dewey was forced to depict consciousness with a
dynamic, organismic vocabulary. Consciousness is thinking-in-motion, an ever-reconfiguring event series
that is qualitatively felt as experience transforms. Whereas mind is a “stock” of meanings, consciousness
is realization-and-reconstruction of meanings, enabling activities to be reorganized and redirected (EN,
LW1: 233). Consciousness is drama; mind is the indispensable back story. This back story is not radically
subjective; it is social, constituted by communities past and present.
Dewey also tried to get at consciousness performatively, so to speak; he provoked the reader to consider
the nature of consciousness while reading. Here, again, he utilized notions of “focus” and “fringe”,
emphasizing how the latter is indispensable for mental orientation (EN, LW1: 231). As physical balance
controls walking, mind’s meanings constantly adjust and direct present, focal interpretation. The
progressive advance of vivid consciousness is enabled by mind’s pervasive and persistent system of
meanings.
Having concluded this review of Dewey’s psychology, we turn now to his account of experience.

3. Experience and Metaphysics


3.1 The Development of “Experience”
Dewey’s notion of “experience” evolved over the course of his career. Initially, it contributed to his
idealism and psychology. After he developed instrumentalism in Chicago during the 1890s, Dewey
moved to Columbia, revising and expanding the concept in 1905 with a historically significant essay,
“The Postulate of Immediate Empiricism” (PIE, MW3). Further developments show up in “The Subject-
matter of Metaphysical Inquiry” (1915, MW8) and the “Introduction” to Essays in Experimental
Logic (1916, MW10), which consolidated and advanced his view that “experience” was more than just a
way to rebut subjectivism in psychology; it also factored into metaphysical accounts of existence
and nature (Dykhuizen 1973: 175–76). Dewey advanced this in the 1923 Carus Lectures,
revised and expanded in his metaphysical magnum opus, Experience and Nature (1925, revised edition,
1929; EN, LW1). Many other significant elaborations on experience follow, notably in Art as
Experience (1934b, AE, LW10).[8] Because experience is pivotal across his philosophical oeuvre,
interested readers should track its functions in other sections of this entry; here, experience is treated
concisely, with a focus upon Dewey’s philosophical method and metaphysics.
3.2 Traditional Views of Experience and Dewey’s Critique
Understanding Dewey’s view of experience requires, first, some notion of what he rejected. It was typical
for many philosophers to construe experience narrowly, as the private contents of consciousness. These
contents might be perceptions (sensing), or reflections (calculating, associating, imagining) done by the
subjective mind. Some, such as Plato and Descartes, denigrated experience as a flux which confused or
diverted rational inquiry. Others, such as Hume and Locke, thought that experience (as atomic sensations)
provided the mind at least some resources for knowing, albeit with reduced ambitions. Both general
philosophical approaches agreed that percepts and concepts were different and in tension; they agreed that
sensation was perspectival and context-relative; they also agreed that this relativity problematized the
assumed mission of philosophy—to know with certainty—and differed only about the degree of the
problem.
3.3 Dewey’s Positive Account of Experience
So far, these details, and the section on psychology, limn an outline of Dewey’s view: experience is
processual, transactional, socially mediated, and not categorically prefigured as “rational” or “emotional”.
Here are three additional, positive characterizations: first, experience as experimental; second, experience
as primary (“had”) vs. secondary (“known”); and third, experience as methodological for philosophy.
3.4 Metaphysics
Much which is central to Dewey’s metaphysics has been discussed—the transactional organism-
environment setting, mind, consciousness, and experience. Accordingly, this section will focus on the
development of Dewey’s conception of “metaphysics”, the main project in Experience and Nature, how a
so-called empirical metaphysics intended to reconnect with the ancient idea of philosophy as wisdom, and
finally it will sketch some of the criticisms Dewey’s metaphysics received.
3.5 The Development of “Metaphysics”
Debate over a definite meaning for the term “metaphysics”, was just as alive in Dewey’s day as in ours.
From the beginning, Dewey sought to critique and reconstruct metaphysical concepts (e.g., reality, self,
consciousness, time, necessity, and individuality) and systems (e.g., Spinoza, Leibniz, Kant, and Hegel).
Like his fellow pragmatists Peirce, James, and Mead, Dewey sought to transform, not eradicate,
metaphysics. As described earlier, Dewey’s early metaphysical views were closest to idealism, but
engagements with experimental science and instrumentalism convinced him to abandon traditional
metaphysics’ project of giving an ultimate and complete account of reality überhaupt.
His dormant interest in metaphysics was revivified at Columbia by his colleague F. J. E. Woodbridge.
3.6 The Project of Experience and Nature
Experience and Nature provides both extended criticism of past metaphysical approaches, especially their
quest for certainty and assumption of an Appearance/Reality framework, and a positive, general theory
regarding how human existence is situated in nature. It is proffered as empirical, descriptive, and
hypothetical; it eschews claims of special access beyond “experience in unsophisticated forms”, which,
Dewey argued, give us “evidence of a different world and points to a different metaphysics” (EN, LW1:
47). EN looks to existing characteristics of human culture, anthropologically, to see what they reveal,
more generally, about nature. The isolation, analysis, and description of the “generic traits of existence”
and their relations to one another, is one significant outcome.
3.7 Empirical Metaphysics and Wisdom
Since Dewey is a pragmatist and meliorist, it is worth asking: How can metaphysics contribute
to the world beyond academic philosophy? Dewey’s larger ambition was to return philosophy to an older,
ancient mission—the pursuit of wisdom. And while Dewey describes philosophy as inherently critical, a
“criticism of criticisms” (EN, LW1: 298), where does that leave even an empirical, hypothetical,
naturalistic metaphysics? Dewey raises the issue, prophylactically:
3.8 Criticisms of Dewey’s Metaphysics
Dewey received and responded to many criticisms of his metaphysical views, from 1905’s “Postulate of
Immediate Empiricism” and on. Critics often overlooked that his position was aiming to undercut
prevailing metaphysical genres; often, his view was just aligned with one or another existing position. (He
was taken, variously, as in league with realism, idealism, relativism, subjectivism, etc. See Hildebrand
2003.) One recurrent criticism was that the view advanced in PIE (that “things are what they are
experienced as”) could not possibly yield a metaphysics because it merely bespoke the kind of subjective,
immediate experience inimical to a more mediated and objective account. Similar reactions arrived
twenty years later to EN from critics attacking the contrast there between “experience” and “nature”.[19]

4. Inquiry and Knowledge


4.1 The Organic Roots of Instrumentalism
The interactional, organic model Dewey developed in his psychology informed his theories of learning
and knowledge. Given this new ecological framework, a range of traditional epistemological proposals
and puzzles (premised on metaphysical divisions such as appearance/reality, mind/world) lost credibility.
“So far as the question of the relation of the self to known objects is concerned”, Dewey wrote, “knowing
is but one special case of the agent-patient, of the behaver-enjoyer-sufferer situation” (“Brief Studies in
Realism”, MW6: 120). As we have already seen in psychology, Dewey’s wholesale repudiation of the
tradition’s basic metaphysical framework required extensive reconstructions in every other area; one
popular name for Dewey’s reconstruction of epistemology (or “theory of inquiry”, as Dewey preferred to
call it) was “instrumentalism”.[22]
4.3 Inquiry, Knowledge, and Truth
In the context of instrumentalism, what do “logic” and “epistemology” amount to? Dewey remained
focused on these subject matters but insisted on a more empirical approach. How, he asked, do
reasoning and learning actually happen?[27] Dewey addresses the nature of logic in his major 1938
study, Logic: The Theory of Inquiry (LTI, LW12); his term for logic is the “inquiry into
inquiry”. LTI undertakes the systematic process of collecting, organizing, and explicating the actual
conditions of different kinds of inquiry; the aim of this reconstructed logic, as outlined in the 1917 “The
Need for a Recovery of Philosophy”, is pragmatic and ameliorative: to provide an “important aid in
proper guidance of further attempts at knowing” (MW10: 23).

5. Philosophy of Education
It is probably fair to say that, around the world, Dewey remains as well known for his educational
theories (see entry on philosophy of education, section Rousseau, Dewey, and the progressive movement)
as for his philosophical ones. However, a closer look at Dewey’s body of work shows how often these
theories align. Dewey recognized this, reflecting that his 1916 magnum opus in education, Democracy
and Education (DE, MW9) “was for many years that [work] in which my philosophy, such as it is, was
most fully expounded” (FAE, LW5: 156). DE argued that philosophy itself could be understood as “the
general theory of education”. In lieu of philosophy’s increasing tendencies to become hyper-specialized
and technical, he urged a greater investment in the problems affecting everyday life. In effect this was a
call to see philosophy from the standpoint of education. Dewey wrote,
5.1 Experiential Learning and Teaching
Dewey’s “Reflex Arc” paper applied functionalism to education. “Reflex” argued that human experience
is not a disjointed sequence of fits and starts, but a developing circuit of activities. Learning deserves to
be framed in this way: as a cumulative, progressive process where inquirers move from the dissatisfying
phase of doubt toward another marked by the satisfying resolution of a problem. “Reflex” also shows that
the subject of a stimulus (e.g., the pupil) is not a passive recipient of, say, a sensation but an agent
who takes it amidst other ongoing activities in a larger environmental field.
5.2 Traditionalists, Romantics, and Dewey
Dewey’s educational philosophy emerged amidst a fierce 1890’s debate between educational “romantics”
and “traditionalists”. Romantics (also called “New” or “Progressive” education by Dewey), urged a
“child-centered” approach; they claimed that the child’s natural impulses provided education’s proper
starting point. Since children are active and creative beings, education should not fetter growth—even
instruction in content should be subordinated if necessary. Traditionalists (called “Old” education by Dewey) pressed for a
“curriculum-centered” approach. Children were empty cabinets which curriculum fills with civilization’s
lessons. Content was supreme, and instruction should discipline children to ensure they are receptive.
5.3 Democracy Through Education
Dewey’s efforts to connect child, school, and society were motivated by more than just a desire for better
pedagogical methods. Because character, rights, and duties are informed by and contribute to the social
realm, schools were critical sites to learn and experiment with democracy. Democratic life consists not
only in civic and economic conduct, but more crucially in habits of problem solving, compassionate
imagination, creative expression, and civic self-governance. The full range of roles a child might assume
in life is vast; once this is appreciated, it is incumbent upon society to make education its highest political
and economic priority.

7. Political Philosophy
Dewey’s political philosophy, like his other areas of thought, presumes that individuals subsist transactionally with
their social environment, and can use inquiry to solve problems in hypothetical and experimental
ways.[44] As elsewhere, theory is instrumental; concepts and systems are tools initially devised to function
in particular, practical circumstances. While those proving valuable can be retained for reuse, all are
considered fallible and capable of reconstruction. Dewey rejected approaches relying upon non-
empirical, a priori assumptions (e.g., about human nature, historical progress, etc.) or which propose
ultimate, often monocausal, explanations. Dewey criticized and reconstructed core political concepts
(individual, freedom, right, community, public, state, and democracy) along naturalist and experimentalist
lines; besides numerous articles (for academic and lay audiences), Dewey’s political thought can be found
in books such as The Public and Its Problems (1927b, LW2), Individualism, Old and New (1930f,
LW5), Liberalism and Social Action (1935, LW11), and Freedom and Culture (1939d, LW13).
Because Democracy and Education (1916b, DE, MW9) emphasizes profound connections between
education, society, and democratic habits, it also merits study as a “political” work.
8. Art and Aesthetic Experience
In his magnum opus on aesthetics, Art as Experience (AE, LW10), Dewey stated that art, as a
conscious idea, is “the greatest intellectual achievement in the history of humanity” (31).[51] Such high
praise, especially from a philosopher with expertise in (what remains) philosophy’s “main” subjects (e.g.,
knowledge, ethics, science) deserves notice. Dewey began writing about aesthetics from early in his
career—on art’s relevance to psychology (1887, EW2) and education (1897c, EW5), on why the
distinction between “fine” and “practical” art should be rejected (1891, EW3: 310-311), and on
Bosanquet (1893, EW4). His own theory emerged in Experience and Nature (1925a, EN, LW1) and
flourished in AE (1934b), proposing that aesthetics is central to philosophy’s proper mission: to render
everyday experience more fulfilling and meaningful.

9. Religion, Religious Experience and A Common Faith


The whole story of man shows that there are no objects that may not deeply stir engrossing emotion. One
of the few experiments in the attachment of emotion to ends that mankind has not tried is that of devotion,
so intense as to be religious, to intelligence as a force in social action. (A Common Faith, 1934a, LW9:
52–53)

9.1 Dewey’s Religious Background


Dewey grew up in a religious family; his mother was especially devout and pressured her sons to live up
to a similar devotion. His family church was Congregationalist; a bit later, including in college, Liberal
Evangelicalism proved to be a more acceptable form of Christianity. At twenty-one, while living in Oil
City, Pennsylvania, Dewey had a “mystic experience”, which he later reported to his friend Max Eastman:
There was no vision, not even a definable emotion—just a supremely blissful feeling that his worries
[about whether he prayed sufficiently in earnest] were over. (Dykhuizen 1973: 22)

9.2 Aligning Naturalism and Religion


The challenge A Common Faith took on seems, in retrospect, insurmountable. Dewey wished to reconstruct
religion in a way which harmonized it with his empiricism and naturalism, while showing how the power
of religious experience and belief could be transformed in ways which supported and advanced a secular
conception of democracy. Religions vary, of course, but to a large degree they posit transcendent, eternal,
unobservable entities, revealed in ways which are not, shall we say, open to verification.
Empirical experience (regardless of its specific construal) is seen as inferior—whether castigated as flux,
illusion, uncertainty, or confusion, it must be left behind. In short, Dewey had squared himself against the
metaphysics, epistemology, and seemingly the morality, of major religions.
9.3 “Religion” vs. “Religious”
Dewey’s strategy, then, was to pry “religious” experience away from religion, to show how religious
experience may be framed within a natural and social context.[57] Because Dewey could find no clear,
univocal meaning for “religion”, the alternative was to look more closely at religious experience (ACF,
LW9: 7). To mention just two conclusions, Dewey found that whatever qualities religious experience
exhibited (feelings of peace, wholeness, security, etc.), none offered evidence for the supernatural. In
other words, a religious experience’s report can no more count as a judgment about causation than
pointing to a book’s description can prove a physical law (ACF, LW9ff.).
9.4 Faith and God
Dewey’s effort to naturalize religion (by shifting the focus toward a non-transcendental understanding of
“religious experience”) also required a reinterpretation of other traditional notions—such as “faith” and
“God”. Faith, typically, is juxtaposed with reason. Faith requires neither empirical inquiry nor
verification; one has faith in the evidence of (transcendent, ultimate) things not seen. Also, faith typically
connotes intellectual acceptance, again without proof, of religious propositions (such as “God exists and
loves mankind”).
9.5 Religion as Social Intelligence—a Common Faith
As a pragmatist, a meliorist, and a humane democrat, Dewey sought a way to harness the
undeniable power of religion and religious experience toward an end beneficial to all. Religion, he
understood, provides people with a story about the larger universe and how they fit within it. He knew it
was not enough to criticize religion, because this leaves powerful human needs unmet. Dewey did not
propose swapping out old religious institutions for new ones; he hoped, rather, that the emancipation of
religious experience from institutional and ideological shackles might direct its energies toward
something like a “common faith”, a passion for imaginative intelligence pursuing moral goods. The
methods of inquiry and criticism are not mysteries; they are deeply familiar already. The necessary turn
would come when religious persons realize that inquiry could be extended to enhance religious
experience and values (ACF, LW9: 23). If it could be appreciated how many celebrated accomplishments
were due not to God but to intelligent, human collaboration, then perhaps the idea of community could
inspire a non-sectarian, common faith.[59]
Dewey’s call for a common faith was, he thought, deeply democratic. Why? Because the idea of the
supernatural was, by definition, suspicious of experience (as an adequate guide) and, consequently,
suspicious of empirical methods of inquiry. Unchecked by lived experience or experiment,
supernaturalism can produce especially deep divisions. The approach of Dewey’s common faith is, in
contrast, intertwined with experimental inquiry—drawing upon lived experience and constantly renewed
through open communication. This is why Dewey’s exhortation to trade traditional religious faith for a
common faith in empirical intelligence is of a piece with his experimental ideal of democracy.
John Flavell

Theories of Learning in Educational Psychology: John Flavell's Metacognition Theory

Biography

John Flavell of Stanford University is regarded as a foundational researcher in metacognition. He was influenced
by the work of Jean Piaget. One of Flavell's significant accomplishments was the publication of his book,
The Developmental Psychology of Jean Piaget (Flavell, 1963). While many recent researchers now
challenge certain aspects of Piaget's theories, many ideas that he proposed have found their way into the
conventional wisdom of metacognition. Included among those is the notion of intentionality.
Intentionality presupposes thinking that is deliberate and goal-directed, and involves planning a sequence
of actions. Theory Flavell (1971) used the term metamemory in regard to an individual's ability to manage
and monitor the input, storage, search and retrieval of the contents of his own memory. Flavell invited the
academic community to come forth with additional metamemory research, and this theme of
metacognitive research has continued more than thirty years later. He implied with his statements that
metacognition is intentional, conscious, foresighted, purposeful, and directed at accomplishing a goal or
outcome. These implications have all been carefully scrutinized in subsequent research, and in some cases
have been the subjects of controversy among researchers in metacognition. For example, Reder & Schunn
(1996) and Kentridge and Heywood (2000) argue that metacognitive processes need not operate in a
person's conscious awareness. In his 1976 article, Flavell recognized that metacognition consisted of both
monitoring and regulation aspects. It was here that the term metacognition was first formally used in the
title of his paper. He defined metacognition as follows: "In any kind of cognitive transaction with the
human or non-human environment, a variety of information processing activities may go on.
Metacognition refers, among other things, to the active monitoring and consequent regulation and
orchestration of these processes in relation to the cognitive objects or data on which they bear, usually in
service of some concrete goal or objective." (p.232). Hacker (1998) offered a more comprehensive
definition of metacognition, to include the knowledge of one's own cognitive and affective processes and
states as well as the ability to consciously and deliberately monitor and regulate those processes and
states. Flavell (1976) also identified three “metas” that children gradually acquire in the context of
information storage and retrieval. These were: (a) The child learns to identify situations in which
intentional, conscious storage of certain information may be useful at some time in the future; (b) the
child learns to keep current any information which may be related to active problem-solving, and have it
ready to retrieve as needed; and (c) the child learns how to make deliberate systematic searches for
information which may be helpful in solving a problem, even when the need for it has not been foreseen.
Flavell (1979) was another seminal paper. In this work Flavell acknowledged the explosion of interest and
work in areas related to metacognition, such as oral skills of communication, persuasion and
comprehension, reading, writing, language acquisition, memory, attention, problem-solving, social
cognition, affective monitoring, and self-instruction. In the 1979 paper, Flavell proposed a formal model
of metacognitive monitoring to include four classes of phenomena and their relationships. The four
classes included (a) metacognitive knowledge, (b) metacognitive experiences, (c) tasks or goals, and (d)
strategies or activities. Flavell's 1979 model will be further described in the section on the theories of
metacognition. The first attempt to generate a formal model of metacognition was presented by Flavell
(1979). He acknowledged the significance of metacognition in a wide range of applications which
included reading, oral skills, writing, language acquisition, memory, attention, social interactions, self-
instruction, personality development and education. Flavell mentioned that components of metacognition
can be activated intentionally, as by a memory search aimed at retrieving specific information, or
unintentionally, such as by cues in a task situation. Metacognitive processes can operate consciously or
unconsciously and they can be accurate or inaccurate. They can also fail to be activated when needed, and
can fail to have adaptive or beneficial effect. Metacognition can lead to selection, evaluation, revision or
deletion of cognitive tasks, goals, and strategies. It can also help the individual make meaning and
discover behavioral implications of metacognitive experiences. Each of the four classes in Flavell's
(1979) model will be discussed in detail below. Figure 1 is a concept map showing the components of
Flavell's model. The first of Flavell's (1979) classes was
metacognitive knowledge, which he defined as one's knowledge or beliefs about the factors that
effectaffect cognitive activities. The distinction between cognitive and metacognitive knowledge may lie
in how the information is used, more than a fundamental difference in processes. Metacognitive activity
usually precedes and follows cognitive activity. They are closely interrelated and mutually dependent.
Metacognitive knowledge can lead the individual to engage in or abandon a particular cognitive enterprise
based on its relationship to his interests, abilities and goals. Flavell described three categories of these
knowledge factors: (1) person variables, (2) task variables, and (3) strategy variables. The person category
of knowledge includes the individual's knowledge and beliefs about himself as a thinker or learner, and
what he believes about other people's thinking processes. Flavell gave examples of knowledge such as a
person believing that he can learn better by listening than by reading, or that a person perceives her friend
to be more socially aware than she is. One's beliefs about himself as a learner may facilitate or impede
performance in learning situations. The task category of metacognitive knowledge encompassed all the
information about a proposed task that is available to a person (Flavell, 1979). This knowledge guides the
individual in the management of a task, and provides information about the degree of success that he is
likely to produce. Task information can be plentiful or scarce, familiar or unfamiliar, reliable or
unreliable, interesting or not, organized in a useable or unusable fashion. Task knowledge informs the
person of the range of possible acceptable outcomes of the cognitive enterprise and the goals related to its
completion. Knowledge about task difficulty and mental or tangible resources necessary for its
completion also belong to this category. The strategy category of metacognitive knowledge involved
identifying goals and sub-goals and selection of cognitive processes to use in their achievement (Flavell,
1979). Flavell also emphasized that these types of variables overlap and the individual actually works
with combinations and interactions of the metacognitive knowledge that is available at that particular
time. He also stated that metacognitive knowledge is not fundamentally different from other knowledge,
but its object is different. He also mentioned that metacognitive knowledge may be activated consciously
or unconsciously by the individual. This question of consciousness later became a subject of controversy
among researchers in metacognition. Metacognitive experiences, Flavell's (1979) second class of
phenomena, included the subjective internal responses of an individual to his own metacognitive
knowledge, goals, or strategies. These may be fleeting or lengthy, and can occur before, during, or after a
cognitive enterprise. As monitoring phenomena, these experiences can provide internal feedback about
current progress, future expectations of progress or completion, degree of comprehension, connecting
new information to old, and many other events. New or difficult tasks, or tasks performed under stress,
tend to provoke more experiential interaction, while familiar tasks may tend to provoke less
metacognitive experience. According to Flavell (1979), metacognitive experience can also be a “stream
of consciousness” process in which other information, memories, or earlier experiences may be recalled
as resources in the process of solving a current-moment cognitive problem. Metacognitive experience also
encompasses the affective response to tasks. Success or failure, frustration or satisfaction, and many other
responses affect the moment-to-moment unfolding of a task for an individual, and may in fact determine
his interest or willingness to pursue similar tasks in the future. Flavell underscored the overlapping nature
of metacognitive knowledge and metacognitive experience. Metacognitive goals and tasks are the desired
outcomes or objectives of a cognitive venture. This was Flavell's third major category. Goals and tasks
include comprehension, committing facts to memory, or producing something, such as a written
document or an answer to a math problem, or of simply improving one's knowledge about something.
Achievement of a goal draws heavily on both metacognitive knowledge and metacognitive experience for
its successful completion (Flavell, 1979). Metacognitive strategies are designed to monitor cognitive
progress. Metacognitive strategies are ordered processes used to control one's own cognitive activities and
to ensure that a cognitive goal (for example, solving a math problem, writing an effective sentence,
understanding reading material) has been met. A person with good metacognitive skills and awareness
uses these processes to oversee his own learning process, plan and monitor ongoing cognitive activities,
and to compare cognitive outcomes with internal or external standards. Flavell (1979) indicated that a
single strategy can be invoked for either cognitive or metacognitive purposes and to move toward goals in
the cognitive or metacognitive domains. He gave the example of asking oneself questions at the end of a
learning unit with the aim of improving knowledge of the content, or to monitor comprehension and
assessment of the new knowledge. Flavell (1987) elaborated on several aspects of the theory he proposed
in 1979. In the category of metacognitive knowledge, he suggested subcategories of person variables:
intra-individual variables concern knowledge or beliefs about the interests, propensities, aptitudes,
abilities, and the like, of oneself or of another person. Inter-individual variables provide comparisons
between or among people in a relativistic manner. The universal subcategory deals with generalizations a
person forms about learning and learners in general. Flavell underscored the importance of cultural
influences on the formation of beliefs about learning. Flavell (1987) offered additional description of task
variables, reflecting that individuals learn about the implications that various tasks carry with them.
Personal experience builds up sets of expectations about which tasks will be rigorous or difficult, and
which will be less taxing. Different kinds of information require different kinds of processing and place
different demands on the learner. Strategy variables are interlocked with one's goals or objectives in the
learning process (Flavell, 1987). It is important to distinguish between cognitive strategies, such as
summing a column of numbers, and metacognitive strategies, such as evaluating whether the correct
answer has been obtained. Flavell (1987) also offered clarification on the term metacognitive experience.
He defined metacognitive experience as affective or cognitive awareness that is relevant to one's thinking
processes. He described a variety of examples such as feeling that one is not understanding something,
feeling that something is difficult or easy to remember, solve, or comprehend, and feeling that one is
approaching or failing to approach a cognitive goal. Metacognitive experiences arise when they are
explicitly demanded by a situation, such as when one is asked why he chose a particular answer or a
particular way of doing something. Unfamiliar and novel situations and expectations also generate
metacognitive experiences.
David Ausubel
David Paul Ausubel was an American psychologist who made significant contributions to the fields of
educational psychology, cognitive science, and science education. Ausubel believed that the understanding
of concepts, principles, and ideas is achieved through deductive reasoning. Likewise, he believed in the
idea of meaningful learning as opposed to rote memorization. For Ausubel, the most important single
factor influencing learning is what the learner already knows. This led him to develop his theory of
meaningful learning and advance organizers.

Learning Theory

Ausubel believes that learning of new
knowledge relies on what is already known. That is, construction of knowledge begins with our
observation and recognition of events and objects through concepts we already have. We learn by
constructing a network of concepts and adding to them. Ausubel also stresses the importance of reception
rather than discovery learning, and meaningful rather than rote learning. He declares that his theory
applies only to reception learning in school settings. He did not say, however, that discovery
learning does not work, but rather that it is not efficient. In other words, Ausubel believed that the
understanding of concepts, principles, and ideas is achieved through deductive reasoning. Ausubel was
influenced by the teachings of Jean Piaget; like Piaget’s idea of conceptual schemes, Ausubel related
his theory to an explanation of how people acquire knowledge.

Meaningful Learning

Ausubel’s theory
also focuses on meaningful learning. According to his theory, to learn meaningfully, individuals must
relate new knowledge to relevant concepts they already know. New knowledge must interact with the
learner’s knowledge structure. Meaningful learning can be contrasted with rote learning. The latter can
also incorporate new information into the pre-existing knowledge structure, but without interaction. Rote
memory is used to recall sequences of objects, such as phone numbers. However, it is of no use to the
learner in understanding the relationships between the objects. Because meaningful learning involves a
recognition of the links between concepts, it has the privilege of being transferred to long-term memory.
The most crucial element in meaningful learning is how the new information is integrated into the old
knowledge structure. Accordingly, Ausubel believes that knowledge is hierarchically organized; that new
information is meaningful to the extent that it can be related (attached, anchored) to what is already
known.

The rote–meaningful learning continuum, showing the requirements of meaningful learning.

Advance Organizers

Ausubel advocates the use of advance organizers as a mechanism to help
link new learning material with existing related ideas. Advance organizers help the process of learning
when difficult and complex material is introduced. This depends on two conditions:

1. The student must process and understand the information presented in the organizer; this increases the
effectiveness of the organizer itself.
2. The organizer must indicate the relations among the basic concepts and terms that will be used.

Ausubel’s advance organizers fall into two categories: comparative and expository.

Comparative Organizers

The main goal of comparative organizers is to activate existing schemas; they serve as reminders to bring
into working memory material you may not realize is relevant. A comparative organizer is also used both to integrate
as well as discriminate. It “integrates new ideas with basically similar concepts in cognitive structure, as
well as increase discriminability between new and existing ideas which are essentially different but
confusably similar”.

Expository Organizers

“In contrast, expository organizers provide new
knowledge that students will need to understand the upcoming information”. Expository organizers are
often used when the new learning material is unfamiliar to the learner. They often relate what the learner
already knows with the new and unfamiliar material—this in turn is aimed to make the unfamiliar
material more plausible to the learner.

Ausubel Learning Model

Ausubel believed that learning proceeds in a top-down or deductive manner. Ausubel's theory consists of
three phases. The main elements of Ausubel's teaching method are shown below.

Ausubel’s Model of Meaningful Learning

Phase One: Advance Organizer
- Clarify the aim of the lesson
- Present the lesson
- Relate the organizer to students’ prior knowledge

Phase Two: Presentation of Learning Task or Material
- Make the organization of the new material explicit
- Make the logical order of the learning material explicit
- Present material in terms of basic similarities and differences by using examples

Phase Three: Strengthening Cognitive Organization
- Relate new information to the advance organizer
- Promote active reception learning
- Engage students in meaningful learning activities
Ausubel’s theory is concerned with how individuals learn large amounts of meaningful material from
verbal/textual presentations in a school setting (in contrast to theories developed in the context of
laboratory experiments). According to Ausubel, learning is based upon the kinds of superordinate,
representational, and combinatorial processes that occur during the reception of information. A primary
process in learning is subsumption in which new material is related to relevant ideas in the existing
cognitive structure on a substantive, non-verbatim basis. Cognitive structures represent the residue of all
learning experiences; forgetting occurs because certain details get integrated and lose their individual
identity.

David P. Ausubel (1918– ) contributed much to cognitive learning theory in his explanation of
meaningful verbal learning, which he saw as the predominant method of classroom learning. To Ausubel,
meaning was a phenomenon of consciousness and not of behavior. The external world acquires meaning
when it is converted into the "content of consciousness." He believed that a signifier (i.e., a word) has a
meaning when its effect upon the learner is equivalent to the effect of the object it signifies. Bruner
believed that when there is "...some form of representational equivalence between language (or symbols)
and mental content," then there is meaning. He believed there are two processes involved in cognitive
learning: the reception process and the discovery process. What he termed reception processes are almost
exclusively used in meaningful verbal learning. Concept formation and problem solving are more likely,
according to Ausubel, to involve discovery processes. Ausubel felt discovery learning techniques are
often uneconomical, inefficient, and ineffective. He felt most school learning is verbal learning (receptive
learning). Subsequent research has shown that verbal learning is most effective for rapid learning and
retention and that discovery learning is most effective in facilitating transfer.

David Ausubel was a cognitive learning theorist who focused on the learning of school subjects and who
placed considerable interest on what the student already knows as being the primary determiner of
whether and what he/she learns next. Ausubel viewed learning as an active process, not simply
responding to your environment. Learners seek to make sense of their surroundings by integrating new
knowledge with that which they have already learned.

Ausubel was leery of the research on learning done in labs often using stimuli that were not typical of
school subjects. For example, at the time Ausubel was writing a large amount of the research on learning
involved having students memorize nonsense terms such as "sdrgpstrip" or paired associates such as
"table-banana" since these were likely new and unfamiliar to learners. For Ausubel this was simply rote
learning that remained isolated from other knowledge the learner had acquired. It was not potentially
meaningful, while school subjects were potentially meaningful. Rote learning was unlike the learning of
school subjects, so Ausubel sought to study how we learn content, like school subjects, that is potentially
meaningful. He wrote often about "meaningful learning" and this is why he rejected the research on rote
learning as appropriate if we want to improve learning in schools.

The key concept for Ausubel is the cognitive structure. He sees this as the sum of all the knowledge we
have acquired as well as the relationships among the facts, concepts and principles that make up that
knowledge. Learning for Ausubel is bringing something new into our cognitive structure and attaching it
to our existing knowledge that is located there. This is how we make meaning, and this was the focus of
his work.

David Ausubel, a noted American psychologist who specialized in education and learning behaviors,
introduced the Subsumption Theory back in 1963. It centers on the idea that learners can more effectively
acquire new knowledge if it is tied to their existing knowledge base, and that only unique information that
stands out within the lesson is committed to memory. In this article, I’ll delve into the basics of the
Subsumption Theory, and I’ll share 4 tips on how you can use it in your eLearning course design.

According to Ausubel’s Subsumption Theory, a learner absorbs new information by tying it to existing
concepts and ideas that they have already acquired. Rather than building an entirely new cognitive
structure, they are able to relate it to information that is already present within their minds.

When an idea is forgotten, it is simply because the specific details and associated thoughts get lost in the
crowd and can no longer be differentiated from other pieces of information. Based upon this theory,
meaningful learning can only occur once the subsumed cognitive structures have been fully developed.

The Subsumption Theory: Basic Principles

To utilize the Subsumption Theory in eLearning, it is important to identify the two types of subsumption
that exist: correlative and derivative.

Both of these are forms of subsumption, which gradually build new cognitive
behaviors and structures within the minds of your learners. After these cognitive structures are built, the
learner then has the power to use them during meaningful learning activities and exercises.

Here are the differences between the two types of subsumption, as suggested by Ausubel.

• Correlative
A learner collects new information that extends from their existing knowledge base or elaborates
upon previously acquired information.

• Derivative
A learner derives new information directly from their cognitive structures, or identifies
relationships between concepts within their existing knowledge base. This can be achieved in a
variety of ways, from shifting information around in the hierarchal structure to linking ideas
together to create new meanings.

4 Tips To Apply Ausubel’s Subsumption Theory In eLearning

Implementing Ausubel’s subsumption theory in your training programs isn’t as complex as it may sound.
With these four tips, you’ll be able to watch your learners’ performance soar!

1) Lead off with the key takeaways

Begin your eLearning course with a general overview that highlights everything the learners need to
know by the end, and then sequence online material from general to specific, a process that Ausubel calls
“progressive differentiation”. This will help the learner to automatically categorize the eLearning content
and figure out where it belongs in their cognitive structure.

For example, if you let them know that the eLearning course covers animal genus concepts, they can
immediately access their animal classification knowledge in order to build upon it, and apply it when
participating in the eLearning course.

Offer greater detail as you progress through the eLearning module, so that your learners can begin to
differentiate it from the other pieces of information they have already collected. Remember, the key
to knowledge retention is connecting the concepts then making them stand out from the crowd so that
they are not easily forgotten.

2) Encourage learners to apply previously acquired knowledge

Speaking of connecting, the Subsumption Theory relies heavily on the idea that learners gather
information most effectively when they tie new concepts to existing cognitive structures. This works both
ways, however. They can also apply information they have already learned in order to improve
comprehension and knowledge retention.

In many respects, it’s a two-way street that gives learners the opportunity to acquire new knowledge
while they are committing “old” knowledge to their long-term memory banks. Whenever possible,
integrate eLearning scenarios and simulations that allow them to apply existing knowledge while
discovering new concepts and ideas.

Also, highlight how new and familiar ideas compare and contrast so that they can create that all-important
cognitive connection.

3) Include both receptive and discovery-based activities

Although Ausubel found no special advantage in discovery learning, believing that it has the same effect
on learning while being more time-consuming, it is not a bad idea to include both receptive and
discovery-based online activities in your eLearning course design, as each serves its own unique
eLearning purpose.

While receptive online activities help learners acquire and retain new information, discovery-based
activities allow them to understand how information can be applied in different situations and contexts.
In the real world, they won’t be taking written assessments to test their knowledge. Instead, they will have
to apply it in a wide range of settings to overcome challenges and solve real-life problems. Thus, you
need to ensure that they are not only learning the information, but that they can also apply it outside of
the virtual classroom.

4) Make it meaningful

Although the Subsumption Theory deals primarily with rote learning principles, its ultimate goal is to
create meaningful learning experiences. Meaningful learning occurs when an individual is able to create
connections between what they have learned and what they already know within their existing cognitive
structures.

Essentially, they tie it into existing knowledge and commit it to memory, so that they can draw upon it at
a later time. One of the most effective ways to make your eLearning course meaningful is to make it
personal.

Integrate problem-solving online activities that focus on past experiences, and integrate stories that trigger
their emotions. Use real-world examples that stress the benefits of learning the subject matter and help
them relate to ideas or concepts.

Using Ausubel’s Subsumption Theory In Your eLearning Courses

Knowing as much as possible about how your learners acquire and retain subject matter is an integral part
of instructional design for any eLearning course. Ausubel’s Subsumption Theory gives you the ability
to create a meaningful connection between new ideas and pre-existing knowledge, so that your learners
gain the opportunity to remember the key takeaways and get the most benefit out of your eLearning
course.

Ausubel's theory is concerned with how individuals learn large amounts of meaningful material from
verbal/textual presentations in a school setting (in contrast to theories developed in the context of
laboratory experiments).

Assumptions about Learning


According to Ausubel, learning is based upon the kinds of superordinate, representational, and
combinatorial processes that occur during the reception of information. A primary process in learning is
subsumption in which new material is related to relevant ideas in the existing cognitive structure on a
substantive, nonverbatim basis. Cognitive structures represent the residue of all learning experiences;
forgetting occurs because certain details get integrated and lose their individual identity.

Major Tenets
A major instructional mechanism proposed by Ausubel is the use of advance organizers:
"These organizers are introduced in advance of learning itself, and are also presented at a higher level of
abstraction, generality, and inclusiveness; and since the substantive content of a given organizer or series
of organizers is selected on the basis of its suitability for explaining, integrating, and interrelating the
material they precede, this strategy simultaneously satisfies the substantive as well as the programming
criteria for enhancing the organization strength of cognitive structure." (1963, p. 81).
Ausubel emphasizes that advance organizers are different from overviews and summaries which simply
emphasize key ideas and are presented at the same level of abstraction and generality as the rest of the
material. Organizers act as a subsuming bridge between new learning material and existing related ideas.
JEAN PIAGET
Background and Key Concepts of Piaget's Theory

Jean Piaget's theory of cognitive development suggests that intelligence changes as children grow. A
child's cognitive development is not just about acquiring knowledge; the child has to develop or construct
a mental model of the world.
Cognitive development occurs through the interaction of innate capacities and environmental events, and
children pass through a series of stages.

Piaget's theory of cognitive development proposes four stages of development.


Sensorimotor stage: birth to 2 years
Preoperational stage: 2 to 7 years
Concrete operational stage: 7 to 11 years
Formal operational stage: ages 12 and up
The sequence of the stages is universal across cultures and follows the same invariant (unchanging) order.
All children go through the same stages in the same order (but not all at the same rate).

How Piaget Developed the Theory


Piaget was employed at the Binet Institute in the 1920s, where his job was to develop French versions of
questions on English intelligence tests. He became intrigued with the reasons children gave for their
wrong answers to the questions that required logical thinking.
He believed that these incorrect answers revealed important differences between the thinking of adults
and children.
Piaget branched out on his own with a new set of assumptions about children’s intelligence:
• Children’s intelligence differs from an adult’s in quality rather than in quantity. This means that
children reason (think) differently from adults and see the world in different ways.
• Children actively build up their knowledge about the world. They are not passive creatures
waiting for someone to fill their heads with knowledge.
• The best way to understand children’s reasoning was to see things from their point of view.

What Piaget wanted to do was not to measure how well children could count, spell or solve problems as a
way of grading their I.Q. What he was more interested in was the way in which fundamental concepts like
the very idea of number, time, quantity, causality, justice and so on emerged.

Piaget studied children from infancy to adolescence using naturalistic observation of his own three babies
and sometimes controlled observation too. From these he wrote diary descriptions charting their
development.
He also used clinical interviews and observations of older children who were able to understand questions
and hold conversations.

Stages of Cognitive Development


Jean Piaget's theory of cognitive development suggests that children move through four different stages of
intellectual development which reflect the increasing sophistication of children's thought.
Each child goes through the stages in the same order, and child development is determined by biological
maturation and interaction with the environment.
At each stage of development, the child’s thinking is qualitatively different from the other stages, that is,
each stage involves a different type of intelligence.

Piaget’s Four Stages

Stage                  Age                         Goal

Sensorimotor           Birth to 18-24 months       Object permanence

Preoperational         2 to 7 years old            Symbolic thought

Concrete operational   Ages 7 to 11 years          Logical thought

Formal operational     Adolescence to adulthood    Scientific reasoning


Although no stage can be missed out, there are individual differences in the rate at which children
progress through stages, and some individuals may never attain the later stages.
Piaget did not claim that a particular stage was reached at a certain age - although descriptions of the
stages often include an indication of the age at which the average child would reach each stage.

The Sensorimotor Stage


Ages: Birth to 2 Years
Major Characteristics and Developmental Changes:

• The infant learns about the world through their senses and through their actions (moving around
and exploring their environment).
• During the sensorimotor stage a range of cognitive abilities develop. These include: object
permanence; self-recognition; deferred imitation; and representational play.
• They relate to the emergence of the general symbolic function, which is the capacity to represent
the world mentally.
• At about 8 months the infant will understand the permanence of objects: they still exist even
if they can’t be seen, and the infant will search for them when they disappear.

During this stage the infant lives in the present. It does not yet have a mental picture of the world stored
in its memory; therefore, it does not have a sense of object permanence.
If it cannot see something then it does not exist. This is why you can hide a toy from an infant, while it
watches, but it will not search for the object once it has gone out of sight.
The main achievement during this stage is object permanence - knowing that an object still exists, even if
it is hidden. It requires the ability to form a mental representation (i.e., a schema) of the object.
Towards the end of this stage the general symbolic function begins to appear where children show in their
play that they can use one object to stand for another. Language starts to appear because they realize that
words can be used to represent objects and feelings.
The child begins to be able to store information that it knows about the world, recall it and label it.
Learn More: The Sensorimotor Stage of Cognitive Development

The Preoperational Stage


Ages: 2 - 7 Years
Major Characteristics and Developmental Changes:

• Toddlers and young children acquire the ability to internally represent the world through
language and mental imagery.
• During this stage, young children can think about things symbolically. This is the ability to make
one thing, such as a word or an object, stand for something other than itself.
• A child’s thinking is dominated by how the world looks, not how the world is. It is not yet
capable of logical (problem solving) type of thought.
• Children at this stage also demonstrate animism. This is the tendency for the child to think that non-
living objects (such as toys) have life and feelings like a person’s.
By 2 years, children have made some progress towards detaching their thought from the physical world.
However, they have not yet developed the logical (or 'operational') thought characteristic of later stages.
Thinking is still intuitive (based on subjective judgements about situations) and egocentric
(centered on the child's own view of the world).
Learn More: The Preoperational Stage of Cognitive Development

The Concrete Operational Stage


Ages: 7 - 11 Years

Major Characteristics and Developmental Changes:

• During this stage, children begin to think logically about concrete events.
• Children begin to understand the concept of conservation; understanding that, although things
may change in appearance, certain properties remain the same.
• During this stage, children can mentally reverse things (e.g. picture a ball of plasticine returning
to its original shape).
• During this stage, children also become less egocentric and begin to think about how other people
might think and feel.

The stage is called concrete because children can think logically much more successfully if they can
manipulate real (concrete) materials or pictures of them.
Piaget considered the concrete stage a major turning point in the child's cognitive development because it
marks the beginning of logical or operational thought. This means the child can work things out internally
in their head (rather than physically try things out in the real world).
Children can conserve number (age 6), mass (age 7), and weight (age 9). Conservation is the
understanding that something stays the same in quantity even though its appearance changes.
However, operational thought is only effective here if the child is asked to reason about materials that
are physically present. Children at this stage will tend to make mistakes or be overwhelmed when asked
to reason about abstract or hypothetical problems.
Learn More: The Concrete Operational Stage of Development

The Formal Operational Stage


Ages: 12 and Over
Major Characteristics and Developmental Changes:

• Concrete operations are carried out on things whereas formal operations are carried out on ideas.
Formal operational thought is entirely freed from physical and perceptual constraints.
• During this stage, adolescents can deal with abstract ideas (e.g. no longer needing to think about
slicing up cakes or sharing sweets to understand division and fractions).
• They can follow the form of an argument without having to think in terms of specific examples.
• Adolescents can deal with hypothetical problems with many possible solutions. E.g., if asked
‘What would happen if money were abolished in one hour’s time?’ they could speculate about
many possible consequences.

From about 12 years children can follow the form of a logical argument without reference to its content.
During this time, people develop the ability to think about abstract concepts, and logically test
hypotheses.
This stage sees the emergence of scientific thinking, formulating abstract theories and hypotheses when faced
with a problem.
Learn More: The Formal Operational Stage of Development

Piaget's Theory Differs From Others In Several Ways:


Piaget's (1936, 1950) theory of cognitive development explains how a child constructs a mental model of
the world. He disagreed with the idea that intelligence was a fixed trait, and regarded cognitive
development as a process which occurs due to biological maturation and interaction with the
environment.
Children’s ability to understand, think about and solve problems in the world develops in a stop-start,
discontinuous manner (rather than gradual changes over time).
▪ It is concerned with children, rather than all learners.
▪ It focuses on development, rather than learning per se, so it does not address learning of information or
specific behaviors.
▪ It proposes discrete stages of development, marked by qualitative differences, rather than a gradual
increase in number and complexity of behaviors, concepts, ideas, etc.
The goal of the theory is to explain the mechanisms and processes by which the infant, and then the child,
develops into an individual who can reason and think using hypotheses.
To Piaget, cognitive development was a progressive reorganization of mental processes as a result of
biological maturation and environmental experience.
Children construct an understanding of the world around them, then experience discrepancies between
what they already know and what they discover in their environment.

Schemas
Piaget claimed that knowledge cannot simply emerge from sensory experience; some initial structure is
necessary to make sense of the world.
According to Piaget, children are born with a very basic mental structure (genetically inherited and
evolved) on which all subsequent learning and knowledge are based.
Schemas are the basic building blocks of such cognitive models, and enable us to form a mental
representation of the world.
Piaget (1952, p. 7) defined a schema as: "a cohesive, repeatable action sequence possessing component
actions that are tightly interconnected and governed by a core meaning."
In more simple terms Piaget called the schema the basic building block of intelligent behavior – a way of
organizing knowledge. Indeed, it is useful to think of schemas as “units” of knowledge, each relating to
one aspect of the world, including objects, actions, and abstract (i.e., theoretical) concepts.
Wadsworth (2004) suggests that schemata (the plural of schema) be thought of as 'index cards' filed in the
brain, each one telling an individual how to react to incoming stimuli or information.
When Piaget talked about the development of a person's mental processes, he was referring to increases in
the number and complexity of the schemata that a person had learned.
When a child's existing schemas are capable of explaining what it can perceive around it, it is said to be in
a state of equilibrium, i.e., a state of cognitive (i.e., mental) balance.
Piaget emphasized the importance of schemas in cognitive development and described how they were
developed or acquired. A schema can be defined as a set of linked mental representations of the world,
which we use both to understand and to respond to situations. The assumption is that we store these
mental representations and apply them when needed.

Examples of Schemas
A person might have a schema about buying a meal in a restaurant. The schema is a stored form of the
pattern of behavior which includes looking at a menu, ordering food, eating it and paying the bill. This is
an example of a type of schema called a 'script.' Whenever they are in a restaurant, they retrieve this
schema from memory and apply it to the situation.
The schemas Piaget described tend to be simpler than this - especially those used by infants. He described
how - as a child gets older - his or her schemas become more numerous and elaborate.
Piaget believed that newborn babies have a small number of innate schemas - even before they have had
many opportunities to experience the world. These neonatal schemas are the cognitive structures
underlying innate reflexes. These reflexes are genetically programmed into us.
For example, babies have a sucking reflex, which is triggered by something touching the baby's lips. A
baby will suck a nipple, a comforter (dummy), or a person's finger. Piaget, therefore, assumed that the
baby has a 'sucking schema.'
Similarly, the grasping reflex which is elicited when something touches the palm of a baby's hand, or the
rooting reflex, in which a baby will turn its head towards something which touches its cheek, are innate
schemas. Shaking a rattle would be the combination of two schemas, grasping and shaking.

The Process of Adaptation


Jean Piaget (1952; see also Wadsworth, 2004) viewed intellectual growth as a process
of adaptation (adjustment) to the world. This happens through assimilation, accommodation, and
equilibration.

Assimilation
Piaget defined assimilation as the cognitive process of fitting new information into existing
cognitive schemas, perceptions, and understanding. Overall beliefs and understanding of the
world do not change as a result of the new information.
This means that when you are faced with new information, you make sense of this information by
referring to information you already have (information processed and learned previously) and try
to fit the new information into the information you already have.
For example, a 2-year-old child sees a man who is bald on top of his head and has long frizzy hair
on the sides. To his father’s horror, the toddler shouts “Clown, clown” (Siegler et al., 2003).

Accommodation
Psychologist Jean Piaget defined accommodation as the cognitive process of revising existing
cognitive schemas, perceptions, and understanding so that new information can be incorporated.
This happens when the existing schema (knowledge) does not work, and needs to be changed to
deal with a new object or situation.
In order to make sense of some new information, you actually adjust information you already have
(schemas you already have, etc.) to make room for this new information.
For example, a child may have a schema for birds (feathers, flying, etc.) and then they see a
plane, which also flies, but would not fit into their bird schema.
In the “clown” incident, the boy’s father explained to his son that the man was not a clown and
that even though his hair was like a clown’s, he wasn’t wearing a funny costume and wasn’t
doing silly things to make people laugh.
With this new knowledge, the boy was able to change his schema of “clown” and make this idea
fit better to a standard concept of “clown”.

Equilibration
Piaget believed that all human thought seeks order and is uncomfortable with contradictions and
inconsistencies in knowledge structures. In other words, we seek 'equilibrium' in our cognitive
structures.
Equilibrium occurs when a child's schemas can deal with most new information through
assimilation. However, an unpleasant state of disequilibrium occurs when new information
cannot be fitted into existing schemas (assimilation).
Piaget believed that cognitive development did not progress at a steady rate, but rather in leaps
and bounds. Equilibration is the force which drives the learning process as we do not like to be
frustrated and will seek to restore balance by mastering the new challenge (accommodation).
Once the new information is acquired the process of assimilation with the new schema will
continue until the next time we need to make an adjustment to it.
Educational Implications
Piaget (1952) did not explicitly relate his theory to education, although later researchers have explained
how features of Piaget's theory can be applied to teaching and learning.
Piaget has been extremely influential in developing educational policy and teaching practice. For
example, a review of primary education by the UK government in 1966 was based strongly on Piaget’s
theory. The result of this review led to the publication of the Plowden report (1967).
Discovery learning – the idea that children learn best through doing and actively exploring - was seen as
central to the transformation of the primary school curriculum.
'The report's recurring themes are individual learning, flexibility in the curriculum, the centrality of play
in children's learning, the use of the environment, learning by discovery and the importance of the
evaluation of children's progress - teachers should 'not assume that only what is measurable is valuable.'
Because Piaget's theory is based upon biological maturation and stages, the notion of 'readiness' is
important. Readiness concerns when certain information or concepts should be taught. According to
Piaget's theory children should not be taught certain concepts until they have reached the appropriate
stage of cognitive development.
According to Piaget (1958), assimilation and accommodation require an active learner, not a passive one,
because problem-solving skills cannot be taught, they must be discovered.
Within the classroom learning should be student-centered and accomplished through active discovery
learning. The role of the teacher is to facilitate learning, rather than direct tuition. Therefore, teachers
should encourage the following within the classroom:
o Focus on the process of learning, rather than the end product of it.
o Use active methods that require rediscovering or reconstructing "truths."
o Use collaborative, as well as individual, activities (so children can learn from each other).
o Devise situations that present useful problems and create disequilibrium in the child.
o Evaluate the level of the child's development so suitable tasks can be set.

Critical Evaluation

Support

• The influence of Piaget’s ideas in developmental psychology has been enormous. He changed
how people viewed the child’s world and their methods of studying children.

He was an inspiration to many who came after and took up his ideas. Piaget's ideas have
generated a huge amount of research which has increased our understanding of cognitive
development.

• Piaget (1936) was one of the first psychologists to make a systematic study of cognitive
development. His contributions include a stage theory of child cognitive development, detailed
observational studies of cognition in children, and a series of simple but ingenious tests to reveal
different cognitive abilities.
• His ideas have been of practical use in understanding and communicating with children,
particularly in the field of education (re: Discovery Learning).

Criticisms

• Are the stages real? Vygotsky and Bruner would rather not talk about stages at all, preferring to
see development as a continuous process. Others have queried the age ranges of the stages. Some
studies have shown that progress to the formal operational stage is not guaranteed.
For example, Keating (1979) reported that 40–60% of college students fail at formal operational
tasks, and Dasen (1994) states that only one-third of adults ever reach the formal
operational stage.

• Because Piaget concentrated on the universal stages of cognitive development and biological
maturation, he failed to consider the effect that the social setting and culture may have on
cognitive development.

Dasen (1994) cites studies he conducted in remote parts of the central Australian desert with
8–14-year-old Indigenous Australians. He gave them conservation of liquid tasks and
spatial awareness tasks. He found that the ability to conserve came later in the Aboriginal
children, between ages 10 and 13 (as opposed to between 5 and 7 with Piaget’s Swiss
sample).
However, he found that spatial awareness abilities developed earlier amongst the Aboriginal
children than the Swiss children. Such a study demonstrates cognitive development is not purely
dependent on maturation but on cultural factors too – spatial awareness is crucial for nomadic
groups of people.
Vygotsky, a contemporary of Piaget, argued that social interaction is crucial for cognitive
development. According to Vygotsky the child's learning always occurs in a social context in co-
operation with someone more skillful (MKO). This social interaction provides language
opportunities, and Vygotsky considered language the foundation of thought.

• Piaget’s methods (observation and clinical interviews) are more open to biased interpretation than
other methods. Piaget made careful, detailed naturalistic observations of children, and from these
he wrote diary descriptions charting their development. He also used clinical interviews and
observations of older children who were able to understand questions and hold conversations.

Because Piaget conducted the observations alone the data collected are based on his own
subjective interpretation of events. It would have been more reliable if Piaget had conducted the
observations with another researcher and compared the results afterward to check whether they were
similar (i.e., had inter-rater reliability).
Although clinical interviews allow the researcher to explore data in more depth, the interpretation
of the interviewer may be biased. For example, children may not understand the question/s, they
have short attention spans, they cannot express themselves very well and may be trying to please
the experimenter. Such methods meant that Piaget may have formed inaccurate conclusions.

• As several studies have shown Piaget underestimated the abilities of children because his tests
were sometimes confusing or difficult to understand (e.g., Hughes, 1975).

Piaget failed to distinguish between competence (what a child is capable of doing) and
performance (what a child can show when given a particular task). When tasks were altered,
performance (and therefore competence) was affected. Therefore, Piaget might have
underestimated children’s cognitive abilities.
For example, a child might have object permanence (competence) but still not be able to search
for objects (performance). When Piaget hid objects from babies, he found that it wasn’t until
after nine months that they looked for them. However, Piaget relied on manual search methods –
whether the child was looking for the object or not.
Later research, such as Baillargeon and DeVos (1991), reported that infants as young as four
months looked longer at a moving carrot that did not do what they expected, suggesting they
had some sense of permanence, otherwise they wouldn’t have had any expectation of what it
should or shouldn’t do.

• The concept of schema is incompatible with the theories of Bruner (1966) and Vygotsky
(1978). Behaviorism would also refute Piaget’s schema theory because it cannot be directly
observed as it is an internal process. Therefore, they would claim it cannot be objectively
measured.
• Piaget studied his own children and the children of his colleagues in Geneva in order to deduce
general principles about the intellectual development of all children. Not only was his sample
very small, but it was composed solely of European children from families of high socio-
economic status. Researchers have therefore questioned the generalizability of his
data.
• For Piaget, language is seen as secondary to action, i.e., thought precedes language. The Russian
psychologist Lev Vygotsky (1978) argues that the development of language and thought go
together and that the origin of reasoning is more to do with our ability to communicate with
others than with our interaction with the material world.

Piaget vs Vygotsky
Piaget maintains that cognitive development stems largely from independent explorations in which
children construct knowledge on their own, whereas Vygotsky argues that children learn through social
interactions, building knowledge by learning from more knowledgeable others such as peers and adults.
In other words, Vygotsky believed that culture affects cognitive development.
These factors lead to differences in the education style they recommend: Piaget would argue for the
teacher to provide opportunities which challenge the children’s existing schemas and for children to be
encouraged to discover for themselves.
Alternatively, Vygotsky would recommend that teachers assist the child to progress through the zone of
proximal development by using scaffolding.
However, both theories view children as actively constructing their own knowledge of the world; they are
not seen as just passively absorbing knowledge. They also agree that cognitive development involves
qualitative changes in thinking, not only a matter of learning more things.

                        Piaget                                Vygotsky

Sociocultural           Little emphasis                       Strong emphasis

Constructivism          Cognitive constructivist              Social constructivist

Stages                  Cognitive development follows         Cognitive development is dependent
                        universal stages                      on social context (no stages)

Learning &              The child is a 'lone scientist' who   Learning occurs through social
Development             develops knowledge through own        interactions; the child builds
                        exploration                           knowledge by working with others

Role of Language        Thought drives language               Language drives cognitive
                        development                           development

Role of the Teacher     Provide opportunities for children    Assist the child to progress
                        to learn about the world for          through the ZPD by using
                        themselves (discovery learning)       scaffolding

FREQUENTLY ASKED QUESTIONS

What are the four stages of Piaget's theory?


Piaget divided children’s cognitive development into four stages; each stage represents a new way of
thinking and understanding the world.
He called them (1) sensorimotor intelligence, (2) preoperational thinking, (3) concrete operational
thinking, and (4) formal operational thinking. Each stage is correlated with an age period of childhood,
but only approximately.
According to Piaget, intellectual development takes place through stages which occur in a fixed
order and which are universal (all children pass through these stages regardless of social or cultural
background). Development can only occur when the brain has matured to a point of “readiness”.

What are some of the weaknesses of Piaget's theory?


Cross-cultural studies show that the stages of development (except the formal operational stage) occur in
the same order in all cultures, suggesting that cognitive development is a product of a biological process
of maturation.
However, the age at which the stages are reached varies between cultures and individuals, which
suggests that social and cultural factors and individual differences influence cognitive development.
LAWRENCE KOHLBERG
Kohlberg's theory of moral development is a theory that focuses on how children develop morality and
moral reasoning. Kohlberg's theory suggests that moral development occurs in a series of six stages. The
theory also suggests that moral logic is primarily focused on seeking and maintaining justice.

What Is Moral Development?

How do people develop morality? This question has fascinated parents, religious leaders, and
philosophers for ages, but moral development has also become a hot-button issue in psychology and
education.1 Do parental or societal influences play a greater role in moral development? Do all kids
develop morality in similar ways?

American psychologist Lawrence Kohlberg developed one of the best-known theories exploring some of
these basic questions.2 His work modified and expanded upon Jean Piaget's previous work but was more
centered on explaining how children develop moral reasoning.

How did the two theories differ? Piaget described a two-stage process of moral development.3 Kohlberg
extended Piaget's theory, proposing that moral development is a continual process that occurs throughout
the lifespan. His theory outlines six stages of moral development within three different levels.

In recent years, Kohlberg's theory has been criticized as being Western-centric with a bias toward men (he
primarily used male research subjects) and with having a narrow worldview based on upper-middle-class
value systems and perspectives.4

How Kohlberg Developed His Theory

Kohlberg based his theory on a series of moral dilemmas presented to his study subjects. Participants
were also interviewed to determine the reasoning behind their judgments in each scenario.5

One example was "Heinz Steals the Drug." In this scenario, a woman had cancer, and her doctors believed
only one drug might save her. The drug had been discovered by a local pharmacist, who could make it for
$200 per dose and sold it for $2,000 per dose. The woman's husband, Heinz, could only raise $1,000 to
buy the drug.

He tried to negotiate with the pharmacist for a lower price or to be extended credit to pay for it over time.
But the pharmacist refused to sell it for any less or to accept partial payments. Rebuffed, Heinz instead
broke into the pharmacy and stole the drug to save his wife. Kohlberg asked, "Should the husband have
done that?"

Kohlberg was not interested so much in the answer to whether Heinz was wrong or right but in
the reasoning for each participant's decision. He then classified their reasoning into the stages of his
theory of moral development.6

Stages of Moral Development

Kohlberg's theory is broken down into three primary levels. At each level of moral development, there are
two stages. Similar to how Piaget believed that not all people reach the highest levels of cognitive
development, Kohlberg believed not everyone progresses to the highest stages of moral development.

Level 1. Preconventional Morality

Preconventional morality is the earliest period of moral development. It lasts until around the age of 9. At
this age, children's decisions are primarily shaped by the expectations of adults and the consequences for
breaking the rules. There are two stages within this level:

• Stage 1 (Obedience and Punishment): The earliest stage of moral development, obedience and
punishment is especially common in young children, but adults are also capable of expressing
this type of reasoning. According to Kohlberg, people at this stage see rules as fixed and
absolute.7 Obeying the rules is important because it is a way to avoid punishment.
• Stage 2 (Individualism and Exchange): At the individualism and exchange stage of moral
development, children account for individual points of view and judge actions based on how they
serve individual needs. In the Heinz dilemma, children argued that the best course of action was
the choice that best served Heinz’s needs. Reciprocity is possible at this point in moral
development, but only if it serves one's own interests.

Level 2. Conventional Morality

The next period of moral development is marked by the acceptance of social rules regarding what is good
and moral. During this time, adolescents and adults internalize the moral standards they have learned
from their role models and from society.

This period also focuses on the acceptance of authority and conforming to the norms of the group. There
are two stages at this level of morality:

• Stage 3 (Developing Good Interpersonal Relationships): Often referred to as the "good boy-good
girl" orientation, this stage of moral development is focused on living up to social expectations
and roles.7 There is an emphasis on conformity, being "nice," and consideration of how choices
influence relationships.
• Stage 4 (Maintaining Social Order): This stage is focused on ensuring that social order is
maintained. At this stage of moral development, people begin to consider society as a whole
when making judgments. The focus is on maintaining law and order by following the rules, doing
one’s duty, and respecting authority.

Level 3. Postconventional Morality

At this level of moral development, people develop an understanding of abstract principles of morality.
The two stages at this level are:

• Stage 5 (Social Contract and Individual Rights): At this stage, the ideas of a social contract and
individual rights lead people to account for the differing values, opinions, and beliefs of other
people.7 Rules of law are important for maintaining a society, but members of the society should
agree upon these standards.
• Stage 6 (Universal Principles): Kohlberg’s final level of moral reasoning is based on universal
ethical principles and abstract reasoning. At this stage, people follow these internalized principles
of justice, even if they conflict with laws and rules.

Kohlberg believed that only a relatively small percentage of people ever reach the post-conventional
stages (around 10 to 15%).7 One analysis found that while stages one to four could be seen as universal in
populations throughout the world, the fifth and sixth stages were extremely rare in all populations.8
Ivan Petrovich Pavlov

Ivan Petrovich Pavlov was born on September 14, 1849, at Ryazan, where his father, Peter
Dmitrievich Pavlov, was a village priest. He was educated first at the church school in
Ryazan and then at the theological seminary there.

Inspired by the progressive ideas which D. I. Pisarev, the most eminent of the Russian literary critics of
the 1860s, and I. M. Sechenov, the father of Russian physiology, were spreading, Pavlov abandoned his
religious career and decided to devote his life to science. In 1870 he enrolled in the physics and
mathematics faculty to take the course in natural science.

Pavlov became passionately absorbed with physiology, which in fact was to remain of such fundamental
importance to him throughout his life. It was during this first course that he produced, in collaboration
with another student, Afanasyev, his first learned treatise, a work on the physiology of the pancreatic
nerves. This work was widely acclaimed, and he was awarded a gold medal for it.

In 1875 Pavlov completed his course with an outstanding record and received the degree of Candidate of
Natural Sciences. However, impelled by his overwhelming interest in physiology, he decided to continue
his studies and proceeded to the Academy of Medical Surgery to take the third course there. He
completed this in 1879 and was again awarded a gold medal. After a competitive examination, Pavlov
won a fellowship at the Academy, and this together with his position as Director of the Physiological
Laboratory at the clinic of the famous Russian clinician, S. P. Botkin, enabled him to continue his
research work. In 1883 he presented his doctor’s thesis on the subject of «The centrifugal nerves of the
heart». In this work he developed his idea of nervism, using as example the intensifying nerve of the heart
which he had discovered, and furthermore laid down the basic principles on the trophic function of the
nervous system. In this as well as in other works, resulting mainly from his research in the laboratory at
the Botkin clinic, Pavlov showed that there existed a basic pattern in the reflex regulation of the activity
of the circulatory organs.
In 1890 Pavlov was invited to organize and direct the Department of Physiology at the Institute of
Experimental Medicine. Under his direction, which continued over a period of 45 years to the end of his
life, this Institute became one of the most important centres of physiological research.

In 1890 Pavlov was appointed Professor of Pharmacology at the Military Medical Academy and five
years later he was appointed to the then vacant Chair of Physiology, which he held till 1925.

It was at the Institute of Experimental Medicine in the years 1891-1900 that Pavlov did the bulk of his
research on the physiology of digestion. It was here that he developed the surgical method of the
«chronic» experiment with extensive use of fistulas, which enabled the functions of various organs to be
observed continuously under relatively normal conditions. This discovery opened a new era in the
development of physiology, for until then the principal method used had been that of «acute» vivisection,
and the function of an organism had only been arrived at by a process of analysis. This meant that
research into the functioning of any organ necessitated disruption of the normal interrelation between the
organ and its environment. Such a method was inadequate as a means of determining how the functions of
an organ were regulated or of discovering the laws governing the organism as a whole under normal
conditions – problems which had hampered the development of all medical science. With his method of
research, Pavlov opened the way for new advances in theoretical and practical medicine. With extreme
clarity he showed that the nervous system played the dominant part in regulating the digestive process,
and this discovery is in fact the basis of modern physiology of digestion. Pavlov made known the results
of his research in this field, which is of great importance in practical medicine, in lectures which he
delivered in 1895 and published under the title Lektsii o rabote glavnykh pishchevaritelnykh
zhelez (Lectures on the function of the principal digestive glands) (1897).

Pavlov’s research into the physiology of digestion led him logically to create a science of conditioned
reflexes. In his study of the reflex regulation of the activity of the digestive glands, Pavlov paid special
attention to the phenomenon of «psychic secretion», which is caused by food stimuli at a distance from
the animal. By employing the method – developed by his colleague D. D. Glinskii in 1895 – of
establishing fistulas in the ducts of the salivary glands, Pavlov was able to carry out experiments on the
nature of these glands. A series of these experiments caused Pavlov to reject the subjective interpretation
of «psychic» salivary secretion and, on the basis of Sechenov’s hypothesis that psychic activity was of a
reflex nature, to conclude that even here a reflex – though not a permanent but a temporary or conditioned
one – was involved.

This discovery of the function of conditioned reflexes made it possible to study all psychic activity
objectively, instead of resorting to subjective methods as had hitherto been necessary; it was now possible
to investigate by experimental means the most complex interrelations between an organism and its
external environment.

In 1903, at the 14th International Medical Congress in Madrid, Pavlov read a paper on «The Experimental
Psychology and Psychopathology of Animals». In this paper the definition of conditioned and other
reflexes was given, and it was shown that a conditioned reflex should be regarded as an elementary
psychological phenomenon, which at the same time is a physiological one. It followed from this that the
conditioned reflex was a clue to the mechanism of the most highly developed forms of reaction in animals
and humans to their environment and it made an objective study of their psychic activity possible.

Subsequently, in a systematic programme of research, Pavlov transformed Sechenov’s theoretical attempt
to discover the reflex mechanisms of psychic activity into an experimentally proven theory of conditioned
reflexes.

As guiding principles of materialistic teaching on the laws governing the activity of living organisms,
Pavlov deduced three principles for the theory of reflexes: the principle of determinism, the principle of
analysis and synthesis, and the principle of structure.

The development of these principles by Pavlov and his school helped greatly towards the building-up of a
scientific theory of medicine and towards the discovery of laws governing the functioning of the organism
as a whole.

Experiments carried out by Pavlov and his pupils showed that conditioned reflexes originate in the
cerebral cortex, which acts as the «prime distributor and organizer of all activity of the organism» and
which is responsible for the very delicate equilibrium of an animal with its environment. In 1905 it was
established that any external agent could, by coinciding in time with an ordinary reflex, become the
conditioned signal for the formation of a new conditioned reflex. In connection with the discovery of this
general postulate Pavlov proceeded to investigate «artificial conditioned reflexes». Research in Pavlov’s
laboratories over a number of years revealed for the first time the basic laws governing the functioning of
the cortex of the great hemispheres. Many physiologists were drawn to the problem of developing
Pavlov’s basic laws governing the activity of the cerebrum. As a result of all this research there emerged
an integrated Pavlovian theory on higher nervous activity.

Even in the early stages of his research Pavlov received world acclaim and recognition. In 1901 he was
elected a corresponding member of the Russian Academy of Sciences, in 1904 he was awarded a Nobel
Prize, and in 1907 he was elected Academician of the Russian Academy of Sciences; in 1912 he was
given an honorary doctorate at Cambridge University and in the following years honorary membership of
various scientific societies abroad. Finally, upon the recommendation of the Medical Academy of Paris,
he was awarded the Order of the Legion of Honour (1915).

After the October Revolution, a special government decree, signed by Lenin on January 24, 1921, noted
«the outstanding scientific services of Academician I. P. Pavlov, which are of enormous
significance to the working class of the whole world».

The Communist Party and the Soviet Government saw to it that Pavlov and his collaborators were given
unlimited scope for scientific research. The Soviet Union became a prominent centre for the study of
physiology, and the fact that the 15th International Physiological Congress of August 9-17, 1935, was
held in Leningrad and Moscow clearly shows that it was acknowledged as such.

Pavlov directed all his indefatigable energy towards scientific reforms. He devoted much effort to
transforming the physiological institutions headed by him into world centres of scientific knowledge, and
it is generally acknowledged that he succeeded in this endeavour.

Pavlov nurtured a great school of physiologists, which produced many distinguished pupils. He left the
richest scientific legacy – a brilliant group of pupils, who would continue developing the ideas of their
master, and a host of followers all over the world.
Edward Lee Thorndike
Edward Lee Thorndike (August 31, 1874 – August 9, 1949) was an
American psychologist who spent nearly his entire career at Teachers College, Columbia
University. His work on comparative psychology and the learning process led to the theory
of connectionism and helped lay the scientific foundation for educational psychology. He also
worked on solving industrial problems, such as employee exams and testing. He was a member
of the board of the Psychological Corporation and served as president of the American
Psychological Association in 1912.[1][2] A Review of General Psychology survey, published in
2002, ranked Thorndike as the ninth-most cited psychologist of the 20th century.[3] Edward
Thorndike had a powerful impact on reinforcement theory and behavior analysis, providing the
basic framework for empirical laws in behavior psychology with his law of effect. Through his
contributions to the behavioral psychology field came his major impacts on education, where the
law of effect has great influence in the classroom.

Early life
Thorndike, born in Williamsburg, Massachusetts,[4] was the son of Edward R. and Abbie B.
Thorndike; his father was a Methodist minister in Lowell, Massachusetts.[5] Thorndike graduated from The
Roxbury Latin School (1891), in West Roxbury, Massachusetts and from Wesleyan
University (B.S. 1895).[4] He earned an M.A. at Harvard University in 1897.[4] His two brothers
(Lynn and Ashley) also became important scholars. The younger, Lynn, was
a medievalist specializing in the history of science and magic, while the older, Ashley, was an
English professor and noted authority on Shakespeare.
While at Harvard, he was interested in how animals learn (ethology), and worked with William
James. Afterwards, he became interested in the animal 'man', to the study of which he then
devoted his life.[6] Edward's thesis is sometimes thought of as the essential document of modern
comparative psychology. Upon graduation, Thorndike returned to his initial interest, educational
psychology. In 1898 he completed his PhD at Columbia University under the supervision
of James McKeen Cattell, one of the founding fathers of psychometrics.
In 1899, after a year of unhappy initial employment at the College for Women of Case Western
Reserve in Cleveland, Ohio, he became an instructor in psychology at Teachers College at
Columbia University, where he remained for the rest of his career, studying human learning,
education, and mental testing. In 1937 Thorndike became the second President of
the Psychometric Society, following in the footsteps of Louis Leon Thurstone who had
established the society and its journal Psychometrika the previous year.
On August 29, 1900, he wed Elizabeth Moulton. They had four children, among them Frances,
who became a mathematician.[7]
During the early stages of his career, he purchased a wide tract of land on the Hudson and
encouraged other researchers to settle around him. Soon a colony had formed there with him as
its 'tribal' chief.[8]

Connectionism

Thorndike's original apparatus used in his puzzle-box experiments as seen in Animal Intelligence (Jun 1898)

Thorndike was a pioneer not only in behaviorism and in studying learning, but also in using
animals in clinical experiments.[9] Thorndike was able to create a theory of learning based on his
research with animals.[9] His doctoral dissertation, "Animal Intelligence: An Experimental Study
of the Associative Processes in Animals", was the first in psychology where the subjects were
nonhumans.[9] Thorndike was interested in whether animals could learn tasks through imitation or
observation.[10] To test this, Thorndike created puzzle boxes. The puzzle boxes were
approximately 20 inches long, 15 inches wide, and 12 inches tall.[11] Each box had a door that was
pulled open by a weight attached to a string that ran over a pulley and was attached to the
door.[11] The string attached to the door led to a lever or button inside the box.[11] When the animal
pressed the bar or pulled the lever, the string attached to the door would cause the weight to lift
and the door to open.[11] Thorndike's puzzle boxes were arranged so that the animal would be
required to perform a certain response (pulling a lever or pushing a button), while he measured
the amount of time it took them to escape.[9] Once the animal had performed the desired response
they were allowed to escape and were also given a reward, usually food.[9] Thorndike primarily
used cats in his puzzle boxes. When the cats were put into the cages they would wander
restlessly and meow, but they did not know how to escape.[12] Eventually, the cats would step on
the switch on the floor by chance, and the door would open.[12] To see if the cats could learn
through observation, he had them observe other animals escaping from the box.[12] He would then
compare the times of those who got to observe others escaping with those who did not, and he
found that there was no difference in their rate of learning.[9] Thorndike saw the same results with
other animals, and he observed that there was no improvement even when he placed the animals’
paws on the correct levers, buttons, or bar.[10] These failures led him to fall back on a trial and
error explanation of learning.[10] He found that after accidentally stepping on the switch once, they
would press the switch faster in each succeeding trial inside the puzzle box.[10] By observing and
recording the animals’ escapes and escape times, Thorndike was able to graph the times it took
for the animals in each trial to escape, resulting in a learning curve.[12] The animals had difficulty
escaping at first, but eventually "caught on" and escaped faster and faster with each successive
puzzle box trial, until they eventually leveled off.[12] The quickened rate of escape results in the
S-shape of the learning curve. The learning curve also suggested that different species learned in
the same way but at different speeds.[10] From his research with puzzle boxes, Thorndike was able
to create his own theory of learning. The puzzle box experiments were motivated in part by
Thorndike's dislike for statements that animals made use of extraordinary faculties such
as insight in their problem solving: "In the first place, most of the books do not give us a
psychology, but rather a eulogy of animals. They have all been about animal intelligence, never
about animal stupidity."[13]
Thorndike meant to distinguish clearly whether or not cats escaping from puzzle boxes were
using insight. Thorndike's instruments in answering this question were learning curves revealed
by plotting the time it took for an animal to escape the box each time it was in the box. He
reasoned that if the animals were showing insight, then their time to escape would suddenly drop
to a negligible period, which would also be shown in the learning curve as an abrupt drop,
while animals using a more ordinary method of trial and error would show gradual curves. His
finding was that cats consistently showed gradual learning.

Adult learning
Thorndike put his testing expertise to work for the United States Army during World War I,
participating in the development of the Army Beta test used to evaluate illiterate, unschooled,
and non-English speaking recruits.
Thorndike believed that "Instruction should pursue specified, socially useful goals." He also
believed that the ability to learn did not decline until age 35, and only then at a rate of 1 percent
per year, going against the thought of the time that "you can't teach old dogs new tricks."
It was later shown that it is the speed of learning, not the power of learning, that declines with age.
Thorndike also stated the law of effect, which says behaviors that are followed by good
consequences are likely to be repeated in the future.
Thorndike identified three main areas of intellectual development. The first is abstract
intelligence, the ability to process and understand different concepts. The second is
mechanical intelligence, the ability to handle physical objects. The third is social
intelligence, the ability to handle human interaction.[14]
His theory of learning comprised the following principles:

1. Learning is incremental.[9]
2. Learning occurs automatically.[9]
3. All animals learn the same way.[9]
4. Law of effect- if an association is followed by a "satisfying state of affairs" it will
be strengthened, and if it is followed by an "annoying state of affairs" it will be
weakened.
5. Thorndike's law of exercise has two parts: the law of use and the law of
disuse.
• Law of use- the more often an association is used the stronger it
becomes.[15]
• Law of disuse- the longer an association is unused the weaker it
becomes.[15]
6. Law of recency- the most recent response is most likely to reoccur.[15]
7. Multiple response- problem solving through trial and error. An animal will try
multiple responses if the first response does not lead to a specific state of
affairs.[15]
8. Set or attitude- animals are predisposed to act in a specific way.[15]
9. Prepotency of elements- a subject can filter out irrelevant aspects of a problem
and focus and respond only to significant elements of a problem.[15]
10. Response by analogy- responses from a related or similar context may be used in
a new context.[15]
11. Identical elements theory of transfer- This theory states that the extent to which
information learned in one situation will transfer to another situation is
determined by the similarity between the two situations.[9] The more similar the
situations are, the greater the amount of information that will transfer.[9] Similarly,
if the situations have nothing in common, information learned in one situation
will not be of any value in the other situation.[9]
12. Associative shifting- it is possible to shift any response from occurring with one
stimulus to occurring with another stimulus.[15] Associative shift maintains that a
response is first made to situation A, then to AB, and then finally to B, thus
shifting a response from one condition to another by associating it with that
condition.[16]
13. Law of readiness- a quality in responses and connections that results in readiness
to act.[16] Thorndike acknowledges that responses may differ in their
readiness.[16] He claims that eating has a higher degree of readiness than vomiting,
that weariness detracts from the readiness to play and increases the readiness to
sleep.[16] Also, Thorndike argues that a low or negative status in respect to
readiness is called unreadiness.[16] Behavior and learning are influenced by the
readiness or unreadiness of responses, as well as by their strength.[16]
14. Identifiability- According to Thorndike, the identification or placement of a
situation is a first response of the nervous system, which can recognize it.[16] Then
connections may be made to one another or to another response, and these
connections depend upon the original identification.[16] Therefore, a large amount
of learning is made up of changes in the identifiability of situations.[16] Thorndike
also believed that analysis might turn situations into compounds of features, such
as the number of sides on a shape, to help the mind grasp and retain the situation,
and increase their identifiability.[16]
15. Availability- The ease of getting a specific response.[16] For example, it would be
easier for a person to learn to touch their nose or mouth than it would be for them
to draw a line 5 inches long with their eyes closed.[16]
Development of law of effect
Further information: Law of effect
Thorndike's research focused on instrumental learning, which means that learning is developed
from the organism doing something. For example, he placed a cat inside a wooden box. The cat
would use various methods while trying to get out; however, nothing would work until it hit the
lever. Afterwards, Thorndike tried placing the cat inside the wooden box again. This time, the cat
was able to hit the lever quickly and succeeded in getting out of the box.
At first, Thorndike emphasized the importance of dissatisfaction stemming from failure as equal
to the reward of satisfaction with success, though in his experiments and trials on humans he
came to conclude that reward is a much more effective motivator than punishment. He also
emphasized that the satisfaction must come immediately after the success, or the lesson would
not sink in.[8]

Eugenic views
Thorndike was a proponent of eugenics. He argued that "selective breeding can alter man's
capacity to learn, to keep sane, to cherish justice or to be happy. There is no more certain and
economical a way to improve man's environment as to improve his nature."[17]

Criticism
Thorndike's law of effect and puzzle box methodology were subjected to detailed criticism
by behaviorists and many other psychologists.[18] The criticisms over the law of effect mostly
cover four aspects of the theory: the implied or retroactive working of the effect, the
philosophical implication of the law, the identification of the effective conditions that cause
learning, and the comprehensive usefulness of the law.[19]

Thorndike on education
Thorndike's Educational psychology began a trend toward behavioral psychology that sought to
use empirical evidence and a scientific approach to problem solving. Thorndike was among some
of the first psychologists to combine learning theory, psychometrics, and applied research for
school-related subjects to form psychology of education. One of his influences on education is
seen by his ideas on mass marketing of tests and textbooks at that time. Thorndike opposed the
idea that learning should reflect nature, which was the main thought of developmental scientists
at that time. He instead thought that schooling should improve upon nature. Unlike many other
psychologists of his time, Thorndike took a statistical approach to education in his later years by
collecting quantitative information intended to help teachers and educators deal
with practical educational problems.[20] Thorndike's theory was an association theory, as many
were in that time. He believed that the association between stimulus and response was solidified
by a reward or confirmation. He also thought that motivation was an important factor in
learning.[21] The Law of Effect introduced the relation between reinforcers and punishers.
Although Thorndike's description of the relation between reinforcers and punishers was
incomplete, his work in this area would later become a catalyst in further research, such as that
of B.F. Skinner.[22]
Thorndike's Law of Effect states that "responses that produce a desired effect are more likely to
occur again whereas responses that produce an unpleasant effect are less likely to occur
again".[23] The terms 'desired effect' and 'unpleasant effect' eventually became known as
'reinforcers' and 'punishers'.[24] Thorndike's contributions to the field of behavioral psychology
are seen through his influence in the classroom, with a particular focus on praising and ignoring
behaviors. Praise is used in the classroom to encourage and support the occurrence of a desired
behavior. When used in the classroom, praise has been shown to increase correct responses and
appropriate behavior.[25] Planned ignoring is used to decrease, weaken, or eliminate the
occurrence of a target behavior.[25] Planned ignoring is accomplished by removing the reinforcer
that is maintaining the behavior. For example, when the teacher does not pay attention to a
"whining" behavior of a student, it allows the student to realize that whining will not succeed in
gaining the attention of the teacher.[25]
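The contingency described by the Law of Effect can be illustrated with a toy simulation. This is a hedged sketch, not Thorndike's procedure: the response names, the equal starting strengths, and the 0.1 increment are arbitrary assumptions chosen only to show the mechanism. A response that happens to be followed by a satisfying outcome gains associative strength and is therefore emitted more often:

```python
import random

def law_of_effect_sim(trials=1000, rewarded="lever", seed=0):
    """Toy model of the Law of Effect: responses are chosen in
    proportion to their associative strength, and only the rewarded
    response gets its stimulus-response connection 'stamped in'."""
    rng = random.Random(seed)
    strengths = {"lever": 1.0, "scratch": 1.0, "pace": 1.0}
    for _ in range(trials):
        # Roulette-wheel selection: stronger responses occur more often.
        r = rng.random() * sum(strengths.values())
        for response, s in strengths.items():
            r -= s
            if r <= 0:
                chosen = response
                break
        if chosen == rewarded:
            strengths[chosen] += 0.1  # satisfying effect strengthens the bond
    return strengths
```

After many trials the rewarded response dominates the alternatives, mirroring the increasingly quick escapes Thorndike observed in his puzzle-box cats.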

Beliefs about the behavior of women


Unlike later behaviorists such as John Watson, who placed a very strong emphasis on the impact
of environmental influences on behavior,[26] Thorndike believed that differences in the parental
behavior of men and women were due to biological, rather than cultural, reasons.[27] While
conceding that society could "complicate or deform" [28] what he believed were inborn
differences, he believed that "if we [researchers] should keep the environment of boys and girls
absolutely similar these instincts would produce sure and important differences between the
mental and moral activities of boys and girls".[29] Indeed, Watson himself overtly critiqued the
idea of maternal instincts in humans in a report of his observations of first-time mothers
struggling to breastfeed. Watson argued that the very behaviors Thorndike referred to as
resulting from a "nursing instinct" stemming from "unreasoning tendencies to pet, coddle, and
'do for' others,"[30] were performed with difficulty by new mothers and thus must have been
learned, while "instinctive factors are practically nil".[31]
Thorndike's beliefs about inborn differences between the thoughts and behavior of men and
women included misogynist, pseudo-scientific arguments about the role of women in society.
For example, along with the "nursing instinct," Thorndike talked about the instinct of
"submission to mastery," writing: "Women in general are thus by original nature submissive to
men in general."[32] Although these opinions lack substantiating evidence, such beliefs were
commonplace during this era and in many cases served to justify prejudice against women in
academia (including entrance into doctoral programs, psychological laboratories, and scientific
societies).[33]

Thorndike's word books


Thorndike composed three different word books to assist teachers with word and reading
instruction. After publication of the first book in the series, The Teacher's Word Book (1921),
two other books were written and published, each approximately a decade apart from its
predecessor. The second book in the series, its full title being A Teacher's Word Book of the
Twenty Thousand Words Found Most Frequently and Widely in General Reading for Children
and Young People, was published in 1932, and the third and final book, The Teacher's Word
Book of 30,000 Words, was published in 1944.
In the preface to the third book, Thorndike writes that the list contained therein "tells anyone
who wishes to know whether to use a word in writing, speaking, or teaching how common the
word is in standard English reading matter" (p. x), and he further advises that the list can best be
employed by teachers if they allow it to guide the decisions they make choosing which words to
emphasize during reading instruction. Some words require more emphasis than others, and,
according to Thorndike, his list informs teachers of the most frequently occurring words that
should be reinforced by instruction and thus become "a permanent part of [students’] stock of
word knowledge" (p. xi). If a word is not on the list but appears in an educational text, its
meaning only needs to be understood temporarily in the context in which it was found, and then
summarily discarded from memory.
In Appendix A to the second book, Thorndike identifies the sources of his word counts and explains how
frequencies were assigned to particular words. Selected sources extrapolated from Appendix A
include:

• Children's Reading: Black Beauty, Little Women, Treasure Island, A Christmas
Carol, The Legend of Sleepy Hollow, Youth's Companion, school primers, first
readers, second readers, and third readers
• Standard Literature: The Bible, Shakespeare, Tennyson, Wordsworth, Cowper, Pope,
and Milton
• Common Facts and Trades: The United States Constitution and the Declaration of
Independence, A New Book of Cookery, Practical Sewing and Dress Making, Garden
and Farm Almanac, and mail-order catalogues
Thorndike also examined local newspapers and correspondence for common words to be
included in the book.
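The counting behind the word books can be mimicked with a short sketch (illustrative only: the tokenization rule and the sample snippets are assumptions, not Thorndike's actual corpus or hand-counting procedure). Word frequencies are tallied across source texts and ranked, which is how the lists told teachers which words deserved the most instructional emphasis:

```python
import re
from collections import Counter

def build_word_list(texts, top_n=10):
    """Tally how often each word appears across a set of source texts
    and return the most frequent ones, ranked."""
    counts = Counter()
    for text in texts:
        # Lowercase and split on letters/apostrophes; a crude stand-in
        # for the manual counting done for the word books.
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts.most_common(top_n)

# Stand-in snippets, not excerpts from Thorndike's sources:
ranked = build_word_list([
    "the cat sat on the mat",
    "the dog and the cat ran",
])
```

The ranked output puts the most common words first, so a teacher consulting such a list could prioritize them for reinforcement during reading instruction.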

Thorndike's influence
Thorndike contributed a great deal to psychology. His influence on animal psychologists,
especially those who focused on behavior plasticity, greatly contributed to the future of that
field.[34] In addition to helping pave the way towards behaviorism, his contribution to
measurement influenced philosophy, the administration and practice of education, military
administration, industrial personnel administration, civil service and many public and private
social services.[11] Thorndike influenced many schools of psychology, as Gestalt psychologists,
psychologists studying the conditioned reflex, and behavioral psychologists all studied
Thorndike's research as a starting point.[11] Thorndike was a contemporary of John B.
Watson and Ivan Pavlov. However, unlike Watson, Thorndike introduced the concept of
reinforcement.[15] Thorndike was the first to apply psychological principles to the area of learning.
His research led to many theories and laws of learning. His theory of learning, especially the law
of effect, is most often considered to be his greatest achievement.[11] In 1929, Thorndike
addressed his early theory of learning, and claimed that he had been wrong.[9] After further
research, he was forced to denounce his law of exercise completely, because he found that
practice alone did not strengthen an association, and that time alone did not weaken an
association.[9] He also got rid of half of the law of effect, after finding that a satisfying state of
affairs strengthens an association, but punishment is not effective in modifying behavior.[9] He
placed a great emphasis on consequences of behavior as setting the foundation for what is and is
not learned. His work represents the transition from the school of functionalism to behaviorism,
and enabled psychology to focus on learning theory.[9] Thorndike's work would eventually be a
major influence on B.F. Skinner and Clark Hull. Skinner, like Thorndike, put animals in boxes
and observed them to see what they were able to learn. The learning theories of Thorndike
and Pavlov were later synthesized by Clark Hull.[11] His work on motivation and attitude
formation directly affected studies on human nature as well as social order.[11] Thorndike's
research drove comparative psychology for fifty years, influencing countless psychologists over that
period and continuing to influence the field today.

Accomplishments
In 1912, Thorndike was elected president of the American Psychological Association. In 1917
he was elected as a Fellow of the American Statistical Association.[35] He was admitted to
the National Academy of Sciences in 1917, one of the first psychologists admitted to the
academy. Thorndike is well known for his experiments on animals supporting the law of
effect.[36] In 1934, Thorndike was elected president of the American Association for the
Advancement of Science.[37]

Opposition to Thorndike
Because of his "racist, sexist, and antisemitic ideals", amid the George Floyd protests of 2020,
the Board of Trustees of Teachers College in New York voted unanimously to remove his name
from Thorndike Hall.[38]

Selected works
• Educational Psychology (1903)
• Introduction to the Theory of Mental and Social Measurements (1904)
• The Elements of Psychology (1905)
• Animal Intelligence (1911)
• Educational Psychology: Briefer Course (1913; 1999 reprint, New York: Routledge, ISBN 978-0-415-21011-9)
• The Teacher's Word Book (1921)
• The Psychology of Arithmetic (1922)
• The Measurement of Intelligence (1927)
• Human Learning (1931)
• A Teacher's Word Book of the Twenty Thousand Words Found Most Frequently and Widely in
General Reading for Children and Young People (1932)
• The Fundamentals of Learning (1932)
• The Psychology of Wants, Interests, and Attitudes (1935)
• The Teacher's Word Book of 30,000 Words (co-authored with Irving Lorge) (1944)
JOHN B. WATSON
John B. Watson was a pioneering psychologist who played an important role in developing behaviorism.
Watson believed that psychology should primarily be the scientific study of observable behavior. He is remembered for
his research on the conditioning process.

Watson is also known for the Little Albert experiment, in which he demonstrated that a child could be
conditioned to fear a previously neutral stimulus. His research also revealed that this fear could be
generalized to other similar objects.

Early Life

John B. Watson was born January 9, 1878, and grew up in South Carolina. He entered Furman University
at the age of 16. After graduating five years later with a master's degree, he began studying psychology at
the University of Chicago, earning his Ph.D. in psychology in 1903.

Career

Watson began teaching psychology at Johns Hopkins University in 1908. In 1913, he gave a seminal lecture
at Columbia University titled "Psychology as the Behaviorist Views It," which essentially detailed the
behaviorist position.1 According to Watson, psychology should be the science of observable behavior.

"Psychology as the behaviorist views it is a purely objective experimental branch of natural science. Its
theoretical goal is the prediction and control of behavior. Introspection forms no essential part of its
methods, nor is the scientific value of its data dependent upon the readiness with which they lend themselves
to interpretation in terms of consciousness."1

The "Little Albert" Experiment


In his most famous and controversial experiment, known today as the "Little Albert" experiment, John
Watson and a graduate assistant named Rosalie Rayner conditioned a small child to fear a white rat. They
accomplished this by repeatedly pairing the white rat with a loud, frightening clanging noise.

They were also able to demonstrate that this fear could be generalized to white, furry objects other than the
white rat. The ethics of the experiment are often criticized today, especially because the child's fear was
never de-conditioned.

In 2009, researchers proposed that Little Albert was a boy named Douglas Merritte. Questioning what
happened to the child had intrigued many for decades.2 Sadly, the researchers found that the child died at
the age of six of hydrocephalus, a medical condition in which fluid builds up inside the skull.

In 2012, researchers proposed that Merritte suffered from neurological impairments at the time of the Little
Albert experiment and that Watson may have knowingly misrepresented the boy as a "healthy" and
"normal" child. However, in 2014, researchers suggested that another child, Albert Barger, matches the
characteristics of Little Albert better than Douglas Merritte.2

Leaving Academia

Watson remained at Johns Hopkins University until 1920. He had an affair with Rayner, divorced his first
wife, and was then asked by the university to resign his position. Watson later married Rayner and the two
remained together until her death in 1935. After leaving his academic position, Watson began working for
an advertising agency where he stayed until he retired in 1945.

During the latter part of his life, Watson's already poor relationships with his children grew progressively
worse. He spent his last years living a reclusive life on a farm in Connecticut. Shortly before his death on
September 25, 1958, he burned many of his unpublished personal papers and letters.

Contributions to Psychology

Watson set the stage for behaviorism, which soon rose to dominate psychology. While behaviorism began
to lose its hold after 1950, many of the concepts and principles are still widely used today. Conditioning
and behavior modification are still widely used in therapy and behavioral training to help clients change
problematic behaviors and develop new skills.

Achievements and Awards

Watson's lifetime achievements, publications, and awards include:

• 1915—Served as the president of the American Psychological Association (APA)
• 1919—Published Psychology from the Standpoint of a Behaviorist
• 1925—Published Behaviorism3
• 1928—Published Psychological Care of Infant and Child
• 1957—Received the APA's Award for Distinguished Scientific Contributions
JEROME BRUNER
Bruner (1966) was concerned with how knowledge is represented and organized through different modes
of thinking (or representation).
In his research on the cognitive development of children, Jerome Bruner proposed three
modes of representation:
• Enactive representation (action-based)
• Iconic representation (image-based)
• Symbolic representation (language-based)
Bruner's constructivist theory suggests that, when faced with new material, it is effective to follow a
progression from enactive to iconic to symbolic representation; this holds true even for adult learners.
Bruner's work also suggests that a learner even of a very young age is capable of learning any material so
long as the instruction is organized appropriately, in sharp contrast to the beliefs of Piaget and other stage
theorists.

Bruner's Three Modes of Representation


Modes of representation are the way in which information or knowledge are stored and encoded in
memory.
Rather than neat age-related stages (like Piaget), the modes of representation are integrated and only
loosely sequential as they "translate" into each other.

Enactive (0 - 1 year)
The first kind of memory. This mode is used within the first year of life (corresponding with Piaget’s
sensorimotor stage). Thinking is based entirely on physical actions, and infants learn by doing, rather
than by internal representation (or thinking).
It involves encoding action-based information and storing it in our memory. For
example, in the form of movement as muscle memory, a baby might remember the action of shaking a
rattle.
This mode continues later in many physical activities, such as learning to ride a bike.
Many adults can perform a variety of motor tasks (typing, sewing a shirt, operating a lawn mower) that
they would find difficult to describe in iconic (picture) or symbolic (word) form.

Iconic (1 - 6 years)
Information is stored as sensory images (icons), usually visual ones, like pictures in the mind. For some,
this is conscious; others say they do not experience it.
This may explain why, when we are learning a new subject, it is often helpful to have diagrams or
illustrations to accompany the verbal information.
Thinking is also based on the use of other mental images (icons), such as hearing, smell, or touch.

Symbolic (7 years onwards)


This develops last. This is where information is stored in the form of a code or symbol, such as language.
This mode is acquired around six to seven years old (corresponding to Piaget’s concrete operational
stage).
In the symbolic stage, knowledge is stored primarily as words, mathematical symbols, or in other symbol
systems, such as music.
Symbols are flexible in that they can be manipulated, ordered, classified, etc., so the user is not
constrained by actions or images (which have a fixed relation to that which they represent).

The Importance of Language


Language is important for the increased ability to deal with abstract concepts.
Bruner argues that language can code stimuli and free an individual from the constraints of dealing only
with appearances, to provide a more complex yet flexible cognition.
The use of words can aid the development of the concepts they represent and can remove the constraints
of the “here & now” concept. Bruner views the infant as an intelligent & active problem solver from birth,
with intellectual abilities basically similar to those of the mature adult.

Educational Implications
The aim of education should be to create autonomous learners (i.e., learning to learn).
For Bruner (1961), the purpose of education is not to impart knowledge, but instead to facilitate a child's
thinking and problem-solving skills which can then be transferred to a range of situations. Specifically,
education should also develop symbolic thinking in children.
In 1960 Bruner's text, The Process of Education was published. The main premise of Bruner's text was
that students are active learners who construct their own knowledge.

Readiness
Bruner (1960) opposed Piaget's notion of readiness. He argued that schools waste time trying to match the
complexity of subject material to a child's cognitive stage of development.
This means students are held back by teachers as certain topics are deemed too difficult to understand and
must be taught when the teacher believes the child has reached the appropriate stage of cognitive
maturity.

The Spiral Curriculum


Bruner (1960) adopts a different view and believes a child (of any age) can
understand complex information:
'We begin with the hypothesis that any subject can be taught effectively in some intellectually honest
form to any child at any stage of development.' (p. 33)
Bruner (1960) explained how this was possible through the concept of the spiral curriculum. This
involved information being structured so that complex ideas can be taught at a simplified level first, and
then re-visited at more complex levels later.
Therefore, subjects would be taught at levels of gradually increasing difficulty (hence the spiral analogy).
Ideally, teaching this way should lead to children being able to solve problems by themselves.

Discovery Learning
Bruner (1961) proposes that learners construct their own knowledge and do this by organizing and
categorizing information using a coding system. Bruner believed that the most effective way to develop a
coding system is to discover it rather than being told by the teacher.
The concept of discovery learning implies that students construct their own knowledge for themselves
(also known as a constructivist approach).
The role of the teacher should not be to teach information by rote learning, but instead to facilitate the
learning process. This means that a good teacher will design lessons that help students discover the
relationship between bits of information.
To do this a teacher must give students the information they need, but without organizing it for them. The
use of the spiral curriculum can aid the process of discovery learning.

Bruner and Vygotsky


Both Bruner and Vygotsky emphasize a child's environment, especially the social environment, more than
Piaget did. Both agree that adults should play an active role in assisting the child's learning.
Bruner, like Vygotsky, emphasized the social nature of learning, citing that other people should help a
child develop skills through the process of scaffolding.
'[Scaffolding] refers to the steps taken to reduce the degrees of freedom in carrying out some task so
that the child can concentrate on the difficult skill she is in the process of acquiring' (Bruner, 1978, p. 19).
He was especially interested in the characteristics of people whom he considered to have achieved their
potential as individuals.
The term scaffolding first appeared in the literature when Wood, Bruner, and Ross described how
tutors interacted with a preschooler to help them solve a block reconstruction problem (Wood et al.,
1976).
The concept of scaffolding is very similar to Vygotsky's notion of the zone of proximal development, and
it's not uncommon for the terms to be used interchangeably.
Scaffolding involves helpful, structured interaction between an adult and a child with the aim of helping
the child achieve a specific goal.
The purpose of the support is to allow the child to achieve higher levels of development by:

• Simplifying the task or idea.
• Motivating and encouraging the child.
• Highlighting important task elements or errors.
• Giving models that can be imitated.

Bruner and Piaget


Agree
• Children are innately PRE-ADAPTED to learning
• Children have a NATURAL CURIOSITY
• Children’s COGNITIVE STRUCTURES develop over time
• Children are ACTIVE participants in the learning process
• Cognitive development entails the acquisition of SYMBOLS
Disagree
• Social factors, particularly language, are important for cognitive growth; these underpin the concept of ‘scaffolding’
• The development of LANGUAGE is a cause, not a consequence, of cognitive development
• You can SPEED UP cognitive development; you do not have to wait for the child to be ready
• The involvement of ADULTS and MORE KNOWLEDGEABLE PEERS makes a big difference
Obviously, there are similarities between Piaget and Bruner, but an important difference is that
Bruner’s modes are not related in terms of which presuppose the one that precedes it. While
sometimes one mode may dominate in usage, they coexist.
Bruner states that what determines the level of intellectual development is the extent to which the child
has been given appropriate instruction together with practice or experience.
ALBERT BANDURA
Albert Bandura is an influential social cognitive psychologist who is perhaps best known for his
social learning theory, the concept of self-efficacy, and his famous Bobo doll experiments. He is a
Professor Emeritus at Stanford University and is widely regarded as one of the greatest living
psychologists.

One 2002 survey ranked him as the fourth most influential psychologist of the twentieth century, behind
only B.F. Skinner, Sigmund Freud, and Jean Piaget.

Albert Bandura's Claims to Fame

Albert Bandura is best known for his work in the following areas:

• Bobo doll studies
• Observational learning
• Self-efficacy
• Social learning theory


Albert Bandura's Early Life

Albert Bandura was born on December 4, 1925, in a small Canadian town located approximately fifty
miles from Edmonton. The last of six children, Bandura received his early education in a small high
school with only two teachers. According to Bandura, because of this limited access to
educational resources, "The students had to take charge of their own education."
He realized that while "the content of most textbooks is perishable...the tools of self-directedness serve
one well over time." These early experiences may have contributed to his later emphasis on the
importance of personal agency.

Bandura soon became fascinated by psychology after enrolling at the University of British Columbia. He
had started out as a biological sciences major, and his interest in psychology formed by accident. While
working nights and commuting to school with a group of students, he found himself arriving at school
earlier than his courses started.

To pass the time, he began taking "filler classes" during these early morning hours, which led him to
eventually stumble upon psychology.1

Bandura explained, "One morning, I was wasting time in the library. Someone had forgotten to return a
course catalog and I thumbed through it attempting to find a filler course to occupy the early time slot. I
noticed a course in psychology that would serve as an excellent filler. It sparked my interest, and I
found my career."

He earned his degree from the University of British Columbia in 1949 after just three years of study and
then went on to graduate school at the University of Iowa. The school had been home to Kenneth Spence,
who collaborated with his mentor Clark Hull at Yale University, and other psychologists including Kurt
Lewin.

While the program took an interest in social learning theory, Bandura felt that it was too focused
on behaviorist explanations. Bandura earned his MA degree in 1951 and his Ph.D. in clinical psychology
in 1952.

Career and Theories

After earning his Ph.D., he was offered a position at Stanford University and accepted it. He began
working at Stanford in 1953 and has continued to work at the university to this day. It was during his
studies on adolescent aggression that Bandura became increasingly interested in vicarious learning,
modeling, and imitation.

Albert Bandura's social learning theory stressed the importance of observational learning, imitation, and
modeling. "Learning would be exceedingly laborious, not to mention hazardous, if people had to rely
solely on the effects of their own actions to inform them what to do," Bandura explained in his 1977 book
on the subject.

His theory integrated a continuous interaction between behaviors, cognitions, and the environment.

Bobo Doll Study

Bandura's most famous experiment was the 1961 Bobo doll study. In the experiment, he made a film in
which an adult model was shown beating up a Bobo doll and shouting aggressive words.

The film was then shown to a group of children. Afterward, the children were allowed to play in a room
that held a Bobo doll. Those who had seen the film with the violent model were more likely to beat the
doll, imitating the actions and words of the adult in the film clip.
The Bobo doll study was significant because it departed from behaviorism’s insistence that all behavior is
directed by reinforcement or rewards. The children received no encouragement or incentives to beat up
the doll; they were simply imitating the behavior they had observed.

Bandura termed this phenomenon observational learning and characterized the elements of effective
observational learning as attention, retention, reproduction, and motivation.

Bandura's work emphasizes the importance of social influences, but also a belief in personal control.
"People with high assurance in their capabilities approach difficult tasks as challenges to be mastered
rather than as threats to be avoided," he has suggested.


Is Albert Bandura a Behaviorist?

While most psychology textbooks place Bandura’s theory with those of the behaviorists, Bandura himself
has noted that he "...never really fit the behavioral orthodoxy."

Even in his earliest work, Bandura argued that reducing behavior to a stimulus-response cycle was too
simplistic. While his work used behavioral terminology such as 'conditioning' and 'reinforcement,'
Bandura explained, "...I conceptualized these phenomena as operating through cognitive processes."

"Authors of psychological texts continue to mischaracterize my approach as rooted in behaviorism,"
Bandura has explained, describing his own perspective as 'social cognitivism.'

Bandura's Selected Publications

Bandura has been a prolific author of books and journal articles over the last 60 years and is the most
widely cited living psychologist.

Some of Bandura's best-known books and journal articles have become classics within psychology and
continue to be widely cited today. His first professional publication was a 1953 paper titled "'Primary' and
'Secondary' Suggestibility" that appeared in the Journal of Abnormal and Social Psychology.

In 1973, Bandura published Aggression: A Social Learning Analysis, which focused on the origins of
aggression. His 1977 book Social Learning Theory presented the basics of his theory of how people learn
through observation and modeling.

His 1977 article entitled "Self-Efficacy: Toward a Unifying Theory of Behavioral Change" was published
in Psychological Review and introduced his concept of self-efficacy. The article also became an instant
classic in psychology.

Contributions to Psychology

Bandura’s work is considered part of the cognitive revolution in psychology that began in the late 1960s.
His theories have had a tremendous impact on personality psychology, cognitive psychology, education,
and psychotherapy.
LEV VYGOTSKY
The work of Lev Vygotsky (1934) has become the foundation of much research and theory in cognitive
development over the past several decades, particularly of what has become known as sociocultural
theory.
Vygotsky's sociocultural theory views human development as a socially mediated process in which
children acquire their cultural values, beliefs, and problem-solving strategies through collaborative
dialogues with more knowledgeable members of society. Vygotsky's theory comprises concepts
such as culture-specific tools, private speech, and the Zone of Proximal Development.
Vygotsky's theories stress the fundamental role of social interaction in the development of cognition
(Vygotsky, 1978), as he believed strongly that community plays a central role in the process of "making
meaning."
Unlike Piaget's notion that children's development must necessarily precede their learning, Vygotsky
argued, "learning is a necessary and universal aspect of the process of developing culturally organized,
specifically human psychological function" (1978, p. 90). In other words, social learning tends to precede
development.
Vygotsky developed a sociocultural approach to cognitive development. He developed his theories at
around the same time as Jean Piaget was starting to develop his ideas (the 1920s and 30s), but he died at
the age of 38, so his theories are incomplete, although some of his writings are still being translated from
Russian.
No single principle (such as Piaget's equilibration) can account for development. Individual development
cannot be understood without reference to the social and cultural context within which it is embedded.
Higher mental processes in the individual have their origin in social processes.

Vygotsky's theory differs from that of Piaget in a number of important ways:


1: Vygotsky places more emphasis on culture affecting cognitive development.

This contradicts Piaget's view of universal stages and content of development (Vygotsky does not refer to
stages in the way that Piaget does).
Hence Vygotsky assumes cognitive development varies across cultures, whereas Piaget states cognitive
development is mostly universal across cultures.
2: Vygotsky places considerably more emphasis on social factors contributing to cognitive development.

(i) Vygotsky states the importance of cultural and social context for learning. Cognitive development
stems from social interactions from guided learning within the zone of proximal development as children
and their partners co-construct knowledge. In contrast, Piaget maintains that cognitive development
stems largely from independent explorations in which children construct knowledge of their own.
(ii) For Vygotsky, the environment in which children grow up will influence how they think and what
they think about.
3: Vygotsky places more (and different) emphasis on the role of language in cognitive development.

According to Piaget, language depends on thought for its development (i.e., thought comes before
language). For Vygotsky, thought and language are initially separate systems from the beginning of life,
merging at around three years of age, producing verbal thought (inner speech).
For Vygotsky, cognitive development results from an internalization of language.
4: According to Vygotsky adults are an important source of cognitive development.

Adults transmit their culture's tools of intellectual adaptation that children internalize. In contrast, Piaget
emphasizes the importance of peers, as peer interaction promotes social perspective taking.
Effects of Culture: Tools of Intellectual Adaptation
Vygotsky claimed that infants are born with the basic abilities for intellectual development called
'elementary mental functions' (Piaget focuses on motor reflexes and sensory abilities).

Elementary mental functions include:

• Attention
• Sensation
• Perception
• Memory
Eventually, through interaction within the sociocultural environment, these are developed into more
sophisticated and effective mental processes which Vygotsky refers to as 'higher mental functions.'

Each culture provides its children tools of intellectual adaptation that allow them to use the basic mental
functions more effectively/adaptively.
'Tools of intellectual adaptation' is Vygotsky’s term for the methods of thinking and problem-solving
strategies that children internalize through social interactions with the more knowledgeable members of
society.
For example, memory in young children is limited by biological factors. However, culture determines
the type of memory strategy we develop. In Western cultures, children learn note-taking to aid memory,
but in pre-literate societies other strategies must be developed, such as tying knots in a string, carrying
pebbles, or repeating the names of ancestors until large numbers can be recalled.

Vygotsky therefore sees cognitive functions, even those carried out alone, as shaped by the beliefs,
values, and tools of intellectual adaptation of the culture in which a person develops, and hence as
socioculturally determined. The tools of intellectual adaptation accordingly vary from culture to culture - as in
the memory example.

Social Influences on Cognitive Development


Like Piaget, Vygotsky believes that young children are curious and actively involved in their own
learning and the discovery and development of new understandings/schema. However, Vygotsky placed
more emphasis on social contributions to the process of development, whereas Piaget emphasized self-
initiated discovery.
According to Vygotsky (1978), much important learning by the child occurs through social interaction
with a skillful tutor. The tutor may model behaviors and/or provide verbal instructions for the child.
Vygotsky refers to this as cooperative or collaborative dialogue. The child seeks to understand the actions
or instructions provided by the tutor (often the parent or teacher) then internalizes the information, using
it to guide or regulate their own performance.

Zone of Proximal Development


The concept of the More Knowledgeable Other is integrally related to the second important principle of
Vygotsky's work, the Zone of Proximal Development.
This is an important concept that relates to the difference between what a child can achieve independently
and what a child can achieve with guidance and encouragement from a skilled partner.

For example, the child could not solve the jigsaw puzzle (in the example above) by itself and would have
taken a long time to do so (if at all), but was able to solve it following interaction with the father, and has
developed competence at this skill that will be applied to future jigsaws.
Vygotsky (1978) sees the Zone of Proximal Development as the area where the most sensitive instruction
or guidance should be given - allowing the child to develop skills they will then use on their own -
developing higher mental functions.
Vygotsky also views interaction with peers as an effective way of developing skills and strategies. He
suggests that teachers use cooperative learning exercises where less competent children develop with help
from more skillful peers - within the zone of proximal development.
Evidence for Vygotsky and the ZPD
Freund (1990) conducted a study in which children had to decide which items of furniture should be
placed in particular areas of a doll's house.
Some children were allowed to play with their mother in a similar situation before they attempted it alone
(zone of proximal development) while others were allowed to work on this by themselves (Piaget's
discovery learning).
Freund found that those who had previously worked with their mother (ZPD) showed the greatest
improvement compared with their first attempt at the task. The conclusion was that guided learning
within the ZPD led to greater understanding/performance than working alone (discovery learning).

Vygotsky and Language


Vygotsky believed that language develops from social interactions, for communication purposes.
Vygotsky viewed language as man’s greatest tool, a means for communicating with the outside world.
According to Vygotsky (1962) language plays two critical roles in cognitive development:
1: It is the main means by which adults transmit information to children.
2: Language itself becomes a very powerful tool of intellectual adaptation.
Vygotsky (1987) differentiates between three forms of language: social speech which is external
communication used to talk to others (typical from the age of two); private speech (typical from the age of
three) which is directed to the self and serves an intellectual function; and finally private speech goes
underground, diminishing in audibility as it takes on a self-regulating function and is transformed into
silent inner speech (typical from the age of seven).
For Vygotsky, thought and language begin as separate systems, merging at around three years of age.
At this point speech and thought become interdependent: thought becomes verbal, and speech becomes
representational. When this happens, children's monologues are internalized to become inner speech.
This internalization of language is important because it drives cognitive development.
'Inner speech is not the interior aspect of external speech - it is a function in itself. It still remains speech,
i.e., thought connected with words. But while in external speech thought is embodied in words, in inner
speech words die as they bring forth thought. Inner speech is to a large extent thinking in pure meanings.'
(Vygotsky, 1962: p. 149)
Vygotsky (1987) was the first psychologist to document the importance of private speech. He considered
private speech as the transition point between social and inner speech, the moment in development where
language and thought unite to constitute verbal thinking.
Thus private speech, in Vygotsky's view, was the earliest manifestation of inner speech. Indeed, private
speech is more similar (in its form and function) to inner speech than social speech.
Private speech is 'typically defined, in contrast to social speech, as speech addressed to the self (not to
others) for the purpose of self-regulation (rather than communication).' (Diaz, 1992, p.62)
Unlike inner speech which is covert (i.e., hidden), private speech is overt. In contrast to Piaget’s (1959)
notion of private speech representing a developmental dead-end, Vygotsky (1934, 1987) viewed private
speech as:
'A revolution in development which is triggered when preverbal thought and preintellectual language
come together to create fundamentally new forms of mental functioning.'
(Fernyhough & Fradley, 2005: p. 1).
In addition to disagreeing on the functional significance of private speech, Vygotsky and Piaget also
offered opposing views on the developmental course of private speech and the environmental
circumstances in which it occurs most often (Berk & Garvin, 1984).

Through private speech, children begin to collaborate with themselves in the same way a more
knowledgeable other (e.g., an adult) collaborates with them in the achievement of a given function.
Vygotsky sees "private speech" as a means for children to plan activities and strategies, thereby aiding
their development. Private speech is the use of language for self-regulation of behavior. Language is,
therefore, an accelerator of thinking/understanding (Jerome Bruner also views language in this way).
Vygotsky believed that children who engage in large amounts of private speech are more socially
competent than children who do not use it extensively.
Vygotsky (1987) notes that private speech does not merely accompany a child’s activity but acts as a tool
used by the developing child to facilitate cognitive processes, such as overcoming task obstacles,
enhancing imagination, thinking, and conscious awareness.
Children use private speech most often during intermediate difficulty tasks because they are attempting to
self-regulate by verbally planning and organizing their thoughts (Winsler et al., 2007).
The frequency and content of private speech are correlated with behavior or performance. Private
speech appears to be functionally related to cognitive performance: it appears at times of difficulty with
a task, for example in tasks related to executive function (Fernyhough & Fradley, 2005), problem-solving
tasks (Behrend et al., 1992), and schoolwork in both language (Berk & Landau, 1993) and mathematics
(Ostad & Sorensen, 2007).
Berk (1986) provided empirical support for the notion of private speech. She found that most private
speech exhibited by children serves to describe or guide the child's actions.
Berk also discovered that children engaged in private speech more often when working alone on challenging
tasks and when their teacher was not immediately available to help them. Furthermore, Berk also
found that private speech develops similarly in all children regardless of cultural background.
Vygotsky (1987) proposed that private speech is a product of an individual’s social environment. This
hypothesis is supported by the fact that there exist high positive correlations between rates of social
interaction and private speech in children.
Children raised in cognitively and linguistically stimulating environments (situations more frequently
observed in higher socioeconomic status families) start using and internalizing private speech faster than
children from less privileged backgrounds. Indeed, children raised in environments characterized by low
verbal and social exchanges exhibit delays in private speech development.
Children's use of private speech diminishes as they grow older, following a curvilinear trend. This is due
to changes in ontogenetic development whereby children become able to internalize language (through inner
speech) in order to self-regulate their behavior (Vygotsky, 1987).
For example, research has shown that children's private speech usually peaks at 3–4 years of age,
decreases at 6–7 years of age, and gradually fades out to be mostly internalized by age 10 (Diaz, 1992).
Vygotsky proposed that private speech diminishes and disappears with age not because it becomes
socialized, as Piaget suggested, but rather because it "goes underground to constitute inner speech or
verbal thought" (Frauenglass & Diaz, 1985).

Classroom Applications

Vygotsky's approach to child development is a form of social constructivism, based on the idea that
cognitive functions are the products of social interactions.
Vygotsky emphasized the collaborative nature of learning by the construction of knowledge through
social negotiation.
He rejected the assumption made by Piaget that it was possible to separate learning from its social
context.
Vygotsky believed everything is learned on two levels. First, through interaction with others, and then
integrated into the individual’s mental structure.
Every function in the child’s cultural development appears twice: first, on the social level, and later, on
the individual level; first, between people (interpsychological) and then inside the child
(intrapsychological). This applies equally to voluntary attention, to logical memory, and to the formation
of concepts. All the higher functions originate as actual relationships between individuals. (Vygotsky,
1978, p.57)
Teaching styles based on constructivism mark a conscious effort to move from 'traditional, objectivist,
didactic, memory-oriented transmission models' (Cannella & Reiff, 1994) to a more student-centred
approach.
A contemporary educational application of Vygotsky's theory is "reciprocal teaching," used to improve
students' ability to learn from text. In this method, teachers and students collaborate in learning and
practicing four key skills: summarizing, questioning, clarifying, and predicting. The teacher's role in the
process is reduced over time.
Vygotsky's theory of cognitive development is also relevant to instructional concepts such as
"scaffolding" and "apprenticeship," in which a teacher or more advanced peer helps to structure or
arrange a task so that a novice can work on it successfully.
Vygotsky's theories also feed into the current interest in collaborative learning, suggesting that group
members should have different levels of ability so more advanced peers can help less advanced members
operate within their ZPD.

Critical Evaluation
Vygotsky's work has not received the same level of intense scrutiny that Piaget's has, partly due to the
time-consuming process of translating Vygotsky's work from Russian. Also, Vygotsky's sociocultural
perspective does not provide as many specific hypotheses to test as did Piaget's theory, making refutation
difficult, if not impossible.
Perhaps the main criticism of Vygotsky's work concerns the assumption that it is relevant to all cultures.
Rogoff (1990) dismisses the idea that Vygotsky's ideas are culturally universal, arguing instead that the
concept of scaffolding - which is heavily dependent on verbal instruction - may not be equally useful in
all cultures for all types of learning. Indeed, in some instances, observation and practice may be more
effective ways of learning certain skills.
Burrhus Frederic Skinner

Burrhus Frederic Skinner (March 20, 1904 – August 18, 1990) was an
American psychologist, behaviorist, author, inventor, and social philosopher.[2][3][4][5] He was a professor
of psychology at Harvard University from 1958 until his retirement in 1974.[6]
Considering free will to be an illusion, Skinner saw human action as dependent on consequences of
previous actions, a theory he would articulate as the principle of reinforcement: If the consequences to an
action are bad, there is a high chance the action will not be repeated; if the consequences are good, the
probability of the action being repeated becomes stronger.[7]
Skinner developed behavior analysis, especially the philosophy of radical behaviorism,[8] and founded
the experimental analysis of behavior, a school of experimental research psychology. He also
used operant conditioning to strengthen behavior, considering the rate of response to be the most effective
measure of response strength. To study operant conditioning, he invented the operant conditioning
chamber (aka the Skinner box),[7] and to measure rate he invented the cumulative recorder. Using these
tools, he and Charles Ferster produced Skinner's most influential experimental work, outlined in their
1957 book Schedules of Reinforcement.[9][10]
Skinner was a prolific author, publishing 21 books and 180 articles.[11] He imagined the application of his
ideas to the design of a human community in his 1948 utopian novel, Walden Two,[3] while his analysis of
human behavior culminated in his 1957 work, Verbal Behavior.[12]
Skinner, John B. Watson, and Ivan Pavlov are considered to be the pioneers of modern behaviorism.
Accordingly, a June 2002 survey listed Skinner as the most influential psychologist of the 20th century.[13]

Biography
Skinner was born in Susquehanna, Pennsylvania, to Grace and William Skinner, the latter of whom was a
lawyer. Skinner became an atheist after a Christian teacher tried to assuage his fear of the hell that his
grandmother described.[14] His brother Edward, two and a half years younger, died at age 16 of a cerebral
hemorrhage.[15]
Skinner's closest friend as a young boy was Raphael Miller, whom he called Doc because his father was a
doctor. Doc and Skinner became friends due to their parents' religiousness and both had an interest in
contraptions and gadgets. They had set up a telegraph line between their houses to send messages to each
other, although they had to call each other on the telephone due to the confusing messages sent back and
forth. During one summer, Doc and Skinner started an elderberry business to gather berries and sell them
door to door. They found that when they picked the ripe berries, the unripe ones came off the branches
too, so they built a device that was able to separate them. The device was a bent piece of metal to form a
trough. They would pour water down the trough into a bucket, and the ripe berries would sink into the
bucket and the unripe ones would be pushed over the edge to be thrown away.[16]
Education
Skinner attended Hamilton College in New York with the intention of becoming a writer. He found
himself at a social disadvantage at the college because of his intellectual attitude.[17] He
was a member of Lambda Chi Alpha fraternity.[16]
He wrote for the school paper, but, as an atheist, he was critical of the traditional mores of his college.
After receiving his Bachelor of Arts in English literature in 1926, he attended Harvard University, where
he would later research and teach. While attending Harvard, a fellow student, Fred S. Keller, convinced
Skinner that he could make an experimental science of the study of behavior. This led Skinner to invent a
prototype for the Skinner box and to join Keller in the creation of other tools for small experiments.[17]
After graduation, Skinner unsuccessfully tried to write a novel while he lived with his parents, a period
that he later called the "Dark Years".[17] He became disillusioned with his literary skills despite
encouragement from the renowned poet Robert Frost, concluding that he had little world experience and
no strong personal perspective from which to write. His encounter with John B. Watson's behaviorism led
him into graduate study in psychology and to the development of his own version of behaviorism.[17]
Later life

Skinner received a PhD from Harvard in 1931, and remained there as a researcher for some years. In
1936, he went to the University of Minnesota in Minneapolis to teach.[18] In 1945, he moved to Indiana
University,[19] where he was chair of the psychology department from 1946 to 1947, before returning to
Harvard as a tenured professor in 1948. He remained at Harvard for the rest of his life. In 1973, Skinner
was one of the signers of the Humanist Manifesto II.[20]
In 1936, Skinner married Yvonne "Eve" Blue. The couple had two daughters, Julie (later Vargas) and
Deborah (later Buzan; married Barry Buzan).[21][22] Yvonne died in 1997,[23] and is buried in Mount
Auburn Cemetery, Cambridge, Massachusetts.[17]
Skinner's public exposure increased in the 1970s, and he remained active even after his retirement in
1974, until his death. In 1989, Skinner was diagnosed with leukemia and died on August 18, 1990, in
Cambridge, Massachusetts. Ten days before his death, he was given the lifetime achievement award by
the American Psychological Association and gave a talk concerning his work.[24]

Contributions to psychology
Behaviorism
Main articles: Behaviorism and Radical behaviorism
Skinner referred to his approach to the study of behavior as radical behaviorism,[25] which originated in
the early 1900s as a reaction to depth psychology and other traditional forms of psychology, which often
had difficulty making predictions that could be tested experimentally. This philosophy of behavioral
science assumes that behavior is a consequence of environmental histories of reinforcement (see applied
behavior analysis). In his words:
The position can be stated as follows: what is felt or introspectively observed is not some nonphysical
world of consciousness, mind, or mental life but the observer's own body. This does not mean, as I shall
show later, that introspection is a kind of psychological research, nor does it mean (and this is the heart of
the argument) that what are felt or introspectively observed are the causes of the behavior. An organism
behaves as it does because of its current structure, but most of this is out of reach of introspection. At the
moment we must content ourselves, as the methodological behaviorist insists, with a person's genetic and
environmental histories. What are introspectively observed are certain collateral products of those
histories.… In this way we repair the major damage wrought by mentalism. When what a person does [is]
attributed to what is going on inside him, investigation is brought to an end. Why explain the explanation?
For twenty-five hundred years people have been preoccupied with feelings and mental life, but only
recently has any interest been shown in a more precise analysis of the role of the environment. Ignorance
of that role led in the first place to mental fictions, and it has been perpetuated by the explanatory
practices to which they gave rise.

Foundations of Skinner's behaviorism


Skinner's ideas about behaviorism were largely set forth in his first book, The Behavior of
Organisms (1938).[9] Here, he gives a systematic description of the manner in which environmental
variables control behavior. He distinguished two sorts of behavior which are controlled in different ways:

• Respondent behaviors are elicited by stimuli, and may be modified through respondent
conditioning, often called classical (or pavlovian) conditioning, in which a neutral stimulus is
paired with an eliciting stimulus. Such behaviors may be measured by their latency or
strength.
• Operant behaviors are 'emitted', meaning that initially they are not induced by any particular
stimulus. They are strengthened through operant
conditioning (aka instrumental conditioning), in which the occurrence of a response yields a
reinforcer. Such behaviors may be measured by their rate.
Both of these sorts of behavior had already been studied experimentally, most notably: respondents,
by Ivan Pavlov;[26] and operants, by Edward Thorndike.[27] Skinner's account differed in some ways from
earlier ones,[28] and was one of the first accounts to bring them under one roof.
The idea that behavior is strengthened or weakened by its consequences raises several questions. Among
the most commonly asked are these:

1. Operant responses are strengthened by reinforcement, but where do they come from in
the first place?
2. Once it is in the organism's repertoire, how is a response directed or controlled?
3. How can very complex and seemingly novel behaviors be explained?
1. Origin of operant behavior
Skinner's answer to the first question was very much like Darwin's answer to the question of the origin of
a 'new' bodily structure, namely, variation and selection. Similarly, the behavior of an individual varies
from moment to moment; a variation that is followed by reinforcement is strengthened and becomes
prominent in that individual's behavioral repertoire. Shaping was Skinner's term for the gradual
modification of behavior by the reinforcement of desired variations. Skinner believed that 'superstitious'
behavior can arise when a response happens to be followed by reinforcement to which it is actually
unrelated.
2. Control of operant behavior
The second question, "how is operant behavior controlled?" arises because, to begin with, the behavior is
"emitted" without reference to any particular stimulus. Skinner answered this question by saying that a
stimulus comes to control an operant if it is present when the response is reinforced and absent when it is
not. For example, if lever-pressing only brings food when a light is on, a rat, or a child, will learn to press
the lever only when the light is on. Skinner summarized this relationship by saying that a discriminative
stimulus (e.g. light or sound) sets the occasion for the reinforcement (food) of the operant (lever-press).
This three-term contingency (stimulus-response-reinforcer) is one of Skinner's most important concepts,
and sets his theory apart from theories that use only pair-wise associations.[28]
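The three-term contingency can be illustrated with a toy simulation (a sketch of my own, not Skinner's formalism): a learner keeps a separate probability of emitting the operant under each discriminative-stimulus condition, and responding is reinforced only when the light is on.

```python
import random

class OperantLearner:
    """Toy learner for the three-term contingency (illustrative only).

    The probability of emitting the operant (a lever press) is tracked
    separately for each discriminative-stimulus condition."""

    def __init__(self, step=0.1):
        self.p_respond = {"light_on": 0.5, "light_off": 0.5}
        self.step = step

    def trial(self, stimulus, rng):
        responded = rng.random() < self.p_respond[stimulus]
        if responded:
            if stimulus == "light_on":
                # Reinforcement (food) strengthens responding under this stimulus.
                self.p_respond[stimulus] = min(1.0, self.p_respond[stimulus] + self.step)
            else:
                # Unreinforced responses weaken; the decrement is a crude
                # stand-in for extinction.
                self.p_respond[stimulus] = max(0.0, self.p_respond[stimulus] - self.step)
        return responded

rng = random.Random(0)
learner = OperantLearner()
for _ in range(500):
    learner.trial(rng.choice(["light_on", "light_off"]), rng)
# The light comes to "set the occasion" for pressing: responding ends up
# far more likely when the light is on than when it is off.
```

The decrement on unreinforced responses is a simplification; in real discrimination training, responses in the absence of the discriminative stimulus simply go unreinforced and extinguish gradually.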
3. Explaining complex behavior
Most behavior of humans cannot easily be described in terms of individual responses reinforced one by
one, and Skinner devoted a great deal of effort to the problem of behavioral complexity. Some complex
behavior can be seen as a sequence of relatively simple responses, and here Skinner invoked the idea of
"chaining". Chaining is based on the fact, experimentally demonstrated, that a discriminative stimulus not
only sets the occasion for subsequent behavior, but it can also reinforce a behavior that precedes it. That
is, a discriminative stimulus is also a "conditioned reinforcer". For example, the light that sets the
occasion for lever pressing may also be used to reinforce "turning around" in the presence of a noise. This
results in the sequence "noise – turn-around – light – press lever – food." Much longer chains can be built
by adding more stimuli and responses.
However, Skinner recognized that a great deal of behavior, especially human behavior, cannot be
accounted for by gradual shaping or the construction of response sequences.[29] Complex behavior often
appears suddenly in its final form, as when a person first finds his way to the elevator by following
instructions given at the front desk. To account for such behavior, Skinner introduced the concept of rule-
governed behavior. First, relatively simple behaviors come under the control of verbal stimuli: the child
learns to "jump," "open the book," and so on. After a large number of responses come under such verbal
control, a sequence of verbal stimuli can evoke an almost unlimited variety of complex responses.[29]
Reinforcement
Reinforcement, a key concept of behaviorism, is the primary process that shapes and controls behavior,
and occurs in two ways: positive and negative. In The Behavior of Organisms (1938), Skinner
defines negative reinforcement to be synonymous with punishment, i.e. the presentation of an aversive
stimulus. This definition would subsequently be re-defined in Science and Human Behavior (1953).
In what has now become the standard set of definitions, positive reinforcement is the strengthening of
behavior by the occurrence of some event (e.g., praise after some behavior is performed),
whereas negative reinforcement is the strengthening of behavior by the removal or avoidance of some
aversive event (e.g., opening and raising an umbrella over your head on a rainy day is reinforced by the
cessation of rain falling on you).
Both types of reinforcement strengthen behavior, or increase the probability of a behavior reoccurring; the
difference being in whether the reinforcing event is something applied (positive reinforcement) or
something removed or avoided (negative reinforcement). Punishment can be the application of an
aversive stimulus/event (positive punishment or punishment by contingent stimulation) or the removal of
a desirable stimulus (negative punishment or punishment by contingent withdrawal). Though punishment
is often used to suppress behavior, Skinner argued that this suppression is temporary and has a number of
other, often unwanted, consequences.[30] Extinction is the absence of a rewarding stimulus, which
weakens behavior.
Writing in 1981, Skinner pointed out that Darwinian natural selection is, like reinforced behavior,
"selection by consequences." Though, as he said, natural selection has now "made its case," he regretted
that essentially the same process, "reinforcement", was less widely accepted as underlying human
behavior.[31]
Schedules of reinforcement
Main article: Schedules of reinforcement
Skinner recognized that behavior is typically reinforced more than once, and, together with Charles
Ferster, he did an extensive analysis of the various ways in which reinforcements could be arranged over
time, calling it the schedules of reinforcement.[10]
The most notable schedules of reinforcement studied by Skinner were continuous, interval (fixed or
variable), and ratio (fixed or variable). All are methods used in operant conditioning.

• Continuous reinforcement (CRF): each time a specific action is performed the subject
receives a reinforcement. This method is effective when teaching a new behavior because it
quickly establishes an association between the target behavior and the reinforcer.[32]
• Interval schedule: based on the time intervals between reinforcements.[7]
o Fixed interval schedule (FI): A procedure in which reinforcements are
presented at fixed time periods, provided that the appropriate response is made.
This schedule yields a response rate that is low just after reinforcement and
becomes rapid just before the next reinforcement is scheduled.
o Variable interval schedule (VI): A procedure in which behavior is reinforced
after scheduled but unpredictable time durations following the previous
reinforcement. This schedule yields the most stable rate of responding, with the
average frequency of reinforcement determining the frequency of response.
• Ratio schedules: based on the ratio of responses to reinforcements.[7]
o Fixed ratio schedule (FR): A procedure in which reinforcement is delivered
after a specific number of responses have been made.
o Variable ratio schedule (VR):[7] A procedure in which reinforcement comes
after a number of responses that is randomized from one reinforcement to the
next (e.g. slot machines). The lower the number of responses required, the higher
the response rate tends to be. Variable ratio schedules tend to produce very rapid
and steady responding rates in contrast with fixed ratio schedules where the
frequency of response usually drops after the reinforcement occurs.
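The schedule rules above can be sketched in code (the function names and interface here are mine, for illustration only): each schedule is a rule deciding, for every response, whether reinforcement is delivered.

```python
def fixed_ratio(n):
    """FR-n: reinforce every n-th response."""
    count = 0
    def schedule(response, t):
        nonlocal count
        if response:
            count += 1
            if count >= n:
                count = 0
                return True
        return False
    return schedule

def fixed_interval(interval):
    """FI: reinforce the first response made once `interval` time units
    have elapsed since the previous reinforcement."""
    last = 0
    def schedule(response, t):
        nonlocal last
        if response and t - last >= interval:
            last = t
            return True
        return False
    return schedule

# A subject that responds on every one of 100 time steps:
fr = fixed_ratio(5)
fi = fixed_interval(5)
fr_rewards = sum(fr(True, t) for t in range(100))  # one reward per 5 responses
fi_rewards = sum(fi(True, t) for t in range(100))  # at most one reward per 5 steps
```

The contrast shows why the schedules shape behavior differently: under FR, responding faster earns reinforcement faster, whereas under FI extra responses within an interval earn nothing.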
"Skinnerian" principles have been used to create token economies in a number of institutions, such as
psychiatric hospitals. When participants behave in desirable ways, their behavior is reinforced with tokens
that can be changed for such items as candy, cigarettes, coffee, or the exclusive use of a radio or
television set.[33]
Verbal Behavior
Challenged by Alfred North Whitehead during a casual discussion while at Harvard to provide an account
of a randomly provided piece of verbal behavior,[34] Skinner set about attempting to extend his then-new
functional, inductive approach to the complexity of human verbal behavior.[35] Developed over two
decades, his work appeared in the book Verbal Behavior. Although Noam Chomsky was highly critical
of Verbal Behavior, he conceded that Skinner's "S-R psychology" was worth a review.[36] (Behavior
analysts reject the "S-R" characterization: operant conditioning involves the emission of a response which
then becomes more or less likely depending upon its consequence.)[36]
Verbal Behavior had an uncharacteristically cool reception, partly as a result of Chomsky's review, partly
because of Skinner's failure to address or rebut any of Chomsky's criticisms.[37] Skinner's peers may have
been slow to adopt the ideas presented in Verbal Behavior because of the absence of experimental
evidence—unlike the empirical density that marked Skinner's experimental work.[38]

Scientific inventions
Operant conditioning chamber
An operant conditioning chamber (also known as a "Skinner box") is a laboratory apparatus used in the
experimental analysis of animal behavior. It was invented by Skinner while he was a graduate student
at Harvard University. As used by Skinner, the box had a lever (for rats), or a disk in one wall (for
pigeons). A press on this "manipulandum" could deliver food to the animal through an opening in the
wall, and responses reinforced in this way increased in frequency. By controlling this reinforcement
together with discriminative stimuli such as lights and tones, or punishments such as electric shocks,
experimenters have used the operant box to study a wide variety of topics, including schedules of
reinforcement, discriminative control, delayed response ("memory"), punishment, and so on. By
channeling research in these directions, the operant conditioning chamber has had a huge influence on
the course of research in animal learning and its applications. It enabled great progress on problems that
could be studied by measuring the rate, probability, or force of a simple, repeatable response. However, it
discouraged the study of behavioral processes not easily conceptualized in such terms—spatial learning,
in particular, which is now studied in quite different ways, for example, by the use of the water maze.[28]
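The reinforcement loop described above (a response is emitted, a consequence follows, and the response becomes more likely) can be caricatured in a few lines of code. This is only a toy sketch under simple assumptions (a single response class, a fixed learning-rate increment), not a model Skinner himself used:

```python
import random

def simulate_operant_session(trials=200, learning_rate=0.05, seed=1):
    """Toy sketch of operant conditioning in a 'Skinner box':
    each reinforced lever press makes the next press more likely.
    (Illustrative assumptions only -- not a model from Skinner's work.)"""
    rng = random.Random(seed)
    p_press = 0.1            # initial probability of emitting the response
    presses = 0
    for _ in range(trials):
        if rng.random() < p_press:   # the animal emits a lever press
            presses += 1
            # reinforcement: food delivery strengthens the response
            p_press += learning_rate * (1.0 - p_press)
    return p_press, presses

final_p, total = simulate_operant_session()
print(f"response probability grew from 0.10 to {final_p:.2f} "
      f"across {total} reinforced presses")
```

The key qualitative point the sketch captures is that reinforcement acts on the *frequency* of an emitted response, rather than pairing a stimulus with a response as in classical conditioning.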
Cumulative recorder
The cumulative recorder makes a pen-and-ink record of simple repeated responses. Skinner designed it
for use with the operant chamber as a convenient way to record and view the rate of responses such as a
lever press or a key peck. In this device, a sheet of paper gradually unrolls over a cylinder. Each response
steps a small pen across the paper, starting at one edge; when the pen reaches the other edge, it quickly
resets to the initial side. The slope of the resulting ink line graphically displays the rate of the response;
for example, rapid responses yield a steeply sloping line on the paper, slow responding yields a line of
low slope. The cumulative recorder was a key tool used by Skinner in his analysis of behavior, and it was
very widely adopted by other experimenters, gradually falling out of use with the advent of the laboratory
computer and use of line graphs.[39] Skinner's major experimental exploration of response rates, presented
in his book with Charles Ferster, Schedules of Reinforcement, is full of cumulative records produced by
this device.[10]
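As a rough illustration of what the device records, the pen's step curve can be sketched in software (the `cumulative_record` helper below is hypothetical; the real instrument drew this mechanically in ink):

```python
def cumulative_record(response_times, t_end, dt=1.0):
    """Step curve a cumulative recorder pen would trace: the cumulative
    response count at each time step, so the slope of the curve
    reflects the response rate."""
    times = sorted(response_times)
    record = []
    count = 0
    idx = 0
    steps = int(t_end / dt)
    for k in range(steps + 1):
        t = k * dt
        while idx < len(times) and times[idx] <= t:
            count += 1
            idx += 1
        record.append((t, count))
    return record

# rapid responding (every 2 s) vs slow responding (every 10 s) over 60 s
fast = cumulative_record([2 * i for i in range(1, 31)], t_end=60)
slow = cumulative_record([10 * i for i in range(1, 7)], t_end=60)
print("fast slope:", fast[-1][1] / 60.0)   # steep line: 0.5 responses/s
print("slow slope:", slow[-1][1] / 60.0)   # shallow line: 0.1 responses/s
```

As in the text: rapid responding produces a steeply sloping record, slow responding a shallow one, and a pause appears as a flat segment.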
Air crib
The air crib is an easily cleaned, temperature- and humidity-controlled box-bed intended to replace the
standard infant crib.[40] Skinner invented the device to help his wife cope with the day-to-day tasks of
child rearing. It was designed to make early childcare simpler (by reducing laundry, diaper rash, cradle
cap, etc.), while allowing the baby to be more mobile and comfortable, and less prone to cry. Reportedly
it had some success in these goals.[41]
Pigeon-guided missile
During World War II, the US Navy required a weapon effective against surface ships, such as the
German Bismarck-class battleships. Although missile and TV technology existed, the size of the primitive
guidance systems available rendered automatic guidance impractical. To solve this problem, Skinner
initiated Project Pigeon,[50][51] which was intended to provide a simple and effective guidance system. This
system divided the nose cone of a missile into three compartments, with a pigeon placed in each. Lenses
projected an image of distant objects onto a screen in front of each bird. Thus, when the missile was
launched from an aircraft within sight of an enemy ship, an image of the ship would appear on the screen.
The screen was hinged, such that pecks at the image of the ship would guide the missile toward the
ship.[52]
Despite an effective demonstration, the project was abandoned, and eventually more conventional
solutions, such as those based on radar, became available. Skinner complained that "our problem was no
one would take us seriously."[53]
Verbal summator
Early in his career Skinner became interested in "latent speech" and experimented with a device he called
the verbal summator.[54] This device can be thought of as an auditory version of the Rorschach
inkblots.[54] When using the device, human participants listened to incomprehensible auditory "garbage"
but often read meaning into what they heard. Thus, as with the Rorschach blots, the device was intended
to yield overt behavior that projected subconscious thoughts. Skinner's interest in projective testing was
brief, but he later used observations with the summator in creating his theory of verbal behavior. The
device also led other researchers to invent new tests such as the tautophone test, the auditory apperception
test, and the Azzageddi test.[55]

Influence on teaching
Along with psychology, education has also been influenced by Skinner's views, which are extensively
presented in his book The Technology of Teaching, as well as reflected in Fred S. Keller's Personalized
System of Instruction and Ogden R. Lindsley's Precision Teaching.
Skinner argued that education has two major purposes:

1. to teach repertoires of both verbal and nonverbal behavior; and


2. to interest students in learning.
He recommended bringing students' behavior under appropriate control by providing reinforcement only
in the presence of stimuli relevant to the learning task. Because he believed that human behavior can be
affected by small consequences, something as simple as "the opportunity to move forward after
completing one stage of an activity" can be an effective reinforcer. Skinner was convinced that, to learn, a
student must engage in behavior, and not just passively receive information.[45]: 389 
Skinner believed that effective teaching must be based on positive reinforcement, which he argued is
more effective at changing and establishing behavior than punishment. He suggested that the main thing
people learn from being punished is how to avoid punishment. For example, if a child is forced to practice
playing an instrument, the child comes to associate practicing with punishment, develops feelings of
dread, and wishes to avoid practicing the instrument. This view had obvious implications for the
then widespread practice of rote learning and punitive discipline in education. The use of educational
activities as punishment may induce rebellious behavior such as vandalism or absence.[56]
Edward Chace Tolman

Latent learning is a type of learning which is not apparent in the learner's behavior at the time of learning,
but which manifests later when suitable motivation and circumstances appear. This shows that learning
can occur without any reinforcement of a behavior.
The idea of latent learning was not original to Tolman, but he developed it further. Edward Tolman
argued that humans engage in this type of learning every day: as we drive or walk the same route daily,
we learn the locations of various buildings and objects. Only when we need to find a building or object does
learning become obvious.

Tolman conducted experiments with rats and mazes to examine the role that reinforcement plays in the
way that rats learn their way through complex mazes. These experiments eventually led to the theory of
latent learning.

Cognitive maps as an example of latent learning in rats


Tolman coined the term cognitive map, which is an internal representation (or image) of external
environmental features or landmarks. He thought that individuals acquire large numbers of cues (i.e.
signals) from the environment and could use these to build a mental image of an environment (i.e. a
cognitive map).
By using this internal representation of a physical space they could get to the goal by knowing where it is
in a complex of environmental features. Short cuts and changeable routes are possible with this model.
In their famous experiments Tolman and Honzik (1930) built a maze to investigate latent learning in rats.
The study also shows that rats actively process information rather than operating on a stimulus response
relationship.
Aim
To demonstrate that rats could make navigational decisions based on knowledge of the environment,
rather than their directional choices simply being dictated by the effects of rewards.

Procedure
In their study, three groups of rats had to find their way around a complex maze. At the end of the maze
there was a food box. Some groups of rats got to eat the food, some did not, and for some rats the food
was only available after 10 days.
Group 1: Rewarded

• Day 1–17: Every time they got to the end, given food (i.e. reinforced).

Group 2: Delayed Reward

• Day 1–10: Every time they got to the end, taken out.

• Day 11–17: Every time they got to the end, given food (i.e. reinforced).

Group 3: No reward

• Day 1–17: Every time they got to the end, taken out.

Results
The delayed reward group learned the route on days 1 to 10 and formed a cognitive map of the maze.
During those days they took longer to reach the end of the maze because there was no motivation for
them to perform. From day 11 onwards they had a motivation to perform (i.e. food) and reached the end
faster than the reward group.

This shows that between stimulus (the maze) and response (reaching the end of the maze) a mediational
process was occurring: the rats were actively processing information in their brains by mentally using
their cognitive map (which they had latently learned).

David Everett Rumelhart

David Everett Rumelhart (June 12, 1942 – March 13, 2011)[1] was an American psychologist who made many
Rumelhart (June 12, 1942 – March 13, 2011)[1] was an American psychologist who made many
contributions to the formal analysis of human cognition, working primarily within the frameworks
of mathematical psychology, symbolic artificial intelligence, and parallel distributed processing. He
also admired formal linguistic approaches to cognition, and explored the possibility of formulating
a formal grammar to capture the structure of stories.

Biography
Rumelhart was born in Mitchell, South Dakota on June 12, 1942. His parents were Everett Leroy and
Thelma Theora (Ballard) Rumelhart.[2] He began his college education at the University of South
Dakota, receiving a B.A. in psychology and mathematics in 1963. He studied mathematical
psychology at Stanford University, receiving his Ph.D. in 1967. From 1967 to 1987 he served on the
faculty of the Department of Psychology at the University of California, San Diego. In 1987 he moved
to Stanford University, serving as Professor there until 1998. Rumelhart was elected to the National
Academy of Sciences in 1991 and received many prizes, including a MacArthur Fellowship in July
1987, the Warren Medal of the Society of Experimental Psychologists, and the APA Distinguished
Scientific Contribution Award. Rumelhart, co-recipient with James McClelland, won the
2002 University of Louisville Grawemeyer Award in Psychology.[3]
Rumelhart became disabled by Pick's disease, a progressive neurodegenerative disease, and at the
end of his life lived with his brother in Ann Arbor, Michigan. He died in Chelsea, Michigan. He is
survived by two sons.[2]

Work
Rumelhart was the first author of a highly cited paper from 1986[4] (co-authored by Geoffrey
Hinton and Ronald J. Williams) that applied the back-propagation algorithm (also known as the
reverse mode of automatic differentiation, published by Seppo Linnainmaa in 1970) to multi-layer
neural networks. This work showed through experiments that such networks can learn useful internal
representations of data. The approach has been widely used for basic cognition research (e.g.,
memory, visual recognition) and practical applications. This paper, however, does not cite earlier
work on the backpropagation method, such as the 1974 dissertation[5] of Paul Werbos.
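The core idea of that paper can be sketched in a few lines: a small multi-layer network of sigmoid units trained by backpropagation learns the XOR function, a task a single-layer network cannot solve. This is illustrative modern NumPy code in the spirit of that setup, not code from the paper itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic internal-representation learning demonstration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network output
    # backward pass: propagate error gradients layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

preds = (out > 0.5).astype(int).ravel()
print("predictions:", preds, "targets:", y.ravel().astype(int))
```

The hidden layer ends up encoding an internal representation of the inputs that makes the problem linearly separable at the output, which is exactly the point the paper demonstrated experimentally.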
In the same year, Rumelhart also published Parallel Distributed Processing: Explorations in the
Microstructure of Cognition[6] with James McClelland, which described their creation of computer
simulations of perceptrons, giving to computer scientists their first testable models of neural
processing, and which is now regarded as a central text in the field of cognitive science.[1]
Rumelhart's models of semantic cognition and specific knowledge in a diversity of learned
domains using initially non-hierarchical neuron-like processing units continue to interest scientists in
the fields of artificial intelligence, anthropology, information science, and decision science.
In his honor, in 2000 the Robert J. Glushko and Pamela Samuelson Foundation created the David E.
Rumelhart Prize for Contributions to the Theoretical Foundations of Human Cognition.[1][7] A Review
of General Psychology survey, published in 2002, ranked Rumelhart as the 88th most cited
psychologist of the 20th century, tied with John Garcia, James J. Gibson, Louis Leon
Thurstone, Margaret Floy Washburn, and Robert S. Woodworth.[8]
Critical Evaluation
The behaviorists stated that psychology should study actual observable behavior, and that nothing
happens between stimulus and response (i.e. no cognitive processes take place).

Edward Tolman (1948) challenged these assumptions by proposing that people and animals are
active information processors and not passive learners, as behaviorism had suggested. Tolman developed
a cognitive view of learning that has become popular in modern psychology.
Tolman believed individuals do more than merely respond to stimuli; they act on beliefs, attitudes, and
changing conditions, and they strive toward goals. Tolman was virtually the only behaviorist who found
the stimulus-response theory unacceptable, because reinforcement was not necessary for learning to
occur. He felt behavior was mainly cognitive.
Max Wertheimer
Gestalt psychology is a school of psychology founded in the 20th century that provided the foundation for
the modern study of perception. Gestalt theory emphasizes that the whole of anything is greater than its
parts. That is, the attributes of the whole are not deducible from analysis of the parts in isolation. The
word Gestalt is used in modern German to mean the way a thing has been “placed,” or “put together.”
There is no exact equivalent in English. “Form” and “shape” are the usual translations; in psychology the
word is often interpreted as “pattern” or “configuration.”

Gestalt theory originated in Austria and Germany as a reaction against the associationist
and structural schools’ atomistic orientation (an approach which fragmented experience into distinct and
unrelated elements). Gestalt studies made use instead of phenomenology. This method, with a tradition
going back to Johann Wolfgang von Goethe, involves nothing more than the description of direct
psychological experience, with no restrictions on what is permissible in the description. Gestalt
psychology was in part an attempt to add a humanistic dimension to what was considered a sterile
approach to the scientific study of mental life. Gestalt psychology further sought to encompass the
qualities of form, meaning, and value that prevailing psychologists had either ignored or presumed to fall
outside the boundaries of science.

The publication of Czech-born psychologist Max Wertheimer’s “Experimentelle Studien über das Sehen
von Bewegung” (“Experimental Studies of the Perception of Movement”) in 1912 marks the founding of
the Gestalt school. In it Wertheimer reported the result of a study on apparent movement conducted
in Frankfurt am Main, Germany, with psychologists Wolfgang Köhler and Kurt Koffka. Together, these
three formed the core of the Gestalt school for the next few decades. (By the mid-1930s all had become
professors in the United States.)
The earliest Gestalt work concerned perception, with particular emphasis on visual
perceptual organization as explained by the phenomenon of illusion. In 1912 Wertheimer discovered
the phi phenomenon, an optical illusion in which stationary objects shown in rapid
succession, transcending the threshold at which they can be perceived separately, appear to move. The
explanation of this phenomenon—also known as persistence of vision and experienced when
viewing motion pictures—provided strong support for Gestalt principles.

Under the old assumption that sensations of perceptual experience stand in one-to-one relation to
physical stimuli, the effect of the phi phenomenon was apparently inexplicable. However, Wertheimer
understood that the perceived motion is an emergent experience, not present in the stimuli in isolation but
dependent upon the relational characteristics of the stimuli. As the motion is perceived, the
observer’s nervous system and experience do not passively register the physical input in a piecemeal way.
Rather, the neural organization as well as the perceptual experience springs immediately into existence as
an entire field with differentiated parts. In later writings this principle was stated as the law of Prägnanz,
meaning that the neural and perceptual organization of any set of stimuli will form as good a Gestalt, or
whole, as the prevailing conditions will allow.



Major elaborations of the new formulation occurred within the next decades. Wertheimer, Köhler,
Koffka, and their students extended the Gestalt approach to problems in other areas of
perception, problem solving, learning, and thinking. The Gestalt principles were later applied to
motivation, social psychology, and personality (particularly by Kurt Lewin) and to aesthetics and
economic behaviour. Wertheimer demonstrated that Gestalt concepts could also be used to shed light on
problems in ethics, political behaviour, and the nature of truth. Gestalt psychology’s traditions continued
in the perceptual investigations undertaken by Rudolf Arnheim and Hans Wallach in the United States.

Cognitive Psychology
Cognitive psychology is a branch of psychology devoted to the study of human cognition, particularly as it
affects learning and behaviour. The field grew out of advances in Gestalt, developmental,
and comparative psychology and in computer science, particularly information-
processing research. Cognitive psychology shares many research interests with cognitive science, and
some experts classify it as a branch of the latter. Contemporary cognitive theory has followed one of two
broad approaches: the developmental approach, derived from the work of Jean Piaget and concerned with
“representational thought” and the construction of mental models (“schemas”) of the world, and the
information-processing approach, which views the human mind as analogous to a sophisticated computer
system.

Moral Psychology
Moral psychology is, in psychology and philosophy, the empirical and conceptual study of moral
judgment, motivation, and development, among other related topics.

Moral psychology encompasses the investigation of the psychological presuppositions of normative
ethical theories, including those regarding freedom of will and determinism and the possibility
of altruism or its alternative, psychological egoism (the notion that humans are ultimately motivated only
by perceived self-interest). The field is also concerned with the nature of akrasia (weakness of will, an
important notion in ancient Greek ethics) and moral self-deception; whether the normative demands of
certain ethical theories are realistic or reasonable, given normal human capacities and dispositions; the
psychological constitution and development of virtues and of moral character; and the nature and role of
the “moral emotions,” such as anger, indignation, compassion, and remorse.
David Everett Rumelhart
David E. Rumelhart was a leading cognitive theorist who helped develop Schema Theory, which holds
that an individual sorts knowledge into categories in which the information makes sense to that
individual. Rumelhart (1980) attributes the term schema to the work of the German philosopher
Immanuel Kant in 1787. Kant first described the process of a learner taking new information, sorting it,
and putting it into categories. Another important theorist whom Rumelhart credits for his
schema theory is Sir Frederic Bartlett, who in 1932 called the process schema.

Biography

David E. Rumelhart was born on June 12, 1942 in Mitchell, South Dakota, the oldest of three
sons born to Thelma and Everett Rumelhart. He obtained his degrees in Mathematics and Psychology
from the University of South Dakota in 1963 and his Doctorate in 1967 from Stanford. He was a
professor at the University of California, San Diego when he began working on schema theory with
Andrew Ortony. Then in 1987, he returned to Stanford, where he was a professor for eleven years, until
failing health forced him to retire in 1998. He died March 13, 2011 from Pick's disease, a progressive
neurodegenerative disease similar to Alzheimer's disease. He left behind two sons and four grandsons
(Carey, 2011).

Schema Theory

According to Rumelhart (1980), Schema Theory states that all knowledge can be packaged into smaller
units called schema. The schema not only contains the knowledge, but also how the knowledge is to be
used in memory recall. In a paper presented at a TESOL Conference in Ontario, Canada, Patricia Carrell
(1983) explained that comprehension of new information can take place only when the new information
can be related to something the learner already knows. The new information is compared against existing
schema and then placed into a schema that best fits the new information. The information is then
processed in one of two ways. In one model, called the “bottom-up” or “data-driven” processing method,
the information is compared to smaller schemas, which can then be combined into a larger, more specific
schema. As more schemas are activated, a more specific schema is activated. The other process, called
the “top-down” or “conceptually-driven” processing model, occurs when the individual searches for
confirmations of predictions. Understanding and comprehension take place when an individual can
activate the “top-down” and the “bottom-up” processing at the same time.
Example

In a blog post, Angelo Bonadonna (2004) asked a professor he had had back in 1980 about Dr.
Rumelhart and Schema Theory. Dr. Denner, who was a professor at Northeastern University, taught
educational psychology at that time. Dr. Denner responded with a story he had read from Dr. Rumelhart,
had added some things to, and had taught to his freshman class.

I do recall the story I told, because in the early years I used it here also. These days, I mainly teach
statistics, so I have not discussed schema theory for quite a while. The story I told was inspired by and
adapted from an example of how schemata function in comprehension presented in one of the early (now
classic) articles on schema theory. The reference is: Rumelhart, D.E. & Ortony, A. (1977). The
representation of knowledge in memory. In R.C. Anderson, R.J. Spiro & W.E. Montague (Eds),
Schooling and the Acquisition of Knowledge. Hillsdale NJ: Erlbaum. The essential lines of the story are
there, but I modified them and elaborated on them for my own teaching
purposes.

Sally heard the ice-cream vendor.

• What did Sally hear? How old is Sally? The schema of buy-sell is familiar to most people.
Something is exchanged for money. The schema for an ice cream vendor is also familiar to most people.
They may envision a brightly colored truck.

Sally turned and ran back into the house.

• Why did Sally go back into the house? In the buy-sell schema, Sally would go back into the house to
get money because she did not have any or enough for the purchase of a treat.

A short while later, Sally came back carrying her pocketbook.

• Does this make sense in a Buy-Sell schema? Again, even though detailed information is missing,
people will activate a schema they have personal knowledge about and fill in details such as what she
was wearing or the color of the pocketbook.

The ice-cream vendor saw Sally reach into her pocketbook.

• What is Sally reaching for? Again, money is required in the buy-sell schema.

Sally drew a gun and shot him.

• This is a shooting schema. How old is Sally now? Motive? Now, the shooting schema is activated.
Sally may become older, and reside in a seedier side of town. Again, very little information is offered in
the sentence, but the person reading the sentence is filling in with their own schema.

The ice-cream vendor wiped the water from his face.

• How old is Sally now? Once again, Sally becomes a small girl and the shooting schema is discarded.

Feel free to use the story, although do give Rumelhart and Ortony (1977) credit for the examples (and
me a little too for my adaptation and elaboration of them). Dr. Denner (2004)
Learning

Rumelhart and Norman (1978) explain that there are some problems associated with Schema Theory.
Scientific work had not determined absolutely how people learn new information, so their account of
how people learn proceeded by logical reasoning. They proposed three modes of learning:

Accretion: Whenever new information is encountered, it is processed and a schema is determined for it.
If there is an existing schema already in memory, then the new information is stored there. In essence, the
new information is matched very closely to a schema that has already been created.

Tuning: When a schema is modified according to the new information, tuning has occurred. When
there is not a close schema already available, the learner may take an existing schema
and change it to fit the new knowledge. The manipulation of the schema creates a slightly different
schema than the one the learner originally had.

Restructuring: When the new information is so different that there is no existing schema in place,
a totally new schema is created. The learner takes an existing schema and changes it so much that
a new schema emerges, and new learning has occurred.
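The three modes can be caricatured with a toy feature-matching sketch. Everything here (the set representation, the 0.5 tuning threshold, the schema names) is an illustrative assumption, not part of Rumelhart and Norman's account:

```python
def similarity(schema, features):
    """Fraction of the new item's features shared with a schema."""
    return len(schema & features) / len(features)

def learn(schemas, features, tune_threshold=0.5):
    """Toy sketch of the three modes of learning:
    - accretion:     the item fits an existing schema exactly
    - tuning:        a close schema is stretched to cover the item
    - restructuring: nothing is close, so a new schema is created
    """
    best = max(schemas, key=lambda n: similarity(schemas[n], features),
               default=None)
    score = similarity(schemas[best], features) if best else 0.0
    if score == 1.0:
        return "accretion"            # stored under the matching schema
    if score >= tune_threshold:
        schemas[best] |= features     # modify the existing schema
        return "tuning"
    schemas["schema_%d" % len(schemas)] = set(features)
    return "restructuring"

schemas = {"bird": {"wings", "feathers", "flies"}}
print(learn(schemas, {"wings", "feathers", "flies"}))   # accretion
print(learn(schemas, {"wings", "feathers", "swims"}))   # tuning
print(learn(schemas, {"scales", "fins", "swims"}))      # restructuring
```

The second call stretches the "bird" schema to cover a swimming bird (tuning), while the third item shares too little with anything stored and forces a brand-new schema (restructuring).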

Summary

Rumelhart and Ortony (1977) listed the four major characteristics of schemata:

1. Schemata have variables.

2. Schemata can be embedded into one another.

3. Schemata represent knowledge at all levels of abstraction.

4. Schemata represent knowledge rather than definitions.

Rumelhart started his teaching career by studying the parts of the brain and how the brain functions. He
developed theories on how the brain developed connections with new information and how the
information is processed. Schema Theory tries to explain how new learning is acquired. The new
information is broken down into smaller parts and then categorized into schemas. The information may be
assimilated into an existing schema without changing the schema, it may modify the schema, or it may
create a new schema altogether. The schemas may be used in a “bottom-up” process or a “top-down”
process. The collection of schemas a person has then becomes that person’s body of knowledge.
Kurt Lewin
Kurt Lewin was an influential psychologist who is today recognized as the founder of modern social
psychology. His research on group dynamics, experiential learning, and action research had a tremendous
influence on the growth and development of social psychology. He is also recognized for his important
contributions in the areas of applied psychology and organizational psychology. In a 2002 review of some
of the most influential psychologists of the 20th century, Lewin was ranked as the 18th most eminent
psychologist.1

"There is nothing so practical as a good theory." —Kurt Lewin

Best Known For

• Experiential learning
• Field Theory
• Group dynamics
• Considered the founder of modern social psychology


Early Life

Born in Prussia to a middle-class Jewish family, Kurt Lewin moved to Berlin at the age of 15 to attend the
Gymnasium. He enrolled at the University of Freiburg in 1909 to study medicine before transferring to
the University of Munich to study biology. He eventually completed a doctoral degree at the University of
Berlin.

He originally began his studies with an interest in behaviorism, but he later developed an interest in
Gestalt psychology. He served in the German army and was later injured in combat.2 These early
experiences had a major impact on the development of his field theory and later study of group dynamics.

Career

In 1921, Kurt Lewin began lecturing on philosophy and psychology at the Psychological Institute of the
University of Berlin. His popularity with students and prolific writing drew the attention of Stanford
University, and he was invited to be a visiting professor in 1932. Eventually, Lewin emigrated to the U.S.
and took a teaching position at the University of Iowa, where he worked until 1945.

While Lewin emphasized the importance of theory, he also believed that theories needed to have practical
applications. Lewin established the Research Center for Group Dynamics at Massachusetts Institute of
Technology (MIT) and the National Training Laboratories (NTL). Lewin died of a heart attack in 1947.

Field Theory

Influenced by Gestalt psychology, Lewin developed a theory that emphasized the importance of
individual personalities, interpersonal conflict, and situational variables.

Lewin's Field Theory proposed that behavior is the result of the individual and the environment. This
theory had a major impact on social psychology, supporting the notion that our individual traits and the
environment interact to cause behavior.
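Lewin captured this interactionist view in his well-known heuristic equation, in which behavior (B) is a function of the person (P) and the environment (E):

```latex
B = f(P, E)
```

The equation is a summary slogan rather than a computable formula: it asserts that neither traits alone nor situations alone predict behavior, only the two taken together.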

The Lewin, Lippitt, and White Study

In this study, schoolchildren were assigned to either authoritarian, democratic, or laissez-faire leadership
groups. It was demonstrated that democratic leadership was superior to authoritarian and laissez-faire
leadership. These findings prompted a wealth of research on leadership styles.3

Contributions to Psychology

Kurt Lewin contributed to Gestalt psychology by expanding on gestalt theories and applying them to
human behavior. He was also one of the first psychologists to systematically test human behavior,
influencing experimental psychology, social psychology, and personality psychology. He was a prolific
writer, publishing more than 80 articles and eight books on various psychology topics. Many of his
unfinished papers were published by his colleagues after his sudden death at age 56.

Lewin is known as the father of modern social psychology because of his pioneering work that utilized
scientific methods and experimentation to look at social behavior. Lewin was a seminal theorist whose
enduring impact on psychology makes him one of the preeminent psychologists of the 20th century.
Dr. E. Paul Torrance
Dr. E. Paul Torrance was a pioneer in creativity research and education for more than 50 years. He was a
monumental figure who has helped make a better world through his lifetime focus on the development of
creative potential of individuals of all abilities and ages. He produced over 1800 publications and
presentations on creativity (Millar, 1997).Torrance chose to define creativity as a process because he
thought if we understood the creative process, we could predict what kinds of person could master the
process, what kind of climate made it grow and what products would be involved (Torrance, 1995).
Torrance created a battery of tests of creative thinking abilities for use from kindergarten through
graduate and professional education. The Torrance Tests of Creative Thinking (TTCT) are the most
widely used tests of creative talent in the United States and have been translated into over 30 different
languages. The TTCT have been standardized and published and France, Italy and China. There is very
little racial, socioeconomic or cultural bias in the TTCT (Torrance, 1988).In longitudinal studies (1958-
2000), Torrance found that students identified as creatively gifted but not intellectually gifted (IQ of
130+), out achieved the intellectually in adulthood. He found that characteristics of the creative thinking
abilities differ from those of the abilities involved in intelligence and logical reasoning. In fact, the use of
intelligence tests to identify gifted students misses about 70% of those who are equally gifted using
creativity criteria identified in tests such as the TTCT (Torrance, 1995).Torrance’s research has
demonstrated that a variety of techniques for training in creative problem solving produce significant
creative growth without interfering with traditional kinds of educational achievement. Creative growth
seems to be the greatest and most predictable when deliberate, direct teaching of creative thinking skills
are involved. Torrance believed that each person is unique and has particular strengths that are of value
and must be respected; therefore, education must be built upon strengths rather than weaknesses. It takes
courage to be creative. Just as soon as you have a new idea, you are a minority of one. Torrance found
that learning and thinking creatively take place in the process of sensing difficulties, problems, and gaps
in information; making guesses or formulating hypotheses about these deficiencies; testing these
guesses and, possibly, revising and retesting them; and finally communicating the results. Vital
human needs are involved in each of these four stages. If we sense that something is missing or wrong, our
tension is aroused and we become uncomfortable. To relieve our tension we make guesses in order
to fill gaps and make connections. We know that our guesses may be wrong, and we want to find out early
whether we are correct. Thus we are driven to test our hypotheses, to modify them, and to correct our errors. Once we
make a discovery, we want to tell somebody about it. It is natural for humans to learn creatively.
David McClelland’s Acquired Needs Motivation Theory
David McClelland’s motivation theory says that humans have three core types of emotional needs,
which they acquire as a result of their life journeys. Given that this model focuses on needs, it is
considered a content theory of motivation. The needs the model considers are the need for achievement, the need for power, and the need for affiliation.


McClelland says that these needs are scalar and everyone has a blend of them, though usually one is
dominant.
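The "scalar blend with one dominant need" idea can be sketched in a few lines of code. This is purely our own toy illustration: the 0–100 scale, the example scores, and the `dominant_need` function name are assumptions for demonstration, not part of McClelland's model.

```python
# Toy illustration of McClelland's "scalar blend" idea: each person has a
# score for every need, and the dominant need is simply the highest-scoring
# one. The 0-100 scale and example scores are invented for illustration.

def dominant_need(profile: dict) -> str:
    """Return the highest-scoring need in a blend of scalar needs."""
    return max(profile, key=profile.get)

profile = {"achievement": 72, "power": 45, "affiliation": 58}
print(dominant_need(profile))  # achievement
```

In practice the blend matters as much as the maximum: two people with the same dominant need but different secondary scores may behave quite differently.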

The blend and strength of an individual’s needs shapes their behaviors and motivations in work, and in
the wider world. The different needs bring different strengths, weaknesses, preferred ways of working and
behavioral risks into the workplace.
Awareness of your own needs can help you improve your own self-awareness, self-management and
decision-making. Similarly, knowing the needs of the people you work with (or for) can help you manage
them more effectively.

While many people may have a sense of their own needs, most people choose not to fully reveal them to
others. McClelland uses an iceberg analogy to explain this.

What we see of others, the bit above the surface, is based on what they do and includes their knowledge,
skills and behaviors. What we don’t see, the bit below the surface, is their true underlying self.
This includes their motives, personality characteristics, values, beliefs and self-opinions. This split of
external and internal presentation is very similar to the concept of personality and character ethics.

We only see a little bit of who people are; the bit below the surface may be much more complicated…

The Three Emotional Needs

Most individuals have a dominant emotional need. The emotional need which is dominant will help shape
an individual’s feelings, actions and behaviors. It will also go some way towards shaping their
preferences in the working environment. It may also shape their strengths and potential risks, both as part
of a team and as a leader.

Some people need to overcome challenges and succeed.


The Need for Achievement

The first need detailed in McClelland’s Acquired Needs Motivation Theory is the need for achievement.

The need for achievement presents itself as an emotional drive towards progressing quickly, delivering
tasks, succeeding, attaining high levels of performance and other potentially competitive outcomes.

Work Preferences

Individuals with a high level of emotional need for achievement want to be constantly overcoming
challenging, yet achievable, tasks. They thrive on being slightly stretched and on the feeling of reward
they receive when they complete a deliverable.

These individuals have a moderate level of risk tolerance in relation to the work they like to do. They
know that if their activities are too risky they may fail and not receive the hit of achievement they desire.
However, if they are not risky enough, their achievements won’t feel truly rewarding.

Strengths and Risks

Individuals with a high level of emotional need for achievement often have high levels of drive. They can
be a great asset to a team when they are being well managed and things are going well. When they are
focused, they have the ability to produce a high volume of high quality outputs. To keep them performing
at their best, try to provide them with stimulation. They need challenge, recognition and active
management to ensure the stretch and leadership attention they desire.

When things are not going well, though, these individuals can also feel frustrated. They can become bored
or impatient, which can lead to some poorer behaviors. If this happens, overcome it by reengaging them
through a new set of challenges and an opportunity to deliver.
As a Leader

Individuals with a high level of emotional need for achievement can be very effective leaders. Their
desire for achievement means that they will face into their work and drive their teams towards high
volumes of work and a high quality of delivery.

Unfortunately, this drive can also be a bit of an Achilles’ heel for these leaders. If they do not check their
drive, and effectively manage their own teams, these individuals run the risk of overworking their team
members and ultimately losing their followership and support. They also face the risk of personal
burnout. They may need help to give themselves space to recover from the exertions of their work.

The Need for Power

The second need detailed in McClelland’s Acquired Needs Motivation Theory is the need for power.

The need for power presents itself as an emotional drive towards status, influence, control over others and
winning. Individuals with a high need for power desire respect and authority over others.

Work Preferences

Individuals with a high level of emotional need for power want to be constantly competing with,
directing, managing and exerting influence over others. They thrive on winning in competitions with
others and the sense of increased status that winning brings them.

These individuals typically end up with high levels of risk tolerance. Their often highly competitive
natures and their need for ever increasing status means they may take ever increasing risks in an effort to
increase their status and control.

Strengths and Risks

Individuals with a high level of emotional need for power are often tenacious and resolute, willing and
able to make and deliver on difficult decisions, and willing to do what it takes to achieve their goals.

Individuals with a high level of emotional need for power can be a mixed blessing within a team
environment. While their needs and desires are aligned to those of the team or organization, their drive for
power can be a helpful tool in motivating them, and others around them.

However, if the objectives of an individual with a high emotional need for power become separated from
the objectives of an organization, these individuals will usually pursue their own goals, even to the
detriment of the organization. It’s important for those leading individuals with a high drive for power to
align their goals with the organization’s goals.

As a Leader

Individuals with a high level of emotional need for power can be very effective leaders in specific
situations. Their desire for obtaining and maintaining power and status means they are often willing to
make difficult decisions and see through difficult objectives, where they think these objectives will help
their personal power goals.
Clearly though, individuals with a high emotional need for power also bring many risks when they are in
leadership positions. Perhaps the greatest risk associated with these leaders relates to the cultures they
create. Leaders with a high emotional need for power often seek loyalty or subservience in
others almost above all else. When this happens, organizational cultures become toxic and fearful and
organizational performance often reduces.

Another important risk these leaders bring at an organizational level is the risk of these leaders increasing
their own power and status at a cost to the organization. Examples of this type of activity could include
inflating team sizes, taking on work from other divisions, undermining other leaders and generally doing
whatever it takes to increase their status. In some instances these individuals may see status and power as
zero-sum games (which we’ve yet to write about). This means they may seek to undermine the status and
power of others to increase their own status and power.


The Need for Affiliation

The third need detailed in McClelland’s Acquired Needs Motivation Theory is the need for affiliation.

The need for affiliation presents itself as an emotional drive towards being liked and accepted. Individuals
with a high need for affiliation desire having agreeable and collaborative working relationships with
others and a harmonious social environment.

For some people, getting along well with others is the most important thing.

Work Preferences

Individuals with a high level of emotional need for affiliation want to be constantly working in an
environment where people feel welcomed, included, harmonious and collaborative. They are often
socially perceptive and work towards maintaining effective social relationships and creating positive
environments.

These individuals typically end up with fairly low levels of risk tolerance. Their desire for social harmony
means they don’t want to “rock the boat” or take on activities that may upset people or lead to conflict.

Strengths and Risks

Individuals with a high level of emotional need for affiliation can be a real asset for a team. They often
focus on pulling people together, creating social links and helping teams form. In addition, they can be
motivating, enthusiastic, engaging and drive real team delivery. They are very much at their best when
working towards a common and collaborative goal with others.

It’s important though from a leadership perspective to help these individuals focus on their deliverables as
well as their social relationships and structures. Often these individuals will be willing to reduce the pace
or quality of their deliverables if doing so may create more social harmony.
To help these individuals remain at their best, it’s important to focus on the culture of the team and to
create a collaborative environment. This can be done in part by creating collaborative goals or objectives,
by building team relationships through things like team building activities and by seeking to minimize
conflict, or at least explain the benefits of conflict to these individuals.

As a Leader

Individuals with a high level of emotional need for affiliation can be very effective leaders in specific
situations. Their desire for social harmony and conviviality means they can create inclusive cultures,
cohesive teams and a real sense of collaboration and commonality.

Unfortunately though, individuals with a high emotional need for affiliation run the risk of putting social
harmony ahead of progress and delivery. They may not be as objective as other leaders and there is a
definite risk that these leaders will focus more on outcomes for their people than for the business.

Learning More
We’ve written several articles on various content and process theories of motivation that you might find
interesting. These include articles on Adams’ equity theory and Herzberg’s two-factor theory of
motivation. We’ve also written an introductory post on Adair’s eight basic rules of motivation and have a guest
post on Reversal Theory. You can listen to our podcast on Reversal Theory below:

The World of Work Project View


McClelland’s Acquired Needs Motivation Theory is a simple but useful way to think about your own
drivers at work, or those of the people you work with. To get the most out of it, it may be worth reflecting
on yourself and determining which emotional needs you most associate with. Once you’ve done this, you
can think about what your own needs profile might indicate about the risks and strengths that you bring to your
role. Depending on how you feel, it might be worth having a discussion with your line manager about
this.

Like all models that group people into specific categories, this model shouldn’t be considered
definitive. Instead, it should be used as a basis for self-reflection, coaching conversations or team
discussions.

As a nearly final thought on this model, senior leaders should focus on and search for individuals in their
teams with high levels of emotional need for power. These individuals, while hugely useful in certain
circumstances, also have the ability to create hugely toxic cultures, which will damage an organization in
the longer term. They can be difficult to spot, though, as they may adopt a “kiss up and kick
down” approach to their corporate lives.
Victor Harold Vroom
Why do people make the choices they do at work, and how can managers and leaders make effective
decisions? These are two essential questions for managers to understand. They were both tackled with
characteristic clear-thinking and rigor by one man.

Short Biography

Victor Vroom was born in 1932 and grew up in the suburbs of Montreal. Initially, he was a bright child
with little academic interest – unlike his two older brothers. Instead, his passion was big-band jazz music
and, as a teenager, he dedicated up to 10 hours a day to practicing Alto Sax and Clarinet.

After leaving school, Vroom found that moving to the US as a professional musician was tricky, so he enrolled in
college and learned, through psychometric testing, that the two areas of interest that would best suit him
were music (no surprise) and psychology. Unfortunately, whilst he now enjoyed learning, his college did
not teach psychology.

At the end of the year, he was able to transfer, with a full year’s credit, to McGill University, where he
earned a BSc in 1953 and a Masters in Psychological Science (MPs Sc) in 1955. He then went to the US
to study for his PhD at the University of Michigan. It was awarded to him in 1958.

His first research post was at the University of Michigan, from where he moved to the University of
Pennsylvania in 1960 and then, in 1963, to Carnegie Mellon University. He remained there until receiving
a second offer from Yale University – this time to act as Chairman of the Department of Administrative
Sciences, and to set up a graduate school of organization and management.
He has remained there for the rest of his career, as John G Searle Professor and, currently,
as BearingPoint Professor Emeritus of Management & Professor of Psychology.

Vroom’s first book was Work and Motivation (1964) which introduced the first of his major
contributions; his ‘Expectancy Theory’ of motivation. He also collaborated with Edward Deci to produce
a review of workplace motivation, Management and Motivation, in 1970. They produced a revised edition
in 1992.

His second major contribution was the ‘Vroom-Yetton model of leadership decision making’. Vroom and
Philip Yetton published Leadership and Decision-Making in 1973. He later revised the model with Arthur
Jago, and together they published The New Leadership: Managing Participation in Organizations in 1988.

It is also worth mentioning that Vroom had a bruising experience while being pursued through the courts by an
organization he had earlier collaborated with. They won their case for copyright infringement so I shall
say no more. The judgement is available online. Vroom’s account of this, at the end of a long
autobiographical essay, is an interesting read. It was written as part of his presidency of the Society for
Industrial and Organizational Psychology in 1980-81.

Vroom’s Expectancy Theory of Motivation

Pocket blog has covered Vroom’s expectancy theory in an earlier blog, and it is also described in detail in
The Management Models Pocketbook. It is an excellent model that deserves to be far better known than it
is. Possibly the reason is that Vroom chose to express his theory as an equation: bad move! Most
people are scared of equations. That’s why we at Management Pocketbooks prefer to use the metaphor of
a chain. Motivation breaks down if any of the links is compromised. Take a look at our short and
easy-to-follow article.

The Vroom-Yetton-Jago Model of Leadership Decision-making

This one is a bit of a handful. Vroom has expressed some surprise that it became a well-adopted tool and,
more recently, noted that societies and therefore management styles have changed, rendering it less
relevant now than it was in its time. That said, it is instructive to understand the basics.

Decision-making is a leadership role, and (what I shall call) the V-Y-J model is a situational leadership
model for what style of decision-making a leader should select.

It sets out the different degrees to which a manager or leader can involve their team in decision-making,
and also the situational characteristics that would lead to a choice of each style.

Five levels of Group Involvement in Decision-making


Level 1: Authoritative A1
The leader makes their decision alone.

Level 2: Authoritative A2
The leader invites information and then makes their decision alone.

Level 3: Consultative C1
The leader invites group members to offer opinions and suggestions, and then makes their decision alone.

Level 4: Consultative C2
The leader brings the group together to hear their discussion and suggestions, and then makes their
decision alone.

Level 5: Group Consensus
The leader brings the group together to discuss the issue, and then facilitates a group decision.

Choosing a Decision-Making Approach

The V-Y-J model sets out a number of considerations and research indicates that, when a decision
approach is chosen that follows these considerations, leaders self-report greater levels of success than
when the model is not followed. The considerations are:

1. How important is the quality of the decision?
2. How much information and expertise does the leader have?
3. How well structured is the problem or question?
4. How important is group-member acceptance of the decision?
5. How likely is group-member acceptance of the decision?
6. How much do group members share the organization’s goals (against pursuing their own
agendas)?
7. How likely is the group to be able to reach a consensus?
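As an illustration only, the seven considerations can be wired into a toy rule set that picks one of the five styles (A1, A2, C1, C2, or G for group consensus). The published V-Y-J model is a specific decision tree; the ordering and thresholds below are simplifying assumptions for demonstration, not Vroom and Yetton's actual tree.

```python
# A much-simplified sketch of mapping the seven V-Y-J questions onto the
# five decision styles. The branch ordering here is an assumption, not
# the published decision tree.

def choose_style(quality_matters: bool, leader_has_info: bool,
                 problem_structured: bool, acceptance_critical: bool,
                 acceptance_likely: bool, goals_shared: bool,
                 consensus_feasible: bool) -> str:
    if not quality_matters and acceptance_likely:
        return "A1"  # trivial decision: decide alone
    if quality_matters and not leader_has_info and not problem_structured:
        # leader lacks information and the problem is unstructured:
        # bring the group together
        return "G" if goals_shared and consensus_feasible else "C2"
    if acceptance_critical and not acceptance_likely:
        # buy-in is essential and won't come for free
        return "G" if goals_shared and consensus_feasible else "C2"
    if quality_matters and not leader_has_info:
        return "C1"  # gather opinions individually, then decide alone
    return "A1" if leader_has_info else "A2"

# Unstructured, high-stakes problem, shared goals, consensus feasible:
print(choose_style(True, False, False, True, False, True, True))  # G
```

Even this crude sketch shows the model's core intuition: the less information the leader holds and the more acceptance matters, the further down the participation scale the style moves.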

A Personal Reflection

I have found both of Vroom’s principal models enormously helpful, both as a project leader and as a
management trainer. I find it somewhat sad that, in Vroom’s own words, ‘the wrenching changes at Yale
and the … lawsuit have taken their emotional and intellectual toll.’ Two major events created a huge
mental and emotional distraction for Vroom in the late 1980s. At a time when he should still have been at
the peak of his intellectual powers, he was diverted from his research. I think this is sad and wonder what
insights we may have lost as a result.
Wolfgang Köhler
Wolfgang Köhler, (born January 21 [January 9, Old Style], 1887, Revel, Estonia, Russian Empire [now
Tallinn, Estonia]—died June 11, 1967, Enfield, New Hampshire, U.S.), German psychologist and a key
figure in the development of Gestalt psychology, which seeks to understand learning, perception, and
other components of mental life as structured wholes.

Köhler’s doctoral thesis with Carl Stumpf at the University of Berlin (1909) was an investigation of
hearing. As assistant and lecturer at the University of Frankfurt (1911), he continued his auditory
research. In 1912 he and Kurt Koffka were subjects for experiments on perception conducted by Max
Wertheimer, whose report on the experiments launched the Gestalt movement. Thereafter Köhler was
associated with Wertheimer and Koffka as the three endeavored to gain acceptance for the new theory.

As director of the anthropoid research station of the Prussian Academy of Sciences at Tenerife, Canary
Islands (1913–20), Köhler conducted experiments on problem-solving by chimpanzees, revealing their
ability to devise and use simple tools and build simple structures. His findings appeared in the
classic Intelligenzprüfungen an Menschenaffen (1917; The Mentality of Apes), a work that emphasized
insight and led to a radical revision of learning theory. Another major work, Die physischen Gestalten in
Ruhe und im stationären Zustand (1920; “Physical Gestalt in Rest and Stationary States”), was based on
an attempt to determine the relation of physical processes in nervous tissue to perception.

In 1921 Köhler became head of the psychological institute and professor of philosophy at the University
of Berlin, directing a series of investigations that explored many aspects of Gestalt theory and
publishing Gestalt Psychology (1929). Outspoken in his criticism of Adolf Hitler’s government, Köhler
went to the United States in 1935 and was professor of psychology at Swarthmore College in
Pennsylvania until 1955.
Mind
Mind, in the Western tradition, is the complex of faculties involved in perceiving, remembering,
considering, evaluating, and deciding. Mind is in some sense reflected in such occurrences as sensations,
perceptions, emotions, memory, desires, various types of reasoning, motives, choices, traits
of personality, and the unconscious.


To the extent that mind is manifested in observable phenomena, it has frequently been regarded as a
peculiarly human possession. Some theories, however, posit the existence of mind in other animals
besides human beings. One theory regards mind as a universal property of matter. According to another
view, there may be superhuman minds or intelligences, or a single absolute mind,
a transcendent intelligence.

Common assumptions among theories of mind

Several assumptions are indispensable to any discussion of the concept of mind. First is the assumption
of thought or thinking. If there were no evidence of thought in the world, mind would have little or no
meaning. The recognition of this fact throughout history accounts for the development of diverse theories
of mind. It may be supposed that such words as “thought” or “thinking” cannot, because of their
own ambiguity, help to define the sphere of mind. But whatever the relation of thinking to sensing,
thinking seems to involve more—for almost all observers—than a mere reception of impressions from
without. This seems to be the opinion of those who make thinking a consequence of sensing, as well as of
those who regard thought as independent of sense. For both, thinking goes beyond sensing, either as an
elaboration of the materials of sense or as an apprehension of objects that are totally beyond the reach of
the senses.

The second assumption that seems to be a root common to all conceptions of mind is that of knowledge or
knowing. This may be questioned on the ground that, if there were sensation without any form of thought,
judgment, or reasoning, there would be at least a rudimentary form of knowledge—some degree
of consciousness or awareness by one thing of another. If one grants the point of this objection, it
nevertheless seems true that the distinction between truth and falsity and the difference between
knowledge, error, and ignorance or between knowledge, belief, and opinion do not apply to sensations in
the total absence of thought. Any understanding of knowledge that involves these distinctions seems to
imply mind for the same reason that it implies thought. There is a further implication of mind in the fact
of self-knowledge. Sensing may be awareness of an object, and to this extent it may be a kind of knowing,
but it has never been observed that the senses can sense or be aware of themselves.

Thought seems to be not only reflective but reflexive, that is, able to consider itself, to define the nature of
thinking, and to develop theories of mind. This fact about thought—its reflexivity—also seems to be a
common element in all the meanings of “mind.” It is sometimes referred to as “the reflexivity of the
intellect,” as “the reflexive power of the understanding,” as “the ability of the understanding to reflect
upon its own acts,” or as “self-consciousness.” Whatever the phrasing, a world without self-
consciousness or self-knowledge would be a world in which the traditional conception of mind would
probably not have arisen.
