
BEHAVIORISM:

IVAN PAVLOV, E.L. THORNDIKE, JOHN B. WATSON, B.F. SKINNER
The theory of behaviorism focuses on the study of observable and measurable behavior. It emphasizes that behavior is mostly learned through conditioning and reinforcement (rewards and punishment). It does not give much attention to the mind or to the possibility of thought processes occurring in the mind. Contributions to the development of behaviorist theory came largely from Pavlov, Watson, Thorndike, and Skinner.
CLASSICAL CONDITIONING

Ivan Pavlov, a Russian physiologist, is well known for his work in classical conditioning or stimulus substitution. Pavlov's most renowned experiment involved meat, a dog, and a bell.

Initially, Pavlov was measuring the dog's salivation in order to study digestion. This is when he stumbled upon classical conditioning.
Pavlov's Experiment. Before conditioning, ringing the bell (neutral stimulus) caused no response from the dog. Placing food (unconditioned stimulus) in front of the dog initiated salivation (unconditioned response). During conditioning, the bell was rung a few seconds before the dog was presented with food. After conditioning, the ringing of the bell (conditioned stimulus) alone produced salivation (conditioned response). This is classical conditioning.
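To make the before/during/after structure concrete, here is a minimal Python sketch of stimulus substitution. It is not from the original text: the Dog class, the stimulus names, and the ten-pairing threshold are all illustrative assumptions.

```python
# Toy model of classical conditioning / stimulus substitution.
# Everything here (class, names, threshold) is illustrative, not empirical.

class Dog:
    PAIRINGS_NEEDED = 10  # assumed number of bell+food pairings to condition

    def __init__(self):
        self.pairings = {}        # neutral stimulus -> times paired with food
        self.conditioned = set()  # stimuli that now trigger salivation alone

    def pair_with_food(self, stimulus):
        """During conditioning: present `stimulus` a few seconds before food."""
        self.pairings[stimulus] = self.pairings.get(stimulus, 0) + 1
        if self.pairings[stimulus] >= self.PAIRINGS_NEEDED:
            self.conditioned.add(stimulus)  # neutral -> conditioned stimulus

    def salivates(self, stimulus):
        """Food (unconditioned stimulus) always works; conditioned stimuli now do too."""
        return stimulus == "food" or stimulus in self.conditioned

dog = Dog()
print(dog.salivates("bell"))  # False: before conditioning, the bell is neutral
print(dog.salivates("food"))  # True: unconditioned stimulus -> unconditioned response
for _ in range(10):
    dog.pair_with_food("bell")
print(dog.salivates("bell"))  # True: conditioned stimulus -> conditioned response
```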
CONNECTIONISM THEORY

Edward Thorndike's Connectionism theory gave us the original S-R framework of behavioral psychology. More than a hundred years ago he wrote a textbook entitled Educational Psychology, and he was the first to use that term. He explained that learning is the result of associations forming between stimuli (S) and responses (R).

Such associations or "habits" become strengthened or weakened by the nature and frequency of the S-R pairings. The model for S-R theory was trial-and-error learning, in which certain responses came to be repeated more than others because of rewards.

Thorndike's theory of connectionism states that learning has taken place when a strong connection or bond between stimulus and response is formed. He came up with three primary laws:

• Law of Effect. The law of effect states that a connection between a stimulus and response is strengthened when the consequence is positive (reward) and weakened when the consequence is negative. Thorndike later revised this "law" when he found that negative rewards (punishment) do not necessarily weaken bonds, and that some seemingly pleasurable consequences do not necessarily motivate performance.
• Law of Exercise. This tells us that the more an S-R (stimulus-response) bond is practiced, the stronger it will become. "Practice makes perfect" seems to be associated with this law. However, like the law of effect, the law of exercise also had to be revised when Thorndike found that practice without feedback does not necessarily enhance performance.
• Law of Readiness. This states that the more readiness the learner has to respond to the stimulus, the stronger the bond between them will be. When a person is ready to respond to a stimulus and is not made to respond, it becomes annoying to the person. For example, if the teacher says, "Okay, we will now watch the movie (stimulus) you've been waiting for," and suddenly the power goes off, the students will feel frustrated because they were ready to respond to the stimulus but were prevented from doing so. Likewise, if a person is not at all ready to respond to a stimulus and is asked to respond, that also becomes annoying. For instance, the teacher calls on a student to stand up and recite, then asks the question and expects the student to respond right away when he is still not ready. This will be annoying to the student. That is why teachers should remember to say the question first, and wait a few seconds before calling on anyone to answer.
Principles Derived from Thorndike's Connectionism:
1. Learning requires both practice and rewards (laws of effect/exercise).
2. A series of S-R connections can be chained together if they belong to the same action sequence (law of readiness).
3. Transfer of learning occurs because of previously encountered situations.
4. Intelligence is a function of the number of connections learned.
John B. Watson (1878-1958)
Watson was the first American psychologist to work with Pavlov's ideas. He too was initially involved in animal studies, and later became involved in human behavior research. He considered that humans are born with a few reflexes and the emotional reactions of love and rage. All other behavior is learned through stimulus-response associations formed through conditioning. He believed in the power of conditioning so much that he claimed that if he were given a dozen healthy infants, he could make them into anything you want them to be, basically through making stimulus-response connections through conditioning.
Experiment on Albert. Watson applied classical conditioning in his experiment involving Albert, a young child, and a white rat. In the beginning, Albert was not afraid of the rat, but Watson made a sudden loud noise each time Albert touched the rat. Because Albert was frightened by the loud noise, he soon became conditioned to fear and avoid the rat. Later, the child's response was generalized to other small animals; now he was also afraid of small animals. Watson then "extinguished" the fear, or made the child "unlearn" it, by showing the rat without the loud noise. Surely, Watson's research methods would be questioned today; nevertheless, his work did clearly show the role of conditioning in the development of emotional responses to certain stimuli. This may help us understand the fears, phobias, and prejudices that people develop.
OPERANT CONDITIONING

Burrhus Frederic Skinner. Like Pavlov, Watson, and Thorndike, Skinner believed in the stimulus-response pattern of conditioned behavior. His theory zeroed in only on changes in observable behavior, excluding the possibility of any processes taking place in the mind. Skinner's 1948 book, Walden Two, is about a utopian society based on operant conditioning. He also wrote Science and Human Behavior (1953), in which he pointed out how the principles of operant conditioning function in social institutions such as government, law, religion, economics, and education.
Skinner's work differs from that of the three behaviorists before him in that he studied operant behavior (voluntary behaviors used in operating on the environment). Thus, his theory came to be known as Operant Conditioning.
Operant Conditioning is based upon the notion that learning is a result of change in overt behavior. Changes in behavior are the result of an individual's response to events (stimuli) that occur in the environment. A response produces a consequence, such as defining a word, hitting a ball, or solving a math problem. When a particular Stimulus-Response (S-R) pattern is reinforced (rewarded), the individual is conditioned to respond.
Skinner also looked into extinction or non-reinforcement: responses that are not reinforced are not likely to be repeated. For example, ignoring a student's misbehavior may extinguish that behavior.

Shaping of Behavior. An animal in a cage may take a very long time to figure out that pressing a lever will produce food. To accomplish such behavior, successive approximations of the behavior are rewarded until the animal learns the association between the lever and the food reward. To begin shaping, the animal may be rewarded for simply turning in the direction of the lever, then for moving toward the lever, then for brushing against the lever, and finally for pressing the lever.
Behavioral chaining comes about when a series of steps needs to be learned. The animal masters each step in sequence until the entire sequence is learned. This can be applied to a child being taught to tie a shoelace: the child is given reinforcement (rewards) for each step until the entire process of tying the shoelace is learned.
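As a rough illustration, shaping and chaining can both be pictured as reinforcing an ordered list of steps. The sketch below is an assumption for demonstration only; the stage names are invented, and a print statement stands in for delivering the actual reinforcer.

```python
# Shaping: reward successive approximations until the target behavior appears.
LEVER_PRESS_APPROXIMATIONS = [
    "turns toward the lever",
    "moves toward the lever",
    "brushes against the lever",
    "presses the lever",
]

# Chaining: reinforce each step of a sequence until the whole chain is mastered.
SHOELACE_CHAIN = [
    "crosses the laces",
    "pulls one lace through",
    "makes the loops",
    "ties the knot",
]

def reinforce_in_order(steps):
    """Wait for each step (or approximation) in turn, then reinforce it."""
    for step in steps:
        # In a real setting we would wait until the learner happens to emit
        # this behavior; here the loop body simply stands in for that moment.
        print(f"reinforcer delivered for: {step}")

reinforce_in_order(LEVER_PRESS_APPROXIMATIONS)  # shaping the animal's lever press
reinforce_in_order(SHOELACE_CHAIN)              # chaining the child's shoelace steps
```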
Reinforcement Schedules. Once the desired behavioral
response is accomplished, reinforcement does not have to be
100%; in fact, it can be maintained more successfully through
what Skinner referred to as partial reinforcement schedules.
Partial reinforcement schedules include interval schedules and
ratio schedules.
Fixed Interval Schedules. The target response is reinforced after a fixed amount of time has passed since the last reinforcement. For example, the bird in a cage is given food (reinforcer) every ten minutes, regardless of how many times it presses the bar.
Variable Interval Schedules. This is similar to fixed interval schedules, but the amount of time that must pass between reinforcements varies. For example, the bird may receive food (reinforcer) at different intervals, not every ten minutes.
Fixed Ratio Schedules. A fixed number of correct responses must occur before reinforcement may recur. For example, the bird will be given food (reinforcer) every time it presses the bar five times.
Variable Ratio Schedules. The number of correct responses required for reinforcement varies. For example, the bird is given food (reinforcer) after it presses the bar 3 times, then after 10 times, then after 4 times, so the bird cannot predict how many times it needs to press the bar before it gets food again.
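The four schedules differ only in whether reinforcement is triggered by elapsed time (interval) or by response counts (ratio), and in whether the trigger is constant (fixed) or unpredictable (variable). Here is a hedged Python sketch of the two ratio schedules, using numbers from the examples above; interval schedules would be the same idea with a clock in place of a response counter. The class names and parameters are assumptions for illustration.

```python
import random

class FixedRatio:
    """Deliver food every `ratio`-th bar press (e.g. every 5th press)."""
    def __init__(self, ratio=5):
        self.ratio = ratio
        self.presses = 0

    def press_bar(self):
        self.presses += 1
        if self.presses >= self.ratio:
            self.presses = 0
            return True   # reinforcer delivered
        return False

class VariableRatio:
    """Deliver food after a varying number of presses (3, then 10, then 4, ...),
    so the bird cannot predict when the next reinforcer will come."""
    def __init__(self, low=3, high=10):
        self.low, self.high = low, high
        self.presses = 0
        self.required = random.randint(low, high)

    def press_bar(self):
        self.presses += 1
        if self.presses >= self.required:
            self.presses = 0
            self.required = random.randint(self.low, self.high)  # new unpredictable target
            return True
        return False

# 30 bar presses under each schedule:
for name, schedule in [("fixed ratio", FixedRatio()), ("variable ratio", VariableRatio())]:
    reinforcers = sum(schedule.press_bar() for _ in range(30))
    print(f"{name}: {reinforcers} reinforcers in 30 presses")
```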
Variable interval and, especially, variable ratio schedules produce steadier and more persistent rates of response because the learners cannot predict when the reinforcement will come, although they know that they will eventually succeed. An example of this is why people continue to buy lotto tickets even when an almost negligible percentage of people actually win. Big winners are very rare, but once in a while somebody hits the jackpot (reinforcement). People cannot predict how many tickets it will take to win (variable ratio), so they continue to buy tickets (repetition of response).
Implications of Operant Conditioning
1. Practice should take the form of question (stimulus) and answer (response) frames which expose the student to the subject in gradual steps.
2. Require that the learner make a response for every frame and receive immediate feedback.
3. Try to arrange the difficulty of the questions so the response is always correct and hence a positive reinforcement.
4. Ensure that good performance in the lesson is paired with secondary reinforcers such as verbal praise, prizes, and good grades.
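Taken together, these four implications describe a Skinner-style programmed-instruction loop. The following is a minimal sketch of such a lesson; the frames, answers, and feedback messages are invented for illustration.

```python
# Small question (stimulus) / answer (response) frames in gradual steps,
# with a required response and immediate feedback for every frame.
FRAMES = [
    ("2 + 2 = ?", "4"),
    ("2 + 2 + 2 = ?", "6"),
    ("3 x 2 = ?", "6"),
]

def run_lesson(frames):
    for question, correct in frames:
        response = input(question + " ")   # learner must respond to every frame
        if response.strip() == correct:
            print("Correct, well done!")   # immediate secondary reinforcer (praise)
        else:
            print(f"Not quite; the answer is {correct}.")  # immediate corrective feedback

run_lesson(FRAMES)
```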
Principles Derived from Skinner's Operant Conditioning:
1. Behavior that is positively reinforced will reoccur; intermittent reinforcement is particularly effective.
2. Information should be presented in small amounts so that responses can be reinforced ("shaping").
3. Reinforcements will generalize across similar stimuli ("stimulus generalization"), producing secondary conditioning.
