PSY01: Introduction to Psychology USI Vincentian Learning Module

MODULE 10: Learning


Learning is defined as a relatively permanent change in knowledge or behavior that results from
experience. In other words, learning involves acquiring knowledge and skills through experience.
Learning has traditionally been studied in terms of its simplest components — the
associations our minds automatically make between events. Our minds have a natural tendency
to connect events that occur closely together or in sequence. Associative learning occurs when
an organism makes connections between stimuli or events that occur together in the
environment. You will see that associative learning is central to all three basic learning processes
discussed in this module: classical conditioning tends to involve unconscious processes, operant
conditioning tends to involve conscious processes, and observational learning adds social and
cognitive layers to all the basic associative processes, both conscious and unconscious. These
three basic forms of learning will be discussed in detail in this module, but it is helpful to have a
brief overview of each as you begin to explore how learning is understood from a psychological
perspective.
In classical conditioning, also known as Pavlovian conditioning, organisms learn to
associate events —or stimuli— that repeatedly happen together. We experience this process
throughout our daily lives. For example, you might see a flash of lightning in the sky during a
storm and then hear a loud boom of thunder. The sound of the thunder naturally makes you jump.
Because lightning reliably predicts the impending boom of thunder, you may associate the two
and jump when you see lightning. In operant conditioning, organisms learn, again, to associate
events — a behavior and its consequence (reinforcement or punishment). Consider, for example,
training a dog. When you tell the dog to sit and give him a treat when he does, after repeated
experiences the dog associates the act of sitting with receiving a treat. He learns that the
consequence of sitting is that he gets a doggie biscuit. Conversely, if the dog is punished when
peeing inside your house, the dog becomes conditioned not to pee in the house. A pleasant
consequence encourages more of that behavior in the future, whereas a punishment deters the
behavior. Observational learning extends the effective range of both classical and operant
conditioning. In contrast to classical and operant conditioning, in which learning occurs only
through direct experience, observational learning is the process of watching others and then
imitating what they do. For instance, you may have learned to always clean up after eating by
watching others in your household do the same.

Classical Conditioning
In the early part of the 20th century, Russian physiologist Ivan Pavlov (1849–
1936) was studying the digestive system of dogs when he noticed an
interesting behavioral phenomenon: The dogs began to salivate as soon as
the lab technicians who normally fed them entered the room. Pavlov realized
that the dogs were salivating because they knew that they were about to be
fed; the dogs had begun to associate the arrival of the technicians with the
food that soon followed their appearance in the room.

Figure 10.1: Ivan Pavlov


With his team of researchers, Pavlov began studying this process in more detail. He conducted a
series of experiments in which, over a number of trials, dogs were exposed to a sound immediately
before receiving food. He systematically controlled the onset of the sound and the timing of the
delivery of the food, and recorded the amount of the dogs’ salivation. Initially the dogs salivated
only when they saw or smelled the food, but after several pairings of the sound and the food, the
dogs began to salivate as soon as they heard the sound. The animals had learned to associate the
sound with the food that followed.
Pavlov had identified a fundamental associative learning process called classical conditioning.
Classical conditioning refers to learning that occurs when a neutral stimulus becomes
associated with a stimulus that naturally produces a behavior. After the association is learned,
the previously neutral stimulus is sufficient to produce the behavior.
Psychologists use specific terms to identify the stimuli and the responses in classical conditioning.
• Unconditioned stimulus (UCS) is something that triggers a naturally occurring
response.
• Unconditioned response (UCR) is the naturally occurring response that follows the
unconditioned stimulus. Some examples of UCS-UCR pairs include:
o Sneezing (UCR) to pepper (UCS)
o Shivering (UCR) to cold (UCS)
o Blinking (UCR) to a bright light (UCS)
*Notice how all of these responses are reflexive and unlearned, which is why we refer to them as being unconditioned.

• Neutral stimulus (NS) is something that does not naturally produce a response.
• Conditioned stimulus (CS) is a once-neutral stimulus that, after being repeatedly
presented prior to the unconditioned stimulus, comes to evoke a response similar to that
produced by the unconditioned stimulus.
• Conditioned Response (CR) is the acquired response to the conditioned stimulus, which
was the formerly neutral stimulus.
In Pavlov's experiment, the sound of the tone served as the initial neutral stimulus. It became a
conditioned stimulus after learning because it produced a conditioned response. Note that the
unconditioned response and the conditioned response are the same behavior. In Pavlov's experiment,
it was salivation. The unconditioned and conditioned responses are given different names because
they are produced by different stimuli. The unconditioned stimulus produces the unconditioned
response and the conditioned stimulus produces the conditioned response (see Figure 10.2).

Figure 10.2: Four-Panel Illustration of Classical Conditioning


The Persistence and Extinction of Conditioning


After he had demonstrated that learning could occur through association, Pavlov moved on to
study the variables that influenced the strength and the persistence of conditioning. In some
studies, after the conditioning had taken place, Pavlov presented the sound repeatedly but
without presenting the food afterward. Figure 10.3 shows what happened. As you can see, after
the initial acquisition or learning phase in which the conditioning occurred, when the CS was then
presented alone, the behavior rapidly decreased. The dogs salivated less and less to the sound,
and eventually the sound did not elicit salivation at all. Extinction refers to the reduction in
responding that occurs when the conditioned stimulus is presented repeatedly without the
unconditioned stimulus.
Figure 10.3: Acquisition, Extinction, and Spontaneous Recovery

Acquisition: The CS and the US are repeatedly paired together and behavior increases.
Extinction: The CS is repeatedly presented alone, and the behavior slowly decreases.
Spontaneous Recovery: After a pause, when the CS is again presented alone, the behavior may
again occur and then again show extinction.
Although at the end of the first extinction period the CS was no longer producing salivation, the
effects of conditioning had not entirely disappeared. Pavlov found that, after a pause, sounding
the tone again elicited salivation, although to a lesser extent than before extinction took place.
The increase in responding to the CS following a pause after extinction is known as spontaneous
recovery. When Pavlov again presented the CS alone, the behavior again showed extinction. Even
when the behavior seems to have disappeared, however, the learning is never completely erased:
if conditioning is again attempted, the animal relearns the association much faster than it did
the first time.
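The rise and fall of responding shown in Figure 10.3 can be sketched with a very simple toy model. The Python snippet below is an illustrative assumption rather than anything from the module or Pavlov's data: it supposes that associative strength moves a fixed fraction of the way toward an asymptote on every trial, which is enough to reproduce the general shape of acquisition followed by extinction.

# Illustrative toy model (an assumption, not from the module): associative
# strength moves part-way toward an asymptote on every trial.

def run_trials(strength, n_trials, ucs_present, rate=0.3):
    """Return the trial-by-trial associative strength.

    ucs_present=True  -> CS paired with food (acquisition)
    ucs_present=False -> CS presented alone (extinction)
    """
    history = []
    for _ in range(n_trials):
        target = 1.0 if ucs_present else 0.0      # asymptote of learning
        strength += rate * (target - strength)    # partial step toward it
        history.append(round(strength, 2))
    return history

acquisition = run_trials(0.0, 10, ucs_present=True)
extinction = run_trials(acquisition[-1], 10, ucs_present=False)
print("Acquisition:", acquisition)   # responding grows toward a maximum
print("Extinction: ", extinction)    # responding fades when the food stops

Spontaneous recovery and the faster relearning described above are not captured by this simple sketch; they reflect the point made in the text that extinction does not completely erase the original learning.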

Stimulus Generalization vs Stimulus Discrimination


Pavlov also experimented with presenting new stimuli that were similar, but not identical to, the
original conditioned stimulus. For instance, if the dog had been conditioned to being scratched
before the food arrived, the stimulus would be changed to being rubbed rather than scratched.
He found that the dogs also salivated upon experiencing the similar stimulus.


This process is known as stimulus generalization, which refers to the tendency to respond to
stimuli that resemble the original conditioned stimulus. The ability to generalize has important
evolutionary significance. If we eat some red berries and they make us sick, it would be a good
idea to think twice before we eat some purple berries. Although the berries are not exactly the
same, they nevertheless are similar and may have the same negative properties.
The flip side of generalization is stimulus discrimination or the tendency to respond differently
to stimuli that are similar but not identical. Pavlov’s dogs quickly learned, for example, to salivate
when they heard the specific tone that had preceded food, but not upon hearing similar tones
that had never been associated with food. Discrimination is also useful; if we do try the purple
berries, and if they do not make us sick, we will be able to make the distinction in the future.
Second-order Conditioning
In some cases, an existing conditioned stimulus can serve as an unconditioned stimulus for a
pairing with a new conditioned stimulus, and this process is known as second-order
conditioning. In one of Pavlov’s studies, for instance, he first conditioned the dogs to salivate to
a sound, and then repeatedly paired a new CS, a black square, with the sound. Eventually he found
that the dogs would salivate at the sight of the black square alone, even though it had never been
directly associated with the food.
Secondary conditioners in everyday life include our attractions to or fears of things that stand for
or remind us of something else. For example, we might feel good (CR) when hearing a particular
song (CS) that is associated with a romantic moment (US). If we associate that song with a
particular artist, then we may have those same good feelings whenever we hear another song by
that same artist. We now have a favorite performing artist, thanks to second-order conditioning,
and according to the early behaviorists, we acquired this preference without consciously making
the decision.

Application
Classical conditioning has also been used to help explain the experience of posttraumatic stress
disorder (PTSD). PTSD is a severe anxiety disorder that can develop after exposure to a fearful
event, such as the threat of death (American Psychiatric Association, 2013). PTSD occurs when the
individual develops a strong association between the traumatic event and the situational factors
that surrounded it. In war, for example, military uniforms or particular sounds and smells (neutral
stimuli) become associated with the fearful trauma of combat (unconditioned stimulus). As a result of the
conditioning, being exposed to similar stimuli, or even thinking about the situation in which the
trauma occurred (conditioned stimulus) becomes sufficient to produce the conditioned response
of severe anxiety (Keane, Zimering, & Caddell, 1985).

PTSD develops because the emotions experienced during the event have produced neural activity
in the amygdala and created strong conditioned learning. In addition to the strong conditioning
that people with PTSD experience, they also show slower extinction of conditioned responses
(Milad et al., 2009). In short, people with PTSD have developed very strong associations with the
events surrounding the trauma and are also slow to show extinction to the conditioned stimulus.

Figure 10.4: The Role of Classical Conditioning and PTSD

SAA#1
1. If posttraumatic stress disorder (PTSD) is a type of classical conditioning,
how might psychologists use the principles of classical conditioning to
treat the disorder?
2. Think of and write down an example from your own life of how classical conditioning
has produced a positive emotional response, such as happiness or excitement. How
about a negative emotional response, such as fear, anxiety, or anger?

Operant Conditioning
In classical conditioning the organism learns to associate new stimuli with natural, biological
responses such as salivation or fear. The organism does not learn something new, but rather
begins to perform an existing behavior in the presence of a new signal. Operant conditioning,
on the other hand, is learning that occurs based on the consequences of behavior and can involve
the learning of new actions. Operant conditioning occurs when a dog rolls over on command
because it has been praised for doing so in the past, when a bully threatens his classmates because
doing so allows him to get his way, and when a child gets good grades because her parents
threaten to punish her if she does not. In operant conditioning, the organism learns from the
consequences of its own actions.
Thorndike and Skinner
Psychologist Edward L. Thorndike (1874–1949) was the first scientist to systematically study
operant conditioning. In his research, Thorndike (1898) observed cats that had been placed in a
puzzle box from which they tried to escape. Observing the changes in the cats' behavior led
Thorndike to develop his law of effect, the principle that responses that create a typically
pleasant outcome in a particular situation are more likely to occur again in a similar situation,
whereas responses that produce a typically unpleasant outcome are less likely to occur again
in the same situation (Thorndike, 1911). The essence of the law of effect is that successful
responses, because they are pleasurable, are strengthened by experience and thus
occur more frequently. Unsuccessful responses, which produce unpleasant experiences, are
weakened and subsequently occur less frequently. The influential behavioral psychologist B. F.
Skinner (1904–1990) expanded on Thorndike’s ideas to develop a more complete set of principles
to explain operant conditioning.
Reinforcement and Punishment
Skinner studied in detail how animals changed their behavior through reinforcement, which
increases the likelihood of a behavior reoccurring, and punishment, which decreases the
likelihood of a behavior reoccurring. Skinner used the term reinforcer to refer to any event that
strengthens or increases the likelihood of a behavior and the term punisher to refer to any event
that weakens or decreases the likelihood of a behavior. He used the terms positive and negative
to refer to whether a reinforcement was presented or removed, respectively.
Reinforcement: There are two ways of reinforcing a behavior: Positive reinforcement
strengthens a response by presenting something pleasant after the response and negative
reinforcement strengthens a response by reducing or removing something unpleasant. For
example, giving a child praise for completing his homework is positive reinforcement. Taking
aspirin to reduce the pain of a headache is negative reinforcement. In both cases, the
reinforcement makes it more likely that behavior will occur again in the future.
Unfortunately, getting a child to do homework is not as simple as giving praise. The rats in
Skinner's box were always hungry, so food was always reinforcing. A child is not always in need of
praise, especially if some alternate activity (e.g. TV) is providing superior reinforcement.
Reinforcement is not a specific item or event. Reinforcement is what increases behavior. People
differ in what makes them feel good, and what makes a hungry person feel good is not the same
as what will reinforce a full person. Something does not count as reinforcement unless it increases
the targeted behavior.
Because people differ in what pleases them, using reinforcement to control behavior in a group
setting is not easy. Sometimes tokens, such as coins or points, are used as reinforcers in settings
such as schools, homes, or prisons, and this is called a token economy. These tokens can be
exchanged for whatever the individual finds reinforcing at that time. A child, for example, might be
able to use his points for a desired snack or time on the computer. Loyalty cards like the SM
Advantage Card work on the same principle: you earn points when you make purchases at SM stores,
and you can then use your accumulated points as payment for a later purchase. The value of the
reinforcement must still, however, be greater than the reinforcement value of any alternative
behavior. A tired teenager, for example, might obtain more reinforcement from sleeping, even if
offered a lot of money to take a job that starts at 5 a.m.
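To make the token-economy idea concrete, here is a small, purely hypothetical Python sketch (the class name, behaviors, and point values are illustrative assumptions, not part of the module): tokens are earned for target behaviors and later exchanged for back-up rewards the individual actually finds reinforcing.

# Hypothetical token-economy ledger; names and values are assumptions.

class TokenEconomy:
    def __init__(self, behaviors, rewards):
        self.behaviors = behaviors   # target behavior -> tokens earned
        self.rewards = rewards       # back-up reward -> tokens required
        self.balance = 0

    def record(self, behavior):
        """Award tokens when a target behavior is observed."""
        self.balance += self.behaviors[behavior]

    def exchange(self, reward):
        """Spend tokens on a back-up reinforcer, if affordable."""
        cost = self.rewards[reward]
        if self.balance < cost:
            return False
        self.balance -= cost
        return True

economy = TokenEconomy(
    behaviors={"homework done": 5, "room cleaned": 3},
    rewards={"30 min computer time": 10, "snack": 4},
)
economy.record("homework done")
economy.record("room cleaned")
print(economy.exchange("snack"), economy.balance)   # True 4

The exchange step matters: a token only works as a reinforcer because it can be traded for something the person currently values, which is why the same system can work for individuals with very different preferences.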
Punishment: There are two ways to punish a behavior: Positive punishment weakens a response
by presenting something unpleasant after the response, whereas negative punishment weakens
a response by reducing or removing something pleasant. A child who is given chores after fighting
with a sibling, a type of positive punishment, or who loses out on the opportunity to go to recess
after getting a poor grade, a type of negative punishment, is less likely to repeat these behaviors.
Consistent use of punishment for a behavior is more effective than occasional punishment. A child
who is only occasionally reprimanded for sneaking candy into his room will be more likely to
continue. Also, if the punishment is strong, it will be more effective. For example, a Php10 000 fine
for a first-time jaywalking offense will be more likely to deter the behavior in the future than a
Php1 000 fine. These terms are summarized in Table 10.1.

Table 10.1: How Positive and Negative Reinforcement and Punishment Influence Behavior

Operant conditioning term | Description | Outcome | Example
Positive reinforcement | Add or increase a pleasant stimulus | Behavior is strengthened | Giving a student a prize after he gets an A on a test
Negative reinforcement | Reduce or remove an unpleasant stimulus | Behavior is strengthened | Taking painkillers that eliminate pain increases the likelihood that you will take painkillers again
Positive punishment | Present or add an unpleasant stimulus | Behavior is weakened | Giving a student extra homework after she misbehaves in class
Negative punishment | Reduce or remove a pleasant stimulus | Behavior is weakened | Taking away a teen's computer after he misses curfew
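The 2x2 scheme in Table 10.1 can also be read as a simple decision rule: whether a consequence strengthens or weakens behavior depends on whether a stimulus is added or removed and whether that stimulus is pleasant or unpleasant. The short Python sketch below is only an illustration of that logic (the function name is an assumption, not a standard term).

# Illustrative only: classify a consequence using the 2x2 scheme in Table 10.1.

def classify_consequence(stimulus_added, pleasant):
    """Name the operant term for a consequence, following Table 10.1."""
    if stimulus_added and pleasant:
        return "positive reinforcement (behavior strengthened)"
    if not stimulus_added and not pleasant:
        return "negative reinforcement (behavior strengthened)"
    if stimulus_added and not pleasant:
        return "positive punishment (behavior weakened)"
    return "negative punishment (behavior weakened)"

print(classify_consequence(True, True))     # prize after an A on a test
print(classify_consequence(False, False))   # painkiller removes the pain
print(classify_consequence(True, False))    # extra homework for misbehaving
print(classify_consequence(False, True))    # computer taken away after curfew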

Reinforcement Schedules
One way to expand the use of operant learning is to modify the schedule on which the
reinforcement is applied. To this point we have only discussed a continuous reinforcement
schedule, in which the desired response is reinforced every time it occurs; whenever the dog rolls
over, for instance, it gets a biscuit. Continuous reinforcement results in relatively fast learning, but
also rapid extinction of the desired behavior once the reinforcer disappears. The problem is that
because the organism is used to receiving the reinforcement after every behavior, the responder
may give up quickly when it does not appear.
Most real-world reinforcers are not continuous; they occur on a partial (or intermittent)
reinforcement schedule, which is a schedule in which the responses are sometimes reinforced,
and sometimes not. In comparison to continuous reinforcement, partial reinforcement schedules
lead to slower initial learning, but they also lead to greater resistance to extinction. Because the
reinforcement does not appear after every behavior, it takes longer for the learner to determine
that the reward is no longer coming, and thus extinction is slower.
Partial reinforcement schedules are determined by whether the reinforcement is:
• Ratio: Based on the number of responses that the organism engages in
• Interval: Based on the time that elapses between reinforcement
• Fixed: Based on a regular schedule
• Variable: Based on an unpredictable schedule
In a fixed-ratio schedule, a behavior is reinforced after a specific number of responses . For
instance, a rat’s behavior may be reinforced after it has pressed a key 20 times, or a salesperson
may receive a bonus after she has sold 10 products. As you can see in Figure 10.5, once the
organism has learned to act in accordance with the fixed-reinforcement schedule, it will pause
only briefly when reinforcement occurs before returning to a high level of responsiveness.
A variable-ratio schedule provides reinforcers after an unpredictable number of responses that
varies around an average. Winning money from slot machines or on a lottery ticket are examples of
reinforcement that occurs on a variable-ratio schedule. For instance, a slot machine may be
programmed to provide a win every 20 times the user pulls the handle, on average. A variable-ratio
schedule tends to produce high rates of responding because the likelihood of reinforcement
increases as the number of responses increases.
In a fixed-interval schedule, reinforcement occurs for the first response made after a specific
amount of time has passed. For instance, on a one-minute fixed-interval schedule the animal
receives a reinforcer every minute, assuming it engages in the behavior at least once during the
minute. As you can see in Figure 10.5, animals under fixed-interval schedules tend to slow down
their responding immediately after the reinforcement, but then increase the behavior again as the
time of the next reinforcement gets closer. Most students study for exams the same way.
In a variable-interval schedule, the reinforcers appear on an interval schedule, but the timing
is varied around the average interval, making the appearance of the reinforcer unpredictable.
An example might be checking your e-mail: You are reinforced by receiving messages that come,
on average, say every 30 minutes, but the reinforcement occurs only at random times. Interval
reinforcement schedules tend to produce slow and steady rates of responding. The four types of
partial reinforcement schedules are summarized in Table 10.2.

Figure 10.5: Examples of Response Patterns by Animals Trained Under Different Partial
Reinforcement Schedules


Table 10.2: Reinforcement Schedules

Reinforcement schedule | Explanation | Real-world example
Fixed-ratio | Behavior is reinforced after a specific number of responses | Factory workers who are paid according to the number of products they produce
Variable-ratio | Behavior is reinforced after an average, but unpredictable, number of responses | Payoffs from slot machines and other games of chance
Fixed-interval | Behavior is reinforced for the first response after a specific amount of time has passed | People who earn a monthly salary
Variable-interval | Behavior is reinforced for the first response after an average, but unpredictable, amount of time has passed | A person who checks voice mail for messages

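The four partial schedules in Table 10.2 amount to four different rules for deciding when a response earns a reinforcer. The Python sketch below is a simplified illustration (the function names and default values are assumptions): the ratio rules count responses, while the interval rules watch the clock and reinforce the first response made after the waiting time has elapsed.

import random

# Illustrative schedule rules; names and parameter values are assumptions.

def fixed_ratio(response_count, n=20):
    """Reinforce every n-th response (e.g., a bonus after 10 sales)."""
    return response_count % n == 0

def variable_ratio(p=1 / 20):
    """Reinforce each response with probability p, so the payoff arrives
    after an unpredictable number of responses averaging 1/p (the logic
    behind a slot machine)."""
    return random.random() < p

def fixed_interval(seconds_since_last_reward, interval=60.0):
    """Reinforce the first response made once `interval` seconds have passed."""
    return seconds_since_last_reward >= interval

def variable_interval(seconds_since_last_reward, current_wait):
    """Reinforce the first response made once the current, randomly drawn
    waiting time has passed; `current_wait` is redrawn after each reinforcer,
    e.g. random.uniform(15, 45) minutes for a 30-minute average."""
    return seconds_since_last_reward >= current_wait

In a fuller simulation, the ratio rules would be checked once per response, while the interval rules would be checked whenever a response occurs, resetting the clock (and redrawing the wait for the variable schedule) after each reinforcer is delivered.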
Changing Behavior with Reinforcement and Punishment
It is also important to note that reinforcement and punishment are not simply opposites. The use
of positive reinforcement in changing behavior is almost always more effective than using
punishment. This is because positive reinforcement makes the person or animal feel better,
helping create a positive relationship with the person providing the reinforcement.
Types of positive reinforcement that are effective in everyday life include verbal praise or approval,
the awarding of status or prestige, and direct financial payment. Punishment combined with
reinforcement for an alternative behavior is more effective than punishment alone.
Punishment is more likely to create only temporary changes in behavior because it is based on
coercion and typically creates a negative and adversarial relationship with the person providing
the punishment. When the person who provides the punishment leaves the situation, the
unwanted behavior is likely to return. Additionally, those punished for bad behavior typically
change their behavior only to avoid the punishment rather than internalizing the norms of being
good for its own sake.
Punishment may also have unintended consequences. Punishment models aggression as a
method to control other people (Kohn, 1993). Punishment can cause anxiety, which interferes with
learning. Emotional punishment, such as criticism or withdrawal of affection, can lead to
depression. If punishment is used, it is important to combine the punishment with reinforcement
for an alternative behavior. Then the person can leave the situation having learned how to earn
positive consequences.
There are alternatives to using punishment in operant conditioning. Extinction, for example, will
occur if you remove reinforcement from a previously occurring behavior. For example, if a child
is getting attention for throwing a tantrum, some psychologists recommend that you ignore him.
If attention has been the reward for this behavior in the past, removing the attention will cause
extinction of the tantrum. In some cases, "time-outs" are a form of extinction for a misbehaving
child. The parent takes away the reinforcing situation to eliminate a behavior.


Behavior modification refers to the deliberate and systematic use of conditioning to modify
behavior. Parents, for example, might want to use behavior modification with their children. A
system of rewards and punishments could be used to train children to do chores. Psychologists
might design programs using behavior modification for a variety of group settings. Prisons,
schools, and mental institutions are examples of places where administrators need to control
behavior. Token economies might be established so that individuals receive tokens for good
behaviors. Tokens could then be exchanged for a variety of rewards. Penalties would be used for
less desirable acts, and these penalties could include fines that are paid with tokens.
Although the distinction between reinforcement and punishment is usually clear, in some cases it
is difficult to determine whether a reinforcer is positive or negative. On a hot day, a cool breeze
could be seen as a positive reinforcer (because it brings in cool air) or a negative reinforcer
(because it removes hot air). In other cases, reinforcement can be both positive and negative. One
may smoke a cigarette both because it brings pleasure, positive reinforcement, and because it
eliminates the craving for nicotine, negative reinforcement. Remember that reinforcement always
increases behavior, regardless of whether it is negative or positive. Punishment is the correct term
used for a consequence that suppresses behavior.

SAA#2
3. Compare and contrast classical and operant conditioning. How are
they alike? How do they differ?
4. What is the difference between negative reinforcement and
punishment?

Observational Learning (Modeling)


Observational learning, also called modeling, is learning by observing the behavior of others. To
demonstrate the importance of observational learning in children, Bandura, Ross, and Ross (1963)
showed children a live model (either a man or a woman) interacting with a Bobo doll, a filmed
version of the same events, or a cartoon version of the events. The Bobo doll is an inflatable doll
with a weight in the bottom that makes it bob back up when you knock it down. In all three
conditions, the model violently punched the clown doll, kicked it, sat on it, and hit it with a
hammer.

Figure 10.6: Bobo doll experiment
The researchers first let the children view one of the three types of modeling, and then let them
play in a room in which there were some really fun toys. To create some frustration in the children,
Bandura let the children play with the fun toys for only a couple of minutes before taking them
away. Then Bandura gave the children a chance to play with the Bobo doll.
If you guessed that most of the children imitated the model, you would be correct. Regardless of
which type of modeling the children had seen, and regardless of the sex of the model or the child,
the children who had seen the model behaved aggressively, just as the model had done. They
also punched, kicked, sat on the doll, and hit it with the toy hammer. Bandura and his colleagues
had demonstrated that these children had learned new behaviors, simply by observing and
imitating others.
Observational learning is useful for animals and for people because it allows us to learn without
having to actually engage in what might be a risky behavior. Monkeys that see other monkeys
respond with fear to the sight of a snake learn to fear the snake themselves, even if they have
been raised in a laboratory and have never actually seen a snake (Cook & Mineka, 1990). As
Bandura put it:
The prospects for [human] survival would be slim indeed if one could learn only by
suffering the consequences of trial and error. For this reason, one does not teach
children to swim, adolescents to drive automobiles, and novice medical students to
perform surgery by having them discover the appropriate behavior through the
consequences of their successes and failures. The more costly and hazardous the
possible mistakes, the heavier is the reliance on observational learning from
competent learners (Bandura, 1977, p. 212).
Although modeling is normally adaptive, it can be problematic for children who grow up in violent
families. These children are not only the victims of aggression, but they also see it happening to
their parents and siblings. Because children learn how to be parents in large part by modeling the
actions of their own parents, it is no surprise that there is a strong correlation between family
violence in childhood and violence as an adult. Children who witness their parents being violent
or who are themselves abused are more likely as adults to inflict abuse on intimate partners or
their children, and to be victims of intimate violence (Heyman & Slep, 2002). In turn, their children
are more likely to interact violently with each other and to aggress against their parents (Patterson,
Dishion, & Bank, 1984).
However, although modeling can increase violence, it can also have positive effects. Research has
found that, just as children learn to be aggressive through observational learning, they can also
learn to be altruistic in the same way (Seymour, Yoshida, & Dolan, 2009). Simple exposure to a
model is not enough to explain observational learning. Many children watch violent media and
do not become violent. Others watch acts of kindness and do not learn to behave in a similar
fashion. A variety of factors have been shown to explain the likelihood that exposure will lead to
learning and modeling. Similarity, proximity, frequency of exposure, reinforcement, and likeability
of the model are all related to learning. In addition, people can choose what to watch and whom
to imitate. Observational learning is dependent on many factors, so people are well advised to
select models carefully, both for themselves and their children.

SAA#3
5. Imagine that you had a 12-year-old nephew who spent many hours
a day playing violent video games. Basing your answer on the
material covered in this module, do you think that his parents
should limit his exposure to the games? Why or why not?
