
Chapter 3: Learning and Theories

Objectives of this Chapter


• On completion of this chapter, the students will be able to:
- reproduce the textual definition of learning
- define learning in their own words
- explain the meanings of the important terms in the textual definition of learning
- identify the prominent theories of learning
- state the basic concepts, principles and laws underlying each theory of learning
- identify and describe the elements in each theory of learning
- express the procedures in each theory of learning
- describe the practical implications of each theory of learning for teaching
- indicate the strengths and weaknesses of each theory of learning
Learning
Definition
Learning is a relatively permanent change in behavior as a result of experience/practice.
Learning Theories
• Learning theories can be classified into the following:
- Behaviorist theory of learning
- Social learning theory
- Cognitive theory of learning
Behaviorist Theory of Learning

• Behaviorism is the approach to psychology emphasizing that human and animal behavior can be:
- observed,
- measured, and
- understood without recourse to explanations involving mental states.
• According to this view, only overt behavior should
be studied without reference to mental processes.
• The fundamental assumption of behaviorists
regarding learning is that learning occurs
through association.
• Behaviorists believe that the learning process takes place in two basic ways, namely:
–Classical conditioning
–Operant (instrumental) conditioning
Classical Conditioning

• Is based on the assumption that learning is the result of the association of events occurring in sequence.
• Examples
– Thinking about rain at the sight of black clouds in the sky and thunder (US = rain, CS = black clouds and thunder, R = thinking),
– expectation of an accident on hearing the siren of an ambulance (US = accident, CS = sound of the siren, R = expectation).
– So in classical conditioning an organism is presented with a neutral stimulus followed by a biologically relevant stimulus in the appropriate temporal order.
– More examples from you---
Pavlov’s Experiment

• Pavlov had been studying salivary secretion in dogs and had already determined that when food was put in a dog's mouth, the animal would invariably salivate.
• Pavlov also noticed that when he worked with
the same dog repeatedly the dog would
salivate in the presence of a number of stimuli
that were associated with food.
• For example,
- the mere sight of the food dish,
- the person who regularly brought the food, or
- even the sound of his footsteps.
• Later, he realized that these associations
represented a simple but important form of
learning.
• From that time onward, Pavlov devoted his effort to the study of learning, which he hoped might enable him to better understand the workings of the brain.
The Elements of Pavlov’s classical conditioning

A. Unconditioned Stimulus (US)
• Any stimulus that is capable of producing a particular reflexive response by itself.
• In Pavlov's experiment the US is food powder.
B. Unconditioned Response (UR)
• Is a reflexive response of the experimental subject to the US.
• In Pavlov's experiment the UR is salivation in response to food in the mouth.
C. The Conditioned Stimulus (CS)

• It is a new, originally neutral stimulus that became associated with the unconditioned stimulus and thereby gained the power to elicit a response from the subject.
• In Pavlov's experiment, the sound of the bell is the CS.
D. The Conditioned Response (CR)

• This is a learned response to the conditioned stimulus.
• In Pavlov's experiment the dog's response to the bell is the conditioned (learned) response.
• Some examples from daily life:
- Lightning–thunder incident (US = thunder, CS = lightning)
- Heartbeat when reading the final exam schedule (US = exam, CS = the sight of the schedule)
- Heartbeat at the sight of a sexy photograph (US = sex, CS = sexy photograph).
Procedures of Classical Conditioning

• This refers to the different time intervals between the presentation of the CS and the US.
A. Simultaneous Conditioning
• The CS is presented a fraction of a second before the US is presented and is left on until the subject gives the response.
B. Delayed Conditioning
• The CS is presented for several seconds or more before the US is presented and is left on until the response occurs.
• The light (CS) is turned on for several seconds before the food (US) is presented and left on until the response (salivation) occurs.
• This is the procedure that produces the strongest bonding between the CS and the US.
C. Trace Conditioning
• The CS is presented first and then removed before the US is presented, so that only a memory trace of the CS remains to be conditioned.
• For example, a tone (CS) is sounded for a duration of 3 seconds, after which it is turned off. Ten seconds later the food (US) is presented and elicits salivation as the UR.
• After a number of such tone–food pairings, the tone comes to elicit salivation.
D. Temporal conditioning
• If US is regularly presented every “t” seconds
in time, even if there is no explicit CS, after a
while, conditioned responses will start
operating just prior to the time US is
scheduled to occur.
• For example, if a puff of air were blown into your eye, making it blink on a regular basis, say every 10 seconds, you might eventually blink your eye at about 9.5 seconds, just before the air puff (US) occurred.
• N.B. Here the CS is time itself
E. Backward Conditioning

• In this case the order of pairing is reversed.
• That is, the US is presented before the CS.
• In this procedure it is found that little learning occurs.
F. Higher Order Conditioning

• This is a procedure in which a well-established CS comes to serve as a US and may be paired with a new CS to produce conditioning.
• For example, after the tone (CS1) has become well associated with food (US), on some days the dog is given the usual tone–food pairings, while on others a light (CS2) precedes the tone (CS1) and no food is given.
• Eventually salivation will be elicited by the light (CS2).
N.B. The light is not directly paired with food.
• Can you think of examples of HOC in actual life situations?


Basic Laws (principles) of Classical conditioning

A. Acquisition
• It is the initial learning of the response, i.e., the initial stage of the association between the CS and the US.
B. Extinction
• The repeated presentation of the CS without the US results in a weakening and eventual disappearance of the conditioned reflex; this disappearance of the CR is called extinction (a small numerical sketch of acquisition and extinction follows law F below).
• It is a process of suppressing the conditioned response (CR) rather than eliminating it.
C. Spontaneous Recovery
• Spontaneous recovery is the temporary return
of conditioned response following a resting
period.
• It indicates that extinction is something more
than a passive forgetting of a learned
response.

D. Inhibition
• This is conditioning in which the CS actively suppresses a learned reflex.
• It is a process by which the subject learns not to respond, with the help of the CS.
• Extinction is an inhibition of the CR, which can be recovered again.
E. Stimulus Generalization
• It is the capability of organisms to respond to two or more similar stimuli in the same way.
• For example, in Pavlov's experiment the dog was first conditioned to salivate when a circle was presented.
• Then it was demonstrated that the dog also exhibited the CR in response to closed geometric figures such as ellipses, pentagons and even squares.
• Another example from life experience is an infant calling any male face "Daddy".
• N.B. The more closely the stimuli resemble each other, the greater the strength of the response.
F. Discrimination
• It is the tendency of the organism (subject) to sharpen its learning, to be selective, in response to the myriad (varied) but similar stimuli it must deal with from moment to moment.
• For example, let us consider an experiment in which two CSs are presented at different times during the conditioning procedure.
• One, a 1,000 Hz tone designated the CS+, is always followed by the US (food).
• The other, a 5,000 Hz tone termed the CS-, is never accompanied by the US. The CS+ and CS- are presented in a random order over a series of conditioning trials.
• At first, salivation occurs to both CS+ and CS-
(stimulus generalization).
• As conditioning proceeds, however, salivation
comes to be elicited only in the presence of
CS+.
• That is, the animal appears to discriminate the
CS that is paired with food from the CS that is
never associated with food.
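N.B. The following sketch is not part of the original slides. It is a toy numerical illustration of laws A (acquisition) and B (extinction) above, assuming a simple learning-rate update in which the CS–US associative strength moves toward a target value on each trial; the function name and the rate of 0.3 are illustrative assumptions, not an established model.

    # Toy illustration (assumed update rule): associative strength rises over
    # CS-US pairings (acquisition) and falls when the CS is presented alone (extinction).

    def run_trials(strength, n_trials, us_present, rate=0.3, max_strength=1.0):
        """Nudge the CS-US associative strength toward its target on every trial."""
        history = []
        for _ in range(n_trials):
            target = max_strength if us_present else 0.0  # US present -> learn; absent -> extinguish
            strength += rate * (target - strength)
            history.append(round(strength, 2))
        return strength, history

    strength = 0.0
    strength, acquisition = run_trials(strength, 10, us_present=True)   # CS + US pairings
    strength, extinction = run_trials(strength, 10, us_present=False)   # CS alone

    print("Acquisition:", acquisition)  # climbs toward 1.0
    print("Extinction: ", extinction)   # decays back toward 0.0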
Exercise (10%)

Assume:
A. A girl who becomes fearful on seeing a boyfriend who mistreats her,
B. Students in a classroom who feel happy whenever a
supportive teacher comes to the class and identify:
A. US (1.5 points)
B. CS (1.5 points)
C. R (1 point)
D. UR (0.5 points)
E. CR (0.5 points)
Applications of Classical Conditioning

A. The Bell and Pad Method for Bed-Wetting
• By the age of 5 or 6, children normally wake up in response to the sensation of a full bladder.
• They inhibit urination, which is an automatic
or reflexive response to bladder tension, and
go to the bathroom.
• But bed-wetters tend not to respond to the
sensation of a full bladder while asleep.
• So they remain asleep and frequently wet
their bed.
• By this method, such children are taught to wake up in response to bladder tension as follows:
• They are made to sleep on a special pad that has been placed on the bed.
• When the child starts to urinate, the water content of the urine causes an electrical circuit in the pad to be closed.
• Closing of the circuit triggers a bell or buzzer, and the child is awakened.
• What are CS, UR, CR, and US?
B. Systematic Desensitization
• Used to treat clients with emotional problems such as exaggerated fear (phobia).
• In this procedure:
- the client is taught how to bring about systematic relaxation (e.g. by tensing and relaxing muscle groups).
- Then the client is made to construct a list of fear-inducing situations involving the feared object (animal).
• These could range from situations causing a very small amount of fear up to the actual feared object (animal), as exemplified below.
- A dull picture of the object (animal)
- A bright picture of the object (animal)
- A toy of the object (animal)
- A dead body of the animal
- The actual animal
• In the conditioning trials, the least fear-producing item on the list is presented along with the cue from the therapist to relax, which constitutes the US.
• The weak fear is thus paired with the more powerful US signaling relaxation.
• The therapist then continues to move up the list to successively more powerful fear-eliciting CSs.
• What are US, CS, UR, and CR?
Operant (Instrumental) Conditioning

• The groundwork for this conditioning was laid down by an American psychologist known as E. L. Thorndike.
• Thorndike's Experiment
• In this experiment a hungry cat would be locked in a box called a puzzle box.
• Inside the box was a string that, when properly operated, released the cat from the box.
• Food was placed outside the box.
• When first placed inside the box, the animal would make abortive attempts to escape, such as:
- clawing at the door and everything it could reach
- pushing at the ceiling
- trying to squeeze through any opening
- biting at the box or wires
- thrusting its paws out through any opening, etc.
• After a while, the cat would more or less accidentally perform the appropriate response (pressing the string), which would open the door and release the animal, thus giving it access to the food.
• Further trials were repeated over a period of time.
• In this experiment, Thorndike made the following observations.
1. The act of pressing the string, which was followed by a rewarding consequence (food), was observed to be performed repeatedly (frequently) by the cat.
• The consequence is the result of operating (operant) behavior which, in turn, encourages the recurrence of the behavior.
• Hence, operant conditioning is any learning process in which the consequences of a deliberate behavior by an organism make it perform that behavior again in the future.
• Here the consequence could be either the attainment of a need-satisfying thing (food in this case) or the removal of an undesirable thing (punishment), both having a reinforcing effect.
• The key features of this learning situation are that:
- Some action (behavior) of the learner is instrumental in producing reinforcement (consequence) when it operates upon the environment.
- The behavior that produces the reinforcement is strengthened in the sense that it is more likely to occur in the future – an association between behavior (response) and reinforcement (consequence) (Morgan, 1979).
2. The cat became more effective in escaping from the box over a series of trials (Grusec, 1990).
• In other words, the cat demonstrated improvement in opening the door skillfully as the escape trials continued.
• According to him, each successful escape trial served to stamp in (strengthen) a stimulus–response bond for learning.
• From this observation, Thorndike deduced that:
- each successful trial contributes to the strength of a stimulus–response bond for learning.
For Thorndike:
- the stimulus is the reward (reinforcer) involved in any problem-solving situation
- the response is what the cat did to solve the problem
- this can be framed as S–R
3. The diffuse escape behavior noted during the early trials in the box became more focused, and the cat would soon perform the appropriate response in a very precise way.
• Here, behaviors that did not result in opening the door and getting access to the food diminished and were finally eliminated (rejected) by the cat.
• That is, the cat went directly to the string, instead of other sites in the box, and immediately started pressing it.
• From this observation he concluded that in problem-solving situations, the behavior (action) that results in the solution of the problem is likely to be focused on (retained) by organisms, whereas the ineffective behaviors are rejected.
• N.B. This process of learning and problem solving, which involves testing every possible solution and rejecting those that yield an error, was termed by Thorndike trial-and-error learning.
• According to this view, the essence of learning is the organism's ability to solve problems.
• The consequence of behavior is important in the learning process.
• That is, learning is determined by the consequence of behavior, which leads to Thorndike's law of effect.
Thorndike's Laws of Operant Conditioning

A. The Law of Effect
• Actions followed by a satisfying change are likely to be repeated in the same or similar situations.
B. The Law of Readiness
• When any conduction unit is ready to conduct, to do so is satisfying for it.
C. The Law of Repetition (Exercise)
• Any response to a situation will, other things being equal, be more strongly connected with the situation in proportion to the:
- number of times it has been connected with that situation,
- average vigor, and
- duration of the connection.
• In short, exercise strengthens the bond between situation and response.
D. Subordinate Laws
• Multiple Response: Confronted with a new situation, the organism responds in a variety of ways before arriving at the correct response.
• Attitude: The learner performs the task well if he/she has a set (attitude) toward the task.
• Prepotency of Elements: The learner reacts to the learning situation in a selective manner.
• Analogy: The organism responds to a new situation on the basis of the responses it made in similar situations in the past. It makes the response by comparison or analogy.
• Principle of Polarity: Connections act more easily in the direction in which they were first formed than in the opposite direction.
• In summary, Thorndike’s learning theory and
laws have the following implication for
effective learning.
-The importance of readiness for learning
-The importance of reward and
punishment in the process of learning
-The importance of repetition
• Others such as the:
-importance if previous experience
-importance of similarity b/n two situation
-greater impact of reward in strengthening
connection than the corresponding impact of
punishment in weakening it.
-Importance of independent work for
children
• Aids to improve learning:
-interest in the work
-interest in improvement
-significance of the work
-attentiveness
Elements of Operant Conditioning

A. Operant
• Is a voluntary response that is emitted by the organism to modify its
environment
B. Reinforcer
• Is the presentation of an appropriate stimulus following an operant response – a reward for a specific response.
C. Reinforcement
• Is the process of response strengthening that takes place as a result
of the delivery of a reinforcer
D. Trial and Error
• This is the process of testing every possible solution to solve a
problem and rejecting those that yield an error.
• From the above analysis one can see that operant conditioning is based on the idea that learning is the result of the association between the deliberate behavior of an organism (response) and its consequence.
Reinforcement in Operant Conditioning
• When does the consequence of a behavior
reinforce it?
• According to operant conditioning theory, a behavior is reinforced when the consequence is the:
- addition of something desired (positive reinforcement), or
- removal of something unwanted (negative reinforcement).
Types of Reinforcement
A. Positive Reinforcement
• In this process behavior is strengthened because something
desirable (rewarding) is given as a consequence of an operant
being performed e.g. food, money etc.
• In operant conditioning studies with people, positive reinforcers are often social behaviors or consequences.
• E.g. - A parent's praise (social behavior) often reinforces a child
- A gold star or extra points (consequence) reinforce a student
- A cash bonus reinforces a successful salesperson
B. Negative Reinforcement
• In this case behavior is strengthened because something aversive or unpleasant is removed or prevented from occurring.
Examples
• If a dog escapes a mild electric shock by jumping over a
hurdle, then this behavior (the act of jumping) is likely to be
repeated.
• If a student causes his teacher to stop nagging when he
finally hands in a late assignment, he is being negatively
reinforced.
• In each case the probability of the operant
behavior increases because of its consequences.
• That is why we say the consequences of behaviors
in operant conditioning are reinforcers.
• Negative reinforcement, therefore, works the
same way that the positive reinforcement does,
except that it happens by removing undesirable
events instead of by adding desirable ones
• Fear acts as a negative reinforcer because removal of fear increases the probability that the behavior preceding it is repeated.
• E.g. - Planning ahead of time for fear that things will go wrong
- Studying ahead of an examination to avoid failure in exams.
• Both positive reinforcement and negative reinforcement differ from punishment.
C. Primary Reinforcement
• A primary reinforcer is one that is effective for an untrained subject.
• It does not require any special previous training in order to strengthen behavior.
• The first time a primary reinforcer is made contingent upon a response, it will begin to strengthen that response.
• They are reinforcers that satisfy basic biological needs such as:
- hunger (food), thirst (water),
- adequate warmth (clothing),
- sex (intercourse) – all are positive primary reinforcers;
- the need to avoid pain (electric shock) – a negative primary reinforcer.
D. Secondary Reinforcement
• Secondary reinforcers refer to stimuli which were originally neutral but happened to occur with a primary reinforcer and eventually acquired reinforcing power of their own.
• Secondary reinforcers are often learned ones.
• Suppose that a click was produced every time the primary reinforcer (food) was delivered.
• The click would become a secondary or learned reinforcer.
• At first, the click stimulus would have no reinforcing property, but by its presence every time the primary reinforcer was delivered, it would become a reinforcer in its own right.
• N.B. Secondary reinforcers have most of the properties of a primary reinforcer except that, to retain reinforcing power, they have to be paired with the original primary reinforcer from time to time (Seifert, 1991).
• Some examples of secondary reinforcers in
classroom teaching learning processes
- Achievement grades
- A teacher’s praise
- Nearness:
- sitting next to a student briefly,
- putting a student's desk near the teacher,
- playing a game with a student
- Privileges:
- appointing a student as leader of an activity or classroom monitor
- Words:
- giving verbal or written compliments for work/behavior, or
- sending a positive note home to parents
• Examples of secondary social reinforcers
- Gestures and facial expressions :
- smiles, - laughter,
- winks, - head nods
- Touch:
- hugs, - handshakes,
- pats on back, - holding hands
E. Immediate Reinforcement
• Refers to providing reinforcement without any delay after the demonstration of the desired behavior.
• Immediacy of reinforcement influences our behavior and strengthens it.
F. Delayed Reinforcement
• Refers to providing reinforcement a long period of time
after the demonstration of the desired behavior.
• Delay of reinforcement following the response makes
the behavior weaker.
G. Continuous Reinforcement
• Is reinforcement that happens on every possible occasion, i.e. the provision of a reinforcer whenever the desired behavior is demonstrated by the subject.
• When to use continuous reinforcement?
• This reinforcement is especially effective at the beginning of conditioning, when an operant behavior is first being established.
H. Intermittent Reinforcement
• Is reinforcement that happens on some occasions but not on others.
• Is the process of reinforcement in which a reinforcer is given irregularly, sometimes giving the reinforcer and sometimes not.
• When to use intermittent reinforcement?
• Once a behavior has become established, intermittent reinforcement is not only more efficient than continuous reinforcement but also more effective in maintaining the behavior over time.
• That is, as a rule, partial or intermittent reinforcement causes behavior to take longer to build up in frequency, but afterwards it also takes longer to disappear or extinguish.
• It also avoids satiation or boredom.
V. Schedule of Reinforcement
• This is the pattern of reinforcement, or a rule describing how the delivery of a reinforcer is related to a response.
VI. Four Schedules of Intermittent Reinforcement

• Operant theorists have studied the effect of four major patterns of intermittent reinforcement.
• The patterns are defined by whether they result from:
- intervals of time passing
- a certain number of responses occurring
A. Fixed Interval Schedule

• Reinforcer occurs after a specific interval of time, like every 10 seconds. E.g. a teacher's monthly salary.
B. Variable Interval Schedule

• Reinforcer occurs after variable intervals of time, though averaging some predictable interval, like after an average of every 10 minutes.
• E.g. unannounced test times, or:
• 0 1 2 3 4 5 (6) 7 8 9 10 11 12 13 14 15 16 17 (18) 19 20 21 22 23 (24) 25 26 27 28 29 30
• In this example, a total of 30 minutes was used to deliver the reinforcement 3 times: 30/3 = 10 minutes.
C. Fixed Ratio Schedule

• Reinforcer occurs after a specific number of responses, like after every 7 responses. E.g. piecework or commission.
D. Variable Ratio schedule

• Reinforcer occurs after a variable number of responses, though averaging some predictable number, like when redialing a busy phone number.
E.g. 0 1 (2) 3 4 5 (6) 7 (8) 9 10 (11) 12 (13) 14 15 16 17 (18). On average, every 3rd response (18/6 = 3) is reinforced.
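The averaging arithmetic behind the two variable-schedule examples above can be checked with the short sketch below. It is not part of the original slides; the variable names are illustrative, and it only reproduces the 30/3 = 10 and 18/6 = 3 calculations already shown.

    # Illustrative check of the averages that define the two variable schedules above.

    # Variable interval example: reinforcers delivered at minutes 6, 18 and 24
    # of a 30-minute session -> on average one reinforcer every 10 minutes.
    reinforcement_minutes = [6, 18, 24]
    session_length = 30
    avg_interval = session_length / len(reinforcement_minutes)
    print(f"Variable interval: one reinforcer every {avg_interval:g} minutes on average")

    # Variable ratio example: responses 2, 6, 8, 11, 13 and 18 are reinforced,
    # i.e. 6 reinforcers in 18 responses -> on average every 3rd response.
    reinforced_responses = [2, 6, 8, 11, 13, 18]
    total_responses = 18
    avg_ratio = total_responses / len(reinforced_responses)
    print(f"Variable ratio: on average 1 reinforcer per {avg_ratio:g} responses")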
The Effects of the above Schedules on the Patterns of Behavior

• These different reinforcement schedules show significant variation in the patterns of behavior they produce.
• Regular/fixed schedules produce an alternating pattern of slowing down and speeding up.
• An irregular/variable schedule of reinforcement produces a relatively even frequency of behavior.
• That is, pausing and accelerating do not occur with variable schedules of reinforcement.
Examples
• Unpredictable times for tests (variable interval)
• Gamblers do not know exactly which toss will be followed by a reward (variable ratio)
• Dialing a busy phone (variable ratio).
VII. Punishment
• Punishment is any aversive consequence (stimulus) that occurs after some specific operant behavior (response).
• It is intended to suppress that behavior and may consist of:
- giving something of negative value, or
- taking away something of positive value
because of the organism's (individual's) misbehavior.
• E.g. A traffic ticket for speeding, or giving a misbehaving child 10 push-ups or some heavy work.
• Application
• Punishment is a virtually universal tool of behavioral control, from cradle to grave.
• Our religious beliefs and our legal and penal systems rely heavily on negative consequences for deviant behaviors to control members of society.
• Punishment is even used as a therapeutic technique to control certain behavioral abnormalities.
Does punishment work?

• The question of whether punishment works turns out to be far more complicated than it would at first appear.
• People like:
- Thorndike,
- Freud,
- Estes, and
- Skinner did not advocate the use of punishment to control behavior.
• Their view was held until the 1960s when the
first really intensive experimental investigations
of punishment took place.
• Once systematic investigation began, it became
clear that punishment did work.
• The appropriate question to ask about punishment was not whether to use it but under what conditions punishment could be most effective.
VIII. Variables Influencing Punishment Effectiveness

A. Severity
• The more severe the punishment, the greater the degree of response suppression it produces.
B. Schedule of Presentation
• Continuous punishment more effectively suppresses behavior than does intermittent punishment.
• Continuous punishment is particularly useful in child rearing.
C. Acceptable Alternative Response
• This is coupling punishment with the opportunity to gain positive reinforcement by responding in another way.
• It is one of the most powerful means of suppressing ongoing behavior.
D. Immediacy and Justification
• Punishment must be administered immediately after the misbehavior occurs and should be justified.
IX. Side Effects of Punishment

• Potential side effects of punishment can be divided into four categories.
A. Physical or psychological avoidance of the punishing agent
B. Emotional disturbance
C. Over generalization
D. Aggression
X. Modifying Complex Behavior
A. Shaping
• This involves reinforcing successive approximations of a final, desirable behavior.
• It is a more suitable procedure for increasing response frequency.
• This technique is especially helpful in strengthening behaviors that happen only rarely.
• It is also useful in speeding up operant conditioning (Morgan).
B. Chaining
• Is the way to build a complex sequence of behaviors
by reinforcing individual elements of the sequence.
• The technique is often used successfully by animal
trainers to produce complicated results.
E.g. At the circus, a dog might pick up a ball, carry it
across a distance, and drop it carefully in a basket.
 
C. Response Cost
• It is a way of reducing the frequency of undesirable behavior by removing a specified reinforcement whenever the behavior occurs.
• For the system to work:
- the organism needs to begin with plenty of reinforcers, and
- the reinforcers can be taken back in small bits whenever the undesirable behavior occurs.
D. Time out procedures
• This is a way to reduce inappropriate behavior
by denying a subject reinforcement for a fixed
period of time.
- E.g. If a student has become too rowdy, the
teacher may tell him/her to put his/her head
down on his/her desk for two minutes.
E. Contingency Contracting
• This is the use of written documents to specify how behaviors and reinforcements will be related for particular students.
• Typically, a contract spells out exactly:
- what a particular student is expected to do,
- what rewards he/she will earn as a result, and
- what the consequences of not completing the contract will be.
XI. Operant Conditioning in Everyday life

A. Skill Learning
• Operant conditioning is responsible for almost
every learned skill or ability that we have acquired
since our birth.
• For example,
- playing the piano, - writing a letter,
- opening a door, - working a screwdriver,
- riding a bicycle, - cooking a dinner, etc., are all acquired, in great part, through operant conditioning.
B. Socialization
• Some of our:
- beliefs,
- customs, and
- goals may be learned through the mechanism of
operant conditioning.
• Such learning seems especially evident during the period when young children are being taught the ways of their group – when they are socialized.
C. Control of Behavior
• Agencies of human society, for example the government and the school, may use principles of operant conditioning to shape behavior.
• Parents and other agents of society usually
don’t deliberately shape behavior but society
is arranged so that reinforcements are
contingent upon behavior.
D. Deliberate Behavior shaping
• Besides being ever present in human
situations, operant conditioning is sometimes
deliberately used to shape desired behaviors.
• Programmed learning and certain types of
therapy for behavior disorders are some
examples.
Cognitive Theory of Learning
• This school puts heavy emphasis on cognitive factors in learning.
• Learning in this view relies heavily on the processing and storage of information as it comes to us from the environment (Morgan, 1994).
Information processing
• This involves:
- Acquisition of information, which is selective
- Rehearsal – focusing attention on an item of information by repeating it or processing it in some other way, such as:
- Encoding – representing information in some form
- Semantic recall – learning according to meaning, etc.
- Adaptation – which involves:
- Assimilation – adding new information to the cognitive organization already there
- Accommodation – changing the intellectual organization somewhat to adjust to the new idea
- Retrieval – recalling things without a cue.
• According to this school, individuals:
- are active in making sense out of the world
- investigate and reach conclusions
- have a large capacity for learning – are creative
- are capable of processing information.
Latent Learning
• This is any learning that does not manifest itself in the immediate performance of the organism.
• The term latent implies that learning is identified with knowledge that is somehow stored until it is needed for the performance of some behavior.
• In this sense learning remains hidden until the appropriate conditions bring it out. Latent learning can take place for two reasons:
- fear of punishment
- lack of evidence of its importance.
Insight Learning
• This is learning characterized by a period during which no apparent progress is made in finding a solution, until the correct answer appears to leap out in a flash of insight (suddenly).
• Insight learning occurs because of the perceptual organization of elements in the environment – new relationships among objects and events are suddenly seen.
 
Social Learning Theory
• In both classical and operant conditioning, behaviorists focus on one's direct personal experience, as they believe that each individual's current behavior is a result of past conditioning.
• To them, people's behaviors are the products of their reinforcing environment.
• The social learning theorists, though they share the views of both the behaviorists (the importance of the environment for learning) and the cognitive theorists (the importance of mental processes for learning), are concerned with the idea that learning can take place in social situations through watching what other people do.
• According to this view, a significant part of our learning is accomplished through modeling and/or imitation, even though the imitative responses are not directly reinforced.
• To these theorists, as opposed to the
behaviorists, people are active in learning.
• In this theory modeling and imitation are
important terms (concepts)
Modeling
• This is a special kind of learning in which a person acquires a response to a situation simply by watching the response of others.
• We tend to model behaviors if we:
- watch the models being rewarded for the
behavior or
- particularly admire the behavior.
- Children model:
- their parents (typically), - a teacher,
- a grandparent, - a babysitter,
- an older brother, - an athlete,
- a movie star, etc.
• They follow models for undesirable and
desirable behavior.
• But as they grow older, they usually become more selective and realistic.
• They follow the example of a close friend
instead of a distant idol.
Imitation
• Refers to the act of copying the behavior of other persons; a response that is like the stimulus triggering the response.
• It has an important role in observational learning.
• Children:
- identify with their parents
- imitate their different cultural and economic
behaviors.
Factors Affecting the Occurrence of
Imitation

I. Reinforcement and Punishment


• If the model's behavior is reinforced, it is likely that the observers will imitate the behavior.
• If, on the other hand, the model's behavior is punished, it is unlikely that the observer will imitate the behavior.
• For example, in an experiment a group of young children watched as a model was rewarded with juice and candy for being aggressive, and on another occasion a different group of children watched the same model being scolded for the same aggression.
• The results of the experiment showed that the children who had observed the model being punished for the aggressive behavior rarely imitated the aggression.
• N.B. Though it was observed that punishment inhibits the imitation of the model's behavior, the experimental results showed that the children did learn (acquire) the behavior, and could imitate it at a later time if the reinforcing and punishing contingencies changed.
II. Perceived Status of the Model

• Some of the earliest social learning studies showed that the behavior of an entire group of people can best be altered by having the individuals with the highest status (as perceived by the group) serve as models.
• Such prominent models can have a powerful effect, especially when presented to a large number of observers by the mass media. For example:
- a popular child in a peer group
- a famous teacher, singer, sportsperson, etc.
III. Experience of the Observers

• Imitation is also more likely when:
- the person can already perform the component sub-skills of the behavior that has been modeled, and
- the person's usual environment is similar to that in which the model was originally observed.
IV. No Trial Learning

• Sometimes the learner need not even make the response in order to learn it.
• That is, the learner can remember the response and have it available to perform at a later time.
• The learner acquires and stores internal responses through images and verbal coding.
V. Sources of Observational Learning

A. Parents
• For many reasons, the family is the prime site for observational learning.
• During childhood, parents are children's first models as well as their first teachers.
• They are also very powerful figures in young children's lives,
- controlling all resources and
- caring for all needs.
• Children watch their parents do many things which look
like fun, and see that many skills their parents have are
more effective (lead to better reinforcement) than their
own skills
• N.B. - Not only are parents and older siblings prime models, but children are particularly susceptible to their influence.
- People who are more dependent and less competent are likely to do more imitating, and young children are certainly dependent and, in a sense, incompetent.
B. Others
• As the child grows up, models other than parents and siblings assume greater importance.
• Television, teachers, movie and recording stars, recreational leaders and peers, especially peers who are popular, powerful, attractive, and warmly disposed to the young person – all provide an array of behaviors ripe for:
- imitating,
- comparing,
- discarding,
- modifying and combining.
• As a result, the individual acquires a broad and complex repertoire of behaviors which are the product of his/her unique history.
VI. Significance of Imitative Learning

• Imitative behavior is a key to understanding such important human psychological phenomena as:
- language learning,
- attitude formation, and
- personality development.
• It also plays a role in certain kinds of therapy for behavior problems.
VII. Imitation in other Species

• Imitation is a species-specific capacity.
• E.g. - Some birds, like parrots, can imitate human language.
- And some birds learn or perfect their calls by imitating older members of the species.

Factors Affecting Learning
