Learning
Introduction to Learning
Defining Learning
Key Terms
• Ivan Pavlov: (1849–1936) A Russian physiologist known for his theories of classical conditioning.
• Albert Bandura: (1925–2021) A psychologist and learning theorist who first proposed social learning theory and is credited with first describing observational learning.
• B. F. Skinner: (1904–1990) An American psychologist, behaviorist, author, inventor, and social philosopher known
for his work on operant conditioning.
What Is Learning?
Learning is an adaptive function by which our nervous system changes in relation to stimuli in
the environment, thus changing our behavioral responses and permitting us to function in our
environment. The process occurs initially in our nervous system in response to environmental
stimuli. Neural pathways can be strengthened, pruned, activated, or rerouted, all of which cause
changes in our behavioral responses. Instincts and reflexes are innate behaviors—they occur
naturally and do not involve learning. In contrast, learning is a change in behavior or knowledge
that results from experience. The field of behavioral psychology focuses largely on measurable
behaviors that are learned, rather than trying to understand internal states such as emotions or attitudes.
Types of Learning
There are three main types of learning: classical conditioning, operant conditioning, and
observational learning. Both classical and operant conditioning are forms of associative
learning, in which associations are made between events that occur
together. Observational learning is just as it sounds: learning by observing others.
Classical Conditioning
Classical conditioning is a process by which we learn to associate events, or stimuli, that
frequently happen together; as a result of this, we learn to anticipate events. Ivan Pavlov
conducted a famous study involving dogs in which he trained (or conditioned) the dogs to
associate the sound of a bell with the presence of a piece of meat. The conditioning is achieved
when the sound of the bell on its own makes the dog salivate in anticipation of the meat.
Operant Conditioning
Operant conditioning is the learning process by which behaviors are reinforced or punished,
thus strengthening or extinguishing a response. Edward Thorndike coined the term “law of effect”: behaviors that are followed by consequences that are satisfying to the organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences
are less likely to be repeated. B. F. Skinner researched operant conditioning by conducting
experiments with rats in what he called a “Skinner box.” Over time, the rats learned that
stepping on the lever directly caused the release of food, demonstrating that behavior can be
influenced by rewards or punishments. He differentiated between positive and negative
reinforcement, and also explored the concept of extinction.
Observational Learning
Observational learning occurs through observing the behaviors of others and imitating those
behaviors—even if there is no reinforcement at the time. Albert Bandura noticed that children
often learn through imitating adults, and he tested his theory using his famous Bobo-doll
experiment. Through this experiment, Bandura learned that children would attack the Bobo doll
after viewing adults hitting the doll.
Classical Conditioning
Ivan Pavlov’s research on classical conditioning profoundly informed the psychology of learning
and the field of behaviorism.
Key Terms
• behaviorism: An approach to psychology focusing on behavior, denying any independent significance for the mind
and assuming that behavior is determined by the environment.
• Hans Eysenck: (1916–1997) A German-born British psychologist who is best known for his work on intelligence and personality.
• behavior therapy: An approach to psychotherapy that focuses on a set of methods designed to reinforce desired
behaviors and eliminate undesired behaviors, without concerning itself with the psychoanalytic state of the subject.
• condition: To shape the behavior of an individual or animal.
Ivan Pavlov (1849–1936) was a Russian scientist whose work with dogs has been influential in understanding how learning occurs. Through his research, he established the theory of classical conditioning.
Extinction is the decrease in the conditioned response when the unconditioned stimulus is no
longer presented with the conditioned stimulus. When presented with the conditioned stimulus
alone, the individual would show a weaker and weaker response, and finally no response. In
classical-conditioning terms, there is a gradual weakening and disappearance of the conditioned
response. Related to this, spontaneous recovery refers to the return of a previously
extinguished conditioned response following a rest period. Research has found that with
repeated extinction/recovery cycles, the conditioned response tends to be less intense with
each period of recovery.
If we look at Pavlov’s experiment, we can identify the four factors of classical conditioning at
work:
• The unconditioned response was the dogs’ natural salivation in response to seeing or smelling
their food.
• The unconditioned stimulus was the sight or smell of the food itself.
• The conditioned stimulus was the ringing of the bell, which previously had no association with
food.
• The conditioned response, therefore, was the salivation of the dogs in response to the ringing of
the bell, even when no food was present.
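The acquisition and extinction dynamics described above can be made concrete with a toy simulation. The linear update rule and learning rate below are illustrative assumptions for exposition, not a model Pavlov used:

```python
# Toy simulation of classical conditioning: acquisition and extinction.
# The update rule and rate are illustrative assumptions, not Pavlov's data.

def update(strength, food_present, rate=0.3):
    """Nudge association strength toward 1 when the bell is paired with food,
    and toward 0 when the bell is presented alone (extinction)."""
    target = 1.0 if food_present else 0.0
    return strength + rate * (target - strength)

strength = 0.0

# Acquisition: bell and food presented together.
for _ in range(10):
    strength = update(strength, food_present=True)
print(f"after pairing:    {strength:.2f}")   # close to 1.0

# Extinction: bell alone, no food.
for _ in range(10):
    strength = update(strength, food_present=False)
print(f"after extinction: {strength:.2f}")   # back near 0.0
```

Note how the conditioned response weakens gradually during extinction rather than disappearing at once, matching the "weaker and weaker response" described earlier.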
Key Terms
• John B. Watson: (1878–1958) An American psychologist who established the psychological school of behaviorism,
and is known for his controversial “Little Albert” experiment.
• conditioning: The process of modifying behavior.
Since Ivan Pavlov’s original experiments, many studies have examined the application of classical conditioning to human behavior.
Behavioral Therapies
Classical conditioning has been used as a successful form of treatment in changing or
modifying behaviors, such as substance abuse and smoking. Some therapies associated with
classical conditioning include aversion therapy, systematic
desensitization, and flooding. Aversion therapy is a type of behavior therapy designed to
encourage individuals to give up undesirable habits by causing them to associate the habit with
an unpleasant effect. Systematic desensitization is a treatment for phobias in which the
individual is trained to relax while being exposed to progressively more anxiety-provoking
stimuli. Flooding is a form of desensitization that uses repeated exposure to highly distressing
stimuli until the lack of reinforcement of the anxiety response causes its extinction.
Thorndike’s law of effect states that behaviors are modified by their positive or negative
consequences.
Key Terms
• Law of Effect: A law developed by Edward L. Thorndike that states, “responses that produce a satisfying effect in a
particular situation become more likely to occur again in that situation, and responses that produce a discomforting
effect become less likely to occur again in that situation.”
• behavior modification: The act of altering actions and reactions to stimuli through positive and negative
reinforcement or punishment.
• trial and error: The process of finding a solution to a problem by trying many possible solutions and learning from
mistakes until a way is found.
Thorndike’s Experiments
Thorndike’s most famous work involved cats trying to navigate through various puzzle boxes. In
this experiment, he placed hungry cats into homemade boxes and recorded the time it took for
them to perform the necessary actions to escape and receive their food reward. Thorndike
discovered that with successive trials, cats would learn from previous behavior, limit ineffective
actions, and escape from the box more quickly. He observed that the cats seemed to learn, from
an intricate trial and error process, which actions should be continued and which actions should
be abandoned; a well-practiced cat could quickly remember and reuse actions that were
successful in escaping to the food reward.
Thorndike’s law of effect now informs much of what we know about operant conditioning and
behaviorism. According to this law, behaviors are modified by their consequences, and this
basic stimulus-response relationship can be learned by the operant person or animal. Once the
association between behavior and consequence is established, the response is reinforced, and the association alone accounts for the occurrence of that behavior. Thorndike
posited that learning was merely a change in behavior as a result of a consequence, and that if
an action brought a reward, it was stamped into the mind and available for recall later.
From a young age, we learn which actions are beneficial and which are detrimental through a
trial and error process. For example, a young child is playing with her friend on the playground
and playfully pushes her friend off the swingset. Her friend falls to the ground and begins to cry,
and then refuses to play with her for the rest of the day. The child’s actions (pushing her friend)
are informed by their consequences (her friend refusing to play with her), and she learns not to
repeat that action if she wants to continue playing with her friend.
The law of effect has been expanded to various forms of behavior modification. Because the law
of effect is a key component of behaviorism, it does not include any reference to unobservable
or internal states; instead, it relies solely on what can be observed in human behavior. While this
theory does not account for the entirety of human behavior, it has been applied to nearly every sector of human life, particularly education and psychology.
B. F. Skinner was a behavioral psychologist who expanded the field by defining and elaborating
on operant conditioning.
Key Terms
• punishment: The act or process of imposing and/or applying a sanction for an undesired behavior when conditioning
toward a desired behavior.
• aversive: Tending to repel, causing avoidance (of a situation, a behavior, an item, etc.).
• superstition: A belief, not based on reason or scientific knowledge, that future events may be influenced by one’s
behavior in some magical or mystical way.
Skinner’s Experiments
Skinner’s most famous research studies were simple reinforcement experiments conducted on
lab rats and domestic pigeons, which demonstrated the most basic principles of operant
conditioning. He conducted most of his research in a special apparatus now referred to as a “Skinner box,” which, together with a cumulative recorder, was used to analyze the
behavioral responses of his test subjects. In these boxes he would present his subjects with
positive reinforcement, negative reinforcement, or aversive stimuli in various timing intervals (or
“schedules”) that were designed to produce or inhibit specific target behaviors. In his first work
with rats, Skinner would place the rats in a Skinner box with a lever attached to a feeding tube.
Whenever a rat pressed the lever, food would be released.
After the experience of multiple trials, the rats learned the association between the lever and
food and began to spend more of their time in the box procuring food than performing any other
action. It was through this early work that Skinner started to understand the effects of behavioral
contingencies on actions. He discovered that the rate of response—as well as changes in
response features—depended on what occurred after the behavior was performed, not before.
Skinner named these actions operant behaviors because they operated on the environment to
produce an outcome. The process by which one could arrange the contingencies of
reinforcement responsible for producing a certain behavior then came to be called operant
conditioning.
To support his idea that conditioning underlies all actions, he later created a
“superstitious pigeon.” He fed the pigeon at fixed intervals (every 15 seconds) and
observed the pigeon’s behavior. He found that the pigeon’s actions would change depending on
what it had been doing in the moments before the food was dispensed, regardless of the fact
that those actions had nothing to do with the dispensing of food. In this way, he discerned that
the pigeon had fabricated a causal relationship between its actions and the presentation of
reward. It was this development of “superstition” that led Skinner to believe all behavior could be
explained as a learned reaction to specific consequences.
In his operant conditioning experiments, Skinner often used an approach called shaping.
Instead of rewarding only the target, or desired, behavior, the process of shaping involves the
reinforcement of successive approximations of the target behavior. Behavioral approximations
are behaviors that, over time, grow increasingly closer to the actual desired response. Skinner
believed that all behavior is predetermined by past and present events in the objective world. He
did not include room in his research for ideas such as free will or individual choice; instead, he
posited that all behavior could be explained using learned, physical aspects of the world,
including life history and evolution. His work remains extremely influential in the fields of
psychology, behaviorism, and education.
Shaping
Key Terms
• successive approximation: An increasingly accurate estimate of a response desired by a trainer.
• paradigm: An example serving as a model or pattern; a template, as for an experiment.
• shaping: A method of positive reinforcement of behavior patterns in operant conditioning.
As the subject moves through each behavior trial, rewards for old, less approximate behaviors
are discontinued in order to encourage progress toward the desired behavior. For example,
once the rat had touched the lever, Skinner might stop rewarding it for simply taking a step
toward the lever. In Skinner’s experiment, each reward led the rat closer to the target behavior,
finally culminating in the rat pressing the lever and receiving food. In this way, shaping uses
operant-conditioning principles to train a subject by rewarding proper behavior and discouraging
improper behavior.
Applications of Shaping
This process has been replicated with other animals—including humans—and is now common
practice in many training and teaching methods. It is commonly used to train dogs to follow
verbal commands or become house-broken: while puppies can rarely perform the target
behavior automatically, they can be shaped toward this behavior by successively rewarding
behaviors that come close. Shaping is also a useful technique in human learning. For example,
if a father wants his
daughter to learn to clean her room, he can use shaping to help her master steps toward the
goal. First, she cleans up one toy and is rewarded. Second, she cleans up five toys; then
chooses whether to pick up ten toys or put her books and clothes away; then cleans up
everything except two toys. Through a series of rewards, she finally learns to clean her entire
room.
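The shaping procedure can be sketched as a small simulation in which a "subject" is rewarded only for behaviors that meet the current criterion, which is then raised. The milestones, noise level, and consolidation rule below are illustrative assumptions rather than an established behavioral model:

```python
import random

def shape(milestones, trials=1000, seed=0):
    """Reward only behaviors at or above the current criterion; once a
    criterion is met, stop rewarding it and require the next milestone."""
    rng = random.Random(seed)
    skill = 0.0              # the subject's typical behavior (1.0 = target)
    reached = {}             # milestone -> trial on which it was first met
    current = 0
    for trial in range(trials):
        # Behavior varies randomly around the current skill level.
        behavior = min(1.0, max(0.0, rng.gauss(skill, 0.15)))
        if current < len(milestones) and behavior >= milestones[current]:
            # Reinforcement consolidates the closer approximation.
            skill = milestones[current]
            reached[milestones[current]] = trial
            current += 1
    return reached

# Successive approximations toward the full target behavior (1.0):
print(shape([0.25, 0.5, 0.75, 1.0]))
```

Because each reward raises the criterion, the subject's behavior is pulled step by step toward the target, mirroring how the rat was led from approaching the lever to pressing it.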
Reinforcement and punishment are principles of operant conditioning that increase or decrease
the likelihood of a behavior.
Key Terms
• latency: The delay between a stimulus and the response it triggers in an organism.
Reinforcement and punishment are principles that are used in operant conditioning. Reinforcement means you are
increasing a behavior: it is any consequence or outcome that increases the likelihood of a particular behavioral
response (and that therefore reinforces the behavior). The strengthening effect on the behavior can manifest in
multiple ways, including higher frequency, longer duration, greater magnitude, and shorter latency of response.
Punishment means you are decreasing a behavior: it is any consequence or outcome that decreases the likelihood of
a behavioral response. Extinction, in operant conditioning, refers to when a reinforced behavior is extinguished
entirely. This occurs at some point after reinforcement stops; the speed at which this happens depends on the
reinforcement schedule, which is discussed in more detail in another section.
• Positive reinforcers add a wanted or pleasant stimulus to increase or maintain the frequency of a behavior. For example, a child cleans her room and is rewarded with a cookie.
• Negative reinforcers remove an aversive or unpleasant stimulus to increase or maintain the frequency of a behavior.
For example, a child cleans her room and is rewarded by not having to wash the dishes that night.
• Positive punishments add an aversive stimulus to decrease a behavior or response. For example, a child refuses to
clean her room and so her parents make her wash the dishes for a week.
• Negative punishments remove a pleasant stimulus to decrease a behavior or response. For example, a child
refuses to clean her room and so her parents refuse to let her play with her friend that afternoon.
A secondary reinforcer, also called a conditioned reinforcer, has no inherent value and only has
reinforcing qualities when linked or paired with a primary reinforcer. Before pairing, the
secondary reinforcer has no meaningful effect on a subject. Money is one of the best examples
of a secondary reinforcer: it is only worth something because you can use it to buy other
things—either things that satisfy basic needs (food, water, shelter—all primary reinforcers) or
other secondary reinforcers.
Schedules of Reinforcement
Reinforcement schedules determine how and when a behavior will be followed by a reinforcer.
Key Terms
• extinction: When a behavior ceases because it is no longer reinforced.
• interval: A period of time.
• ratio: A number representing a comparison between two things.
There are several different types of intermittent reinforcement schedules. These schedules are
described as either fixed or variable and as either interval or ratio.
All of these schedules have different advantages. In general, ratio schedules elicit higher response rates than interval schedules because reinforcement depends on the number of responses rather than on the passage of time. For example, if you are a factory worker who gets paid per item that you manufacture, you will be motivated to manufacture these items quickly and consistently.
Variable schedules are less predictable, so they tend to resist extinction and
encourage continued behavior. Gamblers and fishermen alike can understand the feeling
that one more pull on the slot-machine lever, or one more hour on the lake, will change their
luck and elicit their respective rewards. Thus, they continue to gamble and fish, regardless of
previously unsuccessful feedback.
Extinction of a reinforced behavior occurs at some point after reinforcement stops, and the
speed at which this happens depends on the reinforcement schedule. Among the reinforcement
schedules, variable-ratio is the most resistant to extinction, while fixed-interval is the easiest to
extinguish.
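A toy calculation makes the ratio-versus-interval contrast concrete. The schedule parameters (FR-10, FI-60) and response counts below are illustrative assumptions:

```python
# Toy comparison of a ratio schedule and an interval schedule over the same
# observation window. Parameters and response rates are illustrative.

def reinforcers_fixed_ratio(total_responses, n=10):
    """FR-n: every n-th response is reinforced, so reinforcement scales
    with how much the subject responds."""
    return total_responses // n

def reinforcers_fixed_interval(total_time, interval=60):
    """FI: at most one reinforcer per interval (assuming the subject
    responds at least once per interval), no matter how fast it responds."""
    return total_time // interval

total_time = 600  # seconds observed
for responses in (60, 600):        # a slow and a fast responder
    fr = reinforcers_fixed_ratio(responses)
    fi = reinforcers_fixed_interval(total_time)
    print(f"{responses} responses: FR-10 earns {fr}, FI-60 earns {fi}")
```

Responding ten times faster earns ten times more reinforcement on the ratio schedule but nothing extra on the interval schedule, which is why ratio schedules elicit higher response rates.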
Latent Learning
Key Terms
• latent learning: A form of acquiring knowledge or skill that is not immediately expressed in an overt response; it
occurs without obvious reinforcement, to be applied later.
• reinforcement: The process whereby a behavior with desirable consequences is rewarded and comes to be
repeated.
Latent learning is a form of learning that is not immediately expressed in an overt response. It
occurs without any obvious reinforcement of the behavior or associations that are learned.
Interest in this type of learning, spearheaded by Edward C. Tolman, arose largely because the
phenomenon seemed to conflict with the widely held view that reinforcement was necessary for
learning to occur. Latent learning is not readily apparent to the researcher because it is not
shown behaviorally until there is sufficient motivation. This type of learning broke the constraints
of behaviorism, which stated that processes must be directly observable and that learning was
the direct consequence of conditioning to stimuli. Latent learning implies that learning can take
place without any behavioral changes being immediately present. This means that learning can
be completely cognitive and not instilled through behavioral modification alone. This cognitive
emphasis on learning
was important in the development of cognitive psychology. Latent learning can be a form of
observational learning (i.e., learning derived from the observation of other people or events),
though it can also occur independently of any observation.
Key Terms
• social learning: A cognitive process that takes place in a social context and can occur purely through observation or
direct instruction.
• vicarious reinforcement: Occurs when a person imitates the behavior of someone who has been reinforced for that
behavior.
• Albert Bandura: (1925–2021) A psychologist and learning theorist who first proposed social learning theory and is credited with first noting observational learning.
• observational learning: Learning that occurs as a function of seeing, retaining, and, in the case of imitation learning,
replicating novel behavior executed by other people.
• vicarious punishment: Occurs when a person avoids the behavior of someone who has been punished for that
behavior.
Bobo-doll experiment (Bandura): The Bobo-doll experiment was conducted by Albert Bandura in 1961 and studied patterns of behavior associated with aggression. Bandura hoped that the experiment would prove that aggression can be explained, at least in part, by social learning theory. The theory of social learning states that behavior such as aggression is learned through observing and imitating others.
Results indicated that after viewing the film, when children were left alone in a room with the Bobo doll and props used by the adult aggressor, they imitated the actions they had witnessed. Those in the model-reward and no-consequence conditions were more willing to imitate the aggressive acts than those in the model-punished condition. Further testing indicated that children in each condition had learned equal amounts, and it was only the motivation factor that kept behaviors from being similar in each condition.
Four Conditions for Observational Learning
According to Bandura’s social learning theory, four conditions, or steps, must be met in order for observational or social learning to occur:
Attention
Observers cannot learn unless they pay attention to what is happening around them. This
process is influenced by characteristics of the model, as well as how much the observer likes or
identifies with the model. It is also influenced by characteristics of the observer, such as the
observer’s expectations or level of emotional arousal.
Retention or Memory
Observers have to not only recognize the observed behavior, but also remember it. This
process depends on the observer’s ability to code or structure the information so that it is easily
remembered.
Initiation or Reproduction
Observers must be physically and intellectually capable of producing the act. In many cases the
observer possesses the necessary responses, but sometimes reproducing the observed actions
may involve skills the observer has not yet acquired. You will not be able to become a champion
juggler, for example, just by watching someone else do it.
Motivation
An observer must be motivated to reproduce the actions they have seen. You need to want to
copy the behavior, and whether or not you are motivated depends on what happened to the
model. If you saw that the model was reinforced for her behavior, you will be more motivated to
copy her; this is known as vicarious reinforcement. On the other hand, if you observed the
model being punished, you would be less motivated to copy her; this is called vicarious
punishment. In addition, the more an observer likes or respects the model, the more likely they
are to replicate the model’s behavior. Motivation can also come from external reinforcement,
such as rewards promised by an experimenter.
Insight learning occurs when a new behavior is learned through cognitive processes rather than
through interactions with the outside world.
Key Terms
• heuristic: An experience-based technique for problem solving, learning, and discovery that yields a solution that is
not guaranteed to be optimal.
• insight: Acute observation and deduction; penetration; discernment; perception.
Insight learning was first researched by Wolfgang Köhler (1887–1967). This theory of learning
differs from the trial-and-error ideas that were proposed before it. The key aspect of insight
learning is that it is achieved through cognitive processes, rather than interactions with the
outside world. There is no gradual shaping or trial and error involved; instead, internal
organizational processes cause new behavior.
Sultan the Chimpanzee and Insight Learning
Köhler’s most famous study on insight learning involved Sultan the chimpanzee. Sultan was in a
cage and was presented with a stick, which he could use to pull a piece of fruit close enough to
the cage so that he could pick it up. After Sultan had learned to use the stick to reach the fruit,
Köhler moved the fruit out of range of the short stick. He then placed a longer stick within reach
of the short stick. Initially, Sultan tried to reach the fruit with the short stick and failed. Eventually,
however, Sultan learned to use the short
stick to reach the long stick, and then use the long stick to reach the fruit. Sultan was never
conditioned to use one stick to reach another; instead, it seemed as if Sultan had an epiphany.
The internal process that led Sultan to use the sticks in this way is a basic example of insight.
A basic assumption of strict behaviorism is that only behavior that can be seen may be studied,
and that human behavior is determined by conditioning. Insight learning suggests that we learn
not only by conditioning, but also by cognitive processes that cannot be directly observed.
Insight learning is a form of learning because, like other forms, it involves a change in behavior;
however, it differs from other forms because the process is not observable. It can be hard to
define because it is not behavioral, a characteristic that distinguishes it from most theories of
learning throughout the history of psychology.
Initially, it was thought that learning was the result of reproductive thinking. This means that an
organism reproduces a response to a given problem from past experience. Insight learning,
however, does not directly involve using past experiences to solve a problem. While past
experiences may help the process, an insight or novel idea is necessary to solve the problem.
Prior knowledge is of limited help in these situations. In humans, insight
learning occurs whenever we suddenly see a problem in a new way, connect the problem to
another relevant problem/solution, release past experiences that
are blocking the solution, or see the problem in a larger, more coherent context. When we solve
a problem through insight, we often have a so-called aha or eureka moment. The solution
suddenly appears, even if previously no progress was being made. Famous examples of this
type of learning include Archimedes’s discovery of a method to determine the density of an
object (“Eureka!”) and Isaac Newton’s realization that a falling apple and the orbiting moon are
both pulled by the same force.
Potentiation, habituation, and sensitization are three ways in which stimuli in the environment
produce changes in the nervous system.
Key Terms
• axon: A nerve fiber that is a long slender projection of a nerve cell, and which conducts nerve impulses away from
the body of the cell to a synapse.
• synapse: The junction between the terminal of a neuron and either another neuron or a muscle or gland cell, over
which nerve impulses pass.
• neurotransmitter: Any substance, such as acetylcholine or dopamine, responsible for sending nerve signals across a
synapse between two neurons.
• dendrite: A slender projection of a nerve cell that conducts nerve impulses from a synapse to the body of the cell.
• stimuli: In psychology, any energy patterns (e.g., light or sound) that are registered by the senses.
Learning occurs when stimuli in the environment produce changes in the nervous system. Three
ways in which this occurs include long-term potentiation, habituation, and sensitization.
Long-Term Potentiation
One way that the nervous system changes is through potentiation, or the strengthening of the
nerve synapses (the gaps between neurons). Long-term potentiation (LTP) is the persistent
strengthening of synapses based on recent patterns of activity: it occurs when a neuron shows
an increased excitability over time due to a repeated pattern, behavior, or response. The
opposite of LTP is long-term depression (LTD), which produces a long-lasting decrease in
synaptic strength.
Habituation
Habituation occurs when we learn not to respond to a stimulus that is presented repeatedly without change. Recall that sensory adaptation involves the gradual decrease in neurological sensory response caused by the repeated application of a particular stimulus over time.
Neurological Differences
Habituation and sensitization work in different ways neurologically. In neural communication, a
neurotransmitter is released from the axon of one neuron, crosses a synapse, and is then
picked up by the dendrites of an adjacent neuron. During
habituation, fewer neurotransmitters are released at the synapse. In sensitization, however,
there are more pre-synaptic neurotransmitters, and the neuron itself is more excitable.
Psychology in Education
How we learn and incorporate information is directly influenced by psychology and is a key
subject of interest for educational psychologists.
Key Terms
• constructivism: A psychological epistemology that argues that humans generate knowledge and meaning from their
experiences.
• kinesthesia: Also known as proprioception or static position sense; the perception of the position, posture, and
movement of the body.
• cognitivism: The view that mental function can be understood as the internal manipulation of symbols according to a
set of rules.
• behaviorism: An approach to psychology that focuses strictly on observable behavior; this theory assumes that
behavior is determined by a person’s environment.
Psychology plays an important role in what we do on a day-to-day basis, and this is especially
true for students. How we learn and incorporate information is directly influenced by psychology,
whether we know it or not. Educational psychology is the study of how humans learn in
educational settings, the effectiveness of educational interventions, the psychology of teaching,
and the social psychology of schools as organizations. It is concerned with how students learn
and develop, often focusing on subgroups such as gifted children and those with specific disabilities.
Within the realm of psychology, there are several theories that help explain the ways in which
people learn. By understanding these concepts, students are better able to understand and
capitalize on how they acquire knowledge in school. Behaviorism is based on both classical
conditioning (in which a stimulus is conditioned to create a response) and operant conditioning
(in which behavior is reinforced through a particular reward or punishment). For example, if you
study for your psychology test and receive a grade of A, you are rewarded; in theory, this makes
it more likely that you will study in the future for your next test. Cognitivism is the idea that
people develop knowledge and meaning through the
sequential development of several cognitive processes, including recognition, reflection,
application, and evaluation. For example, you read your psychology textbook (recognition), you
ponder what the ideas mean (reflection), you use the ideas in your everyday life (application),
and then you are tested on your knowledge (evaluation). All of these processes work together to
help you build on prior knowledge and integrate new concepts.
Constructivism is the concept of constructing new ideas based on previous knowledge. For
example, our prior experiences with a situation help us to understand new experiences and
information. Piaget is most famous for his work in constructivism, and many Montessori schools
are based on the constructivist school of thought.
Types of Learners
People also learn in a variety of ways. Styles of learning are generally grouped into three
primary categories: visual, auditory, and kinesthetic. Although most people are a combination of
these three types, we tend to have a particular strength in one area. Knowing your strongest
learning type can help you learn in the most effective way; depending on your learning style,
you’ll want to tweak your study skills to get the most out of your education.
• Visual learners often rely on tools such as flashcards, or take and reread lecture notes. Visual
learners will highlight important passages in books or draw pictures and diagrams of ideas to
help them better understand the concepts.
• Auditory learners understand concepts best by listening; many will record a lecture and play it
back to further understand the lesson. Many auditory learners will read aloud and tend to do well
on oral, rather than written, exams.
• Kinesthetic learners (related to kinesthesia) do best when they act out or repeat something
several times. Role-plays, experiments, and hands-on activities are great ways for kinesthetic
learners to understand and remember concepts.
Special-education programs are designed to help children with disabilities obtain an education
equivalent to that of their non-disabled peers.
Key Terms
• intelligence quotient: A score derived from one of several different standardized tests attempting to measure
intelligence.
• phonological: Of or relating to the study of the way sounds function in languages, including syllable structure, stress,
accent, intonation, and which sounds are distinctive units within a language.
• impairment: A deterioration or weakening; a disability or handicap; an inefficient part or factor.
There are a variety of learning disabilities that require special assistance in order to help
children learn effectively. Special education is the practice of educating students with disabilities
or special needs in an effective way that addresses their individual differences and needs.
Ideally, this process involves the individually planned and systematically monitored arrangement
of teaching procedures, adapted equipment and materials, and accessible settings. Some forms
of support include specialized classrooms; teacher’s aides; and speech, occupational, or
physical therapists. Special-education interventions are designed to help learners with special
needs achieve a higher level of personal self-sufficiency and success in school and their
community than may be available if they were only given access to a typical classroom
education. Certain laws and policies are designed to help children with learning disabilities
obtain an education equivalent to that of their non-disabled peers.
ADHD
Attention-deficit hyperactivity disorder (ADHD) is considered a type of learning disability. This
disability is characterized by difficulty with focusing, paying attention, and controlling impulses.
Children with ADHD may have trouble sitting in their seat and focusing on the material
presented, or their distractions may keep them from fully learning and understanding the
lessons. To be diagnosed according to the Diagnostic and Statistical Manual of Mental
Disorders, 5th edition (DSM-5), symptoms must be observed in multiple settings for six months
or more, and to a degree much greater than in others of the same age. They must also
cause problems in the person’s social, academic, or work life.
Dyslexia
Dyslexia is characterized by difficulty with learning to read or write fluently and with accurate
comprehension, despite normal intelligence. This includes difficulty with phonological
awareness, phonological decoding, processing speed, auditory short-term memory, and/or
language skills or verbal comprehension. Dyslexia is the most recognized of reading disorders;
however, not all reading disorders are linked to dyslexia.
Two laws exist to help ensure that children with learning disabilities receive the same level of
education as children without disabilities: IDEA and Section 504.
The Individuals with Disabilities Education Act (IDEA)
The Individuals with Disabilities Education Act (IDEA) provides federal funding to states to be
put toward the educational needs of children with disabilities. IDEA, which covers 13 categories
of disability, has two main components: Free and Appropriate Public Education (FAPE) and an
Individual Education Program (IEP). In addition to the disabilities listed above, IDEA covers
deaf-blindness, deafness, developmental delays, hearing impairments, emotional disturbance,
orthopedic or other health impairment, speech or language impairment, traumatic brain injury,
and visual impairment (including blindness).
The Free and Appropriate Public Education (FAPE) component of IDEA makes it mandatory for
schools to provide free and appropriate education to all students, regardless of intellectual level
and disability. FAPE is defined as an educational
program that is individualized for a specific child, designed to meet that child’s unique needs,
and from which the child receives educational benefit. An Individual Education Program (IEP) is
developed for each child who receives special education; each plan consists of individualized
goals for the child to work toward, and these plans are reevaluated annually.
IDEA also advocates for the Least Restrictive Environment (LRE), which means that—to the
greatest extent possible—a student who has a disability should have the opportunity to be
educated with non-disabled peers, have access to the general-education curriculum, and be
provided with supplementary aids and services necessary to achieve educational goals if placed
in a setting with non-disabled peers.
Section 504
Section 504 is a civil-rights law that protects people with disabilities from discrimination. All
students with disabilities are protected by Section 504, even if they are not provided for by
IDEA. Section 504 states that schools must ensure that a student with a disability is educated
among peers without disabilities. A re-evaluation is required prior to any significant changes in a
child’s placement, and a grievance procedure is in place for parents who may not agree with
their child’s educational placement.