
LEARNING
  Types of Learning
    Simple Non-Associative Learning
      Habituation
      Sensitization
    Associative Learning
      Operant Conditioning
      Classical Conditioning
    Observational Learning
      Required Conditions
      Effect on Behavior
  Approaches to Learning
    Rote Learning
    Informal Learning
    Formal Learning
    Non-Formal Learning and Combined Approaches
  Neuroscience
    How Neuroscience Impacts Education
  Study Skills and Learning Techniques for Students
    Best Types of Studying
    Preparing for Exams
      The PQRST Method
LEARNING
Learning is the acquisition and development of memories and behaviors,
including skills, knowledge, understanding, values, and wisdom. It is the
product of experience and the goal of education. Learning ranges from
simple forms of learning such as habituation and classical conditioning seen
in many animal species, to more complex activities such as play, seen only
in relatively intelligent animals.
For small children, learning is as natural as breathing. In fact, there is evidence for behavioral
learning prenatally: habituation has been observed as early as 32 weeks into gestation,
indicating that the central nervous system is sufficiently developed and primed for learning and
memory very early in development.
Learning has also been mathematically described as a differential equation of knowledge with
respect to time, or the change in knowledge in time due to a number of interacting factors
(constants and variables) such as initial knowledge, motivation, intelligence, knowledge
anchorage or resistance, etc. Thus, learning does not occur if there is no change in the amount of
knowledge even for a long time, and learning is negative if the amount of knowledge is
decreasing in time. Inspection of the solution to the differential equation also shows the sigmoid
and logarithmic decay learning curves, as well as the knowledge carrying capacity for a given
learner.
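One illustrative form of such an equation (a sketch, not taken from the text) is the logistic model, whose solution is the sigmoid curve mentioned above. The function and parameter names below are hypothetical stand-ins for the interacting factors the text lists:

```python
# Illustrative sketch (not from the text): one possible form of the
# "differential equation of knowledge" is the logistic model
#     dK/dt = r * K * (1 - K / C)
# where r bundles factors such as motivation and intelligence, and C is the
# learner's knowledge carrying capacity. Names and values are hypothetical.

def simulate_learning(k0=0.05, r=0.8, capacity=1.0, dt=0.1, steps=100):
    """Euler-integrate the knowledge curve; returns K at each time step."""
    k = k0
    trajectory = [k]
    for _ in range(steps):
        k += r * k * (1 - k / capacity) * dt  # change in knowledge this step
        trajectory.append(k)
    return trajectory

curve = simulate_learning()
# The curve rises in an S (sigmoid) shape and levels off near the carrying
# capacity; with k0 = 0 the derivative is zero and no learning occurs.
```

The sigmoid shape falls out of the solution: growth is slow at first, fastest at K = C/2, and slows again as K approaches the carrying capacity.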

Types of Learning
Simple Non-Associative Learning
Habituation
In psychology, habituation is an example of non-associative learning in which there is a
progressive diminution of behavioral response probability with repetition of a stimulus. It is
another form of integration. An animal first responds to a stimulus, but if it is neither rewarding
nor harmful the animal reduces subsequent responses. One example of this can be seen in small
song birds - if a stuffed owl (or similar predator) is put into the cage, the birds initially react to it
as though it were a real predator. Soon the birds react less, showing habituation. If another
stuffed owl is introduced (or the same one removed and re-introduced), the birds react to it as
though it were a predator, showing that it is only a very specific stimulus that is habituated to
(namely, one particular unmoving owl in one place). Habituation has been shown in essentially
every species of animal, including the large protozoan Stentor coeruleus.
Habituation need not be conscious - for example, a short time after we get dressed, the stimulus
created by our clothing disappears from our nervous systems and we become unaware of it. In this way,
habituation is used to ignore any continual stimulus, presumably because changes in stimulus
level are normally far more important than absolute levels of stimulation. This sort of habituation
can occur through neural adaptation in sensory nerves themselves and through negative feedback
from the brain to peripheral sensory organs.
The learning underlying habituation is a fundamental or basic process of biological systems and
does not require conscious motivation or awareness to occur. Indeed, without habituation we
would be unable to distinguish meaningful information from the background, unchanging
information.
Habituation is stimulus specific. It does not cause a general decline in responsiveness. It
functions like a filter, weighted by the recent history of a stimulus, that reduces the
responsiveness of the organism to that particular stimulus. Frequently one can see opponent
processes after the stimulus is removed.
Habituation is connected to associational reciprocal inhibition phenomena, opponent processes,
motion aftereffects, color constancy, size constancy, and negative afterimages.
Habituation is frequently used in testing psychological phenomena. Both infants and adults look
less and less at a particular stimulus the longer it is presented. The amount of time spent looking
at a new stimulus after habituation to the initial stimulus indicates the effective similarity of the
two stimuli. It is also used to discover the resolution of perceptual systems. For instance, by
habituating someone to one stimulus, and then observing responses to similar ones, one can
detect the smallest degree of difference that is detectable.
Dishabituation occurs when a second stimulus is introduced that briefly restores the habituated
response; this has been shown to involve a different mechanism from sensitization.
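The stimulus specificity described above can be sketched with a toy model (not from the text): response strength declines exponentially with repeated presentations of the same stimulus, while a novel stimulus evokes a full response. The function name and decay value are illustrative assumptions:

```python
# Illustrative sketch (not from the text): habituation as an exponential
# decline in response strength with repeated presentations of the same
# stimulus. A novel stimulus evokes a full response again, capturing the
# stimulus specificity described above. Names and the decay value are
# hypothetical.

def habituate(stimulus_sequence, decay=0.5):
    """Return the response strength evoked by each stimulus in the sequence."""
    exposures = {}
    responses = []
    for s in stimulus_sequence:
        n = exposures.get(s, 0)
        responses.append(decay ** n)  # 1.0 on first exposure, then declines
        exposures[s] = n + 1
    return responses

# Repeating owl "A" habituates the response; a new owl "B" restores it.
print(habituate(["A", "A", "A", "B"]))  # -> [1.0, 0.5, 0.25, 1.0]
```

Dishabituation could be modelled by resetting the exposure counts, though as noted above it involves a mechanism distinct from sensitization.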

Sensitization
Sensitization is an example of non-associative learning in which the progressive amplification of
a response follows repeated administrations of a stimulus (Bell et al., 1995). An everyday
example of this mechanism is the repeated tonic stimulation of peripheral nerves that will occur
if a person rubs his arm continuously. After a while, this stimulation will create a warm sensation
that will eventually turn painful. The pain is the result of the progressively amplified synaptic
response of the peripheral nerves warning the person that the stimulation is harmful.
Sensitization is thought to underlie both adaptive as well as maladaptive learning processes in the
organism.
Sensitization primarily refers to AMPA receptor-associated sensitization. However, there are
others as well, e.g. sensitization in drug addiction.
A common mechanism for the AMPA receptor-associated types of sensitization is the activation
of AMPA receptors on the post-synaptic membrane. Repeated stimulation of the pre-synaptic
neuron will cause glutamate to be released into the synaptic cleft. The increased release of
glutamate will activate the AMPA receptors. AMPA receptors will allow for additional Na+ to
enter the post-synaptic neuron, thus increasing its depolarization. This will cause the post-
synaptic neuron to fire continuously, thereby creating a prolonged response. It is possible that the
intensity of the stimulation is what distinguishes the different types of sensitization, in that
kindling may require more intense stimulation than LTP. Another possibility is an alteration in
the function of inhibitory GABAergic neurons. This, however, has not been established
(McEarchern & Shaw, 1999).

 For example, electrical or chemical stimulation of the rat hippocampus causes
strengthening of synaptic signals, a process known as long-term potentiation or LTP
(Collingridge, Isaac & Wang, 2005). LTP is thought to underlie memory and learning in
the human brain.
 A different type of sensitization is that of kindling, where repeated stimulation of
hippocampal or amygdaloid neurons in the limbic system eventually leads to seizures in
laboratory animals. Having been sensitized, very little stimulation is required to produce
the seizures. Thus, kindling has been suggested as a model for temporal lobe epilepsy in
humans, where stimulation of a repetitive type (flickering lights for instance) can cause
epileptic seizures (Morimoto, Fahnestock & Racine, 2004). Often, people suffering from
temporal lobe epilepsy report symptoms of negative affect such as anxiety and depression
that might result from limbic dysfunction (Teicher et al., 1993).
 A third type is central sensitization, where nociceptive neurons in the dorsal horns of the
spinal cord become sensitized by peripheral tissue damage or inflammation (Ji et al., 2003).
This type of sensitization has been suggested as a possible causal mechanism for chronic
pain conditions. These various types indicate that sensitization may underlie both
pathological and adaptive functions in the organism.
 Drug sensitization occurs in drug addiction, and causes an increase in the sensitivity to a
substance. It involves delta FosB and may be responsible for the high incidence of relapse
that occurs in treated drug addicts.

Associative Learning
Operant Conditioning
Operant conditioning is the use of consequences to modify the occurrence and form of behavior.
Operant conditioning is distinguished from classical conditioning (also called respondent
conditioning, or Pavlovian conditioning) in that operant conditioning deals with the modification
of "voluntary behavior" or operant behavior. Operant behavior "operates" on the environment
and is maintained by its consequences, while classical conditioning deals with the conditioning
of respondent behaviors which are elicited by antecedent conditions. Behaviors conditioned via a
classical conditioning procedure are not maintained by consequences.

Reinforcement, Punishment and Extinction


Reinforcement and punishment, the core tools of operant conditioning, are either positive
(delivered following a response), or negative (withdrawn following a response). This creates a
total of four basic consequences, with the addition of a fifth procedure known as extinction (i.e.
no change in consequences following a response).
It's important to note that organisms are not spoken of as being reinforced, punished, or
extinguished; it is the response that is reinforced, punished, or extinguished. Additionally,
reinforcement, punishment, and extinction are not terms whose use is restricted to the laboratory.
Naturally occurring consequences can also be said to reinforce, punish, or extinguish behavior
and are not always delivered by people.
 Reinforcement is a consequence that causes a behavior to occur with greater frequency.
 Punishment is a consequence that causes a behavior to occur with less frequency.
 Extinction is the lack of any consequence following a response. When a response is
inconsequential, producing neither favorable nor unfavorable consequences, it will occur
with less frequency.
Four contexts of operant conditioning: Here the terms "positive" and "negative" are not used in
their popular sense, but rather: "positive" refers to addition, and "negative" refers to subtraction.
What is added or subtracted may be either reinforcement or punishment. Hence positive
punishment is sometimes a confusing term, as it denotes the addition of punishment (such as
spanking or an electric shock), a context that may seem very negative in the lay sense. The four
procedures are:
1. Positive reinforcement occurs when a behavior (response) is followed by a favorable
stimulus (commonly seen as pleasant) that increases the frequency of that behavior. In the
Skinner box experiment, a stimulus such as food or sugar solution can be delivered when
the rat engages in a target behavior, such as pressing a lever.
2. Negative reinforcement occurs when a behavior (response) is followed by the removal of an
aversive stimulus (commonly seen as unpleasant) thereby increasing that behavior's
frequency. In the Skinner box experiment, negative reinforcement can be a loud noise
continuously sounding inside the rat's cage until it engages in the target behavior, such as
pressing a lever, upon which the loud noise is removed.
3. Positive punishment (also called "Punishment by contingent stimulation") occurs when a
behavior (response) is followed by an aversive stimulus, such as introducing a shock or
loud noise, resulting in a decrease in that behavior.
4. Negative punishment (also called "Punishment by contingent withdrawal") occurs when a
behavior (response) is followed by the removal of a favorable stimulus, such as taking away
a child's toy following an undesired behavior, resulting in a decrease in that behavior.
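The four procedures can be summarized as a two-by-two mapping, sketched below (the function name is hypothetical; the logic simply restates the definitions above):

```python
# A minimal sketch (not from the text) of the four operant procedures as a
# two-by-two mapping: whether a stimulus is added or removed, and whether it
# is favorable or aversive. The function name is hypothetical.

def operant_procedure(stimulus_added: bool, stimulus_favorable: bool) -> str:
    if stimulus_added:
        # Adding a favorable stimulus reinforces; adding an aversive one punishes.
        return "positive reinforcement" if stimulus_favorable else "positive punishment"
    # Removing a favorable stimulus punishes; removing an aversive one reinforces.
    return "negative punishment" if stimulus_favorable else "negative reinforcement"

print(operant_procedure(True, True))    # food delivered -> positive reinforcement
print(operant_procedure(False, False))  # loud noise removed -> negative reinforcement
```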

Schedules of Reinforcements
When an animal's surroundings are controlled, its behavior patterns after reinforcement become
predictable, even for very complex behavior patterns. A schedule of reinforcement is the
protocol for determining when responses or behaviors will be reinforced, ranging from
continuous reinforcement, in which every response is reinforced, and extinction, in which no
response is reinforced. Between these extremes is intermittent or partial reinforcement where
only some responses are reinforced.
Specific variations of intermittent reinforcement reliably induce specific patterns of response,
irrespective of the species being investigated (including humans in some conditions). The
orderliness and predictability of behaviour under schedules of reinforcement was evidence for B.
F. Skinner's claim that using operant conditioning he could obtain "control over behaviour", in a
way that rendered the theoretical disputes of contemporary comparative psychology obsolete.
The reliability of schedule control supported the idea that a radical behaviourist experimental
analysis of behavior could be the foundation for a psychology that did not refer to mental or
cognitive processes. The reliability of schedules also led to the development of Applied Behavior
Analysis as a means of controlling or altering behavior.
Many of the simpler possibilities, and some of the more complex ones, were investigated at great
length by Skinner using pigeons, but new schedules continue to be defined and investigated.
Simple schedules have a single rule to determine when a single type of reinforcer is delivered for
a specific response.

 Fixed ratio (FR) schedules deliver reinforcement after every nth response
o Example: FR2 = every second response is reinforced
o Lab example: FR5 = rat reinforced with food after each 5 bar-presses in a Skinner box.
o Real-world example: FR10 = Used car dealer gets a $1000 bonus for each 10 cars sold on the lot.
 Continuous reinforcement (CRF) schedules are a special case of the fixed ratio schedule
(FR1). In a continuous reinforcement schedule, reinforcement follows each and every response.
o Lab example: each time a rat presses a bar it gets a pellet of food
o Real world example: each time a dog defecates outside its owner gives it a treat
 Fixed interval (FI) schedules deliver reinforcement for the first response after a fixed length
of time since the last reinforcement, while premature responses are not reinforced.
o Example: FI1" = reinforcement provided for the first response after 1 second
o Lab example: FI15" = rat is reinforced for the first bar press after 15 seconds passes since
the last reinforcement
o Real world example: FI24 hour = calling a radio station is reinforced with a chance to
win a prize, but the person can only sign up once per day
 Variable ratio (VR) schedules deliver reinforcement after a random number of responses
(based upon a predetermined average)
o Example: VR3 = on average, every third response is reinforced
o Lab example: VR10 = on average, a rat is reinforced for each 10 bar presses
o Real world example: VR37 = a roulette player betting on specific numbers will win on
average one every 37 tries (on a U.S. roulette wheel, this would be VR38)
 Variable interval (VI) schedules deliver reinforcement for the first response after a random
average length of time passes since the last reinforcement
o Example: VI3" = reinforcement is provided for the first response after an average of 3
seconds since the last reinforcement.
o Lab example: VI10" = a rat is reinforced for the first bar press after an average of 10
seconds passes since the last reinforcement
o Real world example: a predator can expect to come across a prey on a variable interval
schedule
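As a rough sketch (assuming nothing beyond the definitions above; the function names are hypothetical), the ratio schedules can be expressed as simple rules deciding which responses are reinforced:

```python
# A rough sketch (not from the text) of ratio schedules as rules deciding
# which responses are reinforced. Function names are hypothetical.
import random

def fixed_ratio(n, responses):
    """FRn: reinforce every nth response; returns reinforced response numbers."""
    return [i for i in range(1, responses + 1) if i % n == 0]

def variable_ratio(n, responses, rng):
    """VRn: reinforce each response with probability 1/n, so that on average
    every nth response is reinforced."""
    return [i for i in range(1, responses + 1) if rng.random() < 1 / n]

print(fixed_ratio(2, 10))  # FR2 -> [2, 4, 6, 8, 10]
print(fixed_ratio(1, 4))   # CRF is the special case FR1 -> [1, 2, 3, 4]
```

Interval schedules would instead track the time elapsed since the last reinforcement; the same distinction between a fixed and an averaged parameter applies.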
Thorndike's Law of Effect
Operant conditioning, sometimes called instrumental conditioning or instrumental learning, was
first extensively studied by Edward L. Thorndike (1874-1949), who observed the behavior of
cats trying to escape from home-made puzzle boxes. When first constrained in the boxes, the cats
took a long time to escape. With experience, ineffective responses occurred less frequently and
successful responses occurred more frequently, enabling the cats to escape in less time over
successive trials. In his Law of Effect, Thorndike theorized that successful responses, those
producing satisfying consequences, were "stamped in" by the experience and thus occurred more
frequently. Unsuccessful responses, those producing annoying consequences, were stamped out
and subsequently occurred less frequently. In short, some consequences strengthened behavior
and some consequences weakened behavior. B.F. Skinner (1904-1990) formulated a more
detailed analysis of operant conditioning based on reinforcement, punishment, and extinction.
Following the ideas of Ernst Mach, Skinner rejected Thorndike's mediating structures required
by "satisfaction" and constructed a new conceptualization of behavior without any such
references. Moreover, Thorndike's work with puzzle boxes produced no meaningful data to be
studied other than a measure of escape times. So while experimenting with some homemade
feeding mechanisms Skinner invented the operant conditioning chamber which allowed him to
measure rate of response as a key dependent variable using a cumulative record of lever presses
or key pecks.

Classical Conditioning
Classical conditioning is a process of behavior modification by which a subject comes to respond
in a desired manner to a previously neutral stimulus (one that does not normally evoke the
response) after it has been repeatedly presented along with an unconditioned stimulus (one that
unfailingly evokes that response).
"Classical Conditioning" is defined as "a process of learning by temporal association in which
two events that repeatedly occur close together in time become fused in a person's mind and
produce the same response" (Comer, 2004)
Many of our behaviors today are shaped by the pairing of stimuli. Certain stimuli, such as a
specific day of the year, result in fairly intense emotions. This specific day causes the emotion
due to its pairing with perhaps the death of a loved one.
Let's review these concepts.

1. Unconditioned Stimulus: a thing that can already elicit a response.


2. Unconditioned Response: a thing that is already elicited by a stimulus.
3. Unconditioned Relationship: an existing stimulus-response connection.
4. Conditioning Stimulus: a new stimulus we deliver at the same time as the old
stimulus.
5. Conditioned Relationship: the new stimulus-response relationship we created by
associating a new stimulus with an old response.

There are two key parts. First, we start with an existing relationship, Unconditioned Stimulus -->
Unconditioned Response. Second, we pair a new thing (Conditioning Stimulus) with the existing
relationship, until the new thing has the power to elicit the old response.
Pavlov's experiment
Ivan Pavlov's experimental device involved a holding
harness for a dog, along with a tube that collected saliva
(Comer, 2004). The amount of saliva was then recorded
on a revolving cylinder called a kymograph. The entire
device could be viewed by the experimenter through
one-way glass.
In Pavlov's experiments, he used meat to make dogs
salivate. This meat is called the unconditioned stimulus.
The salivation caused by the presence of the meat is
called the unconditioned response. In one of his experiments, Pavlov paired the presence of the
meat with the sound of a metronome (Comer, 2004). The sound of the metronome is called the
conditioned stimulus. After many such pairings, the sound of the metronome alone caused
salivation, which is then called the conditioned response.
Following his initial discovery, Pavlov spent more than three decades studying the processes
underlying classical conditioning. He and his associates identified four main processes:
acquisition, extinction, generalization, and discrimination.
Acquisition:
The acquisition phase is the initial learning of the conditioned response - for example, the dog
learning to salivate at the sound of a bell.
Extinction:
The term extinction is used to describe the elimination of the conditioned response by repeatedly
presenting the conditioned stimulus without the unconditioned stimulus - for example, repeatedly
ringing the bell without presenting food afterward.
Generalization:
After an animal has learned a conditioned response to one stimulus, it may also respond to
similar stimuli without further training - for example, using a different sounding bell.
Discrimination:
Discrimination is the opposite of generalization: an individual learns to produce a conditioned
response to one stimulus but not to another stimulus that is similar - for example, a buzzer
won't work like the bell.
Spontaneous recovery
This is the re-occurrence of a classically conditioned response after extinction has occurred. The
time difference between the conditioned stimulus and the unconditioned stimulus is referred to as
latency.
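Acquisition and extinction are often formalized with the Rescorla-Wagner model, a standard model from the later conditioning literature (not described in this text), in which the associative strength V between CS and UCS moves a fixed fraction of the way toward its target on every trial. Parameter values below are hypothetical:

```python
# Illustrative sketch: acquisition and extinction in the Rescorla-Wagner
# model, a standard formalization from the later literature (not described
# in this text). Associative strength V moves a fixed fraction of the way
# toward its target on every trial: dV = alpha_beta * (lambda - V).
# Parameter values here are hypothetical.

def rescorla_wagner(trials_with_ucs, v0=0.0, alpha_beta=0.3, lam=1.0):
    """Return the associative strength after each trial (True = UCS present)."""
    v = v0
    history = []
    for ucs_present in trials_with_ucs:
        target = lam if ucs_present else 0.0  # CS-alone trials drive V to 0
        v += alpha_beta * (target - v)
        history.append(v)
    return history

# Acquisition: 10 CS+UCS pairings, then extinction: 10 CS-alone presentations.
history = rescorla_wagner([True] * 10 + [False] * 10)
# V climbs toward lambda during acquisition, then decays during extinction.
```

The decaying tail of the curve mirrors Pavlov's extinction procedure: repeatedly ringing the bell without presenting food.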

Theories of classical conditioning


There are two competing theories of how classical conditioning works. The first, stimulus-
response theory, suggests that an association to the unconditioned stimulus is made with the
conditioned stimulus within the brain, but without involving conscious thought. The second
theory, stimulus-stimulus theory, involves cognitive activity, in which the conditioned stimulus is
associated with the concept of the unconditioned stimulus, a subtle but important distinction.
Stimulus-response theory:
Stimulus-response theory, referred to as S-R theory, is a theoretical model of behavioral
psychology that suggests humans and other animals can learn to associate a new stimulus - the
conditioned stimulus (CS) - with a pre-existing stimulus - the unconditioned stimulus (UCS) -
and can think, feel, or respond to the CS as if it were actually the UCS.
Stimulus-stimulus theory:
Stimulus-stimulus theory, referred to as S-S theory, is a theoretical model of classical
conditioning that suggests a cognitive component is required to understand classical conditioning
and that stimulus-response theory is an inadequate model. S-R theory suggests that an animal
can learn to associate a conditioned stimulus (CS), such as a bell, with the impending arrival of
food, termed the unconditioned stimulus, resulting in an observable behavior such as salivation.
Stimulus-stimulus theory suggests that instead the animal salivates to the bell because the bell is
associated with the concept of food, a very fine but important distinction.
To test this theory, psychologist Robert Rescorla undertook the following experiment. Rats
learned to associate a loud noise as the unconditioned stimulus, and a light as the conditioned
stimulus. The response of the rats was to freeze and cease movement. What would happen then if
the rats were habituated to the UCS? S-R theory would suggest that the rats would continue to
respond to the UCS, but if S-S theory is correct, they would be habituated to the concept of a
loud sound (danger), and so would not freeze to the CS. The experimental results suggest that S-
S was correct, as the rats no longer froze when exposed to the signal light.

Observational Learning
Observational learning is learning that occurs as a function of observing, retaining and, in the
case of imitation learning, replicating novel behavior executed by others. It is most associated
with the work of psychologist Albert Bandura, who implemented some of the seminal studies in
the area and initiated social learning theory. It involves the process of learning to copy or model
the action of another through observing another doing it. Further research has been used to show
a connection between observational learning and both classical and operant conditioning.
Although observational learning can take place at any stage in life, it is thought to be particularly
important during childhood, as authority figures gain influence. For observational learning, the
best role models are often those a year or two older than the observer. Because of this, social learning theory
has influenced debates on the effect of television violence and parental role models. Bandura's
Bobo doll experiment is widely cited in psychology as a demonstration of observational learning
and demonstrated that children are more likely to engage in violent play with a life size
rebounding doll after watching an adult do the same. However, it may be that children will only
reproduce a model's behavior if it has been reinforced. This may be the problem with television
because it was found, by Otto Larson and his coworkers (1968), that 56% of the time children's
television characters achieve their goals through violent acts.
Observational learning allows for learning
without any change in behavior and has
therefore been used as an argument against strict
behaviorism which argued that behavior change
must occur for new behaviors to be acquired.
Bandura noted that "social imitation may hasten
or short-cut the acquisition of new behaviors
without the necessity of reinforcing successive
approximations as suggested by Skinner (1953)."
It is possible to treat observational learning as
merely a variation of operant training.
According to this view, first proposed by Neal
Miller and John Dollard, the changes in an
observer's behavior are due to the consequences of the observer's behavior, not those of the
model.
As an interesting aside, there are a number of variables which have confounded the study of
observational learning in animals. One of these is the Venus Effect in which animals are sexually
stimulated by the model and this interferes with the ability to observe behavior thereby limiting
the ability to make associations based on the behavior of the model.

Required conditions
Bandura called the process of social learning modeling and gave four conditions required for a
person to successfully model the behaviour of someone else:

 Attention to the model: A person must first pay attention to a person engaging in a certain
behavior (the model)
 Retention of details: Once attending to the observed behavior, the observer must be able
to effectively remember what the model has done.
 Motor reproduction: The observer must be able to replicate the behavior being observed.
For example, juggling cannot be effectively learned by observing a model juggler if the
observer does not already have the ability to perform the component actions (throwing and
catching a ball).
 Motivation and Opportunity: The observer must be motivated to carry out the action they
have observed and remembered, and must have the opportunity to do so. For example, a
suitably skilled person must want to replicate the behavior of a model juggler, and needs to
have an appropriate number of items to juggle at hand.

Effect on behavior
Social learning may affect behavior in the following ways:

 Teaches new behaviors


 Increases or decreases the frequency with which previously learned behaviors are carried
out
 Can encourage previously forbidden behaviors
 Can increase or decrease similar behaviors. For example, observing a model excelling in
piano playing may encourage an observer to excel in playing the saxophone.

Approaches to learning
Rote learning
Rote learning is a technique which avoids understanding the inner complexities and inferences of
the subject that is being learned and instead focuses on memorizing the material so that it can be
recalled by the learner exactly the way it was read or heard. The major practice involved in rote
learning techniques is learning by repetition, based on the idea that one will be able to quickly
recall the meaning of the material the more it is repeated. Rote learning is used in diverse areas,
from mathematics to music to religion. Although it has been criticized by some schools of
thought, rote learning is a necessity in many situations.

Informal learning
Informal learning occurs through the experience of day-to-day situations (for example, one
would learn to look ahead while walking because of the danger inherent in not paying attention
to where one is going). It is learning from life: during a meal at the table with parents, at play,
while exploring.

Formal learning
Formal learning is learning that takes place within a teacher-student relationship, such as in a
school system.

Non-formal learning and combined approaches


The educational system may use a combination of formal, informal, and non-formal learning
methods. The UN and EU recognize these different forms of learning. In some schools, students
can earn points that count in the formal-learning system by completing work in informal-
learning circuits. They may be given time to assist at international youth workshops and training
courses, on the condition that they prepare, contribute, share, and can prove that this offered
valuable new insights, helped them acquire new skills, and gave them experience in organising,
teaching, and so on.

In order to learn a skill, such as solving a Rubik's Cube quickly, several factors come into play at
once:

 Directions help one learn the patterns of solving a Rubik's Cube.
 Practicing the moves repeatedly and for extended time helps with "muscle memory" and
therefore speed.
 Thinking critically about moves helps find shortcuts, which in turn helps to speed up
future attempts.
 The Rubik's Cube's six colors help anchor the solution in one's mind.
 Occasionally revisiting the cube helps prevent negative learning or loss of skill.

Neuroscience
Neuroscience is the study of the human nervous system, the brain, and the biological basis of
consciousness, perception, memory, and learning.

How Neuroscience Impacts Education


When educators take neuroscience into account, they organize a curriculum around real
experiences and integrated, "whole" ideas. They also focus on instruction that promotes complex
thinking and the "growth" of the brain. Neuroscience proponents advocate continued learning
and intellectual development throughout adulthood.

Study Skills and Learning


Techniques for Students
Study skills are strategies and methods of purposeful learning, usually
centered around reading and writing. Effective study skills are considered essential for students
to acquire good grades in school, and are useful in general to improve learning throughout one's
life, in support of career and other interests.

Best types of studying


A "Siam": Doing a "Siam" became popular in the early 2000s at Eastend University. Young,
bright, but lazy students would spend more time socialising than studying. When deadlines
arrived for handing in coursework or sitting examinations, "Siams" were used to increase the
chance of success. A "Siam" consists of studying intensively, fueled by energy drinks and
caffeine pills, in the nights just before the deadline.

Some key study skills include:

 Removing distractions and improving concentration
 Maintaining a balance between homework and other activities
 Reducing stress, such as that caused by test anxiety
 Strategies for writing essays
 Speed reading
 Note-taking
 Subject-specific study strategies
 Preparing for exams

Preparing for exams


Preparing for an exam requires a good understanding of what is expected of you, a rigid
work-life balance that maximizes your energy and strengths, a certain amount of self-discipline,
and a set of study skills that are effective, varied, and interesting. It is a basic premise that the
more you use information (read it, speak about it, draw it, write it, use it, etc.), the more you
remember and the longer you will remember it.

The PQRST Method


One method used by many students who like to add an overt structure to their learning, to keep
themselves on track, is the PQRST method. It helps the student focus on studying and prioritizing
the information in a way that relates directly to how they will be asked to use it in an exam. The
method can also be modified to suit any particular form of learning in most subjects. It also
allows more accurate timing of work: instead of having to decide how much time to allot to a
whole topic, you can decide how long it might take to preview the material, and then each step
after that.

 Preview: Look at the topic you have to learn, glancing over the major headings or the
points in the syllabus.
 Question: Formulate questions that you would like to be able to answer once you have
finished the topic. It is important to match what you would like to know as closely as
possible to your syllabus or course direction. This allows a certain flexibility to take in
other topics that may aid your learning of the main point, or that simply interest you.
Make sure that your questions are neither more specific nor more open-ended than they
might be in an exam.
 Read: Read through the reference material that relates to the topic you want to learn for
your exam, being mindful to pick out the information that best relates to the questions you
wish to answer.
 Summary: This is the most flexible part of the method, and it allows individual students to
bring their own ways of summarizing information into the process. These can include
written notes, spider diagrams, flow diagrams, labeled diagrams, mnemonics, a voice
recording of yourself summarizing the topic, or any method that feels most appropriate
for what has to be learned. You can combine several methods, as long as this doesn't
draw the process out so much that you lose sight of the goal: using the information in the
most appropriate way.
 Test: Use this step to assess whether you have focused on the important information and
stayed on topic. Answer the questions that you set for yourself in the Question step as
fully as you can, since working with the information in this way helps you remember
more of it. This step also reminds you to continually manipulate the information so that it
is focused on whatever form of assessment it is needed for. It is sometimes easy to lose
sight of the point of learning and see it as a task to be completed mundanely. Try to avoid
adding questions that you didn't formulate in the Question step.
