PERCEPTION
The process by which we recognise, interpret, or give meaning to the information provided by the
sense organs is called perception.
In interpreting stimuli or events, individuals often construct them in their own ways.
Thus, perception is not merely an interpretation of objects or events of the external or internal
world as they exist; it is also a construction of those objects and events from one's own
point of view.
The characteristic feature of bottom-up theories of perception is that the content and
quality of the sensory input play the determining role in shaping the final percept.
Sensory input, on this view, is the cornerstone of cognition, and by its nature it
determines all further processing.
This is therefore called data-driven processing, because perception originates with the
stimulation of the sensory receptors.
Psychologist James Gibson believed that our cognitive apparatus was created and formed by a long
evolutionary influence of external environment which is apparent in its structure and abilities.
Gibson opposed the top-down model and argued that perception is direct.
He stated that sensation is perception: there is no need for extra interpretation, because there is
enough information in our environment to make sense of the world in a direct way.
His theory is sometimes known as the "ecological theory" because of the claim that perception can
be explained solely in terms of the environment.
An example of bottom-up processing involves presenting a flower at the centre of a person's
visual field. The sight of the flower and all the information about the stimulus are carried from
the retina to the visual cortex in the brain. The signal travels in one direction.
Gibson claimed that perception is, in an important sense, direct. He worked during World War II
on problems of pilot selection and testing, and this work shaped his view.
As the theory's leading proponent, Gibson (1966, 1979) and his followers at Cornell University stated:
"Direct perception assumes that the richness of the optic array just matches the richness of the
world".
In his early work on aviation he discovered what he called 'optic flow patterns'. When pilots
approach a landing strip the point towards which the pilot is moving appears motionless, with
the rest of the visual environment apparently moving away from that point.
According to Gibson such optic flow patterns can provide pilots with unambiguous information
about their direction, speed and altitude.
Three important components of Gibson's Theory are 1. Optic Flow Patterns; 2. Invariant
Features; and 3. Affordances.
The optic array is the pattern of light reaching the eye, which is thought to contain all the visual
information available at the retina.
Changes in the flow of the optic array contain important information about what type of
movement is taking place. For example:
Any flow in the optic array means that the perceiver is moving; if there is no flow, the perceiver is
static.
The flow of the optic array will either be coming from a particular point or moving towards one.
The centre of that movement indicates the direction in which the perceiver is moving.
If a flow seems to be coming out from a particular point, this means the perceiver is moving
towards that point; but if the flow seems to be moving towards that point, then the perceiver is
moving away.
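These flow rules are simple enough to sketch as a toy classifier. The sketch below assumes the optic flow has been sampled as 2D vectors at known image positions; the function name and representation are invented for illustration:

```python
def classify_motion(positions, flows, focus):
    """Toy optic-flow rules: no flow -> static; flow pointing away from
    the focus point -> moving toward it; flow pointing toward the
    focus -> moving away from it."""
    if all(fx == 0 and fy == 0 for fx, fy in flows):
        return "static"
    score = 0
    for (x, y), (fx, fy) in zip(positions, flows):
        # dot product of the radial direction (out from the focus)
        # with the local flow vector: positive means outward flow
        score += (x - focus[0]) * fx + (y - focus[1]) * fy
    return "approaching focus" if score > 0 else "receding from focus"
```

With four sample points around a focus at the origin, outward-pointing vectors classify as approaching, inward-pointing vectors as receding, and an all-zero field as static, matching the three rules above.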
We rarely see a static view of an object or scene. When we move our head and eyes or walk
around our environment, things move in and out of our viewing fields.
Textures expand as you approach an object and contract as you move away. There is a pattern
or structure available in such texture gradients which provides a source of information about the
environment.
This flow of texture is INVARIANT, i.e. it always occurs in the same way as we move around our
environment and, according to Gibson, is an important direct cue to depth. Two good examples
of invariants are texture and linear perspective.
3. Affordances
Affordances are, in short, cues in the environment that aid perception. Important cues in the
environment include:
OPTICAL ARRAY The patterns of light that reach the eye from the environment.
RELATIVE BRIGHTNESS Objects with brighter, clearer images are perceived as closer.
TEXTURE GRADIENT The grain of texture gets smaller as the object recedes. Gives the impression
of surfaces receding into the distance.
RELATIVE SIZE When an object moves further away from the eye the image gets smaller. Objects
with smaller images are seen as more distant.
SUPERIMPOSITION If the image of one object blocks the image of another, the first object is seen
as closer.
HEIGHT IN THE VISUAL FIELD Objects further away are generally higher in the visual field.
Visual Illusions
Gibson's emphasis on DIRECT perception provides an explanation for the (generally) fast and
accurate perception of the environment.
However, his theory cannot explain why perceptions are sometimes inaccurate, e.g. in illusions.
He claimed the illusions used in experimental work constituted extremely artificial perceptual
situations unlikely to be encountered in the real world; however, this dismissal cannot
realistically be applied to all illusions.
For example, Gibson's theory cannot account for perceptual errors like the general tendency for
people to overestimate vertical extents relative to horizontal ones.
Neither can Gibson's theory explain naturally occurring illusions. For example if you stare for
some time at a waterfall and then transfer your gaze to a stationary object, the object appears
to move in the opposite direction.
TEMPLATE MATCHING
Template matching theory describes the most basic approach to human pattern recognition. It
assumes that every perceived object is stored as a "template" in long-term memory.
Incoming information is compared to these templates to find an exact match. In other words, all
sensory input is compared to multiple stored representations of an object to form one single
conceptual understanding.
The theory defines perception as a fundamentally recognition-based process. It assumes that
everything we see, we understand only through past exposure, which then informs our future
perception of the external world.
For example, the letter A is recognized as an A whatever typeface it appears in, but never as a B.
This viewpoint is limited, however, in explaining how new experiences can be understood
without being compared to an internal memory template.
As the simplest theoretical hypothesis in pattern recognition, template theory holds that people
store in long-term memory miniature copies of external patterns encountered in the past.
These copies, called templates, correspond one-to-one with external stimulus patterns.
When a stimulus acts on the sense organs, the stimulus information is first encoded, then
compared and matched against the patterns stored in the brain, and finally identified as the
stored pattern it matches best.
This produces the pattern-recognition effect; otherwise the stimulus cannot be distinguished
and recognized. Because every template is linked to a certain meaning and other
information, the recognized pattern can then be interpreted and processed further.
Examples of template matching can also be found in daily life: by comparing input against
stored templates, machines can rapidly recognize the seals on paychecks.
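The matching step described above can be sketched as a toy program. The templates here are tiny hand-made bitmaps, purely illustrative, and the exact-match requirement is the point of the sketch:

```python
# Toy template matching: every stored pattern is an exact bitmap
# "template"; an input is recognized only if it matches one exactly.
TEMPLATES = {
    "T": ("###",
          ".#.",
          ".#."),
    "L": ("#..",
          "#..",
          "###"),
}

def recognize(bitmap):
    """Return the name of the template that exactly matches, else None."""
    for name, template in TEMPLATES.items():
        if tuple(bitmap) == template:
            return name
    return None  # an unmatched stimulus cannot be recognized at all
```

A single deviant pixel already causes recognition to fail, which illustrates the rigidity the restrictions below complain about.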
Although it explains some human pattern recognition, template theory has some obvious
restrictions.
According to the theory, people must already have stored an appropriate template before they
can recognize a pattern.
Even when a pre-processing stage is added, the required templates are still numerous, which
not only places a heavy burden on memory but also makes pattern recognition less flexible and
more rigid.
Template theory does not fully explain the process of human pattern recognition, but templates
and template matching cannot be dismissed entirely.
As one aspect, or one link, of the human pattern-recognition process, the template still plays a
role, and mechanisms similar to template matching also appear in other models of pattern
recognition.
It is apparent that the template-matching model won’t work, because a huge number of
different templates would be needed just to recognize one letter.
When we multiply this by how many objects there are in the environment, the number becomes
astronomical.
First, for such a model to provide a complete explanation, we would need to have stored an
impossibly large number of templates.
Second, as technology develops and our experiences change, we become capable of recognizing
new objects such as DVDs, laptop computers, and smartphones. Template-matching models
thus have to explain how and when templates are created and how we keep track of an ever-
growing number of templates.
A third problem is that people recognize many patterns as more or less the same thing, even
when the stimulus patterns differ greatly.
Template matching works only with relatively clean stimuli when we know ahead of time what
templates may be relevant.
It does not adequately explain how we perceive as effectively as we typically do the “noisy”
patterns and objects—blurred or faint letters, partially blocked objects, sounds against a
background of other sounds—that we encounter every day.
Another kind of perceptual model, one that attempts to correct some of the shortcomings of
both template-matching and featural analysis models, is known as prototype matching.
Such models explain perception in terms of matching an input to a stored representation of
information, as do template models.
In this case, however, the stored representation, instead of being a whole pattern that must be
matched exactly or closely (as in template-matching models), is a prototype, an idealized
representation of some class of objects or events—the letter R, a cup, a VCR, a collie, and so
forth.
Prototype theory, also called prototype-matching theory, has the distinguishing feature that
memory stores not templates matching external patterns one-to-one, but prototypes.
A prototype is not an internal copy of a particular pattern; rather, it is an internal representation
of the attributes of a class of objects: the abstracted characteristics shared by all individuals of a
certain type or category.
The theory thus captures the basic features of a type of object. For instance, people know many
kinds of airplane, but a long cylinder with two wings can serve as the prototype of an airplane.
According to prototype theory, then, in pattern recognition an external stimulus need only be
compared with the prototype, and the sense of an object arises from the match between the
input information and the prototype.
Once the incoming stimulus information best matches a certain prototype in the brain, the
information can be placed in that prototype's category and recognized.
To a certain extent, template matching is subsumed within prototype theory, which is more
flexible and elastic. However, this model also has drawbacks: it involves only top-down
processing and no bottom-up processing, which at times is the more important contributor to
prototype matching in human perception.
Prototype-matching models describe perceptual processes as follows. When a sensory device
registers a new stimulus, the device compares it with previously stored prototypes. An exact
match is not required; in fact, only an approximate match is expected. Prototype-matching
models thus allow for discrepancies between the input and the prototype, giving prototype
models a lot more flexibility than template models. An object is “perceived” when a match is
found.
Prototype models differ from template and featural analysis models in that they do not require
that an object contain any one specific feature or set of features to be recognized. Instead, the
more features a particular object shares with a prototype, the higher the probability of a match.
Moreover, prototype models take into account not only an object’s features or parts but also the
relationships among them.
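The approximate-match process described above can be sketched as a toy program. Here objects and prototypes are represented as feature sets; the feature names, prototypes, and threshold are all invented for illustration:

```python
# Toy prototype matching: each category is an idealized feature set;
# an input need only share enough features with a prototype, not
# match it exactly.
PROTOTYPES = {
    "cup": {"hollow", "cylinder", "handle", "open-top"},
    "airplane": {"long-cylinder", "wings", "tail"},
}

def recognize(features, threshold=0.5):
    """Pick the prototype sharing the largest fraction of features,
    provided the overlap clears a minimum threshold."""
    best_name, best_score = None, 0.0
    for name, proto in PROTOTYPES.items():
        score = len(features & proto) / len(proto)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

Unlike a template model, a handleless beaker ({"hollow", "cylinder", "open-top"}) is still recognized as a cup, because it shares three of the prototype's four features.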
FEATURAL ANALYSIS
Some psychologists believe that the analysis of a whole into its parts underlies the basic
processes used in perception.
Instead of processing stimuli as whole units, we might instead break them down into their
components, using our recognition of those parts to infer what the whole represents.
The parts searched for and recognized are called features. Recognition of a whole object, in this
model, thus depends on recognition of its features.
Feature theory is another theory explaining pattern and shape perception.
According to this theory, people try to match the features of a pattern with features stored in
memory, rather than matching the entire pattern against a template or prototype.
This model is currently the most attractive one, and feature-analysis models have been applied
widely in computer pattern recognition. However, it is purely a bottom-up processing model,
lacking top-down processing, and therefore still has some drawbacks.
Feature detection theory proposes that the nervous system sorts and filters incoming stimuli to
allow the human (or animal) to make sense of the information.
In the organism, this system is made up of feature detectors, which are individual neurons, or
groups of neurons, that encode specific perceptual features.
The theory proposes an increasing complexity in the relationship between detectors and the
perceptual feature. The most basic feature detectors respond to simple properties of the stimuli.
Further along the perceptual pathway, higher organized feature detectors are able to respond to
more complex and specific stimuli properties.
When features repeat or occur in a meaningful sequence, we are able to identify these patterns
because of our feature detection system.
One source of evidence for feature matching comes from Hubel and Wiesel's research, which
found that the visual cortex of cats contains neurons that only respond to specific features (e.g.
one type of neuron might fire when a vertical line is presented, another type of neuron might
fire if a horizontal line moving in a particular direction is shown).
PANDEMONIUM THEORY: The theory was developed by the artificial intelligence pioneer Oliver
Selfridge in 1959. It describes the process of object recognition as a hierarchical system of
detection and association carried out by a metaphorical set of "demons" sending signals to each
other. It remains a foundational model of visual pattern recognition in cognitive science.
Pandemonium (Selfridge, 1959) is a data-driven, bottom-up recognition model based on feature
analysis: objects are recognised from an analysis of their component features.
Pandemonium is composed of four types of recognition units (demons):

Stage 1 - Image demon: Records the image that is received on the retina.

Stage 2 - Feature demons: There are many feature demons, each representing a specific feature.
For example, there is a feature demon for short straight lines, another for curved lines, and so
forth. Each feature demon's job is to "yell" if it detects the feature it corresponds to. Note that
feature demons are not meant to represent any specific neurons, but rather groups of neurons
with similar functions. For example, the vertical-line feature demon stands for the neurons that
respond to vertical lines in the retinal image.

Stage 3 - Cognitive demons: Watch the "yelling" of the feature demons. Each cognitive demon is
responsible for a specific pattern (e.g., a letter of the alphabet). The "yelling" of the cognitive
demons is based on how much of their pattern was detected by the feature demons: the more
features a cognitive demon finds that correspond to its pattern, the louder it "yells". For
example, if the curved, long-straight and short-angled-line feature demons are yelling really
loudly, the letter-R cognitive demon might get really excited, and the letter-P cognitive demon
might be somewhat excited as well; but the letter-Z cognitive demon is very likely to be quiet.

Stage 4 - Decision demon: Represents the final stage of processing. It listens to the "yelling"
produced by the cognitive demons and selects the loudest one; the demon that gets selected
becomes our conscious perception. Continuing the previous example, the R cognitive demon
would be the loudest, seconded by P; therefore we would perceive R, but if we made a mistake
because of poor display conditions (e.g., letters flashed quickly or partly occluded), we would
most likely perceive P instead.

Note that the "pandemonium" simply refers to the cumulative "yelling" produced by the system.
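The four stages can be sketched as a toy pipeline. The feature inventory and letter definitions below are invented for illustration, not Selfridge's actual demons:

```python
# Toy pandemonium: feature demons report which features are present;
# cognitive demons "yell" in proportion to how many of their pattern's
# features were detected; the decision demon picks the loudest.
LETTER_FEATURES = {
    "R": {"vertical", "curve", "oblique"},
    "P": {"vertical", "curve"},
    "Z": {"horizontal", "oblique"},
}

def decision_demon(detected):
    def loudness(letter):
        features = LETTER_FEATURES[letter]
        hits = len(detected & features)
        # more matching features -> louder yelling; ties go to the
        # cognitive demon whose pattern is more completely covered
        return (hits, hits / len(features))
    return max(LETTER_FEATURES, key=loudness)
```

With all three of R's strokes detected the decision demon picks R; occlude the oblique stroke and the same machinery misperceives P, mirroring the mistake described in stage 4.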
Criticism
A major criticism of the pandemonium architecture is that it adopts completely
bottom-up processing: recognition is driven entirely by the physical
characteristics of the target stimulus. This means it cannot account
for any top-down processing effects, such as context effects (e.g., pareidolia),
where contextual cues facilitate processing (e.g., the word superiority effect: it is
relatively easier to identify a letter when it is part of a word than in isolation).
However, this is not a fatal criticism of the overall architecture, because it is
relatively easy to add a group of contextual demons to work alongside the
cognitive demons and account for these context effects.
Recognition-by-Components Theory
First proposed by Irving Biederman (1987), this theory states that humans
recognize objects by breaking them down into basic 3D geometric shapes
called geons (an abbreviation of "geometric ions"): components such as
cylinders, cubes, and cones that are combined in object recognition.
An example is how we break down a common item like a coffee cup: we
recognize the hollow cylinder that holds the liquid and a curved handle off the
side that allows us to hold it. Even though not every coffee cup is exactly the
same, these basic components help us to recognize the consistency across
examples (or pattern).
This also works for more complex objects, which in turn are made up of a
larger number of geons. Perceived geons are then compared with objects in
our stored memory to identify what it is we are looking at.
RBC suggests that there are fewer than 36 unique geons that when combined
can form a virtually unlimited number of objects.
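The geon-matching idea can be sketched as a lookup over stored geon structures. The inventory below is a made-up fragment for illustration, not Biederman's actual geon set, and it ignores the relations between geons that full RBC also uses:

```python
from collections import Counter

# Toy recognition-by-components: objects in memory are bags of geons;
# an input's perceived geons are compared against each stored object.
OBJECT_GEONS = {
    "coffee cup": Counter({"hollow cylinder": 1, "curved handle": 1}),
    "pail": Counter({"hollow cylinder": 1, "arched wire": 1}),
    "flashlight": Counter({"cylinder": 2}),
}

def recognize(perceived):
    """Return the stored object whose geon bag overlaps most with the
    perceived geons."""
    counts = Counter(perceived)
    def overlap(name):
        # multiset intersection: shared geons, counted with multiplicity
        return sum((counts & OBJECT_GEONS[name]).values())
    return max(OBJECT_GEONS, key=overlap)
```

The coffee-cup example from the text falls out directly: a hollow cylinder plus a curved handle overlaps the stored "coffee cup" entry better than any other object.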
Viewpoint invariance
A key strength of RBC is viewpoint invariance: because geons are defined by edge
properties that remain detectable from almost any viewing angle, an object can be
recognized even when it is seen from an unfamiliar viewpoint.
TOP-DOWN
Navon approach
Forty years ago, David Navon tried to tackle a central problem concerning the
course of perceptual processing: “Do we perceive a visual scene feature-by-
feature? Or is the process instantaneous and simultaneous as some Gestalt
psychologists believed? Or is it somewhere in between?”.
To examine this, Navon developed a now classical paradigm, which involved
the presentation of compound stimuli; a large letter (global level) composed
of smaller letters (local level) in which the global and the local letters could be
the same (consistent) or different (inconsistent).
Images and other stimuli contain both local features (details, parts) and
global features (the whole).
Precedence refers to the level of processing (global or local) to which
attention is first directed. Global precedence occurs when an individual
more readily identifies the global feature when presented with a stimulus
containing both global and local features.
The global aspect of an object embodies the larger, overall image as a whole,
whereas the local aspect consists of the individual features that make up this
larger whole.
Global processing is the act of processing a visual stimulus holistically.
Although global precedence is generally more prevalent than local
precedence, local precedence also occurs under certain circumstances and
for certain individuals.
Global precedence was first studied using the Navon figure, where many
small letters are arranged to form a larger letter that either does or does not
match.
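A compound stimulus of this kind is easy to generate: the sketch below tiles a global letter's bitmap with copies of a local letter. The 5x5 font covers only the letters used and is purely illustrative:

```python
# Render a Navon figure: a big (global) letter drawn out of copies of
# a small (local) letter.
FONT = {
    "H": ["#...#", "#...#", "#####", "#...#", "#...#"],
    "S": [".####", "#....", ".###.", "....#", "####."],
}

def navon(global_letter, local_letter):
    rows = []
    for row in FONT[global_letter]:
        # each filled cell of the global bitmap becomes one local letter
        rows.append("".join(local_letter if c == "#" else " " for c in row))
    return "\n".join(rows)

print(navon("H", "S"))  # a large H built from small S's (inconsistent stimulus)
```

Passing the same letter twice ("H" of H's) gives a consistent stimulus; different letters give the inconsistent condition used to measure interference.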
Variations of the original Navon figure include both shapes and objects.
Individuals presented with a Navon figure will be given one of two tasks. In
one type of task, participants are told before the presentation of the stimulus
whether to focus on a global or local level, and their accuracy and reaction
times are recorded.
In another type of task, participants are first presented with a target stimulus,
and later presented with two different visuals.
One of the visuals matches the target stimulus on the global level, while the
other visual matches the target stimulus on the local level. In this condition,
experimenters note which of the two visuals, the global or local, is chosen to
match the target stimulus.
He found two effects which he argued supported "the notion that global
processing is a necessary stage of perception prior to more fine-grained
analysis": (i) responses to the global level were faster than responses to the
local level, and (ii) when the levels were inconsistent, information at the
global level interfered with (slowed down) responses to the local level, but not
the other way around.
Additionally, the global interference effect, in which the global aspect is
processed automatically even when attention is directed locally, slows
reaction times.
Navon's study of global precedence, and his stimuli or variations of them, are
still used in nearly all global precedence experiments.
Navon Effect
A Navon figure is made of a larger recognisable shape, such as a letter,
composed of copies of a smaller different shape. Navon figures are used in
tests of visual neglect.
Reading Navon figures has been found to affect a range of tasks.
It has been shown that just 5 minutes reading out the small letters of Navon
figures has a detrimental effect on face recognition.
The size of the Navon effect has been found to be influenced by the
properties of the image.
The effect is short lived (lasting less than a couple of minutes).
The Navon effect has also been found in other tasks, such as golf putting,
where reading the small Navon letters leads to poorer putting performance.
CONTEXT EFFECT
A context effect is the influence of environmental factors on one's perception
of a stimulus.
The impact of context effects is considered part of the top-down approach,
and the concept is supported by the theoretical approach to perception known
as constructive perception.
Context effects can impact our daily lives in many ways such as word
recognition, learning abilities, memory, and object recognition.
Context effects can have an extensive influence on marketing and consumer
decisions. For example, research has shown that the comfort of the floor
shoppers stand on while reviewing products can affect their assessments of a
product's quality, leading to higher assessments when the floor is comfortable
and lower ratings when it is uncomfortable. Because of effects such as this,
context effects are currently studied predominantly in marketing.
Impact
Context effects can have a wide range of impacts in daily life. In reading
difficult handwriting, context effects help us determine which letters make
up a word.
This helps us analyze potentially ambiguous messages and decipher them
correctly. It can also affect our perception of unknown sounds based on the
noise in the environment.
For example, we may fill in a word we cannot make out in a sentence based
on the other words we could understand. Context can prime our attitudes and
beliefs about certain topics based on current environmental factors and our
previous experiences with them.
Context effects also affect memory. We are often better able to recall
information in the location in which we learned it or studied it.
For example, while studying for a test it is better to study in the environment
in which the test will be taken (i.e. the classroom) than in a location where the
information was not learned and will not need to be recalled. This
phenomenon is usually described as context-dependent memory, and is
closely related to transfer-appropriate processing.
Configural-Superiority Effect
A type of context effect by which objects presented in certain configurations are
easier to recognize than the objects presented in isolation, even if the objects in
the configuration are more complex than those in isolation.
ATTENTION
Selective attention
Selective attention is the capacity to focus on one source of information while
ignoring competing stimuli; it is examined in detail in the models of attention
below.
Divided Attention
Attention to two or more channels of information at the same time, so that two
or more tasks may be performed concurrently. It may involve the use of just one
sense (e.g., hearing) or two or more senses (e.g., hearing and vision).
Investigating Divided Attention in the Lab
Early work in the area of divided attention had participants view a videotape
in which the display of a basketball game was superimposed on the display of
a hand-slapping game.
Participants could successfully monitor one activity and ignore the other.
However, they had great difficulty in monitoring both activities at once, even
if the basketball game was viewed by one eye and the hand-slapping game
was watched separately by the other eye (Neisser & Becklen, 1975).
Neisser and Becklen hypothesized that improvements in performance
eventually would have occurred as a result of practice. They also
hypothesized that the performance of multiple tasks was based on skill
resulting from practice. They believed it not to be based on special cognitive
mechanisms.
The following year, investigators used a dual-task paradigm to study divided
attention during the simultaneous performance of two activities: reading
short stories and writing down dictated words (Spelke, Hirst, & Neisser, 1976).
The researchers compared the response time (latency) and accuracy of
performance in each of the three conditions (each task alone, and both tasks
together). Higher latencies, of course, mean slower responses.
As expected, initial performance was quite poor for the two tasks when the
tasks had to be performed at the same time. However, Spelke and her
colleagues had their participants practice to perform these two tasks 5 days a
week for many weeks (85 sessions in all). To the surprise of many, given
enough practice, the participants’ performance improved on both tasks.
They showed improvements in their speed of reading and accuracy of reading
comprehension, as measured by comprehension tests. They also showed
increases in their recognition memory for words they had written during
dictation. Eventually, participants’ performance on both tasks reached the
same levels that the participants previously had shown for each task alone.
When the dictated words were related in some way (e.g., they rhymed or
formed a sentence), participants first did not notice the relationship. After
repeated practice, however, the participants started to notice that the words
were related to each other in various ways. They soon could perform both
tasks at the same time without a loss in performance.
Spelke and her colleagues suggested that these findings showed that
controlled tasks can be automatized so that they consume fewer attentional
resources. Furthermore, two discrete controlled tasks may be automatized to
function together as a unit. The tasks do not, however, become fully
automatic.
For one thing, they continue to be intentional and conscious. For another,
they involve relatively high levels of cognitive processing.
Alternating attention
Alternating attention is the ability to shift the focus of attention and move
between two or more activities with different cognitive requirements.
Mental flexibility is thereby required to enable the switch and to perform the
different tasks efficiently, without the cognitive load of one task limiting the
performance of the others, or task switching itself altering concentration.
MODELS OF ATTENTION
BOTTLENECK MODELS OF ATTENTION
A bottleneck restricts the rate of flow, as, say, in the narrow neck of a milk
bottle. The narrower the bottleneck, the lower the rate of flow.
Broadbent's, Treisman's and Deutsch and Deutsch Models of Attention are all
bottleneck models because they predict we cannot consciously attend to all
of our sensory input at the same time.
This limited capacity for paying attention is therefore a bottleneck and the
models each try to explain how the material that passes through the
bottleneck is selected.
BROADBENT’S FILTER MODEL
Donald Broadbent is recognised as one of the major contributors to the
information processing approach, which started with his work with air traffic
controllers during the war.
In that situation a number of competing messages from departing and
incoming aircraft are arriving continuously, all requiring attention. The air
traffic controller finds s/he can deal effectively with only one message at a
time and so has to decide which is the most important.
Broadbent designed an experiment (dichotic listening) to investigate the
processes involved in switching attention, which are presumed to be going on
internally in our heads.
Broadbent argued that information from all of the stimuli presented at any
given time enters a sensory buffer. One of the inputs is then selected on the
basis of its physical characteristics for further processing by being allowed to
pass through a filter.
Because we have only a limited capacity to process information, this filter is
designed to prevent the information-processing system from becoming
overloaded.
The inputs not initially selected by the filter remain briefly in the sensory
buffer, and if they are not processed they decay rapidly. Broadbent assumed
that the filter rejected the non-shadowed or unattended message at an early
stage of processing.
Broadbent (1958) looked at air-traffic control type problems in a laboratory.
Broadbent wanted to see how people were able to focus their attention
(selectively attend), and to do this he deliberately overloaded them with
stimuli - they had too many signals, too much information to process at the
same time.
One of the ways Broadbent achieved this was by simultaneously sending one
message (a 3-digit number) to a person's right ear and a different message (a
different 3-digit number) to their left ear.
Participants were asked to listen to both messages at the same time and
repeat what they heard; this is known as a 'dichotic listening task'.
For example, a participant hears 3 digits in their right ear (7, 5, 6)
and 3 different digits in their left ear (4, 8, 3). Broadbent was interested in how
these would be repeated back. Would the participant repeat the digits in the
order they were heard (order of presentation), or repeat back what was heard
in one ear followed by the other ear (ear by ear)?
He actually found that people made fewer mistakes repeating back ear by ear
and would usually repeat back this way.
SINGLE CHANNEL MODEL
Results from this research led Broadbent to produce his 'filter' model of how
selective attention operates. Broadbent concluded that we can pay attention
to only one channel at a time - so his is a single channel model.
In the dichotic listening task each ear is a channel. We can listen either to the
right ear (that's one channel) or the left ear (that's another channel).
Broadbent also discovered that it is difficult to switch channels more than
twice a second.
So you can only pay attention to the message in one ear at a time - the
message in the other ear is lost, though you may be able to repeat back a few
items from the unattended ear. This could be explained by the short-term
memory store which holds onto information in the unattended ear for a short
time.
Broadbent thought that the filter, which selects one channel for attention,
does this only on the basis of PHYSICAL CHARACTERISTICS of the information
coming in: for example, which particular ear the information was coming to,
or the type of voice.
According to Broadbent the meaning of any of the messages is not taken into
account at all by the filter. All SEMANTIC PROCESSING (processing the
information to decode the meaning, in other words understand what is said)
is carried out after the filter has selected the channel to pay attention to.
So whatever message is sent to the unattended ear is not understood.
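Broadbent's early-selection account can be sketched as a small pipeline in which the filter passes one channel on purely physical grounds, and only the passed items reach semantic analysis. The channel labels and the stand-in "semantic processing" step are invented for illustration:

```python
# Toy early-selection filter (Broadbent): all inputs enter a sensory
# buffer; the filter selects one channel by a physical property (here,
# which ear); only selected items go on to semantic processing.
def semantic_processing(item):
    # stands in for decoding meaning; happens only after the filter
    return item.upper()

def broadbent(inputs, attended_ear):
    buffer = list(inputs)  # sensory buffer briefly holds everything
    passed = [item for ear, item in buffer if ear == attended_ear]
    rejected = [item for ear, item in buffer if ear != attended_ear]
    understood = [semantic_processing(item) for item in passed]
    return understood, rejected  # rejected items decay, unanalysed
```

Feeding in interleaved left- and right-ear digits and attending to the left ear, only the left-ear items come out semantically processed; the right-ear items are rejected before their meaning is ever decoded, which is exactly the single-channel claim above.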
EVALUATION OF BROADBENT'S MODEL
(1) Broadbent's dichotic listening experiments have been criticised because:
(a) The early studies all used people who were unfamiliar with shadowing and so
found it very difficult and demanding. Eysenck & Keane (1990) claim that the
inability of naive participants to shadow successfully is due to their unfamiliarity
with the shadowing task rather than an inability of the attentional system.
(b) Participants reported after the entire message had been played - it is possible
that the unattended message is analysed thoroughly but participants forget.
(c) Analysis of the unattended message might occur below the level of conscious
awareness. For example, research by von Wright et al (1975) indicated analysis
of the unattended message in a shadowing task. A word was first presented to
participants with a mild electric shock. When the same word was later presented
to the unattended channel, participants registered an increase in GSR (indicative
of emotional arousal and analysis of the word in the unattended channel).