Unit 1 - Attention
What is attention?
The concept of attention has too many meanings to be captured by a single definition. "Attention" is used to
refer both to the explanandum (the set of phenomena in need of explanation) and to the explanans (the
set of processes that do the explaining).
Divided Attention
➔ Divided attention is studied by presenting at least two stimulus inputs at the same
time.
➔ However, individuals are instructed they must attend (and respond) to all stimulus inputs.
Divided attention is also known as multi-tasking.
➔ Studies of divided attention provide useful information about our processing limitations
and the capacity of our attentional mechanisms.
Selective attention allows us to focus on specific information while ignoring distractions. This is
crucial for navigating our complex world. Studies show we miss many irrelevant details in
selective attention tasks, as when following one conversation at a party (the cocktail party
phenomenon).
In general, people can process only one message at a time (Cowan, 2005). However,
people are more likely to process the unattended message when
• (1) both messages are presented slowly,
• (2) the main task is not challenging, and
• (3) the meaning of the unattended message is immediately relevant
This problem, first explored by Colin Cherry in 1953, describes the challenge of focusing on one
conversation amidst multiple conversations or background noise.
A listener, when attending to one voice among many, may face the following two challenges
(McDermott, 2009):
1. Sound Segregation: Listeners must separate and identify which sounds belong together.
This is complex because different sound sources often overlap.
2. Directing the sound source of interest: Once segregation is achieved, listeners must
then direct their attention to the sound source of interest while ignoring others.
How much processing occurs for unattended auditory input while we focus on one source?
Cherry's dichotic listening task was designed to address this question.
Top-down processes, including attention, play a crucial role in this segregation (Robinson &
McAlpine, 2009).
➔ Broadbent (1958) proposed the filter theory of attention, suggesting that individuals
have limits on the amount of information they can attend to simultaneously.
➔ The theory posits the existence of an attentional filter that selectively allows certain
information to pass through based on physical characteristics, such as location or pitch,
while blocking the rest.
➔ Only information that passes through the filter is processed for meaning, explaining why
unattended messages are often poorly recalled.
➔ The filter operates early in processing, typically before the meaning of the message is
identified.
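Broadbent's early-selection pipeline can be sketched as a toy Python model (a rough illustration only, not Broadbent's own formalism; the function names and message contents are invented):

```python
# Toy model of Broadbent's early-selection filter (illustrative only).
# Messages are selected by a physical attribute (ear of arrival) BEFORE
# any semantic analysis takes place.

def analyze_meaning(message):
    """Late, meaning-based processing: only filtered-in messages get here."""
    return f"understood: {message['text']}"

def broadbent_filter(messages, attended_channel):
    """Pass only messages matching the attended physical channel."""
    selected = [m for m in messages if m["ear"] == attended_channel]
    return [analyze_meaning(m) for m in selected]

messages = [
    {"ear": "left", "text": "grocery list"},
    {"ear": "right", "text": "weather report"},
]
# The right-ear message never reaches semantic analysis, which is why
# unattended messages are poorly recalled on this account.
print(broadbent_filter(messages, attended_channel="left"))  # ['understood: grocery list']
```

The key structural point is that meaning analysis sits strictly downstream of the physical filter, which is exactly what Moray's and Treisman's findings later called into question.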
Challenges to Filter Theory:
➔ Moray (1959) discovered the "cocktail party effect": about a third of participants
recalled hearing their own name in the unattended message, challenging the notion that
all unattended messages are filtered out.
➔ Treisman (1960) found evidence suggesting that participants process unattended
messages based on meaning, contrary to the predictions of filter theory.
➔ Wood and Cowan (1995) investigated the effect of backward speech in unattended
messages on participants' attention, concluding that attentional shifts to unattended
messages occur unintentionally and without awareness.
➔ Wood and Cowan (1995) conducted a dichotic listening task with 168 undergraduate
participants to investigate whether information from the unattended channel can be
recognized.
Experimental Setup:
➔ Participants shadowed an excerpt from "The Grapes of Wrath" in the attended channel
(right ear) and listened to "2001: A Space Odyssey" in the unattended channel (left ear).
➔ Five minutes into the task, the speech in the unattended channel switched to backward
speech for 30 seconds.
➔ Previous experiments showed that roughly half of the participants noticed the switch,
while the other half did not.
➔ Two experimental groups differed in how long the "normal" speech was presented after
the backward speech: two and a half minutes for one group and one and a half minutes
for the other. A control group heard no backward speech.
Findings:
➔ Participants who noticed the backward speech showed a disruption in shadowing the
attended message, indicated by a peak in shadowing errors during the 30 seconds of the
backward-speech presentation.
➔ The effect was more pronounced for participants who reported noticing the backward
speech, while control participants and those who did not notice the backward speech
showed no increase in shadowing errors.
➔ Wood and Cowan analyzed shadowing errors in 5-second intervals before, during, and
after the backward-speech segment. They found that error rates peaked 10 to 20
seconds after the backward speech began.
➔ The attentional shift to the unattended message was unintentional and occurred without
awareness, as indicated by the uniform timing of error peaks among participants who
noticed the backward speech.
Role of Working Memory:
➔ Conway, Cowan, and Bunting (2001) demonstrated that individuals with lower
working-memory spans are more likely to detect their names in unattended messages,
suggesting a link between working memory and attentional control.
In attenuation theory, unattended messages are not completely blocked but rather their
"volume" is turned down, making them harder to process.
According to Treisman, incoming messages undergo three types of analysis: physical, linguistic,
and semantic.
Contextual Priming:
➔ MacKay (1973) demonstrated that words in the unattended message can help
disambiguate sentences in the attended message.
➔ Expected information is easier to process, leading to enhanced understanding of
ambiguous sentences.
➔ Attenuation theory contrasts with filter theory in allowing multiple analyses of all
messages.
➔ While filter theory discards unattended messages after processing physical
characteristics, attenuation theory weakens unattended messages but retains their
information.
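The contrast between the two theories can be sketched as a toy model (a sketch under assumed numbers; the attenuation factor and detection thresholds below are hypothetical, not values from Treisman's work):

```python
# Toy contrast with filter theory (illustrative): in attenuation theory the
# unattended message is weakened, not discarded, so items with permanently
# low detection thresholds (e.g., one's own name) can still break through.

ATTENUATION = 0.3               # assumed strength of an unattended signal
DEFAULT_THRESHOLD = 0.8         # assumed threshold for ordinary words
THRESHOLDS = {"own name": 0.2}  # lower threshold = easier to detect

def reaches_awareness(word, attended):
    strength = 1.0 if attended else ATTENUATION
    return strength >= THRESHOLDS.get(word, DEFAULT_THRESHOLD)

print(reaches_awareness("random word", attended=False))  # False: attenuated below threshold
print(reaches_awareness("own name", attended=False))     # True: low threshold breaks through
print(reaches_awareness("random word", attended=True))   # True: attended input at full strength
```

In a strict filter model, both unattended cases would return False regardless of the word; attenuation differs precisely in leaving the weakened signal available for low-threshold items.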
DEUTSCH AND DEUTSCH'S LATE-SELECTION THEORY (1963), LATER DEVELOPED BY NORMAN (1968)
➔ In late-selection theory, all incoming messages are routinely analysed for meaning;
selection of which input reaches awareness and controls responding occurs late in
processing, after semantic analysis.
KAHNEMAN'S (1973) CAPACITY MODEL
➔ Essentially, this model depicts the allocation of mental resources to various cognitive
tasks.
➔ An analogy could be made to an investor depositing money in one or more of several
different bank accounts—here, the individual “deposits” mental capacity to one or more
of several different tasks.
➔ Many factors influence this allocation of capacity, which itself depends on the extent and
type of mental resources available.
➔ The availability of mental resources, in turn, is affected by the overall level of arousal, or
state of alertness.
➔ Kahneman (1973) argued that one effect of being aroused is that more cognitive
resources are available to devote to various tasks.
➔ Paradoxically, however, the level of arousal also depends on a task’s difficulty. This
means we are less aroused while performing easy tasks, such as adding 2 and 2, than we
are when performing more difficult tasks, such as multiplying a Social Security number
by pi.
➔ We therefore bring fewer cognitive resources to easy tasks, which, fortunately, require
fewer resources to complete.
➔ Arousal thus affects our capacity (the total of our mental resources) for tasks. But the
model must still specify how we allocate our resources to all the cognitive tasks that
confront us.
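The capacity-allocation idea can be sketched as follows (a toy model with invented numbers; Kahneman's model is verbal, not mathematical):

```python
# Toy sketch of a capacity model of attention (all numbers assumed).
# Arousal sets the total available capacity, and an allocation policy
# divides it among concurrent tasks.

def available_capacity(arousal):
    """Capacity grows with arousal (clamped to 0..1), up to a maximum of 100."""
    return 100 * min(max(arousal, 0.0), 1.0)

def allocate(task_demands, arousal):
    """Split capacity among tasks in proportion to each task's demand."""
    capacity = available_capacity(arousal)
    total = sum(task_demands.values())
    return {task: capacity * d / total for task, d in task_demands.items()}

# An easy task (adding 2 + 2) demands little; a hard one (multiplying a
# Social Security number by pi) demands much.
print(allocate({"adding 2+2": 1, "SSN x pi": 9}, arousal=0.8))
```

The bank-account analogy maps directly onto `allocate`: the "deposit" each task receives depends both on the total funds (arousal-determined capacity) and on the allocation policy.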
SCHEMA THEORY
Neisser's Schema Theory of Attention: Picking Apples, Not Filtering Them
A Different Approach:
Psychologists traditionally viewed attention as a filter, letting some information in and blocking
the rest. Ulric Neisser (1976) proposed a different theory called schema theory.
The Apple Picking Analogy:
Imagine picking apples from a tree. You focus on the ripe ones you want and leave the others
on the tree. Neisser says attention works similarly. We don't filter out unwanted information, we
simply don't pick it up (like the unripe apples) in the first place.
The Experiment:
Neisser and Becklen (1975) conducted a study where participants watched two superimposed
films: a hand-slapping game and a basketball game. They were asked to focus on one film and
press a button when a specific event happened (e.g., hand slap).
Findings:
➔ People could easily follow the chosen film, ignoring the other.
➔ They even missed unexpected events in the unattended film (e.g., someone throwing a
ball in the hand game).
Neisser's Explanation:
These results suggest skilled perception, not filtering, explains attention. We focus on the
relevant information (picking the ripe apples) and our perception guides what we see next.
There's no need for filters; our natural skills handle what's important in the chosen scene.
LOAD-INDUCED BLINDNESS & PERCEPTUAL LOAD THEORY (MACDONALD & LAVIE, 2008)
Load-Induced Blindness:
Load-induced blindness refers to the phenomenon where the cognitive load of a task leads to a
decreased awareness or perception of other stimuli, even if they are salient or relevant.
➔ Mechanism:
◆ High cognitive load from a primary task can reduce the capacity to process
peripheral stimuli or secondary tasks.
◆ Attentional resources are fully engaged in the primary task, leaving fewer
resources available for processing other stimuli.
➔ Examples:
◆ In driving, focusing intensely on navigating complex traffic conditions may lead
to reduced awareness of pedestrians or road signs.
◆ During a conversation in a noisy environment, concentrating on understanding
speech may result in overlooking visual cues or other auditory stimuli.
The debate about early vs. late selection in attention (when information is filtered) might
not have a simple answer. A proposed solution is the perceptual load model.
➔ Broad attentional focus: When a task is easy (low load), people tend to pay attention to
a wider range of things, making them more susceptible to distractions.
➔ According to the theory, brain activation associated with distractors should be less when
individuals are performing a task involving high perceptual load. This finding has been
obtained with visual tasks and distractors (e.g., Schwartz et al., 2005) and also with
auditory tasks and distractors.
➔ Biggs and Gibson (2018) argued this happens because observers generally adopt a
broad attentional focus when perceptual load is low.
➔ They tested this hypothesis using three low-load conditions in which participants decided
whether a target X or N was presented and a distractor letter was sometimes presented;
the conditions differed in how tightly the display constrained the possible target locations.
➔ They argued that observers would adopt the smallest attentional focus in the circle
condition and the largest attentional focus in the solo condition.
➔ As predicted, distractor interference was greatest in the solo condition and least in the
circle condition. Thus, distraction effects depend strongly on size of attentional focus as
well as perceptual load.
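The core prediction of the perceptual load model, that spare capacity spills over involuntarily onto distractors, can be sketched as follows (the capacity units are invented for illustration):

```python
# Toy sketch of the perceptual load model (capacity units are invented).
# Perception has a fixed capacity; whatever the main task leaves unused
# "spills over" onto distractors automatically.

CAPACITY = 10  # assumed total perceptual capacity

def distractor_processing(perceptual_load):
    """Spare capacity (if any) is involuntarily allocated to distractors."""
    return max(CAPACITY - perceptual_load, 0)

print(distractor_processing(perceptual_load=3))   # 7: low load, distractor processed
print(distractor_processing(perceptual_load=10))  # 0: high load, distractor excluded
```

This captures why the model predicts an early-selection pattern under high load (no spare capacity reaches the distractor) and a late-selection pattern under low load.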
Sörqvist et al. (2016) argued high cognitive load can reduce rather than increase
distraction.
➔ They pointed out that cognitive load is typically associated with high levels of
concentration and our everyday experience indicates high concentration generally
reduces distractibility.
➔ As predicted, they found neural activation associated with auditory distractors was
reduced when cognitive load on a visual task was high rather than low.
Strengths:
➔ Predicting Distraction:
◆ The theory effectively predicts when distractions will be strong or weak based on
perceptual load (complexity of the task).
◆ High perceptual load reduces distraction.
◆ Applied research supports this: drivers under high perceptual load (a complex
road situation) are more likely to miss hazards.
Weaknesses:
➔ Terminology:
◆ Terms like "perceptual load" and "cognitive load" are unclear, making it difficult
to precisely test the theory.
➔ Interaction, not Independence:
◆ The theory assumes perceptual and cognitive load have separate effects, but
research suggests they interact.
◆ Perceptual load's influence depends on available cognitive resources.
➔ Confounding Factors:
◆ Perceptual load and attentional focus can be intertwined, making it hard to
separate their effects.
➔ Cognitive Load Inconsistencies:
◆ The theory suggests high cognitive load increases distraction, but research shows
it can also reduce distraction if the task and distractor are easily distinguished.
➔ Omissions:
◆ The theory doesn't consider factors like:
● Salience (how noticeable) of distractions.
● Spatial distance between distractions and the task.
INATTENTIONAL BLINDNESS
➔ Inattentional Blindness and Change Blindness:
◆ Change blindness: inability to notice significant changes in a scene when
disrupted.
◆ Linked to inattentional blindness: failure to perceive a stimulus despite being in
plain sight without attention.
➔ Everyday Example of Inattentional Blindness:
◆ Experienced pilot focusing on airspeed indicator but failing to notice another
airplane blocking the runway.
➔ Demonstration of Inattentional Blindness:
◆ Neisser and Becklen experiment: participants fail to see unexpected events.
◆ Simons' study using video technology partially replicates Neisser and Becklen's
findings.
➔ Experimental Conditions and Unexpected Events:
◆ Four conditions: Easy (counting basketball passes), Hard (tracking bounce and
aerial passes).
◆ Unexpected events: Umbrella-woman walks across the scene, or a person in a
gorilla costume.
➔ Participants' Responses:
◆ Participants watched a video of people passing a basketball and were asked to
count the number of passes made by a specific team (white shirts). Unbeknownst
to them, a person in a gorilla costume walked through the scene, performing
clear actions.
◆ After viewing, participants wrote down their counts and described anything
unusual.
◆ 46% of participants failed to notice the unexpected events.
◆ Surprisingly, only just over 40% of observers noticed the gorilla, with detection
more common among those watching the black team.
Why Did People Miss the Gorilla?
● Selective Attention: People were focused on counting passes (white shirts) and filtered
out other information (the gorilla) due to limited attentional resources.
● Task-Relevance: In a variation where participants counted passes by people in black
(similar color to the gorilla), detection rates were higher (83%). This suggests a bias
towards noticing task-relevant stimuli.
● Eye Fixations: Rosenholtz et al. (2016) found that participants who fixated closer to the
gorilla (while counting black shirts) were more likely to detect it. This supports the role of
selective attention.
● Peripheral Vision: However, Rosenholtz et al. also found some observers counting black
shirts (with fixations similar to those counting white shirts) still missed the gorilla. This
suggests limitations in peripheral vision might also play a role.
The presence of inattentional blindness can lead us to underestimate the amount of processing
of the undetected stimulus. Schnuerch et al. (2016) found categorising attended stimuli was
slower when the meaning of an undetected stimulus conflicted with that of the attended
stimulus.
● Most (2013) suggests attentional sets based on semantic categories (like letters or
numbers) also play a role.
● People were less likely to miss an unexpected letter (E) if they were tracking other letters
compared to tracking numbers, even if the stimuli looked identical (except mirrored).
● Légal et al. (2017) explored how demanding tasks that require more top-down processing
(counting specific types of passes) increase inattentional blindness (missing the gorilla).
● Conversely, subliminally presenting detection-related words (identify, notice) before the
video boosted gorilla detection, highlighting the influence of top-down attention.
CHANGE BLINDNESS
➔ Change blindness is "the failure to detect changes in visual scenes" (Ball et al.,
2015, p. 2253).
➔ Studies by Levin et al. (2002) highlight our tendency to overestimate our ability to detect
changes. Participants significantly overestimated how likely they were to notice changes
in videos (plates changing color or a disappearing scarf) compared to the actual
detection rate observed in the experiment (0%).
In real-world situations, we often detect changes through accompanying motion cues (e.g.,
someone removing their scarf).
Laboratory Techniques: Researchers use various methods to prevent motion detection and
induce change blindness:
Saccades: Presenting the change during a rapid eye movement (saccade) disrupts the visual
processing stream.
Flicker Paradigm: Briefly flashing the original and changed images with a short blank interval
in between can also lead to change blindness.
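A flicker-paradigm trial can be sketched as a simple stimulus sequence (the timings below are assumed for illustration, not taken from any particular study):

```python
# Minimal sketch of a flicker-paradigm trial sequence (timings assumed,
# not taken from any specific study). The blank interval masks the motion
# transient that would otherwise draw attention straight to the change.

def flicker_trial(original, changed, cycles=4, image_ms=240, blank_ms=80):
    """Yield (stimulus, duration_ms) pairs: A, blank, A', blank, repeated."""
    for _ in range(cycles):
        yield (original, image_ms)
        yield ("blank", blank_ms)
        yield (changed, image_ms)
        yield ("blank", blank_ms)

print(list(flicker_trial("scene_A", "scene_A_changed", cycles=1)))
```

Because each transition is hidden behind a blank frame, observers must find the change by effortful comparison rather than by a motion cue, which is why detection often takes many cycles.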
Change Blindness:
● Definition: Failure to detect changes in a visual scene, even when focusing on the area
where the change occurs.
● Instructions: Even with instructions to look for changes, detection can be difficult.
● Memory: Requires comparing the pre-change scene with the post-change scene in
memory.
● Attention: Can occur even when attention is focused on the general area of the change.
● Processing: More complex, involving encoding pre- and post-change stimuli, comparing
them, and consciously recognizing the difference.
Neither the attentional approach nor the peripheral-vision approach fully explains change
blindness on its own. The attentional approach sheds light on how limited focus can lead to
missed changes; the peripheral-vision approach highlights the potential role of limitations in
processing information from the outer edges of our vision.
● Stable World, Efficient Processing: The visual world is generally stable for short periods.
Prioritizing stability allows for efficient processing and a continuous perception of our
environment.
● Perceptual Biases: Studies by Fischer and Whitney (2014) and Manassi et al. (2018) show
how our visual system prioritizes stability. We tend to perceive visual elements
(orientation, location) as being more consistent with what we saw previously, even if
there's a change.
● Serial Dependence: This phenomenon explains how past visual experiences influence
our perception of present stimuli. It involves multiple stages of processing and
potentially memory, leading to a bias towards stability.
Our visual system prioritizes stability, which can sometimes lead to missing changes. This might
seem like a flaw, but it allows for a more consistent and efficient overall perception of the world.
Change blindness might be less pronounced in situations where detecting changes is crucial
(e.g., driving).
Individual differences might influence susceptibility to change blindness.
There has been much more research on visual attention than on auditory attention. The main
reason is that vision is our most important sense modality, with more of the cortex devoted to it
than to any other sense.
Look around you and focus on any interesting objects. Your visual attention probably operated
rather like a spotlight: little can be seen outside its beam, and the beam can be redirected to
focus on any given object or location. The zoom-lens model extends this metaphor by allowing
the beam to be widened or narrowed depending on task demands.
Chen and Cave (2016, p. 1822) argued the optimal attentional zoom setting “includes all possible
target locations and excludes possible distractor locations”.
Most findings indicated people’s attentional zoom setting is close to optimal. However, Collegio
et al. (2019) obtained contrary findings. Drawings of large objects (e.g., jukebox) and small
objects (e.g., watch) were presented so their retinal size was the same. The observer’s area of
focal attention was greater with large objects because they made top-down inferences
concerning their real-world sizes. As a result, the area of focal attention was larger than optimal
for large objects.
Goodhew et al. (2016) pointed out that nearly all research has focused only on spatial perception
(e.g., identification of a specific object). They focused on temporal perception (was a disc
presented continuously or were there two presentations separated by a brief interval?). Spatial
resolution is poor in peripheral vision but temporal resolution is good. As a consequence, a small
attentional spotlight is more beneficial for spatial than temporal acuity.
Multiple spotlights theory
The multiple spotlights theory says our attention can be like multiple flashlights. We can shine
them on two different things at the same time, even if they're far apart.
This surprised some scientists because they thought attention was more like a single spotlight.
They worried that splitting our attention might make it harder to do things well. But research
shows we can actually do this without problems.
Split Attention
The multiple spotlights theory (Awh & Pashler, 2000) challenges the idea that attention is like a
single spotlight. Here's the key point: Our attention can be divided and focused on two or more
separate areas at the same time, even if those areas are not close together. This is called split
attention.
This is controversial because some scientists, like Jans et al. (2010), argue that attention is
linked to physical actions. They believe focusing on two separate things might make it difficult
to perform those actions effectively.
1. Awh and Pashler (2000): Missing the Middle
➔ Task: Identify two digits presented in separate locations with some space between them.
➔ Prediction (Zoom Lens Theory): Attention should cover both digits and the space in
between, making it easy to see anything there.
➔ Result: People were actually bad at identifying digits presented in the middle space
between the cued locations.
➔ Explanation (Multiple Spotlights): This suggests attention wasn't spread out like a
zoom lens, but rather focused on two separate areas like spotlights, missing the
information in the gap.
2. Morawetz et al. (2007): Brain on Split Duty
➔ Task: Attend to letters and digits in specific locations while ignoring others.
➔ Method: Measured brain activity while participants performed the task.
➔ Result: Brain activity showed two distinct peaks corresponding to the attended locations,
with less activity in the region between them.
➔ Explanation: This brain activity pattern suggests separate "spotlight" areas of focus,
neglecting the unattended space in the middle.
3. Single-Cell Recording in Monkeys
➔ Task: Monkeys attended to two moving stimuli while ignoring a distractor in between.
➔ Method: Recorded brain activity of monkeys performing the task.
➔ Result: When the distractor was present between the attended stimuli, brain activity for
the distractor decreased compared to other conditions.
➔ Explanation: This suggests that split attention involves actively reducing processing of
distractions located between the areas we're focusing on, like a mental "off switch" for
the middle space.
In most research demonstrating split attention, the two non-adjacent stimuli being attended
simultaneously were each presented to a different hemifield (one half of the visual field). Note
that the right hemisphere receives visual signals from the left hemifield and the left hemisphere
receives signals from the right hemifield.
➔ Task: Identify targets presented in opposite halves or in the same half of the visual field.
➔ Finding: Performance was better when targets were presented in opposite halves (left
and right) compared to the same half. Brain activity also showed better filtering of
distractions between targets presented in opposite halves.
➔ Explanation: This suggests our brains process information from each half of the visual
field differently. Split attention seems to work best when focusing on opposite sides,
potentially because each brain hemisphere can handle one "spotlight" more efficiently.
The spotlight and zoom lens metaphors are helpful ways to understand how we selectively focus
our attention because they capture two key aspects of visual attention:
➔ Space-based attention: This is like a spotlight focusing on a specific region of space. We
might use this when searching for something in a particular area, like looking for a lost
key on the counter.
➔ Object-based attention: This is like a zoom lens adjusting to focus on a particular object.
We might use this when trying to read a specific word on a busy page or examining a
detailed painting.
➔ Feature-based Attention: While object-based attention focuses on whole objects,
feature-based attention allows us to focus on specific characteristics like color, motion,
or shape. Example: Your friend in the red shirt - In this scenario, you're using
feature-based attention by focusing on the color red to find your friend in the crowd,
even though you're not necessarily looking at specific objects or locations.
2. Automaticity vs. Strategy: This raises a question about whether object-based attention
is automatic (always happening) or strategic (influenced by task goals).
- Evidence for Strategic Control: The study by Drummond and Shomstein (2010)
is presented as evidence against object-based attention being fully automatic.
They found no object-based attention effect when cues perfectly predicted the
target location (100% certainty). This suggests that when we know exactly where
to look, we can override any automatic tendency to focus on the entire object.
3. Coexistence of space-based and object-based attention:
- Hollingworth et al. (2012): This study used a task similar to Egly et al. (1994).
They found evidence for both types of attention. When the target was far from
the cue within the same object, performance was worse (object-based effect).
However, within the same object, reaction times also increased with the distance
between cue and target (space-based effect). This suggests both types of
attention operate simultaneously.
- Kimchi et al. (2016): This study also supports the idea of coexisting attention.
Participants responded faster to targets within objects (object-based) and closer
to objects outside them (space-based). This again suggests parallel processing of
object and space information.
4. Is object-based attention overestimated?
- Limited Evidence for Object-Based Attention: Studies by Pilz et al. (2012) found
that space-based attention was more prevalent than object-based attention.
Only a small portion of participants showed clear evidence of object-based
effects.
- Spatial Cues Bias Results: Donovan et al. (2017) argue that many studies rely
on spatial cues (like arrows pointing to locations) which might indirectly influence
object-based attention. When they removed spatial cues, they found no
object-based attention effect. This suggests previous research might have been
biased by the way experiments were designed.
- The Inhibition of Return (IOR) Phenomenon:
When we search our environment, revisiting the same spot repeatedly is
inefficient. IOR helps avoid this by reducing the likelihood of returning attention
to a recently focused location.
○ Location-based IOR:
■ List and Robertson (2007) found stronger evidence for location-based
IOR using a task similar to Egly et al. (1994). This suggests focusing on a
specific area reduces the likelihood of revisiting that area, regardless of
objects within it.
○ Object-based IOR:
■ Theeuwes et al. (2014) suggest both location and object-based IOR can
co-exist. They argue that attention to a location automatically includes
attention to any object present there, and vice versa. This suggests IOR
might be influenced by both factors.
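The efficiency argument for IOR can be sketched as a toy search loop in which recently attended locations are temporarily down-weighted (the salience values and inhibition factor here are invented):

```python
# Toy visual-search loop with inhibition of return (salience values and
# the inhibition factor are invented). Each attended location is
# down-weighted, discouraging an immediate revisit.

def search(salience, visits=4, inhibition=0.5):
    """Repeatedly attend the most salient location, then inhibit it."""
    salience = dict(salience)  # copy so the caller's dict is untouched
    order = []
    for _ in range(visits):
        best = max(salience, key=salience.get)
        order.append(best)
        salience[best] *= inhibition  # IOR: reduce likelihood of return
    return order

# Without inhibition the loop would dwell on "A"; with it, attention
# visits each location before returning.
print(search({"A": 1.0, "B": 0.8, "C": 0.6}))  # ['A', 'B', 'C', 'A']
```

The design choice mirrors the functional account in the notes: inhibiting just-visited locations makes exhaustive search efficient without any explicit memory of where attention has been.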
➔ Bartsch et al. (2018): This research provides evidence for feature-based attention. They
found that attention to color-defined targets wasn't limited to the area where a spatial
cue pointed (like in Figure 5.4). Instead, it influenced processing across the entire visual
field. This means we can search for a specific color (feature) even if we don't know its
exact location.
➔ Naturalistic Setting (Chen & Zelinsky, 2019): This study emphasizes the importance
of studying attention in natural contexts. They found that while attention initially focuses
on specific areas (space-based), these regions might be chosen because they contain
relevant features that contribute to building our perception of objects.
1. Variable Findings: Studies haven't produced definitive conclusions about the dominance
of object-based or space-based attention.
2. Flexibility: The relative importance of each type might be flexible, influenced by factors
like individual differences and task demands.
3. Spatial Cue Bias: Research emphasizing object-based attention often uses spatial cues,
potentially influencing results (Donovan et al., 2017).
4. Limited Understanding of Interaction: We lack a comprehensive understanding of how
space-based, object-based, and feature-based attention interact (Kravitz & Behrmann,
2011).
5. Artificial Settings: Most research uses artificial stimuli and tasks. It remains unclear
how well these findings translate to natural viewing conditions (Chen & Zelinsky, 2019).
ATTENTION NETWORKS
For attention, several theorists (e.g., Posner, 1980; Corbetta & Shulman, 2002) have argued there
are two major networks. One attention network is goal-directed or endogenous whereas the
other is stimulus-driven or exogenous.
Posner (1980) conducted influential work on covert attention, examining how we shift attention
without moving our eyes.
The Experiment:
On each trial, participants fixated the centre of a screen and responded as quickly as possible to
a target appearing to the left or right. A cue preceded the target: a valid cue correctly indicated
the target's location, an invalid cue indicated the wrong location, and a neutral cue carried no
location information.
Key Findings:
● Reaction times were fastest for valid cues, slower for neutral cues, and slowest for invalid
cues. This suggests an attentional benefit for cued locations.
● Interestingly, when cues were only rarely valid, participants could ignore cues presented
at the centre of the screen but not cues presented in the periphery.
Posner's Two Attention Systems:
1. Endogenous System:
○ Controlled by our intentions and goals.
○ Activated by informative cues in the center of the screen (e.g., arrow pointing
left).
○ Allows us to strategically direct attention.
2. Exogenous System:
○ Automatic and involuntary.
○ Triggered by peripheral cues (e.g., sudden flash of light) or salient stimuli (bright
color).
○ Rapidly shifts attention without conscious effort.
● Peripheral cues, even if uninformative (like a neutral cue), can still capture attention
through the exogenous system.
● This explains why participants couldn't completely ignore peripheral cues, even when
they were valid only a small portion of the time.
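The classic pattern of results, faster responses after valid cues and slower ones after invalid cues, can be summarized with illustrative numbers (the reaction times below are made up purely to show the ordering, not Posner's actual data):

```python
# Illustrative Posner-cueing result pattern. The reaction times (ms) are
# invented to show the expected ordering: valid < neutral < invalid.

mean_rt = {"valid": 260, "neutral": 290, "invalid": 320}

benefit = mean_rt["neutral"] - mean_rt["valid"]   # speed-up from a correct cue
cost = mean_rt["invalid"] - mean_rt["neutral"]    # slow-down from a misleading cue

assert mean_rt["valid"] < mean_rt["neutral"] < mean_rt["invalid"]
print(f"attentional benefit: {benefit} ms, cost: {cost} ms")
```

Benefits and costs are both measured against the neutral baseline, which is what licenses the inference that the cue moved attention rather than merely alerting the participant.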
Corbetta and Shulman (2002) identified these systems with two brain networks: the dorsal
attention network (DAN), supporting goal-directed (top-down) attention, and the ventral
attention network (VAN), supporting stimulus-driven (bottom-up) attention.
● Flexibility: The DAN allows goal-directed focus, while the VAN ensures we don't miss
unexpected but potentially important cues.
● Effective Interaction: The two networks typically work together for efficient attention
allocation.
Supporting Evidence:
➔ Hahn et al. (2006): Compared brain activity during top-down and bottom-up tasks.
◆ Minimal overlap in brain regions involved, supporting the distinction between
DAN and VAN.
◆ Activated brain regions corresponded well to those identified by Corbetta and
Shulman.
➔ Chica et al. (2013): Reviewed research on the two systems and found 15 key differences:
◆ Stimulus-driven attention (VAN) is faster, more object-based, and less susceptible
to interference compared to top-down attention (DAN).
◆ These numerous differences bolster the argument for separate systems.
➔ Vossel et al. (2014): Neuroimaging studies show distinct neural circuits for DAN and VAN,
even at rest.
◆ However, these studies can't definitively prove a brain area's involvement in
specific attention processes.
Brain Stimulation and Causal Evidence:
➔ Chica et al. (2011): Used transcranial magnetic stimulation (TMS) to disrupt activity in
specific brain regions.
◆ TMS to the right temporo-parietal junction (VAN) impaired bottom-up attention
but not top-down attention.
◆ TMS to the right intraparietal sulcus (involved in both networks) impaired both
attention systems.
➔ Chica et al.'s (2011) findings suggest the two attention systems can also work together.
◆ Disrupting a key area in one network (VAN) didn't affect the other (DAN),
indicating some independence.
◆ Disrupting a shared area (intraparietal sulcus) impaired both networks,
suggesting their ability to interact.
➔ Shomstein et al. (2010): Patients with damage to the superior parietal lobule (DAN) had
difficulty with top-down attention tasks, whereas patients with damage to the
temporo-parietal junction (VAN) had difficulty with stimulus-driven attention tasks.
➔ These findings support the idea that distinct brain areas underlie the two attention
networks.
Network Interactions:
Wen et al. (2012): Studied how the two networks influence each other.
➔ Stronger Top-Down Influence on Bottom-Up: When the top-down system (DAN)
strongly suppressed activity in the bottom-up system (VAN), participants performed
better. This suggests the DAN can focus attention by reducing interference from
irrelevant stimuli detected by the VAN.
➔ Stronger Bottom-Up Influence on Top-Down: When the bottom-up system (VAN)
strongly influenced the top-down system (DAN), participants performed worse. This
suggests that unattended stimuli activating the VAN can disrupt the current focus
maintained by the DAN.
Limitations of Corbetta & Shulman's Two Attention Network Model
Despite its successes, Corbetta and Shulman's (2002) two attention network model has some
limitations:
1. Brain Area Specificity: While distinct networks are linked to each attention system, the
exact brain areas involved remain somewhat unclear. There might be more overlap than
originally proposed.
2. Shared Processing Regions: Some brain regions, particularly in the parietal lobe, seem
to be involved in both top-down and bottom-up attention, suggesting a more complex
interplay between the networks.
3. Additional Attention Networks: Research has identified other brain networks crucial for
attention, such as the cingulo-opercular network (alertness), default mode network
(internal focus), and fronto-parietal network (cognitive control). These weren't included in
the original model.
4. Limited Understanding of Network Interactions: We still have much to learn about
how the different attention networks interact with each other. More research is needed to
understand the dynamics of their collaboration and competition.
➔ Integration of Bottom-Up and Top-Down Processing: Meyer et al. (2018) found both
types of attention activate parts of the dorsal attention network (DAN), suggesting it
plays a crucial role in combining these processes for effective attention allocation.
➔ Sustained Top-Down Influences: Meehan et al. (2017) showed that top-down influences
within the DAN can persist for a relatively long duration, suggesting a more extended
role for goal-directed attention beyond just pre-stimulus anticipation.
VISUAL SEARCH
We spend much time searching for various objects (e.g., a friend in a crowd). The processes
involved have been studied in research on visual search, in which a specified target must be
detected as rapidly as possible.
According to Treisman's feature integration theory, we need to distinguish between object
features (e.g., colour, size, line orientation) and the objects themselves.
1. Basic visual features are processed rapidly and pre-attentively in parallel across the
visual scene.
2. Stage (1) is followed by a slower serial process with focused attention providing the
“glue” to form objects from the available features (e.g., an object that is round and has
an orange colour is perceived as an orange). In the absence of focused attention,
features from different objects may be combined randomly producing an illusory
conjunction.
➔ Targets defined by a single feature (e.g., a blue letter or an S) should be detected rapidly
and in parallel.
➔ In contrast, targets defined by a conjunction or combination of features (e.g., a green
letter T) should require focused attention and so should be slower to detect.
➔ Treisman and Gelade (1980) tested these predictions using both types of targets; the
display size was 1–30 items and a target was present or absent.
➔ As predicted, response was rapid and there was very little effect of display size when the
target was defined by a single feature: these findings suggest parallel processing.
➔ Response was slower and was strongly influenced by display size when the target was
defined by a conjunction of features: these findings suggest there was serial processing.
➔ According to the theory, lack of focused attention can produce illusory conjunctions
based on random combinations of features.
➔ Friedman-Hill et al. (1995) studied a brain-damaged patient (RM) having problems with
the accurate location of visual stimuli. This patient produced many illusory conjunctions
combining the shape of one stimulus with the colour of another.
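The contrasting reaction-time patterns above can be expressed as a simple model. This is an illustrative sketch, not fitted data: the function name and the base reaction time are assumptions, while the ~60 ms-per-item slope for conjunction search comes from the notes below.

```python
# Sketch of the RT patterns predicted by feature integration theory.
# Feature search: parallel "pop-out", so display size has no effect.
# Conjunction search: serial scanning, so RT grows per item inspected.

def predicted_rt(display_size, search_type, base_rt=450, slope=60):
    """Return an illustrative predicted reaction time in milliseconds."""
    if search_type == "feature":
        return base_rt                         # flat: parallel processing
    elif search_type == "conjunction":
        return base_rt + slope * display_size  # rises: serial processing
    raise ValueError(f"unknown search type: {search_type}")

for n in (1, 10, 30):
    print(n, predicted_rt(n, "feature"), predicted_rt(n, "conjunction"))
```

A flat line for feature targets and a steep slope for conjunction targets is exactly the Treisman and Gelade (1980) pattern described above.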
Limitations of Feature Integration Theory
➔ Factors excluded from the theory (Duncan & Humphreys, 1989, 1992):
- When distractors are very similar to each other, visual search is faster because it
is easier to identify them as distractors.
- The number of distractors has a strong effect on search time to detect even
targets defined by a single feature when targets resemble distractors.
➔ Role of focused attention:
- Treisman and Gelade (1980) estimated the search time with conjunctive targets
was approximately 60 ms per item and argued this represented the time taken
for focal attention to process each item.
- However, research with other paradigms indicates it takes approximately 250 ms
for attention indexed by eye movements to move from one location to another.
- Thus, it is improbable focal attention plays the key role assumed within the
theory.
➔ The "item" in visual search
- The theory assumes visual search is often item-by-item.
- However, the information contained within most visual scenes cannot be divided
up into “items” and so the theory is of limited applicability.
- Such considerations led Hulleman and Olivers (2017) to produce an article
entitled “The impending demise of the item in visual search”.
➔ Involvement of parallel processing
- Visual search involves parallel processing much more than implied by the theory.
- For example, Thornton and Gilden (2007) used 29 different visual tasks and found
72% apparently involved parallel processing.
- We can explain such findings by assuming that each eye fixation permits
considerable parallel processing using information available in peripheral vision.
➔ Object vs feature based processing
- The theory assumes that the early stages of visual search are entirely
feature-based.
- However, recent research using event-related potentials indicates that
object-based processing can occur much faster than predicted by feature
integration theory (e.g., Berggren & Eimer, 2018).
➔ Randomness of visual search
- The theory assumes visual search is essentially random.
- This assumption is wrong with respect to the real world – we typically use our
knowledge of where a target object is likely to be located when searching for it.
SACCADIC EYE MOVEMENTS
➔ A saccade is a very rapid movement of the eyes from one spot to the next.
➔ The purpose of a saccadic eye movement during reading is to bring the center of your
retina into position over the words you want to read. A very small region in the center of
the retina, known as the fovea, has better acuity than other retinal regions.
➔ When you read a passage in English, each saccade moves your eye forward by about 7 to 9
letters (Wolfe et al., 2009).
➔ Researchers have estimated that people make between 150,000 and 200,000 saccadic
movements every day
➔ The term perceptual span refers to the number of letters and spaces that we perceive
during a fixation (Rayner & Liversedge, 2004).
➔ Researchers have found large individual differences in the size of the perceptual span
(Irwin, 2004).
➔ When you read English, this perceptual span normally includes letters lying about 4
positions to the left of the letter you are directly looking at, as well as the letters about 15
positions to the right of that central letter (Rayner, 2009).
➔ We look for reading cues in the text that lies to the right of fixation; these cues provide
some general information about upcoming words.
➔ Saccadic-movement patterns depend on factors such as the language of the text, the
difficulty of the text, and individual differences in reading skill.
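The asymmetric perceptual span described above can be sketched as a simple text window. This is purely illustrative: the function name is an assumption, while the window sizes (about 4 characters left of fixation, about 15 right) come from Rayner (2009) as cited above.

```python
# Illustrative sketch of the perceptual span during one fixation:
# readers of English pick up roughly 4 characters to the left of the
# fixated letter and about 15 to the right.

def perceptual_span(text, fixation_index, left=4, right=15):
    """Return the slice of text available during one fixation."""
    start = max(0, fixation_index - left)
    end = min(len(text), fixation_index + right + 1)
    return text[start:end]

line = "Saccadic eye movements advance about 7 to 9 letters at a time."
print(perceptual_span(line, 20))
```

The rightward bias of the window reflects the forward direction of reading in English; for right-to-left scripts the asymmetry reverses.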
SUSTAINED ATTENTION
➔ It is the ability to focus on one specific task for a continuous amount of time without
being distracted.
➔ The exact amount of time (minutes, hours) for sustained attention to be considered is not
well defined and varies across studies.
➔ Sustained attention or vigilance is measured in continuous performance-type (CPT) tasks
that require constant monitoring of the situation at hand.
The Digit Vigilance Task (DVT) is a common assessment tool used to measure sustained
attention and psychomotor speed. It's a relatively simple test that can be administered in
paper-and-pencil format or on a computer.
By recording the number of correctly identified targets and any errors (omissions or marking
non-targets), the DVT provides insight into your ability to sustain attention and maintain
psychomotor speed over the course of the task.
The Mackworth Clock Task, also known as the Mackworth Clock Test, is another tool used in
psychology research to assess a different aspect of attention: vigilance.
● You'll see a display resembling a clock face, either physically or on a computer screen.
● A pointer will move around the clock face, typically in short jumps like a second hand.
● The key difference is that at irregular intervals, the pointer will make a double jump
instead of a single one.
● Your task is to detect these double jumps and respond in a way specified by the test,
often by pressing a button.
● The task can last for extended periods, sometimes up to two hours in original studies.
Unlike the DVT which focuses on sustained attention and processing speed, the Mackworth
Clock Task measures vigilance. Vigilance refers to your ability to maintain focus on a task over a
long period when there's minimal stimulation and infrequent events of interest (like the double
jumps).
The extended duration and monotonous nature of the task make it challenging to stay vigilant.
By recording your accuracy in detecting double jumps throughout the test, researchers can
assess how well you maintain attention and how vigilance declines over time.
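The stimulus stream described above can be sketched in a few lines. This is a minimal illustration, not the original task: the double-jump probability, the number of clock positions, and the function name are assumptions chosen only to show the structure of the task.

```python
# Minimal sketch of the Mackworth clock stimulus stream: a pointer that
# usually advances one step but occasionally makes a rare double jump,
# which is the critical event the observer must detect.
import random

def clock_jumps(n_steps, double_prob=0.02, seed=0):
    """Yield successive pointer positions on a 60-position clock face."""
    rng = random.Random(seed)
    pos = 0
    for _ in range(n_steps):
        step = 2 if rng.random() < double_prob else 1  # rare critical event
        pos = (pos + step) % 60
        yield pos

positions = list(clock_jumps(600))
doubles = sum((b - a) % 60 == 2 for a, b in zip(positions, positions[1:]))
print(doubles, "double jumps for the observer to detect")
```

The rarity and irregularity of the double jumps is what makes the task monotonous, and thus a good probe of the vigilance decrement.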
The TOVA (Test of Variables of Attention) doesn't have a specific sub-test called the "digit
vigilance task" although it does assess vigilance as one of its key components.
● Test format: Unlike the Digit Vigilance Task (DVT) which uses numbers, TOVA uses either
simple geometric shapes (visual version) or auditory tones (auditory version).
● Target vs Non-Target: Similar to DVT, you need to respond to specific target stimuli (a
designated shape or a higher-pitched tone) but throughout a series of varying stimuli
presented at different speeds and intervals.
● Sustained Attention: The TOVA test lasts for a set duration (typically around 22
minutes), requiring you to maintain focus and respond correctly throughout.
The TOVA test measures vigilance by analyzing your response patterns to the target stimuli,
particularly focusing on:
● Omissions: These occur when you miss a target stimulus entirely, indicating lapses in
attention.
● Response Time Variability: This refers to how consistent your reaction times are to
target stimuli. High variability suggests difficulty in maintaining focus over time.
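The two measures above can be computed from trial records as sketched below. The data format (tuples of target flag, response flag, reaction time) and the function name are assumptions for illustration; this is not the actual TOVA scoring procedure.

```python
# Hedged sketch of the two vigilance measures described above:
# omission rate (missed targets = lapses in attention) and reaction-time
# variability (inconsistent RTs = difficulty maintaining focus).
from statistics import stdev

def vigilance_scores(trials):
    """Return (omission_rate, rt_variability) over target trials.

    Each trial is (was_target, responded, rt_ms).
    """
    targets = [t for t in trials if t[0]]
    omissions = sum(1 for t in targets if not t[1])
    rts = [t[2] for t in targets if t[1]]
    variability = stdev(rts) if len(rts) > 1 else 0.0
    return omissions / len(targets), variability

trials = [(True, True, 320), (True, False, None), (False, False, None),
          (True, True, 480), (True, True, 400)]
print(vigilance_scores(trials))
```

Standard deviation is one common operationalisation of response-time variability; clinical instruments may use other statistics.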
DIVIDED ATTENTION
MULTITASKING
● Two participants practiced reading short stories and writing dictated words
simultaneously for several months.
● Their reading speed and comprehension were periodically assessed.
● After 6 weeks, participants achieved normal reading speeds even while writing dictation.
● Reading comprehension scores were similar whether participants read alone or while
writing dictation.
● Participants could even categorize the dictated words by meaning without sacrificing
reading performance.
● Attention Alternation (Hirst, Spelke, Reaves, Caharack, and Neisser, 1980): Some
psychologists doubted participants were truly performing both tasks simultaneously and
suggested they might be rapidly switching attention between reading and dictation.
● The authors argued against this by pointing to the unchanged reading speed during
dictation, suggesting minimal time wasted on switching.
● Hirst et al. (1980) provided further evidence against alternation: Participants trained
on different reading materials (stories vs. encyclopedias) showed similar performance
when switching materials, suggesting they weren't alternating based on task difficulty.
● Automaticity of Dictation Task: Another explanation proposed that dictation became
automatic with practice, requiring minimal attention.
- The three necessary conditions for a task to be considered automatic are:
(a) no intention, (b) no conscious awareness, and (c) no interference with
other mental activity.
- This automation explanation is challenged by the fact that participants were
aware of the dictation and comprehended the words.
● Hirst et al. (1980) favored the idea that extensive practice allowed participants to
combine reading and dictation into a single, more efficient process.
● This implies that combining these tasks with a new third task (e.g., shadowing speech)
would require further practice for efficient performance.
● This study highlights the significant role of practice in reducing the attentional demands
of a task.
● While some criticisms exist (Shiffrin, 1988), research like this is transforming our
understanding of how practice influences cognitive tasks (Pashler et al., 2001).
Core Tenet:
● Attention during practice determines what we learn and remember. We learn what we
attend to and forget what we ignore. (Logan et al., 1996)
● Performing certain tasks together can be difficult, like rubbing your stomach and patting
your head.
● Pashler (1993) reported on studies from his and others' laboratories that examined the
issue of doing two things at once in greater depth.
● With longer intervals between S1 and S2, participants performed both tasks efficiently
(presumably finishing one before starting the other).
● As the interval shortened, reaction times to the letter (S2) increased significantly.
The slowed response to the second stimulus, S2, at short intervals between the presentation
of S1 and S2 is called the psychological refractory period, or PRP.
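The PRP pattern above follows from a central-bottleneck account: Task 2's central (decision) stage cannot begin until Task 1's central stage finishes, so RT2 grows as the S1-S2 interval (SOA) shrinks. The sketch below illustrates this; the stage durations and function name are assumptions, not measured values.

```python
# Sketch of the central-bottleneck account of the PRP effect. Each task
# has a perceptual, a central, and a motor stage; only one central stage
# can run at a time, so Task 2 may have to wait.

def rt2(soa, percept1=100, central1=200, percept2=100, central2=150, motor2=100):
    """Predicted Task-2 reaction time (ms), measured from S2 onset."""
    bottleneck_free = percept1 + central1      # Task 1's central stage ends
    t2_input_ready = soa + percept2            # Task 2's percept stage ends
    central2_start = max(bottleneck_free, t2_input_ready)
    return central2_start + central2 + motor2 - soa

for soa in (0, 100, 300, 600):
    print(soa, rt2(soa))
```

At short SOAs RT2 is inflated by waiting for the bottleneck; at long SOAs it flattens out at its baseline, which is exactly the PRP curve described above.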
Pashler considered three possible locations for the bottleneck during dual-tasking (refer to
Figure 4-17 in your textbook for a visual representation):
● Retrieving information from memory can also create a bottleneck, diverting attention
from the second task.
● Pashler and colleagues' further research suggests the interference is caused by a central
bottleneck, not by a conscious decision to delay working on one task. (Ruthruff, Pashler,
& Klassen, 2001).
● Lehle et al. (2009): Found serial processing led to better performance but required more
effort due to task inhibition.
● Lehle and Hübner (2009): Showed parallel processing could outperform serial
processing in certain conditions.
Automatic processing refers to a state in which tasks are performed seemingly effortlessly
and without conscious awareness. It contrasts with attentional (controlled) processing,
which requires deliberate focus.
1. Unintentional: The action occurs without a prior conscious decision to perform it.
2. Without Awareness: We are not consciously aware of the steps involved in the action.
3. No Interference: The automatic process doesn't disrupt our ability to focus on other
mental activities.
Results:
Automatic Processing:
● Searching for targets was effortless regardless of the number of targets or distractors
because they "popped out" due to distinct features.
● Finding four targets was as easy as finding one, demonstrating parallel processing's
ability to handle multiple searches simultaneously.
Controlled Processing:
● Difficulty increased due to the need to actively compare targets with distractors on each
trial (targets could become distractors later).
● Performance depended on the number of targets and distractors, reflecting the
limitations of serial processing and the need for attentional resources.
● With practice, we can improve our ability to perform tasks, as demonstrated in a study
on telegraph message sending and receiving.
● Preattentive (Automatic) Stage: Features like color, shape, and orientation are
registered automatically and in parallel (all features processed simultaneously).
● Attentional Stage: We use attention to "bind" these separate features together to form
a unified object. (Tsal, 1989a)
Experimental Evidence:
Participants were asked to search for a particular object—for example, a pink letter or the letter
T. If the item being searched for differed from the background items in the critical feature (such
as a pink item among green and brown items, or a T among O’s), the target item seemed to pop
out of the display, and the number of background items did not affect participants’ reaction
times.
➔ Participants searched for objects differing in a single feature (e.g., pink letter among
green/brown letters).
➔ Reaction times were not affected by the number of background items because detecting
individual features is automatic.
➔ This suggests parallel processing of features in the preattentive stage.
Another condition required searching for objects with a specific combination of features (e.g.,
pink T among non-pink Ts and pink non-Ts).
● Treisman and Schmidt's (1982) study demonstrated that limited attention can lead to
integration errors.
● Example: Briefly glancing at a red Honda and a blue Cadillac, you might mistakenly
report seeing a "blue Honda Civic" later.
Experimental Demonstration:
● Participants focused on memorizing black digits while colored letters were briefly
displayed.
● Despite limited attention to the letters, they could report some information about them
afterwards.
● Crucially, in 39% of trials, participants reported illusory conjunctions (combining features
incorrectly, e.g., reporting a "red X" when there was a blue X or a red T).
Conclusion:
● Treisman's theory has been influential, inspiring further research and refinements by
various researchers (Briand & Klein, 1989; Quinlan, 2003; Tsal, 1989a, 1989b).
Attentional Capture
● In visual search tasks, some stimuli can "pop out" and demand attention involuntarily.
● This is called attentional capture, described as an involuntary shift of focus caused by
the stimulus itself.
● It's often considered a bottom-up process, driven by stimulus features rather than our
goals.
● The term "capture" implies the stimulus automatically attracts our attention.
Example:
● Theeuwes et al. (1998) study: Participants searched for a specific letter among circles
that changed color.
● An irrelevant red circle appearing unexpectedly (shown in Figure 4-14 of your textbook)
often delayed their response, even though it wasn't part of the task.
Overcoming Capture:
● Theeuwes et al. (2000) showed that attentional capture can be overcome with
preparation.
● When participants knew where to focus their attention beforehand, the irrelevant red
circle didn't disrupt their response times.