Unit 1 - Attention


INTRODUCTION TO ATTENTION

What is attention?

The concept of attention covers too many distinct phenomena to be captured by a single definition. The term is used to refer both to the explanandum (the set of phenomena in need of explanation) and to the explanans (the set of processes that do the explaining).

William James (1890) defined attention as:


“Attention is . . . the taking into possession of the mind, in clear and vivid form, of one out of
what seem several simultaneously possible objects or trains of thought. Focalisation,
concentration, of consciousness are of its essence.”

Active vs Passive mode of attention (William James)


➔ Attention is active when controlled top-down by the individual’s goals or expectations.
➔ In contrast, attention is passive when controlled in a bottom-up way by external stimuli
(e.g., a loud noise).

Focused (Selective) vs Divided Attention


Focused Attention
➔ Focused attention (or selective attention) is studied by presenting individuals with two or
more stimulus inputs at the same time and instructing them to respond to only one.
➔ Research on focused or selective attention tells us how effectively we can select certain
inputs and avoid being distracted by non-task inputs.
➔ It also allows us to study the selection process and the fate of unattended stimuli.

Divided Attention
➔ Divided attention is also studied by presenting at least two stimulus inputs at the same
time.
➔ However, individuals are instructed they must attend (and respond) to all stimulus inputs.
Divided attention is also known as multi-tasking.
➔ Studies of divided attention provide useful information about our processing limitations
and the capacity of our attentional mechanisms.

External vs Internal Attention


➔ External attention is “the selection and modulation of sensory information” (Chun et al.,
2011).
➔ In contrast, internal attention is “the selection, modulation, and maintenance of
internally generated information, such as task rules, responses, long-term memory, or
working memory” (Chun et al., 2011, p. 73).
➔ The connection to Baddeley’s working memory model is especially important (e.g.,
Baddeley 2012).
➔ The central executive component of working memory is involved in attentional control
and is crucially involved in internal and external attention.
FOCUSED (SELECTIVE) AUDITORY ATTENTION

Selective attention allows us to focus on specific information while ignoring distractions, which is crucial for navigating our complex world. Studies show that we miss much of the irrelevant input during selective attention tasks (such as following one conversation at a party, the cocktail party phenomenon).

Benefits of Selective Attention:

● Focus: we can concentrate on important things without being overwhelmed by everything around us.
● Filtering: it prevents information overload from our senses (sights, sounds, smells, etc.).
● Effective action: by focusing, we can respond appropriately to specific situations.

In general, people can process only one message at a time (Cowan, 2005). However, people are more likely to process the unattended message when:
• (1) both messages are presented slowly,
• (2) the main task is not challenging, and
• (3) the meaning of the unattended message is immediately relevant.

The Cocktail Party Problem (CPP):

This problem, first explored by Colin Cherry in 1953, describes the challenge of focusing on one
conversation amidst multiple conversations or background noise.

A listener, when attending to one voice among many, may face the following two challenges
(McDermott, 2009):

1. Sound Segregation: Listeners must separate and identify which sounds belong together.
This is complex because different sound sources often overlap.
2. Directing attention: once segregation is achieved, listeners must direct their attention to the sound source of interest while ignoring others.

Auditory segmentation vs visual segmentation (McDermott, 2009): Auditory segmentation is often harder than visual segmentation because signals from different sound sources overlap in the cochlea, whereas visual objects typically occupy different regions of the retina.

How much processing does unattended auditory input receive while we focus on one source? Cherry's dichotic listening task addressed this question.

➔ Cherry used a dichotic listening task to study the cocktail party phenomenon.


➔ Here, different auditory messages were presented to each ear, and participants focused
on one.
➔ Listeners engaged in shadowing (repeating the attended message aloud as it was
presented) to ensure their attention was directed to that message.
➔ However, the shadowing task has two potential disadvantages:
- listeners do not normally engage in shadowing and so the task is artificial.
- It increases listeners’ processing demands.
➔ Findings of the dichotic listening task
- Listeners use physical differences (e.g., speaker's sex, voice intensity) to separate
auditory inputs.
- When physical differences are eliminated, such as presenting both messages in
the same voice to both ears, it's challenging to separate the messages based on
meaning.
- Minimal information is extracted from the unattended message, as found by
Cherry (1953).
- Listeners rarely notice changes in content but easily detect physical changes like
a pure tone.
- Moray (1959) supported this finding by noting that listeners remembered very few
words from unattended messages

Early vs. Late selection explaining CPP


Our brains have a limited capacity to process information at once, like a bottleneck on a road. This bottleneck creates the need for selective attention: focusing on important information while ignoring distractions. Early theories differ over where this bottleneck is located (early vs. late selection).

Broadbent’s Filter Theory (1958, early selection):


➔ The bottleneck is at the beginning of processing.
➔ A filter selects information based on physical features (pitch, location) of the message.
➔ Only one message gets through at a time, the rest is briefly stored and might be lost.
➔ Thus, Broadbent argued there is early selection.

Treisman (Flexible Selection):

➔ The bottleneck is more flexible.


➔ Processing starts with physical cues but can also consider meaning and context.
➔ Unattended messages get some processing, but less than attended ones (like
recognizing a familiar word).
➔ Later processes are omitted or attenuated if there is insufficient processing capacity to
permit full stimulus analysis.
➔ Top-down processes (expectations) influence what gets selected.
➔ Listeners performing the shadowing task sometimes say a word from the unattended
input. Such breakthroughs mostly occur when the word on the unattended channel is
highly probable in the context of the attended message

Deutsch and Deutsch (Late Selection):
➔ All information gets fully processed.
➔ The bottleneck is at the very end, where the brain decides what's most important.
➔ The most relevant information is used to respond, regardless of when it was processed.

Examining the Evidence:

➔ Some processing of unattended messages happens, especially for personally relevant things.
➔ Attended messages are processed more thoroughly (faster brain activity for attended targets).
➔ Brain activity suggests there might be active suppression of distractions.

Findings on Unattended Input:

➔ Broadbent's Prediction: Broadbent's theory predicts minimal processing of unattended auditory messages.
➔ Treisman's Perspective: Treisman's approach suggests flexibility in processing
unattended messages. Treisman and Riley (1969) asked listeners to shadow one of two
auditory messages. They stopped shadowing and tapped when they detected a target in
either message. Many more target words were detected on the shadowed message.
➔ Deutsch and Deutsch's View: Deutsch and Deutsch's theory implies thorough processing
of unattended messages.
➔ Unattended information may get processed: Aydelott et al. (2015) asked listeners to
perform a task on attended target words. When unattended words related in meaning
were presented shortly before the target words themselves, performance on the target
words was enhanced when unattended words were presented as loudly as attended
ones. Thus, the meaning of unattended words was processed.
➔ Significance for the Listener: There is often more processing of unattended words that
have a special significance for the listener.
- For example, Li et al. (2011) obtained evidence that unattended weight-related
words (e.g., fat; chunky) were processed more thoroughly by women dissatisfied
with their weight.
- Conway et al. (2001) found listeners often detected their own name in the unattended message. This was especially the case if they had low working memory capacity, indicative of poor attentional control.
➔ ERP Measures: Coch et al. (2005) used ERPs to measure processing activity. ERPs 100 ms
after target presentation were greater when the target was presented on the attended
rather than the unattended message. This suggests there was more processing of the
attended than unattended targets.
➔ Brain Activation: Horton et al. (2013) found greater brain activation associated with the
attended message, suggesting enhanced processing or suppression of unattended
stimuli.
➔ Critique of Classic Theories in recognizing the importance of unattended stimuli:
- Classic theories, including those of Broadbent, Treisman, and Deutsch and
Deutsch, tend to de-emphasize the importance of suppression or inhibition of
unattended messages, as shown by Horton et al. (2013).
- Schwartz and David (2018) reported suppression of neuronal responses in the
primary auditory cortex to distractor sounds, indicating a need for reevaluation
of classic theories.

➔ Research Findings on Cocktail Party Problem:


◆ Humans excel at separating and understanding one voice from multiple
speakers, which is termed the cocktail party problem.
◆ Automatic speech recognition systems are notably inferior to human speech
recognition in this regard (Spille & Meyer, 2014).
◆ Mesgarani and Chang (2012) recorded activity within the auditory cortex of
listeners hearing two messages, revealing enhanced processing of the attended
speaker's features.
➔ The responses within the auditory cortex revealed “The salient spectral
[based on sound frequencies] and temporal features of the attended
speaker, as if subjects were listening to the speaker alone” (Mesgarani &
Chang, 2012, p. 233).
➔ Listeners found it easy to distinguish between the two messages in the
study by Mesgarani and Chang (2012) because they differed in physical
characteristics (i.e., male vs female voice)
◆ Olguin et al. (2018) found comparable comprehension of attended messages
regardless of language, supporting flexible accounts of selective attention.
➔ Native English speakers were presented with two messages in different
female voices.
➔ The attended message was consistently in English.
➔ The unattended message varied, being either in English or an unknown
language.
➔ Findings:
◆ Comprehension of the attended message was similar in both
conditions.
◆ However, there was stronger neural encoding of both messages
when the unattended message was in English.
◆ This suggests that the brain processed both messages more
robustly when the unattended message was in a known language.
◆ Puvvada and Simon (2017) presented participants with three speech streams simultaneously and assessed brain activity as listeners attended to only one of them.
➔ Early processing stage: the auditory cortex maintained an acoustic representation of the entire auditory scene, with no significant preference between the attended and ignored sources.
➔ Later processing stage: higher-order auditory cortical areas represented the attended speech stream separately, and with significantly higher fidelity (accuracy) than the unattended streams.
➔ This enhanced representation of the attended stream in higher-order auditory areas resulted from top-down processes, such as attention.

How is the cocktail party problem solved?

Top-down processes, including attention, play a crucial role in this segregation (Robinson &
McAlpine, 2009).

➔ Role of Top-Down Processes:


◆ Listeners benefit from prior knowledge and expectations, aiding in identifying
specific speakers amidst background noise (McDermott, 2009).
◆ Woods and McDermott (2018) highlighted schema learning and temporal
coherence in improving listening performance.
➔ Brain Activation Patterns:
◆ Evans et al. (2016) found increased activation in attentional and control brain
areas when attended speech was presented with competing speech, emphasizing
the role of top-down processes.
➔ Integration of Visual Information:
◆ Golumbic et al. (2013) suggested that visual information can enhance processing
of attended speech, particularly in noisy environments.

THEORIES OF SELECTIVE ATTENTION

BROADBENT’S FILTER THEORY OF ATTENTION (1958)

➔ Broadbent (1958) proposed the filter theory of attention, suggesting that individuals
have limits on the amount of information they can attend to simultaneously.
➔ The theory posits the existence of an attentional filter that selectively allows certain
information to pass through based on physical characteristics, such as location or pitch,
while blocking the rest.
➔ Only information that passes through the filter is processed for meaning, explaining why
unattended messages are often poorly recalled.
➔ The filter operates early in processing, typically before the meaning of the message is
identified.

Challenges to Filter Theory:

➔ Moray (1959) discovered the "cocktail party effect," where participants recall hearing
their own name in the unattended message, challenging the notion that all unattended
messages are filtered out.
➔ Treisman (1960) found evidence suggesting that participants process unattended
messages based on meaning, contrary to the predictions of filter theory.
➔ Wood and Cowan (1995) investigated the effect of backward speech in unattended
messages on participants' attention, concluding that attentional shifts to unattended
messages occur unintentionally and without awareness.

Study by Wood and Cowan (1995) on Unattended Information Processing:

➔ Wood and Cowan (1995) conducted a dichotic listening task with 168 undergraduate
participants to investigate whether information from the unattended channel can be
recognized.

Experimental Setup:

➔ Participants shadowed an excerpt from "The Grapes of Wrath" in the attended channel
(right ear) and listened to "2001: A Space Odyssey" in the unattended channel (left ear).
➔ Five minutes into the task, the speech in the unattended channel switched to backward
speech for 30 seconds.
➔ Previous experiments showed that roughly half of the participants noticed the switch,
while the other half did not.
➔ Two experimental groups differed in how long the "normal" speech was presented after
the backward speech: two and a half minutes for one group and one and a half minutes
for the other. A control group heard no backward speech.

Findings:

➔ Participants who noticed the backward speech showed a disruption in shadowing the
attended message, indicated by a peak in shadowing errors during the 30 seconds of the
backward-speech presentation.
➔ The effect was more pronounced for participants who reported noticing the backward
speech, while control participants and those who did not notice the backward speech
showed no increase in shadowing errors.
➔ Wood and Cowan analyzed shadowing errors in 5-second intervals before, during, and
after the backward-speech segment. They found that error rates peaked 10 to 20
seconds after the backward speech began.
➔ The attentional shift to the unattended message was unintentional and occurred without
awareness, as indicated by the uniform timing of error peaks among participants who
noticed the backward speech.
➔ Role of Working Memory:
➔ Conway, Cowan, and Bunting (2001) demonstrated that individuals with lower
working-memory spans are more likely to detect their names in unattended messages,
suggesting a link between working memory and attentional control.

TREISMAN’S ATTENUATION THEORY

Treisman proposed a modified version of filter theory called attenuation theory.

In attenuation theory, unattended messages are not completely blocked but rather their
"volume" is turned down, making them harder to process.

Three Levels of Analysis:

According to Treisman, incoming messages undergo three types of analysis: physical, linguistic,
and semantic.

1. Physical analysis involves characteristics like pitch and loudness.


2. Linguistic analysis breaks down the message into syllables and words.
3. Semantic analysis processes the meaning of the message.

Permanently Lowered Thresholds:

➔ Words with subjective importance or that signal danger have permanently lowered thresholds, making them easily recognizable even at low volumes.
➔ Participants in Moray's experiments likely heard their names due to these lowered thresholds.

Contextual Priming:

➔ Context can temporarily lower the threshold of certain words.


➔ Hearing preceding words in a sentence primes listeners to detect and recognize
subsequent words, even if they occur in the unattended message.

Disambiguation and Expectation:

➔ MacKay (1973) demonstrated that words in the unattended message can help
disambiguate sentences in the attended message.
➔ Expected information is easier to process, leading to enhanced understanding of
ambiguous sentences.

Interaction with Filter:

➔ Attenuation theory contrasts with filter theory in allowing multiple analyses of all
messages.
➔ While filter theory discards unattended messages after processing physical
characteristics, attenuation theory weakens unattended messages but retains their
information.
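The attenuation mechanism described above can be illustrated with a small sketch. This is not from the source: the signal values, attenuation factor, thresholds, and word lists are all hypothetical, chosen only to show how attenuating (rather than blocking) the unattended channel, combined with permanently lowered thresholds, lets a listener's own name "break through" while ordinary unattended words do not.

```python
# Illustrative toy model of Treisman-style attenuation (hypothetical numbers).
# Unattended input is weakened, not blocked; a word reaches awareness when
# its (possibly attenuated) signal strength exceeds its recognition threshold.

DEFAULT_THRESHOLD = 0.7   # ordinary words need a strong signal
LOWERED_THRESHOLD = 0.2   # own name, danger words: permanently lowered
ATTENUATION = 0.5         # the unattended channel's "volume" is turned down

def recognized(word, signal_strength, attended, important_words):
    """Return True if the word reaches conscious awareness."""
    if not attended:
        signal_strength *= ATTENUATION   # attenuated, not filtered out
    threshold = LOWERED_THRESHOLD if word in important_words else DEFAULT_THRESHOLD
    return signal_strength >= threshold

important = {"fire", "anna"}  # hypothetical listener named Anna

# Attended ordinary word: 0.9 >= 0.7, recognized.
print(recognized("dinner", 0.9, attended=True, important_words=important))   # True
# Unattended ordinary word: 0.9 * 0.5 = 0.45 < 0.7, not recognized.
print(recognized("dinner", 0.9, attended=False, important_words=important))  # False
# Unattended own name: 0.45 >= 0.2, breaks through (cf. Moray, 1959).
print(recognized("anna", 0.9, attended=False, important_words=important))    # True
```

Note how this differs from a strict Broadbent filter: setting ATTENUATION to 0 would reproduce filter theory, in which nothing unattended could ever be recognized.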

DEUTSCH & DEUTSCH’S LATE SELECTION THEORY (1963), LATER DEVELOPED BY NORMAN (1968)

➔ Introduction to Late-Selection Theory:


◆ Deutsch and Deutsch proposed the late-selection theory, later elaborated by
Norman (1968).
◆ This theory suggests that all messages are routinely processed for at least some
aspects of meaning, with selection of the response to a message occurring late in
processing.
➔ Extent of Processing:
◆ According to late-selection theory, familiar objects or stimuli are recognized
unselectively and without capacity limitations.
◆ Recognition of familiar objects proceeds automatically, without voluntary control.
➔ Bottleneck in Processing:
◆ Late-selection theory describes a bottleneck in processing, but it is located later
in the process compared to filter theory.
◆ All material is processed up to this bottleneck, with more "important" information
being elaborated more fully and retained.
➔ Factors Determining Importance:
◆ A message's importance depends on various factors, including context, personal
significance (such as one's name), and the observer's level of alertness.
◆ At low alertness levels, only very important messages capture attention, while at
higher levels, less important messages can be processed.
➔ Function of the Attentional System:
◆ The attentional system functions to determine the most important incoming
message, to which the observer will respond.
➔ Evaluation of Evidence:
◆ Different theorists hold different positions on the evidence for late-selection
theory.
◆ While some evidence suggests that unattended messages receive some
processing for meaning, it's unlikely that they are processed to the same degree
as attended messages.
◆ Results thought to demonstrate late selection could be explained by attentional
lapses or particularly salient stimuli.

Early vs Late Selection Models: Comparison

➔ Selection stage:
◆ Early selection: early, after sensory memory.
◆ Late selection: late, after semantic analysis.
➔ Filtering mechanism:
◆ Early selection: filter based on physical characteristics (loudness, pitch).
◆ Late selection: no filter; all information is processed for meaning.
➔ Unattended information:
◆ Early selection: limited or no processing.
◆ Late selection: processed for meaning, but not fully analyzed.
➔ Example:
◆ Early selection: filtering out background voices in a conversation.
◆ Late selection: a word presented to the unattended ear influences the meaning of the attended message.
➔ Supporters:
◆ Early selection: Broadbent.
◆ Late selection: Deutsch & Deutsch.
➔ Criticisms:
◆ Early selection: doesn't explain processing of unattended information (e.g., one's own name).
◆ Late selection: unrealistic to process everything completely.

DANIEL KAHNEMAN’S MODEL OF ATTENTION (1973)

➔ Kahneman viewed attention as a set of cognitive processes for categorizing and recognizing stimuli.
➔ The more complex the stimulus, the harder the processing and the more resources are engaged. However, people have some control over where they direct their mental resources:
➔ They can often choose what to focus on and what to devote their mental effort to.
Analogy of investor depositing money

➔ Essentially, this model depicts the allocation of mental resources to various cognitive
tasks.
➔ An analogy could be made to an investor depositing money in one or more of several
different bank accounts—here, the individual “deposits” mental capacity to one or more
of several different tasks.
➔ Many factors influence this allocation of capacity, which itself depends on the extent and
type of mental resources available.
➔ The availability of mental resources, in turn, is affected by the overall level of arousal, or
state of alertness.
➔ Kahneman (1973) argued that one effect of being aroused is that more cognitive
resources are available to devote to various tasks.
➔ Paradoxically, however, the level of arousal also depends on a task’s difficulty. This
means we are less aroused while performing easy tasks, such as adding 2 and 2, than we
are when performing more difficult tasks, such as multiplying a Social Security number
by pi.
➔ We therefore bring fewer cognitive resources to easy tasks, which, fortunately, require
fewer resources to complete.
➔ Arousal thus affects our capacity (the total of our mental resources) for tasks. But the
model must still specify how we allocate our resources to all the cognitive tasks that
confront us.
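The investor analogy above can be sketched in a few lines of code. This is an illustrative toy model, not Kahneman's formal theory: the linear arousal rule, the capacity numbers, and the task names are assumptions made purely for the example. Capacity grows with arousal, and tasks are "funded" in priority order until capacity runs out.

```python
# Illustrative toy model of Kahneman's (1973) capacity allocation
# (all numbers and the linear arousal rule are hypothetical).

def available_capacity(arousal):
    """More arousal -> more mental resources, up to a ceiling. arousal in [0, 1]."""
    return min(100, 40 + round(60 * arousal))

def allocate(tasks, arousal):
    """tasks: list of (name, demand, priority) tuples.
    Funds higher-priority tasks first, like deposits into bank accounts,
    until total capacity is exhausted. Returns name -> resources allocated."""
    capacity = available_capacity(arousal)
    allocation = {}
    for name, demand, _priority in sorted(tasks, key=lambda t: -t[2]):
        given = min(demand, capacity)
        allocation[name] = given
        capacity -= given
    return allocation

tasks = [("drive", 60, 2), ("conversation", 50, 1)]

# Fully alert: both tasks are funded (though the low-priority one only partly).
print(allocate(tasks, arousal=1.0))  # {'drive': 60, 'conversation': 40}
# Drowsy: even the high-priority task is underfunded; the other gets nothing.
print(allocate(tasks, arousal=0.0))  # {'drive': 40, 'conversation': 0}
```

The second call illustrates the text's point about alertness: when arousal is low, there is simply less total capacity to distribute, so multitasking degrades first.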

Limitations that affect our ability to perform tasks:


➔ Arousal & Alertness
◆ Arousal and alertness influence multitasking ability. Alertness varies based on
time of day, hours of sleep, etc. Concentration fluctuates; more tasks can be
attended to with higher concentration, but focusing becomes difficult when tired
or drowsy.
➔ Resource-limited tasks:
◆ These tasks require mental resources or capacity, like taking a midterm. The
harder you try (the more resources you allocate), the better you perform.
➔ Data-limited tasks:
◆ These tasks depend entirely on the quality of the information you receive, not on
your effort. For example, trying to see a dim light in a bright room. No matter
how hard you concentrate, it might be impossible to see it clearly.

SCHEMA THEORY
Neisser's Schema Theory of Attention: Picking Apples, Not Filtering Them

A Different Approach:

Psychologists traditionally viewed attention as a filter, letting some information in and blocking
the rest. Ulric Neisser (1976) proposed a different theory called schema theory.
The Apple Picking Analogy:

Imagine picking apples from a tree. You focus on the ripe ones you want and leave the others
on the tree. Neisser says attention works similarly. We don't filter out unwanted information, we
simply don't pick it up (like the unripe apples) in the first place.

The Experiment:

Neisser and Becklen (1975) conducted a study where participants watched two superimposed
films: a hand-slapping game and a basketball game. They were asked to focus on one film and
press a button when a specific event happened (e.g., hand slap).

Findings:

➔ People could easily follow the chosen film, ignoring the other.
➔ They even missed unexpected events in the unattended film (e.g., someone throwing a
ball in the hand game).

Neisser's Explanation:

These results suggest skilled perception, not filtering, explains attention. We focus on the
relevant information (picking the ripe apples) and our perception guides what we see next.
There's no need for filters; our natural skills handle what's important in the chosen scene.

LOAD-INDUCED BLINDNESS & PERCEPTUAL LOAD THEORY (MACDONALD & LAVIE, 2008)

Load-Induced Blindness:
Load-induced blindness refers to the phenomenon where the cognitive load of a task leads to a
decreased awareness or perception of other stimuli, even if they are salient or relevant.
➔ Mechanism:
◆ High cognitive load from a primary task can reduce the capacity to process
peripheral stimuli or secondary tasks.
◆ Attentional resources are fully engaged in the primary task, leaving fewer
resources available for processing other stimuli.
➔ Examples:
◆ In driving, focusing intensely on navigating complex traffic conditions may lead
to reduced awareness of pedestrians or road signs.
◆ During a conversation in a noisy environment, concentrating on understanding
speech may result in overlooking visual cues or other auditory stimuli.

The debate about early vs. late selection in attention (when information is filtered) might
not have a simple answer. A proposed solution is the perceptual load model.

➔ Perceptual Load Model:


◆ Perception has limited capacity (early selection view).
◆ Perception proceeds automatically on all stimuli until capacity is consumed (late
selection view).
◆ Level of perceptual load dictates whether early or late selection occurs.
➔ Predictions:
◆ High perceptual load leads to early selection; task-irrelevant stimuli are not
perceived.
◆ Low perceptual load leads to late selection; spare attentional capacity results in
perception of irrelevant stimuli.
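The two predictions above can be captured in a minimal sketch. The capacity value and load numbers are hypothetical; the point is only that, in load theory, perception proceeds automatically until capacity is exhausted, so distractor processing equals whatever capacity the task leaves over.

```python
# Illustrative sketch of the perceptual load model (hypothetical units).
# Perception consumes capacity automatically; spare capacity "spills over"
# onto task-irrelevant stimuli.

CAPACITY = 10

def distractor_processing(task_load):
    """Spare capacity after the primary task determines distractor processing."""
    return max(0, CAPACITY - task_load)

# Low-load task (e.g., find an X among Os): spare capacity remains, so the
# distractor is perceived -- late selection.
print(distractor_processing(task_load=3))   # 7

# High-load task (e.g., find an X among similar angular letters): capacity is
# exhausted, so the distractor goes unperceived -- early selection.
print(distractor_processing(task_load=10))  # 0
```

On this view, "early vs late selection" is not a fixed property of the system but a consequence of how much of the perceptual capacity the current task consumes.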

Experimentation by Forster & Lavie (2008)


➔ Studies show that with more complex tasks requiring high perceptual load (e.g.,
identifying a specific letter among many similar ones), distractors (like a cartoon image)
have less influence.
➔ Brain activity studies support this - less brain activity is associated with distractors
during high-load tasks.

Why Does Low Load Lead to More Distraction?

➔ Broad attentional focus: When a task is easy (low load), people tend to pay attention to
a wider range of things, making them more susceptible to distractions.
➔ According to the theory, brain activation associated with distractors should be reduced when individuals perform a task involving high perceptual load. This finding has been obtained with visual tasks and distractors (e.g., Schwartz et al., 2005) and also with auditory tasks and distractors.

Cognitive Load and Distraction:

Why is a low perceptual load associated with high distractibility?

➔ Biggs and Gibson (2018) argued this happens because observers generally adopt a
broad attentional focus when perceptual load is low.
➔ They tested this hypothesis using three low-load conditions in which participants decided whether a target X or N was presented; a distractor letter was sometimes also presented.
➔ They argued that observers would adopt the smallest attentional focus in the circle condition and the largest attentional focus in the solo condition.
➔ As predicted, distractor interference was greatest in the solo condition and least in the circle condition. Thus, distraction effects depend strongly on the size of the attentional focus as well as on perceptual load.

Sörqvist et al. (2016) argued high cognitive load can reduce rather than increase
distraction.

➔ They pointed out that cognitive load is typically associated with high levels of
concentration and our everyday experience indicates high concentration generally
reduces distractibility.
➔ As predicted, they found neural activation associated with auditory distractors was
reduced when cognitive load on a visual task was high rather than low.

Why the Variability?

➔ Difficulty in distinguishing task stimuli from distractors:
◆ Easy to distinguish (different modalities, e.g., vision vs. sound): high cognitive load reduces distraction.
◆ Hard to distinguish (similar or same modality): high cognitive load increases distraction.

Perceptual and Cognitive Load Interaction:

➔ Perceptual load's effect on attention depends on available cognitive resources.
➔ If cognitive load is high (limited resources), perceptual load's influence is reduced (it is not fully "automatic").

Perceptual Load Theory: Strengths and Weaknesses

Strengths:

➔ Predicting Distraction:
◆ The theory effectively predicts when distractions will be strong or weak based on
perceptual load (complexity of the task).
◆ High perceptual load reduces distraction.
◆ Applied research supports this - drivers with high perceptual load (complex road
situation) are more likely to miss hazards.

Weaknesses:

➔ Terminology:
◆ Terms like "perceptual load" and "cognitive load" are unclear, making it difficult
to precisely test the theory.
➔ Interaction, not Independence:
◆ The theory assumes perceptual and cognitive load have separate effects, but
research suggests they interact.
◆ Perceptual load's influence depends on available cognitive resources.
➔ Confounding Factors:
◆ Perceptual load and attentional focus can be intertwined, making it hard to
separate their effects.
➔ Cognitive Load Inconsistencies:
◆ The theory suggests high cognitive load increases distraction, but research shows
it can also reduce distraction if the task and distractor are easily distinguished.
➔ Omissions:
◆ The theory doesn't consider factors like:
● Salience (how noticeable) of distractions.
● Spatial distance between distractions and the task.

INATTENTIONAL BLINDNESS
➔ Inattentional Blindness and Change Blindness:
◆ Change blindness: inability to notice significant changes in a scene when
disrupted.
◆ Linked to inattentional blindness: failure to perceive a stimulus despite being in
plain sight without attention.
➔ Everyday Example of Inattentional Blindness:
◆ Experienced pilot focusing on airspeed indicator but failing to notice another
airplane blocking the runway.
➔ Demonstration of Inattentional Blindness:
◆ Neisser and Becklen experiment: participants fail to see unexpected events.
◆ Simons' study using video technology partially replicates Neisser and Becklen's
findings.
➔ Experimental Conditions and Unexpected Events:
◆ Four conditions: Easy (counting basketball passes), Hard (tracking bounce and
aerial passes).
◆ Unexpected events: Umbrella-woman walks across the scene, or a person in a
gorilla costume.
➔ Participants' Responses:
◆ After viewing, participants wrote down their counts and described anything
unusual.
◆ 46% of participants failed to notice the unexpected events.
◆ Only 44% noticed the gorilla, more so among those watching the black team.

The Experiment: Simons and Chabris (1999)

● Participants watched a video of people passing a basketball and were asked to count
the number of passes made by a specific team (white shirts).
● Unbeknownst to them, a person in a gorilla suit walked through the scene, performing
clear actions.
● Surprisingly, only 42% of observers noticed the gorilla!

Why Did People Miss the Gorilla?

● Selective Attention: People were focused on counting passes (white shirts) and filtered
out other information (the gorilla) due to limited attentional resources.
● Task-Relevance: In a variation where participants counted passes by people in black
(similar color to the gorilla), detection rates were higher (83%). This suggests a bias
towards noticing task-relevant stimuli.

Beyond Selective Attention:

● Eye Fixations: Rosenholtz et al. (2016) found that participants who fixated closer to the
gorilla (while counting black shirts) were more likely to detect it. This supports the role of
selective attention.
● Peripheral Vision: However, Rosenholtz et al. also found some observers counting black
shirts (with fixations similar to those counting white shirts) still missed the gorilla. This
suggests limitations in peripheral vision might also play a role.
The presence of inattentional blindness can lead us to underestimate the amount of processing
of the undetected stimulus. Schnuerch et al. (2016) found categorising attended stimuli was
slower when the meaning of an undetected stimulus conflicted with that of the attended
stimulus.

What causes inattentional blindness

Selective Attention and Stimulus Similarity:

● We previously saw how selective attention, focusing on task-relevant stimuli (e.g., counting white shirts), can lead to missing unexpected objects (gorilla).
● Simons and Chabris (1999) showed that similarity in features (color) between the task
and unexpected object can influence detection (higher rates for black gorilla with black
shirts).

Attentional Sets and Semantic Categories:

● Most (2013) suggests attentional sets based on semantic categories (like letters or
numbers) also play a role.
● People were less likely to miss an unexpected letter (E) if they were tracking other letters
compared to tracking numbers, even if the stimuli looked identical (except mirrored).

Top-Down Attentional Processes:

● Légal et al. (2017) explored how demanding tasks that require more top-down processing
(counting specific types of passes) increase inattentional blindness (missing the gorilla).
● Conversely, subliminally presenting detection-related words (identify, notice) before the
video boosted gorilla detection, highlighting the influence of top-down attention.

Expectations and Inattentional Blindness:


● Persuh and Melara (2016) provided compelling evidence for the role of expectations.
● Observers missed a prominent image of Barack Obama because they weren't expecting
a face and were focused on a color discrimination task.
● This suggests inattentional blindness can occur even with clear, uncluttered stimuli
(unlike change blindness, which can involve visual crowding).

Identified Factors Influencing Inattentional Blindness:

● Similarity: Inattentional blindness is more likely when the unexpected object is dissimilar (in features or category) to the task-relevant stimuli (e.g., missing a gorilla while counting basketball passes).
● Attentional Demands: Demanding tasks that require more focused attention can
increase inattentional blindness.
● Expectations: Prior expectations about what to see can influence what we actually
notice (e.g., missing a face because we're focused on a color discrimination task).

Limitations of research on inattentional blindness:

1. Perception vs. Memory: It's unclear if inattentional blindness reflects a failure to perceive the unexpected object or simply a rapid forgetting of what was perceived.
○ Ward and Scholl (2015) found evidence for perceptual failure, as immediate
reporting of unexpected stimuli did not eliminate inattentional blindness.
2. Non-conscious Processing: Studies suggest some processing of undetected stimuli
might still occur even if observers miss them entirely. The extent of this non-conscious
processing needs further investigation (Pitts, 2018).
3. Interaction of Factors: Research often focuses on single factors, while the various
influences on inattentional blindness likely interact in complex ways. The nature of these
interactions remains largely unexplored.

CHANGE BLINDNESS

➔ Change blindness, which is “the failure to detect changes in visual scenes” (Ball et al.,
2015, p. 2253)
➔ Studies by Levin et al. (2002) highlight our tendency to overestimate our ability to detect
changes. Participants significantly overestimated how likely they were to notice changes
in videos (plates changing color or a disappearing scarf) compared to the actual
detection rate observed in the experiment (0%).

In real-world situations, we often detect changes through accompanying motion cues (e.g.,
someone removing their scarf).

Laboratory Techniques: Researchers use various methods to prevent motion detection and
induce change blindness:

Saccades: Presenting the change during a rapid eye movement (saccade) disrupts the visual
processing stream.
Flicker Paradigm: Briefly flashing the original and changed images with a short blank interval
in between can also lead to change blindness.

Change Blindness vs Inattentional Blindness

Change Blindness:

● Definition: Failure to detect changes in a visual scene, even when focusing on the area
where the change occurs.
● Instructions: Even with instructions to look for changes, detection can be difficult.
● Memory: Requires comparing the pre-change scene with the post-change scene in
memory.
● Attention: Can occur even when attention is focused on the general area of the change.
● Processing: More complex, involving encoding pre- and post-change stimuli, comparing
them, and consciously recognizing the difference.

Inattentional Blindness:

● Definition: Failure to notice an unexpected object or event due to attention being directed elsewhere.
● Instructions: Target detection becomes easy once observers are told to look for it.
● Memory: Doesn't require memory comparison. The unexpected object is simply missed in
the initial observation.
● Attention: Happens when attention is focused on a different task that captures most of
the attentional resources.
● Processing: Less complex, as only the initial scene needs to be processed, not compared
to a memory representation.

What causes change blindness

Theories on Change Blindness:

➔ Attentional Approach (Rensink et al., 1997):
◆ Focuses on the role of attention.
◆ Change detection requires focused attention on the specific object or location
where the change occurs.
◆ Our attention can only be directed to a limited area of visual space at a time.
◆ Changes in areas we're not actively attending to are more likely to be missed.
➔ Peripheral Vision Approach (Rosenholtz, 2017a,b; Sharan et al., 2016):
◆ Emphasizes the importance of peripheral vision (outer edges of our visual field).
◆ Argues that visual processing happens simultaneously across our entire visual
field, including the periphery.
◆ Limitations in peripheral vision might be a key factor in standard change
blindness demonstrations.
◆ Changes in peripheral vision might be detected but not processed in detail,
leading to missed awareness.

Why Two Theories?

Neither theory fully explains change blindness on its own. The attentional approach sheds light
on how limited focus can lead to missed changes. The peripheral vision approach highlights the
potential role of limitations in processing information from the outer edges of our vision.

Is change blindness a defect?

● Stable World, Efficient Processing: The visual world is generally stable for short periods.
Prioritizing stability allows for efficient processing and a continuous perception of our
environment.
● Perceptual Biases: Studies by Fischer and Whitney (2014) and Manassi et al. (2018) show
how our visual system prioritizes stability. We tend to perceive visual elements
(orientation, location) as being more consistent with what we saw previously, even if
there's a change.
● Serial Dependence: This phenomenon explains how past visual experiences influence
our perception of present stimuli. It involves multiple stages of processing and
potentially memory, leading to a bias towards stability.

Our visual system prioritizes stability, which can sometimes lead to missing changes. This might
seem like a flaw, but it allows for a more consistent and efficient overall perception of the world.

Change blindness might be less pronounced in situations where detecting changes is crucial
(e.g., driving).
Individual differences might influence susceptibility to change blindness.

FOCUSED (SELECTIVE) VISUAL ATTENTION

There has been much more research on visual attention than on auditory attention. The main reason is that vision is our most important sense modality, with more of the cortex devoted to it than to any other sense.

Focused attention as a spotlight

Look around you and look at any interesting objects. Was your visual attention like a spotlight? A spotlight illuminates a fairly small area, little can be seen outside its beam, and it can be redirected to focus on any given object. Posner (1980) argued the same is true of visual attention.

● Limited Scope: Just like a spotlight illuminates a limited area, our focused attention can
only process a small amount of information in detail at a time.
● Peripheral Awareness: While the spotlight focuses on a specific area, we still have some
awareness of things outside of it. Similarly, even when we focus on something visually,
we have some basic awareness of things in our peripheral vision.
● Shifting Focus: A spotlight can be redirected to illuminate different areas. In the same
way, we can shift our focused attention to different parts of a scene or to different tasks.

Zoom Lens Model of Visual Attention:


➔ Proposed by psychologists like Eriksen & St. James (1986) as an alternative to the
spotlight analogy.
➔ Visual attention is compared to a zoom lens, allowing flexibility to adjust the area of
focal attention.
➔ Analogous to a zoom lens, where the visual area covered can be increased or decreased
as needed.

Supporting Evidence from Müller et al. (2003):

➔ Experiment with observers attending to different numbers of squares in a semi-circle.
➔ On each trial, observers saw four squares in a semi-circle and were cued to attend to one, two or all four.
➔ Four objects were then presented (one in each square) and observers decided whether a
target (e.g., a white circle) was among them.
➔ Brain activation in early visual areas was most widespread when the attended region
was large (i.e., attend to all four squares) and was most limited when it was small (i.e.,
attend to one square).
➔ As predicted by the zoom-lens theory, performance (reaction times and errors) was best
with the smallest attended region and worst with the largest one.

Chen and Cave (2016, p. 1822) argued the optimal attentional zoom setting “includes all possible
target locations and excludes possible distractor locations”.

Most findings indicated people’s attentional zoom setting is close to optimal. However, Collegio
et al. (2019) obtained contrary findings. Drawings of large objects (e.g., jukebox) and small
objects (e.g., watch) were presented so their retinal size was the same. The observer’s area of
focal attention was greater with large objects because they made top-down inferences
concerning their real-world sizes. As a result, the area of focal attention was larger than optimal
for large objects.

Goodhew et al. (2016) pointed out that nearly all research has focused only on spatial perception
(e.g., identification of a specific object). They focused on temporal perception (was a disc
presented continuously or were there two presentations separated by a brief interval?). Spatial
resolution is poor in peripheral vision but temporal resolution is good. As a consequence, a small
attentional spotlight is more beneficial for spatial than temporal acuity.
Multiple spotlights theory

The multiple spotlights theory proposes that attention can operate like several spotlights at once: we can direct it to two different things at the same time, even if they are far apart.

This surprised some researchers, who assumed attention works as a single spotlight and worried that splitting it would impair performance. However, research shows we can divide attention in this way without obvious cost.

Split Attention

The multiple spotlights theory (Awh & Pashler, 2000) challenges the idea that attention is like a
single spotlight. Here's the key point: Our attention can be divided and focused on two or more
separate areas at the same time, even if those areas are not close together. This is called split
attention.

This is controversial because some scientists, like Jans et al. (2010), argue that attention is
linked to physical actions. They believe focusing on two separate things might make it difficult
to perform those actions effectively.

Studies supporting the multiple spotlights theory:

1. Awh & Pashler (2000): Finding the Gap

➔ Task: Identify two digits presented in separate locations with some space between them.
➔ Prediction (Zoom Lens Theory): Attention should cover both digits and the space in
between, making it easy to see anything there.
➔ Result: People were actually bad at identifying digits presented in the middle space
between the cued locations.
➔ Explanation (Multiple Spotlights): This suggests attention wasn't spread out like a
zoom lens, but rather focused on two separate areas like spotlights, missing the
information in the gap.
2. Morawetz et al. (2007): Brain on Split Duty

➔ Task: Attend to letters and digits in specific locations while ignoring others.
➔ Method: Measured brain activity while participants performed the task.
➔ Result: Brain activity showed two distinct peaks corresponding to the attended locations,
with less activity in the region between them.
➔ Explanation: This brain activity pattern suggests separate "spotlight" areas of focus,
neglecting the unattended space in the middle.

3. Niebergall et al. (2011): Monkey See, Monkey Suppress

➔ Task: Monkeys attended to two moving stimuli while ignoring a distractor in between.
➔ Method: Recorded brain activity of monkeys performing the task.
➔ Result: When the distractor was present between the attended stimuli, brain activity for
the distractor decreased compared to other conditions.
➔ Explanation: This suggests that split attention involves actively reducing processing of
distractions located between the areas we're focusing on, like a mental "off switch" for
the middle space.

4. Walter et al. (2016): Seeing Double Across the Divide

In most research demonstrating split attention, the two non-adjacent stimuli being attended
simultaneously were each presented to a different hemifield (one half of the visual field). Note
that the right hemisphere receives visual signals from the left hemifield and the left hemisphere
receives signals from the right hemifield.

➔ Task: Identify targets presented in opposite or the same halves of the visual field.
➔ Finding: Performance was better when targets were presented in opposite halves (left
and right) compared to the same half. Brain activity also showed better filtering of
distractions between targets presented in opposite halves.
➔ Explanation: This suggests our brains process information from each half of the visual
field differently. Split attention seems to work best when focusing on opposite sides,
potentially because each brain hemisphere can handle one "spotlight" more efficiently.

Evaluation of these theories:

➔ In sum, we can use visual attention very flexibly.
➔ Visual selective attention can resemble a spotlight, a zoom lens or multiple spotlights,
depending on the current situation and the observer’s goals.
➔ However, split attention may require that two stimuli are presented to different
hemifields rather than the same one. A limitation with all these theories is that
metaphors (e.g., attention is a zoom lens) are used to describe experimental findings but
these metaphors fail to specify the underlying mechanisms (Di Lollo, 2018).
Focus of Visual Attention

The spotlight and zoom lens metaphors are helpful ways to understand how we selectively focus
our attention because they capture two key aspects of visual attention:
➔ Space-based attention: This is like a spotlight focusing on a specific region of space. We
might use this when searching for something in a particular area, like looking for a lost
key on the counter.
➔ Object-based attention: This is like a zoom lens adjusting to focus on a particular object.
We might use this when trying to read a specific word on a busy page or examining a
detailed painting.
➔ Feature-based Attention: While object-based attention focuses on whole objects,
feature-based attention allows us to focus on specific characteristics like color, motion,
or shape. Example: Your friend in the red shirt - In this scenario, you're using
feature-based attention by focusing on the color red to find your friend in the crowd,
even though you're not necessarily looking at specific objects or locations.

Research Evidence for Space-Based vs Object-Based Attention


1. Egly et al. (1994) Study: This experiment assessed reaction times for identifying a
target stimulus based on a preceding cue. The key finding was faster detection when
invalid cues (pointing to the wrong location) were within the same object as the target
compared to different objects. This suggests object-based attention, where attention is
directed to the entire object, not just the specific cued location.

2. Automaticity vs. Strategy: This raises a question about whether object-based attention
is automatic (always happening) or strategic (influenced by task goals).
- Evidence for Strategic Control: The study by Drummond and Shomstein (2010)
is presented as evidence against object-based attention being fully automatic.
They found no object-based attention effect when cues perfectly predicted the
target location (100% certainty). This suggests that when we know exactly where
to look, we can override any automatic tendency to focus on the entire object.
3. Coexistence of space-based and object-based attention:
- Hollingworth et al. (2012): This study used a task similar to Egly et al. (1994).
They found evidence for both types of attention. When the target was far from
the cue within the same object, performance was worse (object-based effect).
However, within the same object, reaction times also increased with the distance
between cue and target (space-based effect). This suggests both types of
attention operate simultaneously.
- Kimchi et al. (2016): This study also supports the idea of coexisting attention.
Participants responded faster to targets within objects (object-based) and closer
to objects outside them (space-based). This again suggests parallel processing of
object and space information.
4. Is object-based attention overestimated?
- Limited Evidence for Object-Based Attention: Studies by Pilz et al. (2012) found
that space-based attention was more prevalent than object-based attention.
Only a small portion of participants showed clear evidence of object-based
effects.
- Spatial Cues Bias Results: Donovan et al. (2017) argue that many studies rely
on spatial cues (like arrows pointing to locations) which might indirectly influence
object-based attention. When they removed spatial cues, they found no
object-based attention effect. This suggests previous research might have been
biased by the way experiments were designed.
- The Inhibition of Return (IOR) Phenomenon:
When we search our environment, revisiting the same spot repeatedly is
inefficient. IOR helps avoid this by reducing the likelihood of returning attention
to a recently focused location.

○ Location-based IOR:
■ List and Robertson (2007) found stronger evidence for location-based
IOR using a task similar to Egly et al. (1994). This suggests focusing on a
specific area reduces the likelihood of revisiting that area, regardless of
objects within it.
○ Object-based IOR:
■ Theeuwes et al. (2014) suggest both location and object-based IOR can
co-exist. They argue that attention to a location automatically includes
attention to any object present there, and vice versa. This suggests IOR
might be influenced by both factors.

Research Evidence for Feature-Based Attention

➔ Bartsch et al. (2018): This research provides evidence for feature-based attention. They
found that attention to color-defined targets wasn't limited to the area where a spatial
cue pointed (like in Figure 5.4). Instead, it influenced processing across the entire visual
field. This means we can search for a specific color (feature) even if we don't know its
exact location.
➔ Naturalistic Setting- Chen and Zelinsky (2019): This study emphasizes the importance
of studying attention in natural contexts. They found that while attention initially focuses
on specific areas (space-based), these regions might be chosen because they contain
relevant features that contribute to building our perception of objects.

Limitations of studies on visual attention Focus:

1. Variable Findings: Studies haven't produced definitive conclusions about the dominance
of object-based or space-based attention.
2. Flexibility: The relative importance of each type might be flexible, influenced by factors
like individual differences and task demands.
3. Spatial Cue Bias: Research emphasizing object-based attention often uses spatial cues,
potentially influencing results (Donovan et al., 2017).
4. Limited Understanding of Interaction: We lack a comprehensive understanding of how
space-based, object-based, and feature-based attention interact (Kravitz & Behrmann,
2011).
5. Artificial Settings: Most research uses artificial stimuli and tasks. It remains unclear how well these findings translate to natural viewing conditions (Chen & Zelinsky, 2019).

ATTENTION NETWORKS
For attention, several theorists (e.g., Posner, 1980; Corbetta & Shulman, 2002) have argued there
are two major networks. One attention network is goal-directed or endogenous whereas the
other is stimulus-driven or exogenous.

POSNER’S (1980) APPROACH

Posner (1980) carried out influential work on covert attention, examining how we shift attention without moving our eyes.

The Experiment:

● Participants responded to a light appearing on a screen, preceded by a cue (arrow or box outline) indicating the likely location (valid cue) or misleading them about the location (invalid cue).
● A neutral cue (central cross) provided no location information.

Key Findings:

● Reaction times were fastest for valid cues, slower for neutral cues, and slowest for invalid
cues. This suggests an attentional benefit for cued locations.
● Interestingly, when cues were only rarely valid, participants could ignore them in the center of the screen but not in the periphery.
Posner's Two Attention Systems:

Based on these findings, Posner proposed two distinct attention systems:

1. Endogenous System:
○ Controlled by our intentions and goals.
○ Activated by informative cues in the center of the screen (e.g., arrow pointing
left).
○ Allows us to strategically direct attention.
2. Exogenous System:
○ Automatic and involuntary.
○ Triggered by peripheral cues (e.g., sudden flash of light) or salient stimuli (bright
color).
○ Rapidly shifts attention without conscious effort.

Peripheral Cues and the Exogenous System:

● Peripheral cues, even if uninformative (like a neutral cue), can still capture attention
through the exogenous system.
● This explains why participants couldn't completely ignore peripheral cues, even when
they were valid only a small portion of the time.

CORBETTA & SHULMAN’S (2002) APPROACH

Corbetta & Shulman's Two Attention Networks:

1. Dorsal Attention Network (DAN):
○ Function: Goal-directed or top-down system.
○ Similarities to Posner: Resembles Posner's endogenous system.
○ Brain Regions: Fronto-parietal network. Key areas within the dorsal attention network are as follows: superior parietal lobule (SPL), intraparietal sulcus (IPS), inferior frontal junction (IFJ), frontal eye field (FEF), middle temporal area (MT) and V3A (a visual area).
○ Influences: Expectations, knowledge, current goals.
○ Activation: When a cue predicts the location or feature of an upcoming visual
stimulus.
2. Ventral Attention Network (VAN):
○ Function: Stimulus-driven or bottom-up system.
○ Similarities to Posner: Resembles Posner's exogenous system.
○ Brain Regions: Primarily right-hemisphere ventral fronto-parietal network. Key
areas within the ventral attention network are as follows: inferior frontal junction
(IFJ), inferior frontal gyrus (IFG), supramarginal gyrus (SMG), superior temporal
gyrus (STG) and insula (Ins). The temporo-parietal junction also forms part of the
ventral attention network.
○ Activation: Unexpected and potentially important stimuli (e.g., sudden flames).
○ Function: "Circuit-breaking" - redirecting attention from current focus.
○ Triggers for the Ventral Network:
- Non-task Distractors: Stimuli similar to task-relevant ones are
particularly likely to activate the VAN.
- Salient Stimuli: Conspicuous or unexpected stimuli can also trigger the
VAN.

Why Two Networks?

● Flexibility: The DAN allows goal-directed focus, while the VAN ensures we don't miss
unexpected but potentially important cues.
● Effective Interaction: The two networks typically work together for efficient attention
allocation.

Research Findings supporting the two-network model

Corbetta & Shulman's Model and Brain Activity:

➔ They used meta-analyses (analyses of multiple studies) to show:
◆ Brain areas activated when expecting an unseen stimulus (top-down) are linked
to the dorsal attention network (DAN).
◆ Brain areas activated when detecting infrequent targets (bottom-up) are linked
to the ventral attention network (VAN).

Supporting Evidence:

➔ Hahn et al. (2006): Compared brain activity during top-down and bottom-up tasks.
◆ Minimal overlap in brain regions involved, supporting the distinction between
DAN and VAN.
◆ Activated brain regions corresponded well to those identified by Corbetta and
Shulman.
➔ Chica et al. (2013): Reviewed research on the two systems and found 15 key differences:
◆ Stimulus-driven attention (VAN) is faster, more object-based, and less susceptible
to interference compared to top-down attention (DAN).
◆ These numerous differences bolster the argument for separate systems.

Neuroimaging and Functional Distinction:

➔ Vossel et al. (2014): Neuroimaging studies show distinct neural circuits for DAN and VAN,
even at rest.
◆ However, these studies can't definitively prove a brain area's involvement in
specific attention processes.
Brain Stimulation and Causal Evidence:

➔ Chica et al. (2011): Used transcranial magnetic stimulation (TMS) to disrupt activity in
specific brain regions.
◆ TMS to the right temporo-parietal junction (VAN) impaired bottom-up attention
but not top-down attention.
◆ TMS to the right intraparietal sulcus (involved in both networks) impaired both
attention systems.
➔ Chica et al.'s (2011) findings suggest the two attention systems can also work together.
◆ Disrupting a key area in one network (VAN) didn't affect the other (DAN),
indicating some independence.
◆ Disrupting a shared area (intraparietal sulcus) impaired both networks,
suggesting their ability to interact.

Brain Lesions and Network Function:

➔ Shomstein et al. (2010): Patients with damage to the superior parietal lobule (DAN) had
difficulty with top-down attention tasks.
➔ Shomstein et al. (2010): Patients with damage to the temporo-parietal junction (VAN)
had difficulty with stimulus-driven attention tasks.
➔ These findings support the idea that distinct brain areas underlie the two attention
networks.

Network Interactions:

Wen et al. (2012): Studied how the two networks influence each other.
➔ Stronger Top-Down Influence on Bottom-Up: When the top-down system (DAN)
strongly suppressed activity in the bottom-up system (VAN), participants performed
better. This suggests the DAN can focus attention by reducing interference from
irrelevant stimuli detected by the VAN.
➔ Stronger Bottom-Up Influence on Top-Down: When the bottom-up system (VAN)
strongly influenced the top-down system (DAN), participants performed worse. This
suggests that unattended stimuli activating the VAN can disrupt the current focus
maintained by the DAN.
Limitations of Corbetta & Shulman's Two Attention Network Model

Despite its successes, Corbetta and Shulman's (2002) two attention network model has some
limitations:

1. Brain Area Specificity: While distinct networks are linked to each attention system, the
exact brain areas involved remain somewhat unclear. There might be more overlap than
originally proposed.
2. Shared Processing Regions: Some brain regions, particularly in the parietal lobe, seem
to be involved in both top-down and bottom-up attention, suggesting a more complex
interplay between the networks.
3. Additional Attention Networks: Research has identified other brain networks crucial for
attention, such as the cingulo-opercular network (alertness), default mode network
(internal focus), and fronto-parietal network (cognitive control). These weren't included in
the original model.
4. Limited Understanding of Network Interactions: We still have much to learn about
how the different attention networks interact with each other. More research is needed to
understand the dynamics of their collaboration and competition.

RECENT DEVELOPMENTS IN CORBETTA & SHULMAN’S APPROACH

Beyond the Two Networks: Interactions and New Discoveries

➔ Integration of Bottom-Up and Top-Down Processing: Meyer et al. (2018) found both
types of attention activate parts of the dorsal attention network (DAN), suggesting it
plays a crucial role in combining these processes for effective attention allocation.
➔ Sustained Top-Down Influences: Meehan et al. (2017) showed that top-down influences
within the DAN can persist for a relatively long duration, suggesting a more extended
role for goal-directed attention beyond just pre-stimulus anticipation.

Additional Networks Related to Attention

➔ Cingulo-Opercular Network (Alertness): Sylvester et al. (2012) identified this network, associated with non-selective attention or alertness (Coste & Kleinschmidt, 2016). It
helps maintain a general state of vigilance.
➔ Default Mode Network (Internal Focus): This network, linked to activities like
daydreaming, can actually hinder performance on tasks requiring external focus (Amer
et al., 2016a). Deactivation of this network can be beneficial for externally directed
attention.
➔ Fronto-Parietal Network (Cognitive Control): Dosenbach et al. (2008) identified this
network, associated with top-down attentional control and other cognitive processes. It
helps us exert control over our attention and thoughts.

VISUAL SEARCH
We spend much time searching for various objects (e.g., a friend in a crowd). The processes
involved have been studied in research on visual search where a specified target is detected as
rapidly as possible.

FEATURE INTEGRATION THEORY (TREISMAN & GELADE, 1980)

According to the theory, we need to distinguish between object features (e.g., colour; size; line
orientation) and the objects themselves.

There are two processing stages:

1. Basic visual features are processed rapidly and pre-attentively in parallel across the
visual scene.
2. Stage (1) is followed by a slower serial process with focused attention providing the
“glue” to form objects from the available features (e.g., an object that is round and has
an orange colour is perceived as an orange). In the absence of focused attention,
features from different objects may be combined randomly producing an illusory
conjunction.

Assumptions of the feature integration theory:

➔ Targets defined by a single feature (e.g., a blue letter or an S) should be detected rapidly
and in parallel.
➔ In contrast, targets defined by a conjunction or combination of features (e.g., a green
letter T) should require focused attention and so should be slower to detect.
➔ Treisman and Gelade (1980) tested these predictions using both types of targets; the
display size was 1–30 items and a target was present or absent.
➔ As predicted, response was rapid and there was very little effect of display size when the
target was defined by a single feature: these findings suggest parallel processing.


➔ Response was slower and was strongly influenced by display size when the target was
defined by a conjunction of features: these findings suggest there was serial processing.
➔ According to the theory, lack of focused attention can produce illusory conjunctions
based on random combinations of features.
➔ Friedman-Hill et al. (1995) studied a brain-damaged patient (RM) having problems with
the accurate location of visual stimuli. This patient produced many illusory conjunctions
combining the shape of one stimulus with the colour of another.
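The predicted search-time pattern above (flat search functions for single-feature targets, roughly linear ones for conjunction targets) can be sketched in a few lines. The baseline RT is an illustrative assumption; the ~60 ms/item conjunction slope is the value Treisman and Gelade (1980) reported.

```python
# Sketch of the RT pattern feature integration theory predicts.
# BASE_RT_MS is an assumed baseline; the conjunction slope comes from
# Treisman and Gelade's (1980) estimate of ~60 ms per item.

BASE_RT_MS = 450          # assumed baseline response time (ms)
CONJUNCTION_SLOPE = 60    # ms per item for serial, attentive search
FEATURE_SLOPE = 0         # parallel feature search: display size has no effect

def predicted_rt(display_size: int, conjunction_target: bool) -> int:
    """Predicted target-present RT for a display of `display_size` items."""
    slope = CONJUNCTION_SLOPE if conjunction_target else FEATURE_SLOPE
    # On target-present trials a serial search ends, on average, halfway through.
    return BASE_RT_MS + slope * display_size // 2

for n in (1, 15, 30):
    print(n, predicted_rt(n, conjunction_target=False),
          predicted_rt(n, conjunction_target=True))
```

With 30 items the feature-search prediction is unchanged while the conjunction prediction roughly triples, which is the signature of serial versus parallel processing in the data.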
Classic research by Treisman and Gelade (1980)

➔ Demonstration supports the isolated-feature/combined-feature effect: people can
typically locate an isolated feature more quickly than a combined feature (Quinlan,
2010).

Treisman and Souther (1985)

➔ Feature-present/feature-absent effect: people can typically locate a feature that is
present more quickly than a feature that is absent.
➔ This "pop-out" effect is automatic.

Limitations of the feature integration theory

➔ Factors excluded from the theory (Duncan & Humphreys, 1989, 1992):
- When distractors are very similar to each other, visual search is faster because it
is easier to identify them as distractors.
- The number of distractors has a strong effect on search time to detect even
targets defined by a single feature when targets resemble distractors.
➔ Role of focused attention:
- Treisman and Gelade (1980) estimated the search time with conjunctive targets
was approximately 60 ms per item and argued this represented the time taken
for focal attention to process each item.
- However, research with other paradigms indicates it takes approximately 250 ms
for attention indexed by eye movements to move from one location to another.
- Thus, it is improbable focal attention plays the key role assumed within the
theory.
➔ The "item" in visual search
- The theory assumes visual search is often item-by-item.
- However, the information contained within most visual scenes cannot be divided
up into “items” and so the theory is of limited applicability.
- Such considerations led Hulleman and Olivers (2017) to produce an article
entitled “The impending demise of the item in visual search”.
➔ Involvement of parallel processing
- Visual search involves parallel processing much more than implied by the theory.
- For example, Thornton and Gilden (2007) used 29 different visual tasks and found
72% apparently involved parallel processing.
- We can explain such findings by assuming that each eye fixation permits
considerable parallel processing using information available in peripheral vision.
➔ Object- vs feature-based processing
- The theory assumes that the early stages of visual search are entirely
feature-based.
- However, recent research using event-related potentials indicates that
object-based processing can occur much faster than predicted by feature
integration theory (e.g., Berggren & Eimer, 2018).
➔ Randomness of visual search
- The theory assumes visual search is essentially random.
- This assumption is wrong with respect to the real world – we typically use our
knowledge of where a target object is likely to be located when searching for it.

Saccadic Eye Movements

➔ The very rapid movement of the eyes from one spot to the next.
➔ The purpose of a saccadic eye movement during reading is to bring the center of your
retina into position over the words you want to read. A very small region in the center
of the retina, known as the fovea, has better acuity than other retinal regions.
➔ When you read a passage in English, each saccade moves your eye forward by about 7 to 9
letters (Wolfe et al., 2009).
➔ Researchers have estimated that people make between 150,000 and 200,000 saccadic
movements every day
➔ The term perceptual span refers to the number of letters and spaces that we perceive
during a fixation (Rayner & Liversedge, 2004).
➔ Researchers have found large individual differences in the size of the perceptual span
(Irwin, 2004).
➔ When you read English, this perceptual span normally includes letters lying about 4
positions to the left of the letter you are directly looking at, as well as the letters about 15
positions to the right of that central letter (Rayner, 2009).
➔ We look for reading cues in the text that lies to the right, and these cues provide
some general information about upcoming words.
➔ Saccadic-movement patterns depend on factors such as the language of the text, the
difficulty of the text, and individual differences in reading skill.
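The 7-to-9-letter saccade size cited above implies a rough estimate of how many forward saccades a passage requires. The sketch below averages that range to 8 letters per saccade; this averaging, and ignoring regressions and word boundaries, are our own simplifications.

```python
# Rough estimate of forward saccades needed to read a passage once,
# assuming the ~7-9 letter jump per saccade (Wolfe et al., 2009) and
# simplifying to a fixed 8 letters per jump.

def estimated_saccades(text: str, letters_per_saccade: int = 8) -> int:
    """Estimate forward saccades needed to traverse `text` once."""
    n_chars = len(text)
    # Ceiling division: a partial final jump still counts as one saccade.
    return -(-n_chars // letters_per_saccade)

passage = "Attention is the taking into possession of the mind."
print(estimated_saccades(passage))
```

Scaling this up to a day's reading is one way the 150,000-200,000 daily saccade estimate becomes plausible.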
SUSTAINED ATTENTION

➔ It is the ability to focus on one specific task for a continuous amount of time without
being distracted.
➔ The exact duration (minutes or hours) required for attention to count as sustained is
not well defined and varies across studies.
➔ Sustained attention or vigilance is measured in continuous performance-type (CPT) tasks
that require constant monitoring of the situation at hand.

Digit Vigilance Task

The Digit Vigilance Task (DVT) is a common assessment tool used to measure sustained
attention and psychomotor speed. It's a relatively simple test that can be administered in
paper-and-pencil format or on a computer.

Here's how it typically works:

● You'll be presented with a page or screen containing rows of random digits.
● You'll be instructed to find and mark (usually by crossing out) specific target digits, like
all the "6"s or "9"s.
● The key is to do this as quickly and accurately as possible throughout the test duration,
which is often around 10 minutes.

What the DVT measures:

By recording the number of correctly identified targets and any errors (omissions or marking
non-targets), the DVT provides insights into your ability to:

● Maintain focus over a sustained period.
● Respond quickly to visual stimuli.
● Avoid distractions and maintain accuracy.
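Scoring a DVT run reduces to counting hits, omissions, and commissions, as described above. The sketch below assumes a simplified representation of one row (a string of digits plus the set of positions the respondent marked); this representation is our own, not a standard DVT format.

```python
# Minimal scoring sketch for a paper-and-pencil style DVT row. The input
# format (digit string + marked positions) is an illustrative simplification.

def score_dvt(row: str, target: str, marked: set) -> dict:
    """Score one row: hits, omissions (missed targets), commissions (false marks)."""
    target_positions = {i for i, d in enumerate(row) if d == target}
    return {
        "hits": len(target_positions & marked),        # targets correctly marked
        "omissions": len(target_positions - marked),   # targets missed
        "commissions": len(marked - target_positions), # non-targets marked
    }

row = "3961846696"
print(score_dvt(row, target="6", marked={3, 7, 8, 5}))
```

Summing these counts across rows, together with completion time, yields the sustained-attention and psychomotor-speed indices the DVT reports.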
Mackworth Clock Task

The Mackworth Clock Task, also known as the Mackworth Clock Test, is another tool used in
psychology research to assess a different aspect of attention: vigilance.

Here's how it works:

● You'll see a display resembling a clock face, either physically or on a computer screen.
● A pointer will move around the clock face, typically in short jumps like a second hand.
● The key difference is that at irregular intervals, the pointer will make a double jump
instead of a single one.
● Your task is to detect these double jumps and respond in a way specified by the test,
often by pressing a button.
● The task can last for extended periods, sometimes up to two hours in original studies.

What the Mackworth Clock Task measures:

Unlike the DVT, which focuses on sustained attention and processing speed, the Mackworth
Clock Task measures vigilance. Vigilance refers to your ability to maintain focus on a task over a
long period when there's minimal stimulation and infrequent events of interest (like the double
jumps).

The extended duration and monotonous nature of the task make it challenging to stay vigilant.
By recording your accuracy in detecting double jumps throughout the test, researchers can
assess how well you maintain attention and how vigilance declines over time.
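The decline-over-time analysis described above amounts to computing a hit rate per time block. The sketch below does exactly that; the trial-by-trial detection data are fabricated for illustration.

```python
# Sketch of a vigilance-decrement analysis: hit rate per time block.
# The session data below are invented (1 = double jump detected, 0 = missed).

def hit_rates_by_block(detections: list, block_size: int) -> list:
    """Split trial-by-trial detections into consecutive blocks and compute hit rates."""
    return [
        sum(detections[i:i + block_size]) / block_size
        for i in range(0, len(detections), block_size)
    ]

# Fabricated session: detection tends to drop in later blocks.
session = [1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1]
print(hit_rates_by_block(session, block_size=4))
```

A falling sequence of block hit rates is the classic vigilance decrement Mackworth observed over his two-hour watches.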

TOVA (Test of Variables of Attention)

The TOVA (Test of Variables of Attention) doesn't have a specific sub-test called the "digit
vigilance task" although it does assess vigilance as one of its key components.

Here's how the TOVA test works in relation to vigilance:

● Test format: Unlike the Digit Vigilance Task (DVT) which uses numbers, TOVA uses either
simple geometric shapes (visual version) or auditory tones (auditory version).
● Target vs Non-Target: Similar to DVT, you need to respond to specific target stimuli (a
designated shape or a higher-pitched tone) but throughout a series of varying stimuli
presented at different speeds and intervals.
● Sustained Attention: The TOVA test lasts for a set duration (typically around 22
minutes), requiring you to maintain focus and respond correctly throughout.

How TOVA measures vigilance:

The TOVA test measures vigilance by analyzing your response patterns to the target stimuli,
particularly focusing on:

● Omissions: These occur when you miss a target stimulus entirely, indicating lapses in
attention.
● Response Time Variability: This refers to how consistent your reaction times are to
target stimuli. High variability suggests difficulty in maintaining focus over time.
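The two indices just described can be computed from a list of target-trial reaction times. In the sketch below, `None` marks an omitted target; the trial data and this encoding are invented for illustration.

```python
# Sketch of the two TOVA vigilance indices described above: omission count
# and response-time variability. Trial data are invented; None = missed target.

from statistics import stdev

def tova_indices(rts_ms: list) -> tuple:
    """Return (omission count, RT standard deviation) for a list of target trials."""
    omissions = sum(1 for rt in rts_ms if rt is None)
    valid = [rt for rt in rts_ms if rt is not None]
    return omissions, stdev(valid)

trials = [420, 455, None, 430, 610, None, 445]
omissions, variability = tova_indices(trials)
print(omissions, round(variability, 1))
```

Many omissions or a large RT standard deviation would both point to lapses in sustained attention.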

DIVIDED ATTENTION

MULTITASKING

● Defined as coordinating multiple tasks to achieve a goal (MacPherson, 2018).


● Can involve simultaneous task execution or rapid task switching.
● Debate exists regarding the impact of multitasking on attention and cognitive control.

Factors Affecting Multitasking Performance:

1. Similarity Between Tasks:


○ Treisman and Davies (1973) found greater interference between tasks using the
same modality (e.g., two visual tasks vs. visual and auditory tasks).
2. Response Modality Similarity:
○ McLeod (1977) showed performance on a tracking task declined when both tasks
required a similar response (e.g., using the same hand).
3. Practice:
○ Spelke et al. (1976) observed improved performance (reading comprehension
while writing dictation) with training.

ALTERNATING ATTENTION/SWITCHING ATTENTION


➔ Alternating attention is the ability to switch your focus back and forth between tasks
that require different cognitive demands.
➔ Displaying mental flexibility to shift attentional focus from task to task (or between
different stimuli)
➔ Also required in language processing, such as in bilingual individuals
DUAL-TASK PERFORMANCE

The Study By Spelke, Hirst, and Neisser (1976):

● Two participants practiced reading short stories and writing dictated words
simultaneously for several months.
● Their reading speed and comprehension were periodically assessed.
● After 6 weeks, participants achieved normal reading speeds even while writing dictation.
● Reading comprehension scores were similar whether participants read alone or while
writing dictation.
● Participants could even categorize the dictated words by meaning without sacrificing
reading performance.

Alternative Explanations and Counterarguments:

● Attention Alternation (Hirst, Spelke, Reaves, Caharack, and Neisser, 1980): Some
psychologists doubted participants were truly performing both tasks simultaneously and
suggested they might be rapidly switching attention between reading and dictation.
● The authors argued against this by pointing to the unchanged reading speed during
dictation, suggesting minimal time wasted on switching.
● Hirst et al. (1980) provided further evidence against alternation: Participants trained
on different reading materials (stories vs. encyclopedias) showed similar performance
when switching materials, suggesting they weren't alternating based on task difficulty.
● Automaticity of Dictation Task: Another explanation proposed that dictation became
automatic with practice, requiring minimal attention.
- The three necessary conditions for a task to be automatic are: (a) no intention, (b) no
conscious awareness, and (c) no interference with other mental activity.
- This automation explanation is challenged by the fact that participants were
aware of the dictation and comprehended the words.

The Preferred Explanation: Task Combination Through Practice

● Hirst et al. (1980) favored the idea that extensive practice allowed participants to
combine reading and dictation into a single, more efficient process.
● This implies that combining these tasks with a new third task (e.g., shadowing speech)
would require further practice for efficient performance.

The Role of Practice in Attention Allocation

● This study highlights the significant role of practice in reducing the attentional demands
of a task.
● While some criticisms exist (Shiffrin, 1988), research like this is transforming our
understanding of how practice influences cognitive tasks (Pashler et al., 2001).

ATTENTION HYPOTHESIS OF AUTOMATIZATION

The Attention Hypothesis of Automatization, proposed by Logan and Etherton (1994),
highlights the crucial role of attention during learning and memory.

Core Tenet:

● Attention during practice determines what we learn and remember. We learn what we
attend to and forget what we ignore. (Logan et al., 1996)

Experiments:

In a series of experiments, Logan and Etherton (1994) presented college student
participants with a series of two-word displays and asked them to detect particular
target words (for example, words that named metals) as fast as possible.

● Participants searched for target words (e.g., metals) in two-word displays.


● Condition 1: Consistent Pairs: Same word pairs appeared throughout the experiment
(e.g., steel-Canada).
● Condition 2: Variable Pairs: Different word pairs appeared on each trial (e.g.,
steel-Canada, steel-broccoli).
Results:

● Participants in Condition 1 showed an advantage only if they had to pay attention to
both words (e.g., not focusing on color cues).
● When a color cue allowed ignoring one word, participants gained no advantage from
consistent pairings and recalled fewer of the ignored words.

Conclusion:

● Inattention leads to forgetting. We encode information into memory based on what we
attend to during practice, influencing later retrieval.
● This highlights the importance of focused attention during learning for optimal
performance and memory retention.

THE PSYCHOLOGICAL REFRACTORY PERIOD

● Performing certain tasks together can be difficult, like rubbing your stomach and patting
your head.
● Pashler (1993) investigated this limitation through experiments.

Pashler (1993) reported on studies from his and others' laboratories that examined the
issue of doing two things at once in greater depth.

The Experimental Setup:

● Participants performed two tasks:


○ Tone discrimination (responding to high or low pitch)
○ Letter identification (pressing a key corresponding to the displayed letter)
● The time interval between the tone (S1) and letter (S2) presentation was varied.

The Psychological Refractory Period Effect:

● With longer intervals between S1 and S2, participants performed both tasks efficiently
(assumedly finishing one before starting the other).
● As the interval shortened, reaction times to the letter (S2) increased significantly.
● This slowing of the response to the second stimulus (S2) at short intervals between the
presentation of S1 and S2 is called the psychological refractory period, or PRP.
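The PRP pattern falls out of a simple stage model in which each task passes through perception, central response selection, and motor execution, and task 2's central stage must wait for task 1's to finish (the response-selection bottleneck discussed below). The stage durations in this sketch are illustrative assumptions, not measured values.

```python
# Minimal response-selection-bottleneck model of the PRP effect.
# Stage durations (ms) are illustrative assumptions.

P1, C1, M1 = 100, 150, 80   # task 1: perception, central (response selection), motor
P2, C2, M2 = 100, 150, 80   # task 2 stage durations

def rt2(soa_ms: int) -> int:
    """RT to the second stimulus: its central stage waits for task 1's to finish."""
    central1_ends = P1 + C1            # time (from S1 onset) task 1 frees the bottleneck
    central2_ready = soa_ms + P2       # earliest start for task 2's central stage
    wait = max(0, central1_ends - central2_ready)
    return P2 + wait + C2 + M2

for soa in (50, 150, 300, 600):
    print(soa, rt2(soa))
```

At short SOAs the waiting time inflates RT2; beyond the point where task 1 has cleared the bottleneck, RT2 flattens out, which is the empirical PRP signature.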

Theories on the Bottleneck's Location:

Pashler considered three possible locations for the bottleneck during dual-tasking (refer to
Figure 4-17 in your textbook for a visual representation):

● A: Perception Bottleneck: Limited processing capacity at the stage of perceiving stimuli
(e.g., difficulty noticing both the tone and letter in the PRP experiment).
● B: Response Selection Bottleneck (Favored Theory): Limited capacity to choose the
appropriate response for each task. This aligns with Welford's (1952) theory and is
supported by Pashler's research on the PRP effect.
● C: Response Execution Bottleneck: Limited capacity to physically execute the chosen
responses (e.g., pressing buttons).

Additional Factors Affecting the Bottleneck:

● Retrieving information from memory can also create a bottleneck, diverting attention
from the second task.

Central Bottleneck vs. Voluntary Delay:

● Pashler and colleagues' further research suggests the interference is caused by a central
bottleneck, not by a conscious decision to delay working on one task. (Ruthruff, Pashler,
& Klassen, 2001).

SERIAL & PARALLEL PROCESSING


● Serial Processing:
○ Attention rapidly switches between tasks, with only one processed at a time.
○ Considered the more dominant mode for multitasking (Koch et al., 2018).
● Parallel Processing:
○ Both tasks are processed simultaneously.
○ Can be more efficient under specific circumstances.

The Efficiency Debate:

● Lehle et al. (2009): Found serial processing led to better performance but required more
effort due to task inhibition.
● Lehle and Hübner (2009): Showed parallel processing could outperform serial
processing in certain conditions.

Factors Influencing Efficiency:


● Task Previews: Brüning and Manzey (2018) demonstrated that previewing the next task
(possible with parallel processing) improved performance compared to no preview (serial
processing only).
● Working Memory Capacity: Individuals with higher working memory capacity were
more likely to benefit from parallel processing, potentially due to better attentional
control.
AUTOMATIC VS CONTROLLED PROCESSING

Automatic processing is a state in which tasks are performed seemingly effortlessly and
without conscious awareness. It contrasts with attentional (controlled) processing, which
requires deliberate focus.

Posner and Snyder's (1975) Criteria for Automaticity:

1. Unintentional: The action occurs without a prior conscious decision to perform it.
2. Without Awareness: We are not consciously aware of the steps involved in the action.
3. No Interference: The automatic process doesn't disrupt our ability to focus on other
mental activities.

Driving Example:

● An experienced driver navigating a familiar route under normal conditions might be
operating "on autopilot."
● Automatic processes might explain:
○ Unintentionally turning on a familiar route while lost in thought.
○ Reaching your usual destination despite intending to go elsewhere due to
ingrained routine.

Schneider and Shiffrin's (1977) study on automatic processing in selective attention.

● Participants searched for target letters/numbers among other letters/numbers (frames).


● Two conditions were tested:
○ Varied-Mapping: Targets and distractors were the same type (e.g., searching for
letter J among other letters). This condition was expected to require effort and
attention.
○ Consistent-Mapping: Targets and distractors were always different types (e.g.,
searching for letter J among numbers). This condition was expected to require
less attention and potentially involve automaticity.

Results:

● Consistent-Mapping (Automatic Processing):


○ Accuracy depended only on how long the frames were displayed (frame time),
not on the number of targets or distractors.
○ Participants performed equally well regardless of searching for one target or
four, or encountering one, two, or four distractors.
● Varied-Mapping (Attentional Processing):
○ Performance depended on all three factors:
■ Memory set size (number of targets to find)
■ Frame size (number of distractors)
■ Frame time (display duration)
○ As these factors increased, participants' accuracy decreased, suggesting greater
attentional demands.
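One way to see why the two conditions diverge is to count the comparisons a serial, controlled search would need on each frame. The "comparisons" index below is our own simplification for illustration, not Schneider and Shiffrin's actual model.

```python
# Sketch contrasting the two mapping conditions. In varied mapping the
# predicted workload grows with memory set x frame size; in consistent
# mapping it does not. The "comparisons" unit is an illustrative simplification.

def comparisons(memory_set: int, frame_size: int, consistent: bool) -> int:
    """Rough workload index for processing one frame of the search task."""
    if consistent:
        return 1                    # targets pop out: parallel, load-insensitive
    return memory_set * frame_size  # serial check of every target against every item

print(comparisons(4, 4, consistent=True), comparisons(4, 4, consistent=False))
```

The varied-mapping index grows sixteenfold from the easiest to the hardest condition while the consistent-mapping index stays at 1, mirroring the accuracy pattern in the results above.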

Automatic Processing:

● Used for easy tasks with familiar stimuli.


● Operates in parallel, allowing simultaneous processing of multiple elements.
● Doesn't require much attention or strain cognitive resources (capacity limitations).

Example: Consistent-Mapping Condition (Targets easily distinguished from distractors)

● Searching for targets was effortless regardless of the number of targets or distractors
because they "popped out" due to distinct features.
● Finding four targets was as easy as finding one, demonstrating parallel processing's
ability to handle multiple searches simultaneously.

Controlled Processing:

● Used for difficult tasks or those involving unfamiliar processes.


● Operates serially, processing information one step at a time.
● Requires focused attention and is limited by cognitive capacity.
● Under conscious control.

Example: Varied-Mapping Condition (Targets and distractors share features)

● Difficulty increased due to the need to actively compare targets with distractors on each
trial (targets could become distractors later).
● Performance depended on the number of targets and distractors, reflecting the
limitations of serial processing and the need for attentional resources.

Learning Through Practice:

● With practice, we can improve our ability to perform tasks, as demonstrated in Bryan
and Harter's classic study of telegraph message sending and receiving.

The Shift in Attention:

● Bryan and Harter observed a shift in participants' focus as they practiced:


○ Initially, they focused on individual letters (sending/receiving).
○ With practice, attention shifted to words, then to phrases/groups of words.
○ Practice seemed to automate individual letter processing, freeing attention for
higher-level units (words, phrases).
Video Games as an Example:

● Similar learning is observed in video games:


○ Beginners struggle to control their character, requiring full concentration.
○ With practice, the controls become automatic, freeing attention for other aspects
(e.g., strategy, threats).
○ Skilled players can even hold conversations while playing, demonstrating
efficient automatic processing of game mechanics.

Automatic vs. Controlled Processing:

● Beginners rely on controlled processing, requiring focused attention on game mechanics.


● Skilled players achieve automatic processing through practice, freeing attention for
other tasks.

Role of Attention and Automaticity in Perception: Feature Integration Theory


● Inspired by Schneider and Shiffrin's work on automatic processing, Treisman
investigated the interplay between attention and perception.

The Two-Stage Model of Object Perception:

● Preattentive (Automatic) Stage: Features like color, shape, and orientation are
registered automatically and in parallel (all features processed simultaneously).
● Attentional Stage: We use attention to "bind" these separate features together to form
a unified object. (Tsal, 1989a)

Experimental Evidence:

Participants were asked to search for a particular object—for example, a pink letter or the letter
T. If the item being searched for differed from the background items in the critical feature (such
as a pink item among green and brown items, or a T among O’s), the target item seemed to pop
out of the display, and the number of background items did not affect participants’ reaction
times.

It was found that:

➔ Participants searched for objects differing in a single feature (e.g., pink letter among
green/brown letters).
➔ Reaction times were not affected by the number of background items because detecting
individual features is automatic.
➔ This suggests parallel processing of features in the preattentive stage.

Another condition required searching for objects with a specific combination of features (e.g.,
pink T among non-pink Ts and pink non-Ts).

➔ Reaction times increased with the number of background items.


➔ Treisman and Gelade argued that this indicates controlled processing
(attention-demanding) for feature conjunctions (combining features).

Illusory Conjunctions and Attention Overload:

● Treisman and Schmidt's (1982) study demonstrated that limited attention can lead to
integration errors.
● Example: Briefly glancing at a red Honda and a blue Cadillac, you might mistakenly
report seeing a "blue Honda Civic" later.

Experimental Demonstration:

● Participants focused on memorizing black digits while colored letters were briefly
displayed.
● Despite limited attention to the letters, they could report some information about them
afterwards.
● Crucially, in 39% of trials, participants reported illusory conjunctions (combining features
incorrectly, e.g., reporting a "red X" when there was a blue X or a red T).

Conclusion:

These findings support Treisman's theory:

● Recognizing individual features is automatic and requires minimal attention.


● Integrating features (binding them into a whole object) requires focused attention and
mental effort.
● When attention is overloaded, we make mistakes by combining features incorrectly,
resulting in illusory conjunctions.

The Broader Impact of Feature Integration Theory:

● Treisman's theory has been influential, inspiring further research and refinements by
various researchers (Briand & Klein, 1989; Quinlan, 2003; Tsal, 1989a, 1989b).

Attentional Capture
● In visual search tasks, some stimuli can "pop out" and demand attention involuntarily.
● This is called attentional capture, described as an involuntary shift of focus caused by
the stimulus itself.
● It's often considered a bottom-up process, driven by stimulus features rather than our
goals.
● The term "capture" implies the stimulus automatically attracts our attention.
Example:

● Theeuwes et al. (1998) study: Participants searched for a specific letter among circles
that changed color.
● An irrelevant red circle appearing unexpectedly (shown in Figure 4-14 of your textbook)
often delayed their response, even though it wasn't part of the task.

Overcoming Capture:

● Theeuwes et al. (2000) showed that attentional capture can be overcome with
preparation.
● When participants knew where to focus their attention beforehand, the irrelevant red
circle didn't disrupt their response times.
