Psychophysical and Physiological Evidence for Viewer-centered Object Representations in the Primate

N.K. Logothetis and J. Pauls
Cerebral Cortex (1995)

Background

Recognition pipeline (slide figure): image → transformations → input representation → match against stored memory representations → recognition

Theories of object representation
3D models (Marr, Biederman)
View-dependent 2D templates (Basri & Ullman, Poggio); toy sketch below
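The view-dependent template account can be pictured with a toy sketch (my illustration, not taken from the paper; the object names and feature dimension are arbitrary): each object is stored as a small set of 2D view vectors, and a new image is assigned to the object with the closest stored view, so recognition quality naturally depends on viewpoint.

```python
import numpy as np

def view_similarity(image_vec, stored_views):
    """Similarity of an input view to the closest stored view of one object."""
    dists = np.linalg.norm(stored_views - image_vec, axis=1)
    return -dists.min()                      # higher = better match

def recognize(image_vec, memory):
    """memory maps object label -> array of stored 2D view vectors."""
    scores = {label: view_similarity(image_vec, views)
              for label, views in memory.items()}
    return max(scores, key=scores.get)

# Toy usage: two 'objects', each remembered as three noisy view vectors.
rng = np.random.default_rng(0)
memory = {"wire_A": rng.normal(0.0, 1.0, size=(3, 8)),
          "wire_B": rng.normal(2.0, 1.0, size=(3, 8))}
probe = memory["wire_A"][0] + rng.normal(0.0, 0.1, size=8)   # near a stored view
print(recognize(probe, memory))              # expected: wire_A
```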

Face-selective cells
Found in STS
Mostly view-dependent
Methods
Trained three juvenile rhesus macaques on an object recognition task
Performed psychophysical tests after training
Recorded from the upper bank of the anterior medial temporal sulcus (AMTS)

Stimuli:
Computer-generated ‘wire-like’ and ‘amoeboid’ objects (generation sketched below)
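As a rough illustration only (not the authors' stimulus-generation code; vertex counts and coordinate ranges are assumptions), a ‘wire-like’ object can be modeled as a random chain of 3D vertices, and each view as a 2D projection after rotation about the vertical axis:

```python
import numpy as np

def make_wire_object(n_vertices=7, seed=0):
    """Random open chain of connected 3D vertices (a 'wire')."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, size=(n_vertices, 3))

def rotate_y(vertices, angle_deg):
    """Rotate the object about the vertical (y) axis."""
    a = np.radians(angle_deg)
    rot = np.array([[ np.cos(a), 0.0, np.sin(a)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return vertices @ rot.T

def project(vertices):
    """Orthographic projection onto the image (x, y) plane."""
    return vertices[:, :2]

wire = make_wire_object()
for angle in (0, 30, 60, 90):            # views away from the training view
    view = project(rotate_y(wire, angle))
    print(f"{angle:3d} deg view, first vertex: {view[0]}")
```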
Training
Began by training monkeys to recognize a single view of an object presented sequentially among distractor objects
Slowly increased rotations up to ±90° before training with a new object
Feedback with juice reward (schedule sketched below)
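A schematic of the training staircase under assumed block and step sizes (the paper's Methods give the real schedule; the accuracy model is a stand-in for the monkey): the rotation range widens block by block toward ±90°, and correct ‘target’ responses earn a juice reward.

```python
import random

def block(rotation_limit_deg, n_trials=20):
    """Simulate one training block at a given rotation range; return hit rate."""
    hits = 0
    for _ in range(n_trials):
        angle = random.uniform(-rotation_limit_deg, rotation_limit_deg)
        # Stand-in for the monkey: accuracy drops slightly with rotation.
        p_correct = 0.95 - 0.003 * abs(angle)
        if random.random() < p_correct:
            hits += 1                        # correct response -> juice reward
    return hits / n_trials

limit = 0
while limit <= 90:
    print(f"+/-{limit:2d} deg: hit rate {block(limit):.2f}")
    limit += 15                              # widen the rotation range per block
```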
Testing
Recordings
Recorded from 773 neurons in AMTS
Findings—psychophysical
Recognition performance fell off sharply when the object was rotated more than 30-40° beyond the training view
This held for both wire and amoeboid objects
Findings—psychophysical
Interpolation with wire objects
Monkeys could interpolate between two training views up to 120° apart
Three to five views allowed the monkeys to generalize around the entire ‘great circle’ of views (sketch below)
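One way to picture this interpolation result, in the spirit of the view-dependent template theories in the Background slide (the 35° tuning width and sampled angles are my assumptions): each trained view contributes a Gaussian generalization field, two views 120° apart still give an above-baseline signal between them, and five evenly spaced views tile the whole circle.

```python
import numpy as np

def view_tuning(angle_deg, preferred_deg, sigma_deg=35.0):
    """Gaussian tuning around a trained view (circular angular distance)."""
    d = (angle_deg - preferred_deg + 180.0) % 360.0 - 180.0
    return np.exp(-0.5 * (d / sigma_deg) ** 2)

def recognition_signal(angle_deg, trained_views, sigma_deg=35.0):
    """Summed generalization fields of all trained views."""
    return sum(view_tuning(angle_deg, v, sigma_deg) for v in trained_views)

angles = np.arange(0, 360, 30)
for trained in ([0, 120], [0, 72, 144, 216, 288]):   # 2 views vs. 5 views
    signal = [recognition_signal(a, trained) for a in angles]
    print(trained, np.round(signal, 2))
```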
Findings—psychophysical
‘Pseudo-mirror-symmetrical’ wire objects
Some of the wire objects have mirror-symmetrical 0° and 180° views because they lack self-occlusion
Findings—psychophysical
Viewpoint invariance for ‘basic’ objects among distractors from other object classes
Findings—physiological
View-specific, object-specific cells (71 of 773)
Figures: cell responses to target views vs. cell responses to distractor views
Findings—physiological
View-invariant, object-specific cells (8 of 773)
Findings—physiological
Multiple cells tuned to different views of the same object (pooling sketch below)
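A schematic of how this could support invariance (my sketch, not the recorded circuitry; the preferred views and 30° tuning width are assumptions): a downstream unit pools, here with a max, over view-tuned cells selective for the same object, yielding a roughly view-invariant, object-specific response.

```python
import numpy as np

def view_cell(angle_deg, preferred_deg, sigma_deg=30.0):
    """Response of a view-tuned, object-specific cell."""
    d = (angle_deg - preferred_deg + 180.0) % 360.0 - 180.0
    return np.exp(-0.5 * (d / sigma_deg) ** 2)

def view_invariant_cell(angle_deg, preferred_views):
    """Pool (max) over view-tuned cells for one object."""
    return max(view_cell(angle_deg, p) for p in preferred_views)

preferred = [0, 60, 120, 180, 240, 300]      # assumed tuning centres
for test in (15, 90, 200, 330):
    print(f"{test:3d} deg -> pooled response {view_invariant_cell(test, preferred):.2f}")
```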
Authors’ conclusions:
Object recognition depends on the training view
A small number of stored views can be used to achieve invariance with wire-like objects
Neurons were found in IT that respond selectively to learned objects, mostly to specific views

Problems:
Highly unnatural stimuli
Are view-selective neurons used for recognition, or engaged only after recognition?
Does interpolation hold for self-occluding (solid) objects?
