Perception as Information Detection: Reflections on Gibson’s Ecological Approach to Visual Perception (Paperback). ISBN 0367312964, 9780367312961.
Introduction 1
Jeffrey B. Wagman and Julia J. C. Blau
Part I
The Environment to Be Perceived 3
Part III
Visual Perception 149
Part IV
Depiction 253
References 291
Index 333
Illustrations
Figures
2.1 Water velocity 60 s after an 86 mm-long fish (Lepomis
gibbosus) passed the area 26
2.2 (a) Typical posture and movement of craftsmen during
stone bead production. (b) Examples of ellipsoidal glass
beads produced by expert (HQ) and non-expert (LQ)
craftsmen. (c) Singularity spectrum estimated for expert
(HQ) and non-expert (LQ) craftsmen 28
2.3 (a) Ventral surface of a flake detached by conchoidal
fracture. (b) Flake terminology 31
2.4 Re-fitted elongated ovate fine-grained porphyritic
phonolite cobble from Lokalalei 2C 33
3.1 A hand feeling a pair of scissors 42
4.1 (a) A diverging pencil of rays from a single reflecting point.
(b) A few converging cones show how an optical image could
be built from an infinite number of points on the object 53
4.2 The demonstration of a “retinal image” by Scheiner 56
4.3 Illumination is transparent to that which is illuminated 61
4.4 (a) Three holes drilled in a plywood box provided an
illumination shaft. (b) Rods angled obliquely directly in
front of the viewing aperture of the box 63
4.5 Photographs taken through the viewing aperture 63
4.6 Newton revisited 64
4.7 (a) Box with interior reflecting surfaces. (b) A top view
schematic of the light in the interior of the foam-lined
box. (c) A top view schematic of the light in the interior
of the box shown in (a) 65
4.8 Light reflected from spacewalkers: observations 67
5.1 Pike’s Peak, Barr Trail 76
5.2 Ambient optic array diagrams 77
6.1 The transformation of the optic array defined by a
locomotor movement 94
6.2 Illustrating the dual components of an ecosystem 101
6.3 The dual frame discrepancy hypothesis 105
6.4 Lee’s swinging room 106
7.1 A newborn baby participating in the weight-lifting
experiment 112
7.2 A newborn boy only a few hours old is studying his
hand intensely 115
7.3 A 4-month-old girl in deep concentration on the visual
motion presented on the screen in front of her 122
7.4 Accelerating looming stimulus approaching the infants’
eyes resulting in increased theta-band oscillatory activity
in the visual cortex of the extrinsic loom 126
8.1 Surfaces and affordances as interfaces 131
8.2 Affordances for support 133
8.3 Objects that afford sitting on (by humans) 134
8.4 Many animal species perceive affordances for reaching 135
8.5 Performing a given behavior creates and eliminates
affordances 137
8.6 Relationship between physics and geometry and
perception-action in standard (top) and ecological
(bottom) approaches 139
8.7 Objects that afford grasping with one hand and with
two hands 141
8.8 Perceptual experience vs. artificial measurement (M)
devices 142
9.1 The horizon ratio 154
9.2 Geometry of the ground theory 155
9.3 Ground texture as an intrinsic scale for relative
exocentric size and distance 157
9.4 Distance perception: experimental tasks, representative
data, and the results of numerical simulation based on
Equations 9.1 and 9.3 160
10.1 Depiction of the moving room used in Stoffregen and
Smart (1998) and Smart, Stoffregen, and Bardy (2002) 179
10.2 (a) Depiction of the virtual hallway stimuli used by
Warren, Kay, and Yilmaz (1996). (b) Depiction of large
screen projection of optic flow (sinusoidal) used in
Dijkstra, Schöner, and Gielen (1994) 181
10.3 Set-up and depiction of study for Littman (2011) 184
10.4 Potential alteration of perception and action when using VE 185
10.5 Depictions of optic flow conditions used in Smart et al.
(2014) 186
12.1 (a) Mobile eye tracker devised by Land (1992).
(b) Simultaneous eye and field of view recording for
tracking eye gaze 208
12.2 (a) Lightweight, mobile adult eye tracker. (b) Mobile
infant eye tracker 209
12.3 Different stages of eye, head, and body rotations during
a 90° shift of gaze 215
12.4 Field of view for an average-height 8-year-old girl 217
12.5 Heat maps showing the frequency of face (top row) and
toy (bottom row) locations within the head-centered
field of view for 12- and 24-month-old infants during
fixations 218
14.1 Top: Apparatus and procedure for the overhead
reaching task. Bottom: Apparatus and procedure for the
minimum pass-through-ability task 240
14.2 (a) No perceptual intent condition. (b) Perceptual intent
condition of Experiment 1 247
15.1 A camera obscura 256
15.2 The chambered eye considered as a camera with a lens 257
15.3 Hand paintings in paleolithic rock art 263
15.4 Hand prints in development 263
15.5 The “City Churches” of Sir Christopher Wren, as seen
from 35,000 feet 266
15.6 Locating the vanishing point 272
16.1 (a) A camera receives light from a scene and focuses it,
then (b) a spinning shutter exposes each frame 276
16.2 (a) The original scene (in grayscale). (b) The negative.
(c) The positive print 277
16.3 (a) A bright light is projected through the positive print.
(b) A spinning shutter exposes, then blocks each frame
three times 278
16.4 The scanning pattern of a television or computer screen 279
16.5 (a) No blur specifies the ball (and camera) are still.
(b) Global blur specifies the camera is moving.
(c) Localized blur specifies the ball is moving 281
16.6 Typical editing structure 283
16.7 Nested event structure 286
Table
1.1 Some distinctions between the world, habitat, and
Umwelt 8
James Gibson begins his final book by making a distinction between two
senses in which the world surrounds the animal. The whole project of The
Ecological Approach to Visual Perception depends upon the distinction
Gibson makes between the physical world surrounding animals and the
environment surrounding them. The physical world contains everything
from sub-atomic particles to galaxies, but it does not contain meaning.
Perception does contain meaning. If the world that animals perceive is the
physical world, it is mysterious why perception is meaningful. The tradi-
tional solution to this mystery, which Gibson rejected, is that animals
make the meaning somehow. For animals like us, the traditional solution
has it that our brains construct the meaning in private, and that is where
the meaning lives. In proposing that the world that animals perceive is the
environment, not the physical world, Gibson proposes a different solution
to this mystery. Perception is meaningful, on Gibson’s account, because
there is meaning in the environment that animals perceive. Animals do not
create meaning; they discover it in the environment. Gibson’s solution
amounts to a radical rejection of the understanding of the nature of the
world and its relationship to experience, perception, and knowledge that
had been in place since the founding of modern science. Gibson’s solution
is the key move that motivates the rest of his book, and indeed the whole
of his ecological approach to psychology. The distinction has, however,
created new problems. The aim of this chapter is to argue that Gibson
should have made a distinction between not just two, but three different
senses in which the world surrounds the animal.
The distinction Gibson (1979/2015) makes is between the physical world that surrounds animals and the environment that surrounds them.
A Troublesome Observation
The additional distinction we are proposing here is not novel. Gibson
himself was quite aware that his use of “environment” referred ambigu-
ously to both the surroundings of an individual living animal (a token) and
the surroundings of an idealized member of a species (a type), and he noted
that this ambiguity could potentially cause confusion. Gibson writes
(1979/2015, p. 3):
In the course of time, each animal moves through the same paths of its
habitat as do other animals of its kind. Although it is true that no two
individuals can be at the same place at the same time, any individual
can stand in all places, and all individuals can stand in the same place
at different times. Insofar as the habitat has a persisting substantial
layout, therefore, all its inhabitants have an equal opportunity to
explore it. In this sense the environment surrounds all observers in the
same way that it surrounds a single observer.
The Third Sense of Environment 7
What Gibson says here is more or less true. For most of the objects and
surfaces around us, we can station our eyeballs relative to those surfaces at
a point of observation that was previously occupied by one of our fellows.
(There are exceptions to this: only you can see the end of your own nose
from the position of your own eyeball; Gibson, 1967a.)
But this does not really resolve the tension. There are many ways in which
the world can look different to two different members of the same species,
even if we all keep moving around in the appropriate way. This is not because
we are trapped in our own private mental world, as the traditional view has
it, but simply because we are different animals with different abilities.
Take a Chinese newspaper. Neither of the authors of this chapter knows
how to read Chinese. We might be able to recognize the general shape of
the characters and make the judgment, “That looks like Chinese writing,”
or again we might recognize someone in a photograph printed on the front
page. But we cannot read any of the headlines. The script is not meaning-
ful for us in the way that it is for someone who can read Chinese. Similarly,
the cockpit of a plane looks different to a trained pilot than it does to a
novice or to a child. The pilot can do things with the buttons and levers
that non-trained individuals cannot. Or, at an even more basic level, think
about climbing a set of stairs. The stairs look different depending on
whether you are a toddler or a long jumper or you are someone who has
just had a hip operation or you are wearing high-heeled shoes.
What’s missing, in Gibson’s elision of the two senses of environment
here, is an acknowledgment that the way the world looks to us is to some
extent a result of the way we currently are as individuals. In claiming that
we all share the same environment because we can in principle all see the
same surfaces from the same set of positions, Gibson is treating perception
as if it were an idealized process. He is suggesting that perception can be
separated from the perception–action loop that comprises the activity of a
living animal with a history of engagement and learning. This is odd,
because Gibson’s overall project throughout the book is to deny that per-
ception is a separate phenomenon in this way, and to promote the view
that perception and action are inherently one and the same process.
We suggest, then, that Gibson was right to point out the “troublesome”
ambiguity of the term “environment,” and that he was prescient in noting
that this ambiguity “may cause confusion.” Indeed, as we will show below, it
has caused confusion. The tension can be resolved, but not in the manner that
Gibson proposed. Instead, we need to make a further distinction, between:
the environment as it exists for a typical member of a species, a habitat; and
the environment as it exists for a particular living animal, an Umwelt.
It is important to note that the three senses of environment do not define
three distinct universes. Rather, the three senses are overlapping and
nested. Briefly, the habitat is the physical world considered relative to a
typical or ideal member of a species, i.e., it is a type of environment. It is a
complementary term to a species. The habitat continues to exist even when
a given animal dies. The Umwelt, meanwhile, is the physical world con-
sidered relative to a particular living individual animal, i.e., it is a token
instance of a habitat. Umwelt is a complementary term to a specific organ-
ism. When that organism dies, its Umwelt necessarily ceases to exist. (The
term Umwelt originates in the work of Jakob von Uexküll, and refers to
the world as it appears to a given animal. See Kull, 2009, for a discussion
of the history of the term.)
We will not here attempt to provide watertight definitions of the three
concepts of world, habitat, and Umwelt. Instead we offer a set of illustra-
tive differences in Table 1.1. The upper part of Table 1.1 sets out some
Table 1.1 Some distinctions between the world, habitat, and Umwelt

World: View from nowhere. Exists prior to any animal encountering it.
Habitat: Ideal perspective for a typical member of a species. Exists for a typical member of a species.
Umwelt: Has a first-person perspective. Brought forth through development and active exploration; enacted.
Conclusion
In this chapter we have argued that Gibson’s distinction between the phys-
ical world and the animal environment should be refined. The latter term
should be subdivided. We should appreciate the difference between the
species-specific habitat and the animal-specific Umwelt.
Making clear this distinction allows us to resolve several long-standing
tensions in the field. The notions of affordance and information can be
better understood if we allow that habitat resources are different from
active relational engagements, and that arrays of energy are different from
self-regulating organisms actively seeking out structured patterns. These
notions are not in conflict with one another, but are complementary.
Similarly, the phenomena of learning and social interaction can be better
understood by keeping distinct the setting of behavior and the process
through which that behavior comes to take shape over time as an indi-
vidual becomes a skillful participant in a social practice.
One last question: does this mean that we have to stop talking about
“animal-environment systems?” If the term “environment” is ambiguous,
then should we abandon it in favor of more accurate terms—“species–
habitat” and “animal-Umwelt” systems? This does not strike us as an attrac-
tive conclusion. We like the phrase “animal-environment system.” We are
used to it. It is part of the family. For the purposes of this chapter, we have
avoided talking about “the environment.” But perhaps the term need not be
problematic as long as its meaning is appropriately constrained by context.
Acknowledgments
Edward Baggs was supported by funding from the European Union’s
Horizon 2020 research and innovation program under grant agreement
No 706432. Anthony Chemero was supported by the Charles Phelps Taft
Research Center.
2 The Triad of Medium, Substance,
and Surfaces for the Theory of
Further Scrutiny
Tetsushi Nonaka
Logic works perfectly well once mankind has developed adequate language.
But logic is helpless if it has to develop this adequate language … As for the
formulation of adequate logic, there must be a language which does not
impoverish the real situation. It is terrible that in our technocratic age we
do not doubt the initial basic principles. But when these principles become
the basis for constructing either a trivial or finely developed model, then the
model is viewed as a complete substitute for the natural phenomenon itself.
(from the Kyoto Prize Lecture by Israel M. Gelfand, 1989, p. 19. Copyright
2009, Tatiana V. Gelfand and Tatiana I. Gelfand)
Charles Sanders Peirce (1955, p. 6) once remarked that every great work
of science affords some exemplification of the defective state of the art of
reasoning of the time when it was written. Perhaps one of the great ser-
vices of The Ecological Approach to Visual Perception (Gibson,
1979/2015) to psychological science was bringing to the fore a set of
fundamental assumptions which have long been, and perhaps still are,
invisible to psychologists who live amidst them. For generations, we have
been taught that the apprehension of the world depends on the percep-
tion of space and time, and that “perception begins in receptor cells that
are sensitive to one or another kind of stimulus energy” (Kandel,
Schwartz, Jessell, Siegelbaum, & Hudspeth, 2013, p. 445). Such a nar-
rative is so deep in psychology as to be almost unquestionable: Fish don’t
talk about water. But what if these fundamental assumptions are irrele-
vant to the activity of perception that orients the organs of perception
and explores the cluttered environment? What if our perceptual systems,
through evolution, are coordinated to a particular scale of nature that is
at a different level from the description provided by physics of atoms and
objects in space? What if the level of the description of the world that
our perceptual systems fit into is so unique that it deserves investigation
in its own right?
Gibson suggested that psychologists can approach the problem of per-
ception in an entirely different light, once an adequate description of the
environment to be perceived which does not impoverish the real situation
has been developed. In Chapter 2 of his 1979 book, “Medium, Substances,
and Surfaces,” Gibson attempted to develop a “new” description of the
environment that our perception and behavior fit into. “It will be unfa-
miliar, and it is not fully developed, but it provides a fresh approach where
the old perplexities do not block the way” (Gibson, 1979/2015, p. xi). The
aim of this chapter is two-fold: to reflect on the development of a “new”
description of the environment—the triad of medium, substances, and surfaces—and to highlight its significance to the study of perception and behavior.
Useful Vision
Almost immediately after the publication of The Senses Considered as Per-
ceptual Systems (Gibson, 1966a), Gibson began working on the revision of
The Perception of the Visual World (Gibson, 1950a), which, as we now
know retrospectively, ended up as an entirely new book (Gibson,
1979/2015). In the James J. Gibson papers (#14–23–1832) at the Division
of Rare and Manuscript Collections of Cornell University Library, New
York, there is a folder entitled “Notes for Revision of Visual World,”
which contains over 50 pages of handwritten notes for the “new book”
written by Gibson in 1967. One of the notes reads as follows:
Although this handwritten memo was probably not intended for publica-
tion, its message is clear. A theory of visual perception, first and foremost,
needs to account for “useful vision.” It must account for our remarkably
veridical ability to deal with the surfaces and substances, objects and
events of the environment that we are, like it or not, obliged to cope with
and use as a species and as individuals (E. J. Gibson, 1994, p. 503). What
different types of clay, rock, vegetation, bark, leaves, fur, feathers, or skin
afford to us may not be distinguishable at first glance. But they can poten-
tially be identified through the activity of perceiving—looking around, pal-
pating, listening to, or sniffing them. Unlike neural signals converted from
stimuli impinging on receptors, the activity of perceiving, involving adjust-
ments of organs, is a function of a set of meaningful properties of the
environment that are selectively attended to by animals. Naturally, an
effort to study the activity of perception thus conceived demands the
description of the functional referent of perception (Holt, 1915). Other-
wise, it would be like watching a tennis match with half the court occluded
from view. To study useful dimensions of perception, the next logical step
would be to find out the adequate level of the description of the world that
our perception and behavior fit into. An attempt to develop a “new”
description of the environment, it seems, was a natural consequence of
Gibson’s effort at understanding useful vision.
The Triad
It is likely that Gibson wrote the first draft of Part I of the new book—
“The Environment to be Perceived”—between January and November in
1971, and came up with the title of the book, An Ecological Approach to
Visual Perception (albeit beginning with “An” instead of “The”) during
this period. On January 12, 1971, Gibson (1971a) sent a tentative outline
of the new book to the editor at the Houghton Mifflin Company that was
quite different from the final version of the book: the book was then enti-
tled Everyday Visual Perception, and Part I of the book was “A New
Theory of Perception,” for which Gibson left notes that contrast a theory
of sensation-based perception and that of information-based perception
(e.g., Gibson, 1967c).
The year 1971 seems to have been a key year in the development of the ideas that culminated in Gibson’s (1979/2015) final book. It was then that a series of notes on affordances (Gibson, 1982a) and “Do We Ever See Light?” (Gibson, 1971b), which later constituted important parts of the 1979 book, were written, and that the phrase “an ecological approach to visual perception” first appeared in a note (Gibson, 1971c). Gibson’s use of the term “ecological psychology” is also found in the note entitled “A Preliminary List of
Postulates for an Ecological Psychology,” written in June, 1971 (Gibson,
1971d). After writing a series of notes, by November 1971, Gibson had
written up an early version of the new Part I, “The Environment to be Per-
ceived,” which included all the basic contents of the finished version of the
part, including the triad of medium, substances, and surfaces and the nine
ecological laws of surfaces (Gibson, 1971e).
24 Tetsushi Nonaka
Normal perception involves the possibility of further exploration, which
we are aware of whether or not the possibility is taken advantage of
(Gibson, 1978a). But, what makes us aware of the possibility of further
exploration in the first place? What makes us aware of the layout of the
environment in and out of sight? What makes it possible for animals to
discover the potentially meaningful features of the environment that have
not yet been taken advantage of? These are the questions that share the
same fundamental issue which cannot be resolved without restoring the
active observer to the world in a way physics never did (Gibson, 1973).
Gibson’s following thought experiment illustrates well what gets lost in the
description of the world by physics in terms of space, time, matter, and
energy: What if a wholly passive animal were in a wholly frozen world?
Or, conversely, what if an animal were in “an environment that was
changing in all parts and was wholly variant, consisting only of swirling
clouds of matter” (Gibson, 1979/2015, p. 10)? In both cases, it would be
impossible for the animal to disentangle a set of variables that are specific
to the world out there (i.e., independent of the point of observation). These
hypothetical worlds are not the environment for perceiving animals. But,
note that “in both extreme cases there would be space, time, matter, and
energy” (p. 10).
Restoration of the active observer that scrutinizes the environment
requires a fundamental reworking of the description of the world, or in
Gibson’s (1971f) words, “the permutation of the orchard with Newton’s
apple!” The crux of this permutation was the replacement of matter and
bodies in empty space with the triad of (1) medium (the gaseous atmo-
sphere); (2) substances that are more or less substantial; and (3) surfaces
that separate the substances from the medium (Gibson, 1979/2015, p. 27).
The Medium
The recognition of the air as a medium (for terrestrial animals) allows the
distinction between potential and effective stimulation. Radiant energy,
acoustic energy, and chemical energy are propagated through the medium,
which provides the ambient sea of stimulus energy in which animals can
move about. Instead of inquiring whether one model of inferring the causes
of sensation aroused by stimuli is better than another, with the notion of
medium, we can now begin to study activity before sensations have been
aroused by stimuli, an activity that orients the organs of perception and
explores the sea of potential stimulation for the information external to the
perceiver (Gibson, 1982b, p. 398). Unlike points in space defined by an
arbitrary frame of reference, the ambient energy array surrounding each
potential point of observation is unique (Gibson, 1979/2015, p. 13). As the
observer moves from one point of observation to another, the optical
array, the acoustic array, and the chemical array are transformed accord-
ingly (p. 13). This provides the opportunities for an active observer to
move in the medium to detect invariants underlying the transforming per-
spectives in the ambient array surrounding a moving point of observation.
Among the recent advances that have furthered our understanding of
the notion of medium for perceiving animals is the insight provided by
Turvey and Fonseca (2014), whose research, probably for the first time
since Aristotle (1907), brought to light the problem of medium in haptic
perception. They hypothesized that interconnected structural hierarchies
composed of tensionally prestressed networks of our bodies that span from
the macroscale to the microscale—from muscles, tendons, and other con-
nective tissues to various micro-elastic structures such as a network of col-
lagen fibers—constitute the medium for the haptic sense organs of animals
(Turvey & Fonseca, 2014). Just as the air is the medium for sound, odor, and the reverberating flux of light, the isometric tension distributed throughout all levels of these interconnected, multiscale networks, though they lie on the other side of the skin, makes it possible for an active perceiver to spontaneously transform the distribution of forces throughout the tensionally integrated system in such a way as to detect the invariant patterns that specify the source of mechanical disturbances.
Turvey and Fonseca’s (2014) rediscovery of the medium of haptic per-
ception resonates with the recent surge of interest in the mechanical basis
of information and pattern formation in a wide range of fields—
mechanobiology, soft robotics, sensory ecology, and rheology (e.g.,
Hanke, 2014; Ingber, 2006; Iwamoto, Ueyama, & Kobayashi, 2014;
Rieffel, Valero-Cuevas, & Lipson, 2010). Because the form of any struc-
ture, whether a vortex flow of water or a living tissue, is determined
through a dynamic interplay of physical forces, the distinct pattern of
forces characteristic of a mechanical disturbance may convey a physical
form of information that constrains perception and the behavior of an
agent (Ingber, 2005). One good example of this is the hydrodynamic per-
ception by aquatic animals (Hanke, 2014). Harbor seals, for instance, are
known to use their vibrissae to haptically discriminate the water movements left behind by prey or predators that passed by earlier, and to perceive the motion path, size, and shape of the object that caused the trail (Hanke, Wieskotten, Marshall, & Dehnhardt, 2013) (Figure 2.1). A point worth emphasizing is that although the informative patterns of water movement are there to be perceived by an animal, there are many reasons the animal may not attend to the information. A harbor seal fleeing a white shark may not attend to a pattern of water movement that specifies the presence of salmon that could be preyed upon. Near the surface of clear water during
daytime the animal may attend to optical information without taking
advantage of hydrodynamic information. The notion of medium makes it possible for us to recognize this distinction between the information available to the animal and the information selectively picked up by the perceptual activity of the animal.
Flaking Stone
The earliest known evidence of the alteration of a surface layout by
humans to change its affordances is the modification of a natural, hard,
and rigid mineral material (cobbles, pebbles, and rock fragments) by means
of percussion with hard stone hammers (Režek, Dibble, McPherron, Braun,
& Lin, 2018). Thanks to its excellent durability, hard stone records very
old traces of physical actions applied to it (Pelegrin, 2005). The archeologi-
cal records clearly show that by at least 2.6 million years ago (and
likely much earlier, e.g., Harmand et al., 2015; McPherron et al., 2010),
early human species were already habitually fracturing stones so as to alter
their utility or function—stone knapping, as archeologists call it. From the
very beginning, the aim of stone knapping was to obtain a specific layout
of surfaces—the razor-sharp edges—that afford a function which is absent
or extremely rare in the natural world: the cutting function (Roche, Blumenschine, & Shea, 2009). This is inferred from the fact that stone
fragments (called flakes), detached from a block of stone (called a core)
found in the old archeological sites, unequivocally display the character-
istic layout of surfaces resulting from the specific fracture mechanism
called a conchoidal fracture. This mechanism leaves razor-sharp cutting
edges and conspicuous bulbs of percussion on the fracture plane that are
unlikely to have been formed naturally (Roche, 2005; Semaw et al., 1997)
(Figure 2.3a).
Conchoidal fracture refers to the phenomenon producing a Hertzian
cone. It arises from the fracture of a specific type of brittle substance—a
homogeneous and isotropic crypto-crystalline structure (e.g., flint, fine-
grained silicified sandstone) or glasses (e.g., obsidian) with no preferred
planes of weakness (Pelegrin, 2005). Conchoidal fracture requires loading
at a point near the angular edge of the block of raw material with its
Figure 2.3 (a) Ventral surface of a flake detached by conchoidal fracture. (b) Flake
terminology.
Source: (a) Adapted from “Lithic flake on flint, with its fundamentals elements for technic
description,” Copyright 2006, José-Manuel Benito Álvarez. From José-Manuel Benito
Álvarez, Wikimedia (https://creativecommons.org/licenses/by-sa/2.5/legalcode).
32 Tetsushi Nonaka
e xterior angle (called the exterior platform angle) less than 90 º (Figure
2.3b), and the to-be-flaked exterior surface (called the flaking surface)
needs to be flat or slightly convex to allow for the propagation of the
energy transmitted by the loading event (Pelegrin, 2005).
Imagine you are a prehistoric human. What would you do to obtain
precious cutting tools by fracturing stones? First, you need to look for
natural homogeneous, isotropic, and brittle material that can be fractured
conchoidally to obtain razor-sharp cutting edges, and the hammerstone
that is suitable for aimed striking. In addition, the raw material needs to
have a peculiar natural layout of surfaces with angular edges and a more
or less flat surface (e.g., cobbles with a flat surface as opposed to a convex
one) required for conchoidal fracture. Then, in order to make the most of
the precious raw material (i.e., to produce as many usable flakes as pos-
sible), you need to make sure that the specific layout of surfaces of the core
that allows further flake removals is preserved after each flake removal.
The foregoing is what has been found in the in-depth analysis of the
traces of stone tool-making behavior by early humans who lived near the
western margin of present-day Lake Turkana in Kenya—the archeological
site of Lokalalei 2C—around 2.34 million years ago (Delagnes & Roche,
2005; Roche et al., 1999). Surprisingly for stone fragments that are so old,
it was possible to re-fit many of these pieces found in Lokalalei 2C back
together into an original cobble. By re-fitting the flakes, Roche and her col-
leagues examined the sequence of flake removals—the order by which
these flakes were originally removed (Delagnes & Roche, 2005). An early
stone knapper who fractured the cobble in Figure 2.4, for example,
selected a fine-grained phonolite cobble that had a flat surface (Face A) as opposed to a highly convex one (Face B), resulting in edges with acute angles around the flat side of the cobble (indicated by the gray dotted line around Face A). Roman numerals in Figure 2.4 indicate the order of the series of flake removals, and the arrows show the direction of strike with a
hammerstone. Even to a novice’s eyes, it is obvious that the knapper
obtained these flakes not randomly but by following a certain set of rules.
For example, the knapper carried out all the flake removals on the flat face
(Face A) except for the final attempt (V on Face B), and aimed the blows at
the edges with acute angles shown by the gray dotted line. In addition, after
a series of flake removals, the knapper switched to the opposite edge on the
same face, alternately striking flakes from the two opposing edges
(Figure 2.4). Had flaking been carried out from one direction only, the flat
surface would quickly have been lost and the remaining core wasted after
just a couple of flake removals. In this example, the knapper kept the
surface flat by alternating the direction of strike, thus providing
opportunities for further flake removals to the very end.
It was also found that the cores and flakes at Lokalalei 2C do not show
any impact damage from failed percussions, such as might be caused by
faulty estimation of the opportunities for flaking (Delagnes & Roche, 2005).
Medium, Substance, and Surfaces 33
Figure 2.4 Re-fitted elongated ovate fine-grained porphyritic phonolite cobble from Lokalalei
2C. Flaking was carried out on a large and relatively flat natural face (Face A)
from the longest available edge and a shorter adjacent edge, which are the only
portions of the perimeter of the core with suitable natural striking angles (gray
dotted line). The series of flakes were alternately struck from these two edges.
Source: From Delagnes and Roche (2005), Figure 8. Adapted with permission from Elsevier.
Epilogue
A “new” description of the environment to be perceived—the triad of
medium, substances, and surfaces that allows for both persistence and
change—provides a principled way to frame the standing possibility of
further exploration, of scrutinizing, or of looking more carefully to extract
invariants (Gibson, 1978a). Without taking this possibility into account,
the activity of perceiving is easily confounded with the activity of
guessing, which occurs only in the rather atypical situation where further
scrutiny is wholly prevented. This confusion, in turn, would lead to the
reduction of the laws of perception to the laws of guessing.
Luminous, mechanical, or chemical energy is structured by the substan-
tial environment and becomes ambient in the medium. The ambient sea of
energy around each of us is usually very rich in what we call pattern and
change, which provides the inexhaustible reservoir of potentially informa-
tive invariants that lies open to further scrutiny (Gibson, 1979/2015,
p. 233). At this level of description of the environment, what has been
known tacitly is made explicit: The activity of perception is “open-ended,”
and you can keep discovering new features and details about the environ-
ment by the act of scrutiny (p. 245). Unlike guessing based on a few cues
or clues, normal perception is not based on “going beyond the data,” as
long as one can look again, or go back and look again (Gibson, 1978a).
What the triad of medium, substances, and surfaces offers us is a theory of
unlimited further discovery for perception. We will have to make a fresh
start.
Acknowledgments
I thank Tatiana V. Gelfand for giving me the permission to quote I. M.
Gelfand’s words from his Kyoto Prize Lecture as an epigraph. The writing
of this chapter was in part supported by JSPS KAKENHI Grant Numbers
JP18K12013 and JP18KT0079 from the Ministry of Education, Science,
Sports and Culture, Japan, awarded to Tetsushi Nonaka.
3 Ecological Interface Design
Inspired by “The Meaningful
Environment”
Christopher C. Pagano and Brian Day
Physics, optics, anatomy, and physiology describe facts, but not facts at a
level appropriate for the study of perception. In this book I attempt a new
level of description.
(Gibson, 1979/2015, p. xi)
In the only figure included in the original chapter, Gibson illustrates this
point with a drawing of a hand holding a pair of scissors with the caption
“A tool is a sort of extension of the hand … and one can actually feel the
cutting action of the blades” (p. 35; see Figure 3.1). This is used to illus-
trate how attachments to the body change the boundaries of the body and
thus alter the affordances as one interacts with the environment.
Figure 3.1 The original caption reads: “A tool is a sort of extension of the hand.
This object in use affords a special kind of cutting, and one can actually
feel the cutting action of the blades” (p. 35). This figure is also
employed as Figure 6.6 in Gibson (1966a). There its caption reads: “A
hand feeling a pair of scissors. The pressures of the metal surfaces on
the skin can be felt, but one is mainly aware of the separation and of
the movements of the handle. In use, one actually feels the cutting
action of the blades” (p. 112).
Source: From Gibson (1979/2015), Figure 3.1. Copyright 2015. Reproduced by permission of
Taylor & Francis Group, LLC, a division of Informa plc.
Rather than being a stored and relatively fixed representation, the body
schema is perceived and calibrated continuously as limbs and their attach-
ments change (Day et al., 2017; Maravita & Iriki, 2004; Pagano &
Turvey, 1998). The conception of the body schema as continuously per-
ceived from ongoing proprioceptive input runs counter to an assumption
inherent in the traditional body schema literature that proprioception does
not inform the central nervous system about the metric properties of the
body and its parts (e.g., Longo & Haggard, 2010). To reflect the fluid
nature of the body schema and the role of perceptual calibration in its
maintenance, we have proposed the term embodied action schema (Day et
al., 2017, 2019). The embodied action schema represents the body’s
current action capabilities, including any effects of a tool, such as the
lengthening of the arm to which it is attached (Cardinali et al., 2009; Day
et al., 2017, 2019; Maravita & Iriki, 2004; Sposito et al., 2012). A key
finding is that both limbs and hand-held objects are perceived through
kinesthesis via the same mechanism, with the same mechanical principles
underlying the perception of hand-held objects and the perception of the
body (Pagano & Turvey, 1998). This explains both the malleability of the
body schema and how attached objects become incorporated into the body
schema to be perceived and controlled as if a part of the body.
Incorporating a tool that extends one’s effectivities typically requires a
period of learning or adjustment before proficiency in its use can be
achieved. Recent research has shown that calibration to one’s tool-
enhanced capabilities occurs relatively quickly, though there is variability
in how much calibration is required for different conditions (Bingham,
Pan, & Mon-Williams, 2014; Bourgeois & Coello, 2012; Day et al., 2017,
2019; Fajen, 2005b; Mon-Williams & Bingham, 2007). This research has
also shown that calibration is action-specific and involves a mapping
from embodied units of perception to embodied units of action (Bingham
& Pagano, 1998; Coats et al., 2014; Pan, Coats, & Bingham, 2014). That
calibration is action-specific means that calibrating for one action (e.g.,
reaching) will not generalize to another action that involves a different
unit of action (e.g., throwing or walking) (Proffitt & Linkenauger, 2013;
Rieser, Pick, Ashmead, & Garing, 1995; Witt, 2011; Witt, Proffitt, &
Epstein, 2010). Rather than using external metrics such as inches or cen-
timeters, the relevant units are intrinsic to the body’s scale and its action
capabilities (Cutting, 1986; Gibson, 1979/2015; Mantel, Stoffregen,
Campbell, & Bardy, 2015; Pagano, Grutzmacher, & Jenkins, 2001; Prof-
fitt & Linkenauger, 2013). Thus, what is actually being calibrated is the
mapping between intrinsic units of perception and intrinsic units of action
(Bingham & Pagano, 1998; Bingham et al., 2014; Pan et al., 2014).
The conclusions generated from the calibration research mentioned above
suggest moving from the traditional concept of a body schema, which
implies a stable entity that is either innate or learned early in life, toward an
embodied action schema. An embodied action schema represents the
ongoing effects of constant proprioception and continuous calibration. An
embodied action schema also allows for changing action capabilities of an
actor to be perceived in real time, and stresses the similar intrinsic scaling
employed by the dual processes of perception and action (i.e., motor
control).
To this point, the focus has been on small, portable tools and their
incorporation into the embodied action schema. But humans have created
much larger tools: cars for driving, drones for surveillance, robots to
enhance our physical strength and capabilities, nuclear power plants, etc.
An area of study that ties together perception of tools and the margin of
safety is affordance-based control of action (Warren, 2006; see also Gibson
& Crooks, 1938). An example of this area of study is the control of
braking by vehicle drivers (Fajen, 2005a, 2007, 2008). In this case, action
capabilities are determined by the automobile. But, as with scissors or
other hand-held tools that Gibson mentioned in Chapter 3, the new action
capabilities are incorporated into the embodied action schema of the driver
and can be both perceived and felt by the driver. In his 1961 paper on con-
tributions of experimental psychology to safety research, Gibson mentions
that when the margin of safety is not easily detected during the operation
of automobiles or other devices, then one’s proximity to the margin of
safety must be signaled via some type of display.
We are unsure of the accuracy of their claim regarding the hips being
where the body first detects lateral movement (and we would use the term
“haptic” instead of “tactile”), but the advertisement expresses the appro-
priate sentiment for creating artificial displays. The challenge is to achieve
this with devices such as computer-mediated displays.
In the fourth chapter of Gibson’s 1979/2015 book (see Carello & Turvey,
Chapter 4, in this volume), he claims that one does not see light as such and
one does not see the retinal image as such. Instead one sees by means of the
information that is contained in the light. Ecological interface design (EID)
is founded on the assumption that if it is theoretically possible for direct
perception to occur via the specificity of information in the light, then it is
theoretically possible for direct perception to occur via artificial displays.
Such displays must act as media by
similarly conveying information lawfully specific to the affordances of the
remote environment. It is important to note that perception is indirect when
a psychological process, such as inference-making or mental integration,
intervenes between the object in the world and one’s awareness of that
object (Heft, 2002). Thus, the light reaching the eye or the sound reaching
the ear does not result in indirect perception if these media are functionally
transparent, as Gibson argued they tend to be under natural viewing con-
ditions (e.g., Heft, 2002). Similarly, the existence of an intervening hand-
held tool or a computer display does not necessarily have to result in indirect
perception if these media do not cause an additional psychological process.
The challenge is to turn this into reality.
Providing users with information via a display in a way that does not
result in indirect perception has been accomplished by designing the
display so as to convey the optic flow that would be available to the
observer during natural perception and action. This has been useful for
designing displays for robot teleoperation (Gomer, Dash, Moore, &
Pagano, 2009; Mantel et al., 2012). In cases such as interfaces for process
control plants, one begins by understanding the task-relevant invariants
that exist in the dynamics of the system itself (e.g., Bennett & Flach, 2011;
Vicente & Rasmussen, 1990). That is to say, human factors practitioners
must begin, as Gibson does in Chapter 3, by understanding exactly what is
meaningful about the environment. For process control, this involves
understanding the physical constraints of the system and then displaying
the higher-order parameters (i.e., invariants) that are specific to those con-
straints. In this way, the display acts as a smart perceptual instrument to
convey higher-order information akin to TTC and DTB, rather than dis-
playing the lower-order parameters of traditional physics that Gibson
argues against in Chapter 3 (Vicente & Rasmussen, 1990). The lower-
order parameters have to be mentally integrated to form a mental model of
the system’s current state, and then mental projection must be employed to
project the current state into the future to understand how one must act to
affect coordination and control. Displaying higher order parameters similar
to TTC and DTB allows for direct perception and thus allows the display
to act as a medium on a par with light and the retina.
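The contrast between lower-order variables and higher-order parameters such as TTC (time-to-contact) can be sketched in a few lines. The sketch below is our own toy illustration, not the displays described in the cited work; the function names and numbers are assumptions. It derives time-to-contact and the constant deceleration needed to stop from the raw distance and speed that would otherwise have to be mentally integrated.

```python
def time_to_contact(distance_m: float, closing_speed_mps: float) -> float:
    """Higher-order parameter: seconds until contact if the current
    closing speed is maintained (TTC = distance / closing speed)."""
    return distance_m / closing_speed_mps

def required_deceleration(distance_m: float, speed_mps: float) -> float:
    """Higher-order braking parameter: the constant deceleration that
    stops the vehicle exactly at the obstacle, v**2 / (2 * d). Comparing
    it with the maximum deceleration the vehicle affords indicates
    directly whether stopping is still possible."""
    return speed_mps ** 2 / (2 * distance_m)

# The lower-order variables (distance, speed) would otherwise have to be
# mentally integrated and projected forward by the operator; a display
# can present the higher-order quantities instead.
print(time_to_contact(50.0, 25.0))        # -> 2.0 (seconds)
print(required_deceleration(50.0, 20.0))  # -> 4.0 (m/s^2)
```

Displaying a quantity like `required_deceleration` against the vehicle's braking limit is one way a display could make the margin of safety perceptible rather than inferred.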
Another goal of EID is to convey the constraints of a complex system in an
intuitive manner. Constraints are one of the meaningful aspects of the domain
environment, and their incorporation into displays has been particularly
useful (Bennett & Flach, 2011; Borst, Flach, & Ellerbroek, 2015; Vicente &
Rasmussen, 1990). Effken and colleagues, for example, have created displays
for patient care that depict the physiological constraints imposed on hemody-
namics by fluid, force, and resistance, as well as the effects of clinical inter-
ventions on those constraints through particular combinations of medications
(Effken, 2006; Effken et al., 1997; Effken, Loeb, Kang, & Lin, 2008). Addi-
tional constraints exist between the patient and the health care practitioner,
or in more general terms, between the user and the work domain (Flach, Rey-
nolds, Cao, & Staffell, 2017). A good display for any complex system needs
to show the system’s constraints because the constraints are the targets of
effective human action (Borst et al., 2015; Effken et al., 2008).
In the Conclusion of the 1979/2015 book, Gibson discusses the use of
displays in experimental science (see Stoffregen, Chapter 15, this volume).
He writes:
Visual perception can fail not only for lack of stimulation but also for lack
of stimulus information.
(Gibson, 1979/2015, p. 54)
It works beautifully, in short, for the images that fall on screens or sur-
faces and that are intended to be looked at. But this success makes it
tempting to believe that the image on the retina falls on a kind of
screen and is itself something intended to be looked at, that is, a
picture. It leads to one of the most seductive fallacies in the history of
psychology—that the retinal image is something to be seen.
(Gibson, 1979/2015, p. 53)
I say vision occurs when the image (idolum) of the whole hemisphere
of the world which is in front of the eye, and a little more, is formed
Challenging the Axioms of Perception 53
Figure 4.1 (a) A diverging pencil of rays from a single reflecting point on an object
is infinitely dense; a subset converges in a pencil of infinitely dense rays
to a single focus point on the back of the eye. (b) A few converging
cones show how an optical image could be built from an infinite
number of points on the object.
Two things are notable here. First, Kepler ignored the retinal anatomy that
puts the lie to the assumption of an “opaque surface” upon which an
image can form. Dissections by the physician and anatomist Herophilos
(c. 335–280 BCE) prompted his description of the retina as being reminiscent
54 Claudia Carello and Michael T. Turvey
of a “folded fishing net” (Swanson, 2015). The contemporary characteriza-
tion is in terms of ten layers and a dense network of blood vessels. In
neither case is there an opaque surface. Of its multiple layers, then, which
should we interpret as screen-like? Second, Kepler appreciated that getting
a retinal image is only the first step. That image must still be considered
further by additional authoritative entities. One would think that the
seduction of starting with an image, however misleading, would lose its
appeal once ensnared in the infinite regress of a succession of images, each
in need of its own viewer.
But apparently, logic goes out the window when one is smitten. Witness
the descriptions of compound eyes. As Gibson (1979/2015) pointed out,
the compound eyes of arthropods are different from the chambered eyes of
vertebrates in that they do not focus light. The cone of rays diverging from
an object is not converted into a converging cone of rays by a lens; an
image is not formed. However, those who study insect vision seem to
retain the story initiated for humans. As Gibson lamented:
Figure 4.2 The demonstration of a “retinal image” by Scheiner (panels a–d).
A core issue for Campbell and for Bentley was that an image can be
located on a translucent screen (the post-surgery state of affairs, Figure
4.2c) but it cannot be located on a transparent screen (the pre-surgery state
of affairs, Figure 4.2a).1 The foregoing has been expressed in the following
terms:
With the aid of this instrument, it is possible to look directly in the eye
from the front and see clearly not only the retina itself and its blood
vessels but the optical images that are projected on it. That this is actu-
ally the case is proved by the fact that if the eye under examination is
focused on an object that is bright enough, a distinct and sharply
defined image of it may be seen by the observer on the surface of the
retina.
(Helmholtz, 1866/2000, p. 92)
In the reported experiment with four participants, such an entoptic image was
produced in the left eye of the observer. Simultaneously, the right eye viewed
a cardboard with variously sized gauges that the observer used to report on
the sizes of the fovea and the blind spot in the entoptic image. The author
was happy to note that one image was not distorted relative to the other,
despite the overrepresentation of the fovea in Visual Area 1. However, the
disconnect between retinal-image-as-snapshot and image-of-the-anatomical-
retina was overlooked. The label for the specially produced image should
have been a hint: Entoptic means “originating within the eyeball.” Shadows
of the vessels of the eye are not what have been meant historically and theor-
etically by “retinal image.” The photograph of the author’s own retina, com-
plete with axes and points superimposed on the photograph to show the
location of the fovea, tellingly showed no projection of the cardboard seen by
the other eye. The claim for an image of the environment on the retina seems
not to have been substantiated on anything other than a translucent retina.
Nonetheless, vision scientists and neurobiologists have been beguiled by
all of the focusing controls that seem to point to the importance of image
quality qua image, even lamenting “special-purpose visual systems [that]
failed to provide the comprehensive neural machinery that would allow
these images to be fully exploited” (Land & Nilsson, 2006, pp. 167–168).
But what struck Gibson as important across the diversity of eye designs is
that they are well suited to registering the structure of ambient light.
The structural properties of a retinal image are what count, not its
visible properties. Structurally, it should not be called an image at all,
for that term suggests something to be seen. The structure of a retinal
image is a sample of the structure of ambient light.
(1966a, p. 172)
Yet the visibility of photons—that is, the seeing of illumination, the seeing
of pure non-reflected light—has been an issue of long standing (e.g.,
Baylor, Lamb, & Yau, 1979; Hecht, Schlaer, & Pirenne, 1942; Sakitt,
1972; van der Velden, 1946) and continues to be (Field & Rieke, 2002;
Koenig & Hofer, 2011; Tinsley et al., 2016).
Increasingly sophisticated methods have been dedicated to demonstrat-
ing single-photon detection by the human eye. Whether or not the results
can be considered unequivocal seems not to have undermined the charac-
terization of the goal of the refined methods. One recent investigation aptly
observes that demonstrating the response of a rod cell to an individual
photon is not equivalent to demonstrating the perception by a human
subject of a photon (Tinsley et al., 2016). Along with that admission,
however, was the frustration that limitations of previous methodologies
are to blame simply for “an inherent ambiguity about the exact number of
photons required to elicit the perception of seeing light” (p. 2).
It is instructive, for present purposes, to clarify what investigators mean
when they try “to probe the absolute limit of light perception.” The experi-
mental setting of Tinsley et al. was that of passing laser light through a
1 mm-thick beta-barium borate (BBO) crystal. When suitably illuminated,
BBO gives rise to single-photon states. The participant’s task on each trial
was to report whether a light was seen or not, and how confident the
participant felt with respect to the report. More specifically, on each trial,
the participant watched for a dim flash, which occurred at one of two
times, with both times indicated by a beep. A decision was then given as to
which beep was associated with a dim flash, and what level of confidence
should be assigned to the decision. In the 10% of trials when confidence
was high, correct decisions, averaged over participants, were about 60%
(compared to 51.6% for the 90% of trials engendering low or middling
confidence). Tinsley et al. concluded: “To our knowledge, these experi-
ments provide the first evidence for the direct perception of a single photon
by humans” (p. 6). Our reading of direct perception suggests an alternative
conclusion. With regard to the triad of that which illuminates, that which
is illuminated, and illumination as such, it is perhaps more accurate to
claim that the participants in the Tinsley et al. experiment occasionally saw
a source of illumination (that which illuminates). They did not see illumi-
nation as such. They did not see photons.
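The logic of the two-interval task can be made concrete with a small simulation. Under a simple high-threshold model (our illustrative assumption, not the authors' analysis), an observer who detects the flash with probability p and otherwise guesses between the two intervals is correct with probability p + (1 − p)/2, so the reported roughly 60% accuracy on high-confidence trials corresponds to a detection rate of roughly 0.2 on those trials.

```python
import random

def simulate_2afc(p_detect: float, n_trials: int = 200_000, seed: int = 1) -> float:
    """Two-interval forced choice under a high-threshold model: if the
    dim flash is detected, the observer picks the correct interval;
    otherwise the observer guesses at chance (0.5)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        # Short-circuit: a detection yields a correct response outright;
        # a non-detection falls through to a coin-flip guess.
        if rng.random() < p_detect or rng.random() < 0.5:
            correct += 1
    return correct / n_trials

# p_detect = 0.2 predicts 0.2 + 0.8 * 0.5 = 0.6, matching the ~60%
# correct reported for high-confidence trials; p_detect = 0.0 predicts
# chance performance (~0.5), close to the 51.6% on the remaining trials.
```

On this reading, above-chance accuracy licenses only the claim that something was occasionally detected, not that illumination as such was seen.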
On the Invisibility of Illumination
A simple mundane phenomenon highlights that whereas light sources and
illuminated surfaces are visible, illumination is not. Consider a flashlight
held vertically in a hand with the emitted light striking the floor in an
otherwise ordinarily illuminated room (Figure 4.3). The hand with flash-
light can be so adjusted in height that one can see the emitted light at the
floor without seeing the flashlight’s bulb. That is, the illumination of the
floor (in the form of a circular disk of light if the flashlight is perpendicular
to the ground) is distinct from the source of illumination (a tungsten fila-
ment/incandescent bulb or a light emitting diode/solid state bulb). The
latter could also be seen if the flashlight were raised or tilted. What cannot
be seen is the light between the flashlight (the source of illumination) and
the floor (the thing illuminated). The illumination itself (i.e., light as such)
is not visible. Coloring or sharpening the light source does not change the
experience (Figures 4.3c and 4.3d). The illuminated surface is seen; the
illumination is not. Similar demonstrations are now common in high
school physics classes, where the light is a bright laser which only becomes
visible with the introduction of chalk dust. That the mundane phenom-
enon is nonetheless remarkable is illustrated by the puzzlement expressed
by a 4-year-old film enthusiast who demanded “Where’s the movie?!”
while looking overhead in a dark theater and gazing urgently between the
projector and the screen (J. Blau, personal communication, 2014).
Why does the assumption that light is perceived sit unexamined and
unquestioned in perceptual theory? It fits seamlessly into the narrative
developed for theories of indirect perception, namely, that our acquaint-
ance with the surrounding environment is mediated by sensations of light.
What about the opposite assertion that we never see light? It may at
first sound unreasonable, or perhaps false, but let us examine the state-
ment carefully. Of all the possible things that can be seen, is light one
of them?
(1979/2015, p. 48)
Can we test this assertion? One possible test would be provided if we could
arrange the circumstances so as to allow an observer to encounter light
when it is not illuminating any surfaces. If Gibson is right, then observers
should not experience that light; in the absence of illuminated surfaces,
they should experience only darkness.
The requisite demonstration, implemented as part of a science exhibit, has
been described by Zajonc (1993). Using details provided by him, we con-
structed the apparatus illustrated in Figure 4.4 in order to conduct some sys-
tematic investigations. The basic design entails a light source at one end of
an empty box, interior surfaces covered in light-absorbing material, and an
aperture through which to view the interior of the box. With the light turned
on, the interior of the box provides a setting in which there is illumination in
the absence of things illuminate-able. Illuminate-able objects or surfaces can
be introduced into the otherwise non-reflective box interior, either through a
small slot or by opening the hinged rear wall.
Photographs of the interior of the box were taken through the viewing
aperture for each of the following four conditions: (1) light source off, box
empty (Figure 4.5a); (2) light source on, box empty (Figure 4.5b); (3) light
source off, rod in box (Figure 4.5c); and (4) light source on, rod in box
(Figure 4.5d). The photographs reveal that the mere presence of light in the
box is insufficient for the light to have a visible consequence; that
consequence is restricted to Figure
4.5d. (These impressions from the photographs have been verified with a
dozen observers under experimental conditions.)4 The implication is that
illumination is visible only by way of that which is illuminated.
The latter conclusion can be taken a step further with a fifth condition:
light source on with smoke filling the box. Smoke reflects light so that it
can be seen without a rod in the box (Figures 4.5e and 4.5f ); it is a case of
Rayleigh scattering, where the reflecting surfaces are particles in the
medium. Without the particles, there would be nothing to see. The visibil-
ity of the smoke in the box at the viewing aperture (Figure 4.5f ) should be
underscored. It gives necessary evidence that the unseen light in Figures
4.5a–c is not a matter of the unavailability of light. The light is there but
invisible. Smoke renders it visible.
Figure 4.4 (a) Three holes drilled in a plywood box provided an illumination shaft
(extended with a PVC pipe to secure a flashlight), a monocular viewing
aperture centered on the front wall and, to its right, an insertion slot to
allow the introduction of a thin rod. (b) Rods angled obliquely directly
in front of the viewing aperture of the box, which was lined with black,
light-absorbing foam.
Figure 4.5 Photographs taken through the viewing aperture for (a) light off/no rod;
(b) light on/no rod; (c) light off/rod present; (d) light on/rod present;
(e) when the box is filled with smoke, the beam can be seen through the
unlatched back wall; (f) the smoke as seen through the viewing
aperture.
The observer’s experience with the light box is akin to the experience of
an astronaut on a spacewalk. Light is all around but, if the spacecraft,
Earth, and the moon are out of view, nothing is seen but darkness pep-
pered with point-light sources (the distant stars). Although the astronaut’s
eyes are continuously registering photons—the sun’s light is all around—
the astronaut does not have sensations of light (Zajonc, 1993).
Figure 4.6 Newton revisited. (a) The box was placed outdoors, slanted so that the
illumination shaft faced the sun, and an optical glass triangular prism
was aimed so as to shine down the illumination shaft. (b) With a camera
inside the box, the light spectrum (rendered here in shades of gray) can
be seen on an opaque white surface placed against the rear wall of the
box. (c) The spectrum that is visible on the rear wall is not seen at the
viewing aperture. (d–f) When a rod is introduced, however, it is easily
seen in the color of the part of the spectrum that it intersects.
closed, spectral qualities are registered by camera and observers only in the
presence of the inserted rod, that is, only in the presence of a reflecting
(illuminate-able) surface.
We are claiming that observers experience the radiant light-filled box
interior as black (i.e., as unlit) because the interior walls do not reflect light
and vision is tied to reflected light not radiant light. One might contend,
however, that observers’ experience of blackness on those occasions is
because the direction of light in the box (the direction of the illumination
shaft) is perpendicular to the direction of the viewer’s line of sight (the
viewing aperture). That is, one might contend that vision requires that illu-
mination’s direction be parallel to, not perpendicular to, the viewer’s line
of sight.
A final demonstration with the light box, however, indicates otherwise.
All of the foam walls were covered with light-reflecting paper (Figure
4.7a). In sharp contrast to the nonreflecting version of the box, the walls
Figure 4.7 (a) Box with interior reflecting surfaces. (b) A top view schematic
of the light in the interior of the foam-lined box. (c) A top view
schematic of the light in the interior of the box shown in (a).
of the light-reflecting version were plainly visible at the viewing aperture.
The inserted rod was not needed to reveal the illumination of the box’s
interior. We take the implication to be that light in the box is invisible in
Figures 4.5a–c and 4.6c not because the projection of light is perpendicular
to the line of sight but because it is not reflected.
Figures 4.7b and 4.7c schematize the contrast between light’s behavior
in the non-reflecting box and the reflecting box. The consequence of scat-
tering in an enclosure, such as the box with reflecting walls, is multiple-
reflection or reverberation, an endless bouncing of light from surface to
surface, a network of convergence and divergence that is indefinitely dense.
It renders the light in a given environmental arrangement specific to the
environmental arrangement (Gibson, 1966a, 1979/2015).
For those who remain unconvinced, let us return to the situation of
astronauts in deep space. Imagine two spacewalkers who can see each
other in the surrounding blackness of space by virtue of the light each
reflects to the other’s eyes (Figure 4.8a). If we restrict our attention to the
light reflected from the face of one (Figure 4.8b), it is clear that she is
bathed in the light she reflects, including light that is reflected from her
eyes. If the other astronaut moves out of view, she sees only darkness
where he had been. She does not see the light despite still being immersed—
eyes included—in it (Figure 4.8c). The problem is not that light doesn’t
make its way into the eye of the observer where it can affect the sensory
apparatus, the problem is that there is no structured light.
Figure 4.8 Light reflected from spacewalkers: observations. (a)–(c).
What the observer saw, as I would now put it, was an empty medium
… The purpose of the experiment is to control and vary the projective
capacity of light. This must be isolated from the stimulating capacity
of light. Metzger’s experiment points to the distinction between an
optic array with structure and a nonarray without structure. To the
extent that the array has structure, it specifies an environment.
(Gibson, 1979/2015, p. 143)
Summary
Our critical evaluation of the commonplace discourse on the retinal image
and the visibility of light bears on Gibson’s aim in Chapter 4: To “describe
the information available to observers for perceiving the environment.” In
our chapter, we have highlighted the historical inclination to accept the
image formulation as a fitting characterization of the light to the eye, and
the echoes in contemporary literature. As underscored, that inclination
engenders a theory of vision that is necessarily inferential: Perception is a
process of making inferences with respect to images on the retina. Corre-
spondingly, in our chapter we have highlighted the historical inclination to
accept visible light as a fitting ontological characterization of light as such,
again with contemporary echoes. That inclination leads to a theory of
vision necessarily expressed as having experiences of light quanta—that
visual experiences are first and foremost of light as such. In his Chapter 4,
in contrast, Gibson highlighted the distribution of light—its structure, its
order—as information about the surfaces and substances that structured it.
In so doing, he provided a formulation in terms of information at nature’s
ecological scale that rejects both of the storied historical implications.
Notes
1. We have not found citations of Bentley or Campbell in any of Gibson’s books
or papers. Although calls to question the literal existence of the retinal image
would likely be celebrated by Gibson, his own arguments were simply against
how the retinal image has been used conceptually.
2. Helmholtz’s footnote 1 attributes this report to A. W. Volkmann, a renowned
scholar in physiological optics.
3. Anecdotally, we have asked our own optometrist what he sees when he uses the
ophthalmoscope to peer into our eyes. He listed the features related to retinal
anatomy and health (vessels, floaters, etc.). Trying not to lead the witness too
Challenging the Axioms of Perception 69
much, we artfully encouraged him to say more. He responded, “Do you mean
do I see what you see? No, that’s just silly” (S. McKeown, personal communica-
tion, April 27, 2018).
4. Identical impressions were reported by Zajonc (1993), whose light source was a
powerful projector rather than a flashlight.
Part II
The Information for Visual Perception

5 Getting into the Ambient Optic Array and What We Might Get Out of It
William M. Mace
James Gibson labored carefully over the development of the concept of the
ambient optic array. This chapter will present and elaborate Gibson’s most
mature articulation of the ambient optic array concept but with enough
history to dramatize Gibson’s constant effort to appreciate relevant empiri-
cal results, to adjust all aspects of his theorizing to one another and to keep
the system focused on the overall task of explaining perception of the
environment in a coherent way. With the development of the ambient optic
array concept in hand, the chapter will then survey what has happened in
research on vision from 1979 to the present in order to discover what
people have and have not appreciated in Gibson’s work.
Thus, the optic array is a theory of the optical structure of the environ-
ment. So far as I can tell, there has never been anything quite like this, that
is, the “of the environment” part. Ecological optics says that in an environ-
ment with an atmosphere, opaque surfaces, and light, the light bouncing in
all directions will come to an equilibrium such that there will be patterned
structure that is specific to the structuring environment. The minimum
structure for Gibson is an intensity difference. Light, as streams of photons,
is not the relevant description. The relations of difference are relevant.
Light is a medium to carry structure. Structured light makes light focusa-
ble. Unstructured (uniform) light is unfocusable. A surround of homogen-
eous intensity is a Ganzfeld, and both scientists (Avant, 1965; Gibson &
74 William M. Mace
Waddell, 1952) and artists, e.g., Robert Irwin (Weschler, 2009), and James
Turrell (Adcock, 1990) have fashioned versions. In order to have accom-
modation adjustments of an eye’s lens, the adjustments have to be to some-
thing. There has to be an intensity difference created by arrangements in
the optic array that can be brought into focus. Accommodation is not
something an optical system can do in memory or imagination. There is
something in the array, structured by an environmental arrangement, to be
focused on. This (accommodative adjustment) is the first of Gibson’s list of
criteria for reality that will be presented below.
Now consider this next move of Gibson’s in presenting the structure in
the ambient optic array:
The ambient optic array is divided into two parts: the upper (sky) and the
lower (earth), creating a brightness difference at the horizon. Note that this
is an extreme simplification but it is the opposite of atomism, which sim-
plifies by reduction to the smallest parts. Gibson’s simplification, the divi-
sion of the whole optic array into two parts, is a simplification that is more
like a Gestalt whole.
Simplification begins with the largest part. But unlike Gestalt entities,
this whole is not a mental organization. It is the environment that is
divided into sky and earth. It is a large containing envelope that life is
inside of (Mace, 1977). This very large containing envelope does not
require observers for its existence, although there is no specific containing
envelope without something to be contained. Minimally, to do the geome-
try, a point of observation must be declared. Both halves of the array func-
tion as background for all that is contained in the world. The sky, with
little detailed structure, is not a very determinate background, whereas the
richness of surface layout on the earth makes for a far more determinate
background for the nested components that it contains.
Now consider the subdivisions of each hemisphere––cloud configura-
tions in the sky, rich varieties of surface layout on the earth. The two
enveloping hemispheres have nested structures of adjacent patches that
project to points of observation. Gibson said that the parts of an array are
given as solid angles projecting to possible points of observation. The solid
angles are faces and facets of surfaces as well as the gaps between them
(like looking through tree leaves into a background sky). He stressed that
the perspective lines in his diagrams (see Figure 5.2) indicated differences
Getting into the Ambient Optic Array 75
in intensity of light, differences that would be preserved (invariant) over
absolute light levels. Thus, the lines in his diagrams are not rays of light.
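The solid angles of natural perspective can be computed directly. A minimal sketch, using the Van Oosterom–Strackee formula for the solid angle a triangular face subtends at a point of observation (the coordinates below are illustrative assumptions):

```python
import math

def solid_angle_of_triangle(p, a, b, c):
    """Solid angle (steradians) subtended at point p by triangle (a, b, c),
    via the Van Oosterom-Strackee formula."""
    # Vectors from the point of observation to the triangle's vertices.
    r1 = [a[i] - p[i] for i in range(3)]
    r2 = [b[i] - p[i] for i in range(3)]
    r3 = [c[i] - p[i] for i in range(3)]
    l1, l2, l3 = (math.dist(p, v) for v in (a, b, c))
    def dot(u, v): return sum(x * y for x, y in zip(u, v))
    # Scalar triple product r1 . (r2 x r3).
    triple = (r1[0] * (r2[1] * r3[2] - r2[2] * r3[1])
            - r1[1] * (r2[0] * r3[2] - r2[2] * r3[0])
            + r1[2] * (r2[0] * r3[1] - r2[1] * r3[0]))
    denom = (l1 * l2 * l3 + dot(r1, r2) * l3
             + dot(r1, r3) * l2 + dot(r2, r3) * l1)
    return abs(2.0 * math.atan2(triple, denom))

# A triangle whose vertices lie on the x, y, z axes covers one octant of
# the surrounding sphere as seen from the origin: 4*pi/8 = pi/2 steradians.
omega = solid_angle_of_triangle((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))

# The same face seen from farther away subtends a smaller solid angle.
far = solid_angle_of_triangle((0, 0, -10), (1, 0, 0), (0, 1, 0), (0, 0, 1))
```

Note that the quantity depends jointly on the face and the point of observation, which is exactly what makes the array of solid angles a description of the environment as projected to a place.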
For Gibson, the ambient optic array is a plenum. It is structurally dense,
like a jigsaw puzzle. Each “form” is a face of a surface, a facet, or even an
opening onto the sky. There are no gaps in this dense structure. The entire
set projects to a point of observation to yield a structured optic array at
that point of observation. One consequence of this initial image of a struc-
tured array is to avoid any thinking based on single points in empty space.
Motion perception, considered as a change of position of a point relative
to the retina, is not a natural topic from the standpoint of a packed optic
array. Rather, for Gibson, the changing structure of the optic array (the
disturbance of the structure) had to be fundamental and types of change,
not simple motions, need to be studied as such.
Because of this nested puzzle structure, for Gibson, the natural way to
locate some portion of the array was by inclusion (nestedness), not by
systems of coordinates. As puzzle pieces, each solid angle would be unique,
and the total arrangement at each point of observation would be unique—
in contrast to a coordinate system, in which every point is the same as
every other point. It follows from this uniqueness that the exact location in
the world of a photograph can be determined, where “exact location”
refers both to the world setting and the camera location in that setting.
Using Google Earth to zoom in on a location and then zoom back out can provide multiple nested levels, underscoring the distinctiveness of otherwise similar-looking locations. See Figure 5.1. Zooming back to increase the nested structure makes the location more definite.
When Gibson described the contrast between radiant and ambient light,
and the key features of the ambient optic array, he began with an unoccu-
pied array without motion. Figure 5.2a shows Gibson’s diagrammatic optic
array of a room with one window and no person. Remember, as mentioned
above, that the lines do not indicate rays of light, but differences of intensity.
Gibson found projective geometry useful, even if limited. The geometry he
advocated was that of Euclid and Ptolemy, termed “natural perspective.”
Natural perspective is constructed from solid angles formed by the faces of
environmental surfaces projected to a point. Artificial perspective introduces
projection surfaces as slices of the solid angles. The projection surfaces of
artificial perspective contributed greatly to the mischief of picture theories of
vision that Gibson strenuously avoided. The projective geometry of the top
of the table (or stool) for the seated and standing observer in Figure 5.2c
would have the conventional texture compression differences of slant vari-
ation. Most significant for Gibson, the dotted lines in Figures 5.2a and 5.2b
represent occluded surfaces, which are not an intrinsic part of perspective
geometry. In Figure 5.2c, Gibson emphasized the occlusion difference
between optic arrays of a seated person and the same person standing. When
a point (place) in an array is occupied, then the surfaces of the viewer’s body
are added to the structure of that array. Because the body is opaque, roughly
Figure 5.1 Pike’s Peak, Barr Trail. Used with permission, Google Earth.
Source: Google Earth Pro 7.3.2.5491. Pike’s Peak, Barr Trail. 38° 50′51.37″ N, 105° 01′54.35″ W, elev 12792 ft, eye alt 12727 ft. Imagery date 8/2017.
Note: Google and the Google logo are registered trademarks of Google LLC, used with permission.
half of the optic array of a human is occluded. The edges of the eye sockets,
the nose, and some body are visible, as in the famous diagram by Ernst
Mach (see the four panels of Figures 7.1 and 7.2 in Gibson, 1979/2015).
Separate terms needed to be devised for physical motions and for the
optical motions that specified them, for events in the world and for
events in the array, for geometry did not provide the terms … Perhaps
the best policy is to use the terms persistence and change to refer to the
environment but preservation and disturbance of structure to refer to
the optic array.
(1979/2015, p. 236)
Option 2 The Ambient Optic Array and the World Are Different
Perspectives on the Same Thing (Gibson)
A glimpse of an answer to “Hume’s problem,” one that helps develop this Option
2, comes from William James. James, like Gibson, thought that a variety of
deep puzzles originated in scholars’ lack of imagination about the richness
of experience. The foundation experiences in traditional British empiricism
were simple sensations, far from “the world” as such. Thus, the world had
to be “outside” of experience as something constructed or inferred. In con-
trast, James embraced a phenomenally rich empiricism that he called
Radical. He argued that this allowed “the world,” as content and object of
experience, inside experience to begin with. To illustrate, James asked
what the object of a mental image might be. In the case of his thinking of
“Memorial Hall,” he asked if it really is “Memorial Hall” that he is
thinking of. He concludes:
[I]f I can lead you to the hall, and tell you of its history and present
uses; if in its presence I now feel my idea, however bad it may have
been, to be continued; if the associates of the image and of the felt hall
run parallel, so that each term of the one context corresponds serially,
as I walk, with an answering term of the others; why then my soul was
prophetic, and my idea must be, and by common consent would be,
called cognizant of reality.
… That percept was what I meant, for into it my idea has passed by
conjunctive experiences of sameness and fulfilled intention …
Knowledge thus lives inside the tissue of experience …
(James, 1904, 539–540)
I suggest that perfectly reliable and automatic tests for reality are
involved in the working of a perceptual system … A surface is seen
with more or less definition as the accommodation of the lens changes;
an image is not. A surface becomes clearer when fixated; an image
does not. A surface can be scanned; an image cannot. When the eyes
converge on an object in the world, the sensation of crossed diplopia
disappears, and when the eyes diverge, the “double image” reappears;
this does not happen for an image in the space of the mind. An object
can be scrutinized with the whole repertory of optimizing adjustments
… No image can be scrutinized––not an afterimage, not a so-called
eidetic image, not the image in a dream, and not even a hallucination.
The most decisive test for reality is whether you can discover new
features and details by the act of scrutiny. Can you obtain new stimu-
lation and extract new information from it? Is the information inex-
haustible? Is there more to be seen? The imaginary scrutiny of an
imaginary entity cannot pass this test.
(Gibson, 1979/2015, p. 245)
If one re-reads what James said about connecting his “mental image” to
Memorial Hall by unfolding the experiential sequence to an acceptable
conclusion, it is a short (but significant) step to add Gibson’s criteria to get
to the “real world” through perceptual experience. As an example, con-
sider adding the reversibility of occlusion to what James said. Then,
James’s “soul” need not only be “prophetic,” but it can extract the invari-
ance of existing surfaces by the reversibility of the changes in the optic
array. The route to Memorial Hall, and the surfaces of the Hall itself,
consist of persisting surfaces that can be brought into view much as James
described. Those surface textures that go out of view can, reversibly, be
brought back into view if they persist. Reversibility is the guarantor of the
independent existence of the persisting surfaces. One could then add to
James’s description an allusion to the return trip to his home, a later trip
back to Memorial Hall, and so on. The reversibility of those experiences
establishes the independent reality of the surfaces.
James maintained that an object of experience can still be within experi-
ence. As important as James’s move is, it still leaves an opening for a
strong subjectivism, even if that is anathema to James. Gibson showed that
one could preserve the coherence of James’s formulation by staying within experience, while also establishing that independently existing, persisting surfaces could be revealed through invariance.
Artistic Emphasis on the Ambient Optic Array
The retinal image was a bête noire for Gibson. The ambient optic array
was his alternative. It provided something overarching or surrounding to
be sampled and could be a basis for explaining what larger whole succes-
sive sampling could converge on. Without a basis for convergence larger than a single sample, processes such as convergence, clarification, equilibration, and adjustment are left unexplained.
Other ways to draw attention to the ambient array as a structure over-
arching retinal images can be illustrated through non-pictorial art, includ-
ing architecture.
Robert Irwin
The contrast between the retinal image and the ambient optic array may
have no better dramatization than the career of the artist, Robert Irwin
(Weschler, 2009). His work literally progressed through stages from paint-
ing pictures to obsessing over the settings for displaying pictures, to the
environments themselves, leading us out of the temptations of the retinal
image and delivering us to the ambient optic array, which is the point of
my drawing attention to Irwin’s career.
Robert Irwin was a talented conventional artist when he began his
career in the early 1950s. However, he soon established the goal of creat-
ing work that was not about anything but itself:
The Southwest desert attracted me, I think, because it was the area
with the least kinds of identifications or connotations. It’s a place
where you can go along for a while and nothing seems to be happening.
It’s all just flat desert, no particular events, no mountains or trees or
rivers. And then, all of a sudden, it can just take on this sort of … magical
quality. It just stands up and hums, it becomes so beautiful, incredibly,
the presence is so strong. Then twenty minutes later it will simply stop.
And I began wondering why, what those events were really about,
because they were so close to my interests, the quality of phenomena.
(pp. 163, 164)
Here are truly ambient optic arrays, structured by extended surface, sky,
and lighting at a specific time of day. Much of Irwin’s subsequent work
concentrated on finding minimal ways to heighten viewers’ awareness of
environments (both indoors and outdoors) that already exist. An example
is “Running Violet V” in a Eucalyptus Grove near the Faculty Club of the
University of California at San Diego. Because Irwin’s concerns are the
effects of a whole surround, pictures cannot capture the effects he created.
To avoid any misunderstanding, he did not at first allow photos of his work to be published, though he later relented. Consequently, photos and video explorations of many of his works have been made and are not hard to find online.
James Turrell
A second major artist, who surely does know something about Gibson
(having studied psychology as an undergraduate at Pomona) and who once
worked with Robert Irwin, is James Turrell (Adcock, 1990). Turrell also
has spent a career using surface layout and light to create non-picture-
based visual experiences. Many of Turrell’s works involved sharply cutting
apertures in walls and ceilings, framing the sky to create variants of what
Katz (1935) called “film color.” In these situations, the sky seems to fill the
frame but to be flush with it, as a mysterious part of the surface. The ulti-
mate frame for the sky, built by Turrell, is a sharply sculpted crater, Roden
Crater, in northern Arizona. The crater is a site for numerous arrangements for framing light, including celestial objects; the central experience for observers is to lie down on the floor of the crater and observe the dome-like appearance of the framed sky.
careful arrangement of lights, sometimes to make a well-delineated, shaped
volume of light, sometimes evenly distributed to make a Ganzfeld.
For art that exploits occluding edges to control what viewers see as they
move relative to the work, see the constructions of Yaacov Agam. Internet
sources are easy to find.
Architecture
Going beyond the retinal image to the ambient optic array need not stretch to artists like Irwin and Turrell; one could just as well cite sculpture and architecture. The architect Michael Benedikt is best known to ecological psychologists for pursuing and promoting a geometric concept called the isovist (Benedikt & Burnham, 1985; Benedikt, in
preparation, part II). Isovist software is available at isovists.org. An isovist is
a shaped space of all surfaces visible from a given location. Draw all lines of
sight from a given location to all visible points. Gibson’s Figure 11.2 in the
1979/2015 book is essentially an isovist. Numerous measures can then be
made on an isovist, for example: minimum distance, maximum distance, average distance, area (isovists are usually examined in two dimensions), perimeter, and jaggedness (perimeter²/area); the inverse of jaggedness gives compactness.
where the vista is all that is unoccluded from a given place of observation,
but the vista contains all of the nested structure, as illustrated in Figure 5.1.
An isovist is abstracted for the purpose of numerical measures.
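The measures listed above can be sketched in a few lines. The square-room example and the vertex-sampled radial distances are simplifying assumptions of this illustration:

```python
import math

def isovist_measures(viewpoint, boundary):
    """Simple measures on an isovist polygon (vertices in order), of the
    kinds listed above. Distances are radial distances to the polygon's
    vertices, a coarse sample of the visible boundary."""
    dists = [math.dist(viewpoint, v) for v in boundary]
    n = len(boundary)
    # Shoelace formula for area; sum of edge lengths for perimeter.
    area = abs(sum(boundary[i][0] * boundary[(i + 1) % n][1]
                   - boundary[(i + 1) % n][0] * boundary[i][1]
                   for i in range(n))) / 2.0
    perimeter = sum(math.dist(boundary[i], boundary[(i + 1) % n])
                    for i in range(n))
    jaggedness = perimeter ** 2 / area
    return {
        "min_distance": min(dists),
        "max_distance": max(dists),
        "mean_distance": sum(dists) / n,
        "area": area,
        "perimeter": perimeter,
        "jaggedness": jaggedness,
        "compactness": 1.0 / jaggedness,
    }

# A viewer at the center of a 10 x 10 square room sees the whole room.
m = isovist_measures((5, 5), [(0, 0), (10, 0), (10, 10), (0, 10)])
```

A circular isovist attains the minimum jaggedness of 4π (about 12.57); the square room above scores 16, and a jagged space of alcoves and occluding edges scores far higher, which is what makes the measure useful for comparing places.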
Since Benedikt introduced his ideas to the community of ecological
psychology in 1981, the use of isovists has increased dramatically in archi-
tecture. “Space Syntax” is the name of the prime sub-field begun as early
as 1976 by Bill Hillier at University College London. See Hillier (1996) for
a full presentation that includes the use of isovists and Turner et al. (2001)
for additional analysis. There are now multiple websites showing how iso-
vists have been used in building and urban design.
Research Outdoors
For some years after Gibson’s 1950 vintage studies, we saw little interest
by other researchers in the ground as visual content and context. Zijiang He, Teng Leng Ooi, and their colleagues and students have changed that and done extensive
research using the ground as a background for perceptual judgments (e.g.,
He et al., 2004; Ooi & He, 2007). Farley Norman has contributed his
share as well (Norman et al., 2017). Hartman (1994) reported size judg-
ments of a truck parked on a hill seen against a cloudy sky compared to a
clear sky.
Jim Todd (1982) and Koenderink (1986) provided nice progress on ana-
lysis of the rigid vs. non-rigid distinction. Spröte and Fleming (2016) have
shown a more recent interest in investigating non-rigid transformations.
Their studies here were with static (albeit stereo) graphic images to see
how much might be conveyed about properties of pictured objects. Their
interest is in inferences and judgments about causal processes that led to
observed shapes, as well as material properties of the shapes.
Mingolla and his students (Ruda et al., 2015) have used occlusion as
both an independent and a dependent variable. They have investigated the
optical information for occlusion, but also related their results to depth
perception. Tanrikulu et al. (2018) continue to examine occlusion as tied
to depth perception. Hancock and Manser (1997), using a driving simula-
tor, compared estimates of time to contact (see Smart et al., Chapter 10, in
this volume) with vehicles that simply disappeared compared to those that
“disappeared” by occlusion. The most common heirs to that work assimi-
late it to “prediction motion” (PM) tasks (Makin, 2018). Nam Gyoon Kim
(2008) examined abilities to indicate heading direction in the context of
occlusion from an undulating ground plane. Recent work using occlusion
by the Bingham lab is covered in Heft’s Chapter 11, in this volume.
Readers understand well, I imagine, that research tends to cluster
around empirical manipulations and phenomena more than around issues
framed in Gibson’s way. I mentioned “prediction motion” above. In the same
vein, Gibson and Gibson (1957) tend to be extended now under the rubric
of “structure from motion” more than invariance under transformation.
See Koenderink (1986) for a critique of some “structure from motion”
work within a broad presentation of “optic flow.” Another area that Jim
Todd contributes to prominently is called “shape-from-shading.”
Alan Gilchrist (2018) has persistently examined lightness perception in
real conditions, inveighing against exclusive use of computer displays for
such research.
Seokhun Kim completed a dissertation using an optical tunnel (Kim,
Carello, & Turvey, 2016), inspired by Gibson, Purdy, and Lawrence
(1955). See Figures 9.1–9.3 in Gibson (1979/2015), where the discussion
begins on page 145. Gibson (1982c) says, “this experiment led to the
hypothesis of an optic array as contrasted with the retinal image” (p. 98).
In the optical tunnel, Gibson manipulated whether or not there was focus-
able light, and whether or not spaced partitions could be arranged to
specify a solid surface. He was manipulating what was available to an eye
rather than what was “on” the eye at a given moment.
In order to advance our knowledge of the optic array, we should look
for clues anywhere people are doing serious work related to perceiving the
environment. What Gibson can help with most is the framing vision of
what the overall project can be about in order to avoid debilitating puzzles
and blind alleys.
Virtual Reality
Closely related to computer graphics are virtual reality (VR) displays, some
of which are immersive and some not. Immersive virtual reality promises
to give us great purchase on the ambient optic array because it provides
optical structure that surrounds and varies with head motion. It provides a
visual environment that can be explored. Smart phone implementations
can simulate windows on another environment that can be explored. See
Scarfe and Glennerster (2015) for a review. Among ecological psycholo-
gists who have employed virtual reality are Geoff Bingham (Bingham et al.,
2001), Frank Zaal, Claire Michaels, and Reinoud Bootsma (Zaal &
Bootsma, 2011; Zaal & Michaels, 2003), Bill Warren (Warren et al.,
2017), and Jiang et al. (2018). Loomis (2016) reflected on 25 years of VR
and the topic of “presence” for the eponymous journal.
Tom Stoffregen and students (Munafo, Diedrick, & Stoffregen, 2017)
have evaluated the Oculus Rift and found that it can induce discomfort
and sickness in some people.
Plenoptic Function
The “ambient” part of Gibson’s ambient optic array has not been taken
seriously by many, except in the technology of virtual reality (see the Virtual Reality section).
One notable exception is the “Plenoptic function” of Adelson and Bergen
(1991). This also is called the “light field” (Ng et al., 2005). Like Gibson,
the authors imagine a point of observation with light coming from all 360°
around the point. However, they presume that the proper surround for a
point is light, not surfaces.
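The shared starting point, light arriving at a point from every direction, can be made concrete with a toy sample of such a function: the radiance reaching a point from a direction, computed by casting a ray into a minimal world of sky, ground plane, and one occluding sphere. The geometry and intensity values below are made-up illustrative labels, not Adelson and Bergen’s formulation:

```python
import math

def plenoptic(point, direction, sphere_center, sphere_radius):
    """Toy plenoptic-function sample: intensity arriving at `point` from
    `direction`, in a world of a luminous sky (1.0), a matte ground plane
    y = 0 (0.3), and a darker sphere (0.1)."""
    px, py, pz = point
    dx, dy, dz = direction
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / norm, dy / norm, dz / norm
    # Ray-sphere intersection: |p + t*d - c|^2 = r^2, smallest t > 0.
    cx, cy, cz = sphere_center
    ox, oy, oz = px - cx, py - cy, pz - cz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - sphere_radius ** 2
    disc = b * b - 4 * c
    t_sphere = None
    if disc >= 0:
        t = (-b - math.sqrt(disc)) / 2
        if t > 1e-9:
            t_sphere = t
    # Ray-plane intersection with the ground y = 0.
    t_ground = -py / dy if dy < 0 else None
    hits = [(t, v) for t, v in ((t_sphere, 0.1), (t_ground, 0.3)) if t]
    return min(hits)[1] if hits else 1.0  # nearest surface, else sky

eye = (0.0, 1.7, 0.0)   # a standing observer
ball = (0.0, 1.7, 5.0)  # a sphere straight ahead

up    = plenoptic(eye, (0, 1, 0), ball, 1.0)   # sees sky
down  = plenoptic(eye, (0, -1, 0), ball, 1.0)  # sees ground
ahead = plenoptic(eye, (0, 0, 1), ball, 1.0)   # sees the occluding sphere
```

The directional differences at the point are produced entirely by the surrounding surfaces, which is Gibson’s point: the light at a point of observation is structured by the environment, so the two descriptions differ over which term is primary.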
Natural Scenes
Following the work of Field (1987), people working within traditional
areas of vision research ventured to extend their work beyond simple dis-
plays to “natural scenes” (nearly always photos of scenes, not the scenes
themselves). Geisler (2008) reviews about 20 years of such work. See also
Ruderman (1994). This approach intersects with the procedures and goals
of Brunswik (1956) much more than Gibson. People are mainly interested
in efficient digital image compression and in rationalizing features of the
visual system. Thus, they ask what is statistically distinctive in a picture
that makes it look like a “natural scene.” Predictability, then, suggests
digital compression strategies that would not require storing values for
every pixel in a digital image. A recent example can be found in McCann,
Hayhoe, and Geisler (2018). They took 96 stereo photographs (of which
they could use 81) around the Austin campus of the University of Texas
for their studies of depth perception accuracy in natural scene images. Like
most vision science work, the proximal-distal distinction is taken for
granted and “Hume’s problem” is not taken seriously.
For the academic year (1969–1970), James J. Gibson invited Robert Shaw,
who was at that time an assistant professor at the University of Minnesota,
to visit Cornell University, so that he could learn more about Gibson’s eco-
logical approach to psychology. Shaw had met Gibson at Minnesota in the
summer of 1968 while taking his ecological psychology seminar at the
Center for Human Learning’s Summer Institute. It was from their lively
interactions in Gibson’s seminar that they had, so to speak, “a meeting of
minds.” Shaw’s duties at Cornell would be to help teach Gibson’s graduate
seminar and to teach the undergraduate perception course. Gibson
expressed the hope that Shaw, who had a background in mathematics and
logic, might see ways in which fundamental principles of ecological psych-
ology might be made explicit—even formalized. Shaw gratefully accepted
because he had an inkling that the fundamental significance of invariants
in Gibson’s theory of perceptual information might be amenable to a
group symmetry interpretation—a concentration of Shaw’s. The purpose
of this prologue is to impress upon the reader that everything discussed in
this chapter benefited from that year of close, almost daily, interaction
with Gibson and having been granted license by Gibson to query him
about any facts, concepts, or principles of his approach that Shaw might
need help with (Shaw, 2002).
These disturbances in the optic array are not similar to the events in
the environment that they specify. The superficial likenesses are mis-
leading. Even if the optical disturbances could be reduced to the
motions of spots, they would not be like the motions of bodies or par-
ticles in space. Optical spots have no mass and no inertia, they cannot
collide, and in fact, because they are usually not spots at all but forms
nested within one another, they cannot even move. This is why I sug-
gested that a so-called optical motion had so little in common with a
physical motion that it should not even be called a motion.
(p. 101)
In short, Gibson informs us that disturbances in the optic array lack mass,
inertia, and even motion, and therefore do not resemble events in the
world involving material objects with those properties. Event perception
An Ecological Approach to Event Perception 97
presumably still works because, as impoverished as the array disturbances
may be, they somehow still share common event invariants with real-world
events. The fundamental problem of information as specification is
revealed in this assertion: this way of posing the problem leaves
us facing a conundrum of how purely kinematic optical information can
specify kinetic events. And emphasizing the extreme dissimilarity of optical
array disturbances to the actual events, except for sharing invariants, as
true as it may be, seems to obscure the path to a solution.
The laws of optics and the laws of mechanics provide the bases for
determining all the invariant properties involved in each event, and must
somehow be the means by which we recognize the physical event and the
optical event as having the same referent. To be recognized as being about
the same event, the force-driven disturbances in the environment and the
forceless disturbances in the light projected from them into the optic array
must share the same invariant information. How they might do so is the
major puzzle to be addressed later in this chapter.
Specifically, we will ask how Gibson’s theory of event perception, which
assumes only optic array kinematics, might be expanded to include optic
array kinetics. Formally, the issue is one of dimensional inhomogeneity, a
mismatch in dimensionality between events with dimensions of mass,
length, and time versus events with only dimensions of length and time.
This problem is profoundly serious because a theory of living systems
based on a mismatch in dimensionality can never, even in principle, solve
Bernstein’s degrees of freedom problem that must be solved if a perceiving-
acting system is to be capable of adaptive interactions with the
environment.
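The dimensional mismatch can be made concrete with a toy dimension tracker over exponents of mass, length, and time (a sketch for illustration, not part of the chapter’s formal treatment). Any product of powers of purely kinematic quantities keeps a mass exponent of zero, so no combination of them alone can equal a kinetic quantity:

```python
from fractions import Fraction

class Dim:
    """Physical dimension as exponents of (Mass, Length, Time)."""
    def __init__(self, m=0, l=0, t=0):
        self.m, self.l, self.t = Fraction(m), Fraction(l), Fraction(t)
    def __mul__(self, other):
        return Dim(self.m + other.m, self.l + other.l, self.t + other.t)
    def __pow__(self, k):
        return Dim(self.m * k, self.l * k, self.t * k)
    def __repr__(self):
        return f"M^{self.m} L^{self.l} T^{self.t}"

LENGTH, TIME, MASS = Dim(l=1), Dim(t=1), Dim(m=1)
VELOCITY = LENGTH * TIME ** -1      # kinematic: mass exponent 0
MOMENTUM = MASS * VELOCITY          # kinetic:   mass exponent 1
FORCE = MASS * LENGTH * TIME ** -2  # kinetic:   mass exponent 1

# Multiplying powers of kinematic quantities only adds zero mass
# exponents, so the result is always kinematic, never kinetic.
combo = VELOCITY ** 3 * LENGTH ** -2 * TIME
```

The mass exponent of `combo` remains zero no matter which kinematic quantities are combined, which is the formal sense in which optic array kinematics alone underdetermines the kinetics of the events it specifies.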
For instance, without kinetic information, the negative affordances
of lethal or injurious encounters with surfaces, missiles, or other life forms
could not be recognized and thus not avoided. For example, there would
be no information to distinguish the extreme danger of a charging bull
from the friendly encounter with a running child. Their difference in mass
makes the impact force from colliding with the bull extremely dangerous,
while the impact force from colliding with a small child might even be fun.
Likewise, stepping-off places would not be informationally distinguished
from falling-off places, since the difference in impact danger due to the
force of gravity would be unspecified. This is not to say
that the optic array might not register some useful kinematic information,
such as a global transformation of the optic array which specifies to the
actor that they are moving rather than some part of the environment,
which is specified by local patches of change.
[Figure text: An ecosystem comprises an organism (O) and its environment (E), and includes the affordances and effectivities defined on O and E, independently as well as interdependently between O and E. E-affordances are dual to O-effectivities; O-affordances are dual to E-effectivities.]
Reference Frame
The notion of a reference frame is not the same as a coordinate system or
traditional reference system with a point origin, (0, 0, 0, 0), and metric
coordinates, (x, y, z, t). Instead a reference frame, as construed under the
ecological approach, begins with a point of view (POV) around which per-
spectives are variously organized (see Figure 6.3). The POV might be global,
being the perspectives surrounding O taken with respect to all of E (an open
vista delimited by the visual horizons alone), or something more focal: an
object and how it is situated in E, whether at a nearby place or at some
intermediate distance away, or, most locally, a frame defined on the self
alone. A reference frame is not located just by places surrounding a POV or
lying at various distances away but is also taken relative to encounters of
various durations, from immediate to sustained.
Most importantly, the surround is always filled by distributions of
affordances toward which actions might be taken more or less easily. The
metrics are pragmatic, being restricted to action limits, such as being easily
reachable (e.g., arm’s length, steps away), or navigable over a measured
An Ecological Approach to Event Perception 103
duration (e.g., a few minutes, an hour or so, a day trip), or reachable by
locomotory treks of certain durations (e.g., walking, running, by bicycle,
car, train, etc.). The POV may also be dynamically delineated as revealed
in the “field of safe travel” surrounding automobiles or pedestrians, as
explained in Gibson and Crooks (1938).
If O and E belong to the same ecological frame, then they are mutually
and reciprocally dual (as signified by ‘<>’), but the dual relations (e.g., a<b,
b>a, a<>b) may be ordered or unordered. Here a<b means a and b are
duals but a anchors the relationship. If the dual relation is ordered, then
we call the dominant member the primal and the dominated member the
dual, analogous to the distinction between an independent variable and its
related dependent variable. Primal denotes where the origin of the reference
system is located and dual is where the object to be related to the origin is
located. For instance, John (primal) throws the ball to Mary (dual); Mary
(dual) catches the ball that John (primal) throws. Mary (primal) then
throws the ball back to John (dual).
A key duality is the primal affordance an actor intends to realize and the
dual action effectivity by which it does so (e.g., catches the ball thrown,
lifts the baby down from its highchair, trims the bushes with the hedge
clippers).
Discrepancy
In general, discrepancy theory describes the deviation of a situation from
the state one would like it to be in, say, to be the dual action to some
primal affordance goal. You intend to hit the bullseye with the dart but
your throw is errant. Consequently, on the next throw you adjust the
direction of the dart’s release by a slight hand rotation. The kinetics of
neuromuscular control is felt directly through kinesthetic information. Put
differently, the information frame of the situated dartboard and the situ-
ated control frame of the hand holding the dart dynamically share a
common force basis, one that is rooted in visual and neuromuscular kines-
thetics (felt weight and momentum of arm, stiffness parameter, etc. in the
context of visual information about target parameters).
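One minimal way to picture such discrepancy reduction is as a proportional correction applied throw after throw. This is an illustrative sketch only; the gain value and the one-dimensional error variable are assumptions for the example, not part of the authors’ account:

```python
# Illustrative sketch of discrepancy-driven correction: each throw's error
# (deviation of release angle from the angle that would hit the bullseye)
# is reduced by adjusting the next release in proportion to the perceived
# discrepancy.  Gain and error scale are made-up numbers.

def correct_throws(initial_error_deg, gain=0.5, throws=6):
    """Return the sequence of errors as the thrower adjusts the release."""
    errors = [initial_error_deg]
    for _ in range(throws - 1):
        # next throw corrects by a fixed fraction of the seen discrepancy
        errors.append(errors[-1] * (1.0 - gain))
    return errors

errors = correct_throws(10.0)
# the discrepancy shrinks toward zero across successive throws
```

With a gain of 0.5, each throw halves the remaining error, so the discrepancy between the primal affordance goal and the dual action rapidly becomes negligible.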
104 Robert Shaw and Jeffrey Kinsella-Shaw
The Dual Frame Discrepancy Hypothesis
The work-to-be-done as specified in the primal visual information frame
must be matched by the work-actually-done in the dual neuromuscular
control frame. If not, then there is a discrepancy to be eradicated by reactive
adjustments. For the information and control frames to coalesce into the
proper ecological frame vis-à-vis the perceiving-acting cycle, a synergy com-
prising the two frames must emerge that has both the intended specificity
(goal-path accuracy) and efficacy (properly focused dynamics, or ecological
work) (Shaw & Kinsella-Shaw, 1988). Gibson formulated this idea in his
1966 book. This idea is so important and central to the ecological approach
that we should have the authority of Gibson’s own words (Gibson, 1979/2015):
There are various ways of putting this discovery, although old words must
be used in new ways since age-old doctrines are being contradicted. I sug-
gested that vision is kinesthetic in that it registers movements of the body
just as much as does the muscle-joint skin system and the inner ear system.
Vision picks up both movements of the whole body relative to the ground
and movement of a member of the body relative to the whole. Visual
kinesthesis goes along with muscular kinesthesis. The doctrine that vision
is exteroceptive, that it obtains “external” information only, is simply false.
Vision obtains information about both the environment and the self. In
fact, all the senses do so when they are considered as perceptual systems.
(p. 175)
Case 1
Consider two trains standing next to each other on adjacent tracks in a
train station. On each train there is a person standing in the aisle, facing
forward, holding a full cup of coffee. Call them Bob and Alice. Unaware
that Bob is watching her through the adjacent train car windows, Alice is
lost in thought when her train jerks into motion. Bob’s train remains sta-
tionary. The sudden jerk naturally causes Alice to spill her coffee, and Bob,
seeing the abrupt motion of Alice’s train even though his own remains at
rest, spills his coffee at the same time. Why? (See Figure 6.3.)
While there is no mystery regarding what caused Alice to spill her
coffee, it remains surprising that Bob being on a train at rest should spill
his coffee just from watching Alice’s minor calamity. This puzzle is instruc-
tive and solving it will make clear one way that forceless optic array
information about a forceful action can induce a forced outcome. Or,
stated differently, how can a strictly informational coupling between an
event taking place in one local reference frame somehow induce a second-
hand forceful outcome to take place in another distant reference frame?
[Figure text: Key concept: inertial force from frame discrepancy. Alice (inertial frame): mA = F. Bob (non-inertial frame): I = −mA, so that F = mA − mA = 0.]
Figure 6.3 The dual frame discrepancy hypothesis. The means are depicted by
which an information coupling to a frame other than Bob’s own allows
forceless kinematic information to induce a forceful effect.
Case 2
A clever experiment done by David Lee at the University of Edinburgh
several decades ago illustrates most dramatically the reality of optical
pushes (Lishman & Lee, 1973). Assume someone is standing in a room
where the walls and ceiling are detached from the floor (Figure 6.4).
Further assume that the “room” (without the floor) swings on a very long
cable attached to a high ceiling so that it appears to glide. The result is the
room’s walls can glide but of course the room’s floor cannot.
If the room is swung toward the person (who sees the wall’s motion),
she or he will sway backwards; if the room is swung away from the person,
she or he will sway forward. Note that at no time does the wall touch the
person; thus, no mechanical force can possibly be responsible for the per-
son’s swaying. Also, since the person faces a wall that fills her or his visual
Figure 6.4 Lee’s swinging room. When local and global information agree (a), then
posture is not compromised. But when local and global information dis-
agree (b), then there is a dual frame discrepancy and posture is upset by
an optical “push.”
angle, she or he sees only the wall and nothing else in the environment,
especially not the floor. The motion of the wall projects a global optical
transformation into the person’s optic array information that specifies to
the perceiver that she or he has moved from being upright.
The information does not cause the person’s reaction—information is
forceless and, therefore, cannot be a mechanical cause. Since the
information-to-control coupling is forceless, we need an answer to this
question: By what means, metaphorically speaking, does the language of
information get translated into the language of control? The answer is
clear. The kinetics are supplied by the person’s own neuro-muscular system
whose postural equilibrium is upset by an optical push.
It is known that optical disturbances may trigger involuntary reactions
in perceivers (Shaw & Kinsella-Shaw, 2007). Here it was found that a
movement of the wall so subtle that it goes unnoticed can still induce the
person to sway in phase with the wall’s movement. Even when instructed to
stand still without moving, the person still sways in phase with the room’s
movement, as precise (goniometric) measurements at the ankle joint show.
The third step looks a bit trivial, being nothing more than giving a new
name to the negative product of mass × acceleration. In fact, it allows the
expression of an important principle in the next step. In Newtonian mech-
anics, the concept of a system being in equilibrium entails the nullification
of all impressed forces acting on it. Static equilibrium applies to objects not
in motion. With this reformulation of Newton’s law, d’Alembert showed
us how to generalize the concept of equilibrium to objects in motion. To
make this generalization required a brilliant insight—d’Alembert had to
see that inertia itself is a force that can be included with impressed forces
to make up the total effective force of the system, i.e., effective in the sense
of summing to zero. This now allows us to extend any criterion for a
mechanical system being in static equilibrium to a moving mechanical
system being in dynamic equilibrium.
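In symbols, the reformulation runs as follows (a standard statement of d’Alembert’s idea, written here for a single particle):

```latex
F = ma
\;\Longrightarrow\;
F - ma = 0
\;\Longrightarrow\;
F + I = 0,
\qquad \text{where } I \equiv -ma .
```

The criterion for equilibrium, the vanishing of the total effective force, thereby applies to moving systems as well as to static ones.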
Inertial forces are experienced daily by those of us whose bodies are
carried along with a variety of accelerated frames—automobiles, trains,
buses, airplanes, swings, carnival rides, horses, or rocket ships to the
moon. The origin of these “unimpressed” forces is the tendency for objects
to resist change of their state of motion or state of rest, in accordance with
Newton’s Second Law, which asserts that a force is anything that acceler-
ates a mass, i.e.,
F = mA.
To reiterate, inertial forces differ from impressed forces in how they are
produced. An inertial force is created by the accelerating frame moving out
from beneath the objects it contains—temporarily leaving them behind—
until the train’s impressed force drags them along as well. Armed with
d’Alembert’s principle, we can now show how it is possible, at least in one
case, to transform kinematic optic array information into kinetic optic
array information.
Given dual frames of reference are involved in a situation, such as Alice
and Bob being on the two trains (or seeing both the wall and the floor
simultaneously in Lee’s room), the two frames of reference must be confus-
able by an observer (e.g., Bob). There must also be two potential energy
sources, say, A and B—A for the impressed force and B for the reactive
force—that are informationally coupled (e.g., Bob sees Alice’s train start
up and mistakes it for his own). If the shades on Bob’s train car were
pulled down, then he would have no information regarding Alice’s train
and experience no optical push. (Or, likewise, there would be no optical
push while standing in the Lee room with one’s eyes shut.)
Again, study Figure 6.3. This is how a kinematic display becomes the
control for kinetic forces. The informational coupling of the two
observer-train frames into an ecological physics field lends support to the dual
frame discrepancy hypothesis.
Conclusion
One aim in this chapter was to critically review Gibson’s approach to event
perception, as discussed in Chapter 6 of his 1979/2015 book. We stressed
the importance of event perception for having a generally adequate ecolo-
gical approach to visual perception. A second aim was to discuss the
importance of symmetry theory as a precise way to conceptualize Gibson’s
invariants approach to information. Here we followed Gibson in recogniz-
ing that successive order and adjacent order were useful replacements for
time (temporal order), an abstraction from the former, and space (spatial
order), an abstraction from the latter.
A third aim, and one we consider most significant, was to review the
problem of how forceless kinematic optic array information could also
specify forceful kinetic information so that the language of control might
somehow be a direct translation of optic array information. We argued
that it is possible to do so by means of the dual frame discrepancy hypo-
thesis. A perceived discrepancy between dual frames that should be con-
gruent causes the perceiver to make neuromuscular adjustments to
eradicate the discrepancy that manifests as a self-produced inertial force.
Then, by applying d’Alembert’s principle in the usual way, the reactive
response can be shown to be dimensionally homogeneous with a Newto-
nian force when the second law is reformulated in the manner of
d’Alembert to include inertial forces.
Our interpretation of the most fruitful aspect of Gibson’s approach to
event perception is the intrinsic duality of the affordance concept. For this
allows a natural way to have dual frames between which a discrepancy can
arise, and is the insight needed to map kinematics into kinetics, in the
context provided by Gibson’s construct of the optic array.
7 The Optical Information for
Self-Perception in Development
Audrey L. H. van der Meer and
F. R. Ruud van der Weel
arm up and moving normally, but only when they could see the arm, either
directly or on the video monitor. Thus, newborn babies purposely move
their hand to the extent that they will counteract external forces applied to
their wrists to keep the hand in their field of view.
In order to investigate whether newborns are also able to adjust their
arm movements to environmental demands in a flexible manner, we inves-
tigated whether manipulating where the baby sees the arm has an influence
on where the baby holds the arm (Van der Meer, 1997a). Spontaneous
arm-waving movements were recorded in the semi-dark while newborns
lay supine facing to one side. A narrow beam of light 7 cm in diameter was
shone in one of two positions: high over the baby’s nose or lower down
over the baby’s chest, in such a way that the arm the baby was facing was
only visible when the hand encountered the, otherwise, invisible beam of
light. The babies deliberately changed arm position depending on the posi-
tion of the light and controlled wrist velocity by slowing down the hand to
keep it in the light and thus clearly visible. In addition, we found that the
babies were able to control deceleration of the hand in a precise manner.
For all instances where the baby’s hand entered the light and remained
there for 2 seconds or longer, the onset of deceleration (point of peak
velocity) of the hand was noted with respect to the position of the light.
Remarkably, in 70 out of all 95 cases (almost 75%), the babies started to
Optical Information for Self-Perception 113
decelerate the arm before entering the light, showing evidence of anticipa-
tion of, rather than reaction to, the light. On those occasions where the
babies appeared not to anticipate the position of the light, more than 70%
of these occurred within the first 90 seconds after starting the experiment
or after changing the position of the light. Thus, by waving their hand
through the light in the early stages of the experiment, the babies were
learning about and remembering the position of the light. This very quickly
allowed them to accurately and prospectively control the deceleration of
the arm into the light and remain there, while effectively making the arm
clearly visible.
From an ecological perspective, the information for self-perception is
not restricted to a specific perceptual system. This brings us to the ques-
tion: Would newborn babies be able to control their arm movements by
means of sound? In order to answer this question, newborn babies
between 3 and 6 weeks of age were placed on their backs with the head
kept in the midline position with a vacuum pillow (Van der Meer & Van
der Weel, 2011). In this position, both ears were uncovered and available
for sound localization. Miniature loudspeakers were attached to the
baby’s wrists. The baby’s mother was placed in an adjacent room where
she could see her baby through a soundproof window. The mother was
instructed to speak or sing to her baby continuously into a microphone,
while the sound of her voice in real time was played softly over one of
the loudspeakers attached to the baby’s wrist. In order to hear her moth-
er’s voice, the baby would have to move the “sounding” wrist close to
the ear, and change arms when the mother’s voice was played over the
other loudspeaker. The results showed that newborn babies were able to
control their arms in such a way that the distance of the left and the right
wrist to the ear was smaller when the mother’s voice was played over
that wrist than when it was not. Further analyses showed that there were
significantly more reductions than increases in distance between wrist
and ear when the sound was on, while when the sound was off, the
number of reductions and increases in distance between wrist and ear
was about the same.
Thus, sighted newborn babies can precisely control their arms with the
help of both sight and sound. This implies that arm movements are not
simply reflexive, nor can they be explained away as excited thrashing of
the limbs. Neonates can act intentionally from the start, and they come
equipped with perceptual systems that can be used to observe the environ-
mental consequences of their actions. At the same time, actions provide
valuable information about oneself. This dual process of perceiving oneself
and perceiving the consequences of self-produced actions provides very
young infants with knowledge about themselves that is crucial for produc-
ing adaptive behavior (E. J. Gibson, 1988).
Establishing a Frame of Reference for Action
It seems plausible that the spontaneous arm waving of neonates of the kind
measured in our experiments is directed and under precise control.
Neonates will purposely counteract external forces applied to their wrists
to keep the hand in their field of view. They can also precisely control the
position, velocity, and deceleration of their arms to keep them clearly
visible. Moreover, they can direct their arms to their ears with the help of
sound. Their level of arm control, however, is not yet sufficiently developed
so that they can reach successfully for toys. Young babies have to do a lot
of practising over the first four or five months, after which they can even
catch fast-moving toys (Von Hofsten, 1983). What could be the functional
significance of neonatal arm movements for later successful reaching and
grasping?
To successfully direct behavior in the environment, the infant needs to
establish a bodily frame of reference for action (Van der Meer & Van der
Weel, 1995). Since actions are guided by perceptual information, setting
up a frame of reference for action requires establishing informational flow
between perception and action. It also requires learning about body dimen-
sions and movement possibilities. Thus, while watching their moving arms,
newborn babies pick up important information about themselves and the
world they move in––information babies need for later successful reaching
and grasping, beginning at around four or five months of age.
It is widely known that young infants spend many hours looking at their
hands (see Figure 7.2). And so they should, for infants have to learn many
lessons in ecological optics in those early weeks before they can success-
fully reach for and pick up toys in the environment. First of all, infants
have to learn that the hands belong to the self, that they are not simply
objects, but that they can be used to touch all sorts of interesting objects in
the environment. In order to successfully reach out and grasp toys, infants
also have to familiarize themselves with their own body dimensions in
units of some body-scaled or, more generally, action-scaled metric
(Warren, 1984). In other words, infants have to learn to perceive the
shapes and sizes of objects in relation to the arms and hands, as within
reach or out of reach, as graspable or not graspable, in terms of their
affordances for manipulation (Gibson, 1979/2015).
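What a body-scaled judgment could look like can be sketched in a toy form; the function name, the distance/arm-length ratio, and the cutoff value of 1.0 are illustrative assumptions for the example, not Warren’s empirical values:

```python
# Toy illustration of an action-scaled metric, in the spirit of Warren
# (1984): extents are judged not in centimeters but as ratios to a body
# dimension.  The cutoff ratio of 1.0 is a made-up value.

def within_reach(object_dist_cm, arm_length_cm):
    """An object affords reaching when its body-scaled distance
    (distance divided by arm length) does not exceed 1.0."""
    return (object_dist_cm / arm_length_cm) <= 1.0

within_reach(20.0, 25.0)   # ratio 0.8 -> reachable
within_reach(40.0, 25.0)   # ratio 1.6 -> out of reach
```

The point of the ratio is that the same judgment holds for any body size: it is the relation between object and actor, not the absolute distance, that specifies the affordance.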
All this relational information has to be incorporated (Merleau-Ponty’s
term, see Marratto, 2012) into a bodily frame of reference for action in
those early weeks before reaching for objects develops. We have all experi-
enced this process of incorporation, namely, when learning new perceptuo-
motor skills (Tamboer, 1988). For instance, tennis rackets, skis, golf clubs,
and other extensions of the human body, such as false teeth and new cars,
first have to be incorporated into our habitual frame of reference, or
embodied action scheme (Day, Ebrahimi, Hartman, Pagano, & Babu,
2017), before we can use them to our full potential. At first, we experience
Figure 7.2 A newborn boy only a few hours old is studying his hand intensely.
Optic Flow
In a series of experiments, we simulated self-motion with structured optic
flow and compared it to unstructured random motion, while we measured
cortical responses to visual motion with high-density electroencephalogra-
phy (HD EEG, see Figure 7.3). Our studies show that both adults and
infants find it easier to pick up visual motion, as measured by shorter
latencies, when the information available to them is structured, as in optic
flow (Van der Meer, Fallet, & Van der Weel, 2008). We also find that at
the neural level, four-month-olds do not differentiate between simulated
forward, backward, and random visual motion, whereas older infants with
some weeks of crawling experience do (Agyei, Holth, Van der Weel, &
Van der Meer, 2015). Preterm infants at one year of age (corrected for pre-
maturity) show very little development in cortical activity in response to
visual motion (Agyei, Van der Weel, & Van der Meer, 2016b), which leads
us to suspect a dorsal stream vulnerability. As opposed to the ventral
stream that develops mainly after birth, the dorsal visual processing stream
develops during the last three months of pregnancy (Atkinson & Braddick,
2007). Preterm birth during this period seems to disturb the normal devel-
opment of the dorsal stream (e.g., Van Braeckel, Butcher, Geuze, Van
Duijn, Bos, & Bouma, 2008), possibly affecting the typical dorsal stream
functions of timing, prospective control, and visuo-motor integration.
In addition to direction of motion, we studied perception of motion
speed in a naturalistic setting where a vehicle driving down a virtual road
was simulated with optic flow (Vilhelmsen, Agyei, Van der Weel, & Van
der Meer, 2018; Vilhelmsen, Van der Weel, & Van der Meer, 2015). Adult
participants differentiated between direction and speed of motion when
they watched a road that was simulated by poles moving from near the
center of the screen and out (or in) toward the edges of the screen, creating
a realistic simulation of an optic flow field. Older infants between 8–11
Figure 7.3 A 4-month-old girl in deep concentration on the visual motion presented
on the large screen in front of her, while the corresponding electrical brain
activity (EEG) with a sensor net consisting of 128 electrodes is measured.
Looming
How does the infant brain deal with information about imminent colli-
sions? By simulating a looming object on a direct collision course toward
infants, it is possible to investigate brain activities in response to looming
information. Looming refers to the last part of the approach of an object
that is accelerating toward the eye. To prevent an impending collision with
the looming object, infants must use a timing strategy that ensures they
have enough time to estimate when the object is about to hit them in order
to perform the appropriate evasive action. Defensive blinking is widely
considered as an indicator for sensitivity to information about looming
objects on a collision course. Infants must use time-to-collision informa-
tion to precisely time a blinking response so that they do not blink too
early and reopen their eyes before the object makes contact, or blink too
late when the object may already have made contact. For an accurate
defensive response to avoid collisions and prevent injury, development of
prospective control is important. Infants must use looming visual informa-
tion to correctly time anticipatory responses to avoid impending collisions.
We investigated the timing strategies that 5–7-month-old infants use to
determine when to make a defensive blink to a looming virtual object on a
collision course in a series of behavioral studies (Kayed, Farstad, & Van
der Meer, 2008; Kayed & Van der Meer, 2000, 2007). To time their
defensive blinks, the youngest infants used a strategy based on visual angle
analogous to the distance strategy described in the catching studies above.
As a result, they blinked too late when the looming object approached at
high accelerations. The oldest infants, on the other hand, blinked at a fixed
time-to-collision allowing them to blink in time for all the approach speeds
of the looming virtual object. When precise timing is required, the use of
the less advantageous visual angle strategy may lead to errors in perform-
ance, compared to the use of a strategy based on time-to-collision that
allows for successful performance irrespective of object size and speed.
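The contrast between the two strategies can be sketched numerically. This is an illustrative simplification assuming constant approach speed; the object size, threshold angle, and speeds are made-up numbers, not data from the studies cited above:

```python
import math

# Compare the visual-angle blink strategy with a time-to-collision
# strategy for an object approaching the eye at constant speed.

def time_left_at_angle(radius_m, speed_ms, threshold_deg):
    """Time remaining before contact at the moment the visual angle
    first reaches the threshold value."""
    # visual angle: theta = 2 * atan(r / d), so the threshold is reached
    # at distance d_c = r / tan(theta_c / 2)
    d_c = radius_m / math.tan(math.radians(threshold_deg) / 2)
    return d_c / speed_ms

slow = time_left_at_angle(radius_m=0.2, speed_ms=1.0, threshold_deg=20.0)
fast = time_left_at_angle(radius_m=0.2, speed_ms=4.0, threshold_deg=20.0)
# The same threshold angle leaves four times less warning at four times
# the speed, so the angle strategy blinks "too late" for fast looms.
# A strategy keyed to time-to-collision (blink when d / v reaches a fixed
# tau value) leaves the same warning at every speed by construction.
```

This makes concrete why the oldest infants’ fixed time-to-collision strategy succeeds across approach speeds while the visual-angle strategy does not.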
With the presentation of a looming virtual object on a direct collision
course, we also studied the developmental differences in infants longitudin-
ally at 4 and 12 months using EEG and the visual evoked potential (VEP)
technique (Luck, 2005). The looms approached the infant with different
accelerations, and finally came up to the infant’s face to simulate an optical
collision. Measuring the electrical signal generated at the visual cortex in
response to visual looming, peak VEP responses were analysed using
source dipoles in occipital areas. Results showed a developmental trend in
the prediction of an object’s time-to-collision in infants. With age, average
VEP duration decreased, with peak VEP responses closer to the loom’s
time-to-collision (Van der Meer, Svantesson, & Van der Weel, 2012).
Infants around 12 months of age with up to three months of crawling
experience used the more sophisticated and efficient time-to-collision
strategy when timing their brain responses to the virtual collision. Their
looming-related brain responses occurred at a fixed time of about 500 ms
before the optical collision, irrespective of loom speed. The use of such a
timing strategy based on a fixed time close to collision may reflect the level
of neural maturity in terms of myelinization as well as the amount of
experience with self-produced locomotion (Held & Hein, 1963). Both are
important factors required for accurate timing of evasive (and interceptive)
actions, and need to be continuously incorporated into the baby’s frame of
reference for action.
By localizing the brain source activity for looming stimuli approaching
at different speeds and using extrinsic tau-coupling analysis, the temporal
dynamics of neuronal activity in the first year of life was further investi-
gated (see Figure 7.4). Tau-coupling analysis calculated the tau of the
peak-to-peak source waveform activity and the corresponding tau of the
looms. Source dipoles that modeled brain activities within the three occipi-
tal areas of interest were fitted around peak looming VEP activity to give a
direct measure of brain source activities on a trial-by-trial basis. Testing
prelocomotor infants at 5–7 and 8–9 months and crawling infants at
10–11 months of age, we reported synchronized theta oscillations in
response to visual looming. Extrinsic tau-coupling analysis between the
external looms and the source waveform activities showed evidence of
strong and long tau-coupling in all infants, but only the oldest infants
showed brain activity with a temporal structure that was consistent with
the temporal structure present in the visual looming information (Van der
Weel & Van der Meer, 2009). Thus, the temporal structure of the different
looms was merely reflected in the brain, and not added to the brain, as
indirect theories of perception would have it. As infants become more
mobile with age, their ability to pick up the looms’ temporal structure may
improve and provide them with increasingly accurate time-to-collision
information about looming danger. Unlike young infants who treated all
the looms the same, older infants with several weeks of crawling experi-
ence differentiated well in their brain activity between the three loom
speeds with increasing values of the tau-coupling constant, K, for the faster
looms, as shown in Figure 7.4E–G.
The finding that changing patterns in the optical looming information
were reflected in the changing patterns in the neurological flow may shed
some light on the concept of resonance introduced by Gibson (1966a). The
variable tau (τ ) and its rate of change specify the time-to-contact between
an approaching object and the visual system (Lee, 2009). The same vari-
able was found to be operating in the neural flow when looming-related
activity was progressing through the infant brain. Thus, oscillatory activity
in the visual cortex was tau-coupled to the approaching looms, that is, the
change in the theta rhythm’s temporal structure was linearly correlated
with the value of tau of the looms. This, in our view, may indicate a
process of resonance in which informational and electrical flow are suc-
cessfully coupled in terms of the same variable tau via the coupling con-
stant K. However, how are these intricate processes of resonance further
organized in the infant brain?
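A minimal numerical sketch of the tau variable itself, under the simplifying assumption of an object of fixed radius approaching at constant speed (all values illustrative):

```python
import math

# Lee's tau: tau(t) = theta / theta_dot, where theta is the visual angle
# of an approaching object.  For an object of radius r approaching at
# constant speed v, tau approximates the true time-to-contact d / v
# while the object is still far away relative to its size.

def tau_from_angle(radius_m, dist_m, speed_ms, dt=1e-4):
    """Estimate tau by a finite difference on the visual angle."""
    theta = lambda d: 2.0 * math.atan(radius_m / d)
    d_now, d_next = dist_m, dist_m - speed_ms * dt
    theta_dot = (theta(d_next) - theta(d_now)) / dt
    return theta(d_now) / theta_dot

tau = tau_from_angle(radius_m=0.2, dist_m=10.0, speed_ms=2.0)
ttc = 10.0 / 2.0
# tau tracks the true time-to-contact of 5 s to within about one percent
```

Because tau is defined entirely by the optical variable and its rate of change, it is directly available in the optic array without any estimate of the object’s distance, size, or speed, which is what makes it a candidate for the coupling variable in the tau-coupling analyses above.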
Traditionally, it is assumed that there exists a one-to-one mapping
between brain structure and function, implying some kind of modular
organization of the brain (Fodor, 1981). In the case of our looming experi-
ments, this would involve a specific mapping procedure between the
incoming looming information and a specialized, encapsulated module in
the brain dealing with looming-related neural activity. Gibson (1966a),
Optical Information for Self-Perception 127
however, suggested an alternative for this type of modular organization
when he introduced Lashley’s (1922) concept of vicarious use of brain
tissue, explaining that the same neural tissue can be involved in different
temporarily assembled structures suitable to a task. In other words, the
functioning of the neurons depends on the context in which they are oper-
ating. In this view, neurons can change function completely when incorp-
orated in different systems; they temporarily assemble to enable a given
task. Reed (1996b) introduced a different concept to stress the high degree
of flexibility of organization of the nervous system, namely, that of degen-
eracy. Degeneracy is the ability of elements that are structurally different
to perform the same function or yield the same output. These two concepts
express the highly flexible organization of the brain. Bullmore and Sporns
(2009) refer to this type of flexible organization as functional connectivity
as opposed to structural connectivity.
In our latest longitudinal looming results on 25 infants, we observed
both structural and functional organization principles in the infants’ brain
responses to the approaching looms (Van der Weel & Van der Meer,
2019). The location of electrical looming-related activity was stable across
all subjects and trials and occurred within a 1 cm³ area of the visual cortex.
These findings hint at a rather structural organization of brain activity in
response to looming, although they may simply reflect the strict retinotopic
organization of the visual system.
When it came to the orientation of electrical looming-related activity,
however, the results told an entirely different story, showing a high degree of
variability of activity that, in addition, was spread across a much larger
Figure 7.4 (A) Accelerating looming stimulus approaching the infants’ eyes result-
ing in increased theta-band oscillatory activity in the visual cortex. A
four-shell ellipsoidal head model was created for every trial and used as
a source montage to transform the recorded EEG data from electrode
level into brain source space. The results of this analysis for dipole
VCrL (visual cortex radial left, depicted in head model in light gray) are
shown for the three infant age groups in B–D. Each graph shows aver-
aged, peak-aligned source waveform (SWF) activity at dipole VCrL for
the three looms (in nanoampere, nA). Overall shape of the SWFs was
similar at the different ages, but their duration was about twice as long
in the 5- to 7-month-olds as compared to the 10- to 11-month-olds.
Note that SWF activity did not discriminate well between slow,
medium, and fast looms. Therefore, peak-to-peak SWF brain activity
was tau-coupled onto the corresponding part of the extrinsic loom to
study the temporal dynamics of neuronal activity. (E–G) Average tau-
coupling plots, tSWF vs tloom for each infant age group for the three loom
speeds, showing that crawling 10- to 11-month-olds differentiated well
between slow (in light gray), medium (in gray), and fast looms (in
black), with significantly higher values for the coupling constant, K, for
faster looms, whereas younger prelocomotor infants did not.
Source: From Van der Weel and Van der Meer (2009).
128 Audrey van der Meer and Ruud van der Weel
area of the visual cortex. This reveals a much more functional form of
organization with connectivity patterns emerging in various directions and
changing radically from trial to trial. With this type of flexible organiza-
tion, there is no need for a one-to-one mapping between brain structure
and function, as suggested by Fodor (1981). Instead, brain organization
can be flexible in the sense that structurally different neural tissue can be
involved in flexible temporarily-assembled structures and the functioning
of the neurons depends on the context in which they are operating. In this
view, neurons adhere to flexible principles; they temporarily assemble to
reveal the typical temporal characteristics of the approaching looms to the
brain.
The main objective of our developmental neuroscience research based
on the principles of Gibson’s ecological approach to visual perception has
always been to show that the brain is not adding to, structuring, or other-
wise enriching the incoming perceptual information, but that crucial
higher-order informational variables, such as time-to-collision specified by
tau, are merely reflected by the brain. Our findings show that infants,
around their first birthday and after several weeks of self-produced crawl-
ing experience, clearly display in their looming-related brain activity a tem-
poral structure that is consistent with that present in the visual looming
information. Thus, invariants in the perceptual information specifying an
imminent optical collision are merely reflected in the more mature infant
brain, consistent with Gibson’s (1966a) concept of resonance. Our latest
developmental findings on looming provide evidence for degeneracy (Reed,
1996b) or vicarious use of brain tissue (Gibson, 1966a), where brain
organization is flexible, as neurons temporarily assemble to enable a given
task and change function completely when incorporated in different
systems (Van der Weel & Van der Meer, 2019).
Conclusion
In writing this chapter, we have tried to highlight that over the past 35
years, ever since we met as undergraduate students in Amsterdam, we have
been inspired and influenced by J. J. Gibson's ecological approach. Our
developmental research on early arm movements and
interceptive timing skills emphasizes the importance of establishing a
bodily frame of reference for action as an anchor for both affordance per-
ception and prospective control of adaptive behavior. For us, Dave Lee’s
concept of tau (2009)––an example of an informational variable that can
be picked up directly––is instrumental in linking prospective control to
affordances.
The ecological approach is often accused of neglecting the brain when
explaining perception and action. We would argue here that the brain is
part and parcel of the perceptual and motor systems and therefore deserves
to play a role within ecological theory. However, starting from an
ecological approach to perception and action, the questions asked and the
answers that are considered satisfactory will be very different from those
arising from traditional perspectives. Therefore, the challenge for an ecolo-
gical or Gibsonian neuroscience is to study the brain in a way that is con-
sistent with ecological theory. Over the past 15 years, we have collected
evidence that the brain is not adding to, structuring, or otherwise enriching
the information coming in through the perceptual systems. Instead, the
(temporal) structure already present in the information appears to be
simply better reflected in the more mature infant brain after several weeks
of experience with self-produced locomotion. In our brain research, we
find the ecological concepts of resonance and vicarious function (Gibson,
1966a) and degeneracy (Reed, 1996b) increasingly useful.
8 A Guided Tour of Gibson’s
Theory of Affordances
Jeffrey B. Wagman
Gibson makes two important points about affordances in the above quote,
both somewhat subtly. First, he describes behaviors (and hence,
affordances) as hierarchically nested over both space and time (Reed,
1996b; Stoffregen, 2003a; Wagman & Miller, 2003), laying the ground-
work for the description of affordances as “quicksilvery” (Chemero &
Turvey, 2007), emerging and dissolving from moment to moment as
behavior unfolds (see Figure 8.5). Second, he argues that direct contact
with a ground surface entails direct contact with information about that
ground surface. That is, direct behavior entails direct perception (Turvey,
2013).
Guided Tour: Gibson’s Theory of Affordances 137
[a] vertical, flat, extended rigid surface such as a wall or cliff face is a
barrier to pedestrian locomotion. Slopes between vertical and horizon-
tal afford walking, if easy, but only climbing, if steep, and in the latter
case the surface cannot be flat; there must be “holds” for the hands
and feet.
(Gibson, 1979/2015, p. 124)
The Objects
Gibson again highlights the differences between ecological physics and tradi-
tional physics by differentiating between attached and detached objects: “We
are not dealing with Newtonian objects in space, all of which are detached
but with the furniture of the earth, some items of which are attached to it
and cannot be moved without breakage” (1979/2015, p. 124). Detached
objects afford manipulation, and this is Gibson’s focus in the remainder of
this section. Graspable objects “have opposite surfaces separated by a dis-
tance less than the span of the hand” (p. 125). Subsequently, choices about
whether and how an object is reachable as well as which grasp configuration
should be used (e.g., number of digits in a single-hand grasp, whether the
object is grasped with one or two hands, or with a hand-held tool) are scaled
to the person’s anthropometric properties (Cesari & Newell, 2000; Richard-
son, Marsh, & Baron, 2007; Wagman & Morgan, 2010) (Figure 8.7).
Figure 8.7 Objects that afford grasping with one hand and with two hands.
That is, the continuity between the perception of affordances across human
and non-human animals and in the natural and built environments extends
to the perception of affordances for the self and other. In all such cases,
higher-order, complex, and emergent patterns in structured energy arrays
provide information about affordances. For example, the ability to per-
ceive another person’s maximum reach-with-jump-height is dependent on
the detection of kinematic patterns informative about that person’s ability
to produce task-specific forces with the legs. Moreover, the ability to per-
ceive this affordance for another person improves after watching that
person (or a point light representation of that person) perform task-
relevant behaviors (e.g., walking or squatting) but not task-irrelevant
behaviors (e.g., twisting or standing) (Ramenzoni, Riley, Davis, Shockley,
& Armstrong, 2008; Ramenzoni et al., 2010). Moreover, athletes are
better attuned to information about sport-specific abilities of others than
are non-athletes but are no better attuned to information about non-sport-
specific abilities of others (Weast, Shockley, & Riley, 2011; Weast, Walton,
Chandler, Shockley, & Riley, 2014).
Of course, part of place learning is learning how to get to and from a par-
ticular place. Along these lines, this passage foreshadowed work showing
that human odometry—nonvisual (kinesthetic) perception of places and their
distances—is based on detection of variables that remain invariant over
exploratory locomotion (Harrison & Turvey, 2010; Turvey et al., 2009).
That the same affordance can be perceived from multiple points of obser-
vation foreshadowed and inspired work on the perceptual constancy of
affordances—that perception of affordances for a given behavior reflects a
person’s action capabilities over the variety of circumstances in which that
affordance is encountered (Turvey, 1992; Wagman & Day, 2014).
Affordances for a given behavior can be perceived by means of different
anatomical components, from different points of observation, and under
different task constraints (Cole, Chan, Vereijken, & Adolph, 2013;
Wagman & Hajnal, 2014a, 2014b, 2016).
Misinformation for Affordances
Before concluding the chapter, Gibson comments on what he calls the
“misinformation for affordances.” He writes: “According to the theory
being developed, if information is picked up, perception results; if misinformation
is picked up, misperception results” (1979/2015, p. 133). This
is a subtle, but profound statement about the ecological approach. In tra-
ditional approaches, perception is the result of a computational or inter-
pretive process, and perception is accurate (or inaccurate) to the degree
that the outcome of this process matches that of an artificial measuring
device (e.g., a scale, a ruler, or a protractor). In Gibson’s ecological
approach, however, perception is a lawful relationship between perceiver
and environment. Consequently, the “accuracy” of perception cannot be
evaluated, and misperception is not an error. Moreover, if perception is
primarily (or exclusively) of affordances, then the experience of the per-
ceiver is very much unlike the output of artificial measuring devices (see
Figure 8.8). Therefore, so-called ‘illusions’ do not invalidate the ecological
claim that perception is direct so much as they challenge researchers to dis-
cover lawful relationships between the information available at a point of
observation and affordances.
Gibson argues that a theory of perception should be developed from
the countless everyday successes of perception rather than from the rare
(and often artificially induced) so-called failures of perception. For an
infant who refuses to crawl across a visual cliff and an adult who walks
into a sliding glass door, in neither case is perception in error.
Affordances of the respective surfaces were (not) perceived, but this is
only because the information specifying those affordances was (not)
detected (or not present). “These two cases are instructive. In the first, a
surface of support was mistaken for air because the optic array specified
air. In the second a barrier was mistaken for air for the same reason”
(p. 134). Both perception and misperception are the detection of
information. In some (rare) cases, the information specifies one state of
affairs when another state of affairs is so. In other cases, perceivers are
not sufficiently attuned to relevant information (see Adolph, 2008;
Adolph et al., 2014; Kretch & Adolph, 2013) or are prevented from
exploring the structured energy array such that this information can be
detected (Mark et al., 1990; Stoffregen et al., 2009; Yu, Bardy, & Stof-
fregen, 2010).
Conclusion
In the concluding paragraphs of the chapter, Gibson argues that
affordances are among the most fundamental relationships between animal
and environment and play primary roles in shaping the evolution of species
and ontogenetic development of individuals:
148 Jeffrey B. Wagman
The medium, substances, objects, places, and other animals have
affordances for a given animal … They offer benefit or injury, life or
death. This is why they must be perceived.
The possibilities of the environment and the way of life of the
animal go together inseparably …
(1979/2015, pp. 134–135)
Figure 9.1 The horizon ratio. The horizon intersects all objects of the same height
at the same ratio, providing an invariant for size constancy. For
example, the telephone poles and the tree all have the same horizon
ratio of 0.36, and are thus the same size. The horizon line also corres-
ponds to the observer’s eye height on an object: each telephone pole is
thus about three eye heights tall.
Source: From Gibson (1979/2015), Figure 9.6. Copyright 2015. Reproduced by permission of
Taylor & Francis Group, LLC, a division of Informa plc.
Perceiving Surface Layout 155
Figure 9.2 Geometry of the ground theory. (a) The horizon ratio specifies
frontal extent (H) in eye height units (E). (b) The declination angle
(α) specifies distance (Z) in eye height units; an overestimated
declination angle (1.5α) yields perceived linear distance compression.
(c) The declination angle from a raised horizon (α + ε) yields
perceived nonlinear distance compression.
object, and thus specifies object height (H) in units of eye height. This can
be formalized as
H/E = (tan α + tan γ) / tan α    (9.1a)
where α is the visual angle between the horizon and the base of the object
and γ is the visual angle between the horizon and the top of the object.
Note that the horizon ratio specifies not only the height but any frontal
dimension of an object, such as its width (W):
W/E = 2 tan β / tan α    (9.1b)
156 William H. Warren
where β is half the horizontal visual angle of the object. Eye height thus
provides a body-scaled measure of an object’s frontal extent.
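A minimal numerical check of Equations 9.1a and 9.1b (a sketch; the function names are mine, not from the chapter):

```python
import math

def height_in_eye_heights(alpha, gamma):
    """Equation 9.1a: H/E, where alpha is the angle from the horizon to
    the object's base and gamma the angle from the horizon to its top."""
    return (math.tan(alpha) + math.tan(gamma)) / math.tan(alpha)

def width_in_eye_heights(alpha, beta):
    """Equation 9.1b: W/E, where beta is half the object's horizontal
    visual angle."""
    return 2 * math.tan(beta) / math.tan(alpha)

# An object whose top grazes the horizon (gamma = 0) is exactly one eye
# height tall, at any distance (alpha drops out).
h = height_in_eye_heights(math.radians(5), 0.0)   # 1.0
```

Note that neither function takes distance as an argument: the horizon ratio specifies size independent of distance, which is the point of the Haber and Levin (2001) result discussed below.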
Contrary to the longstanding belief that perceived size depends on per-
ceived distance (size-distance invariance), the horizon ratio specifies size
independent of distance. This claim was confirmed by Haber and Levin
(2001) in an open-field experiment, which found that verbal estimates of
the size and distance of the same objects by the same observers were com-
pletely uncorrelated. Specifically, over distances of 3–100 m and vertical
sizes of 0.2–2.0 m, estimates of the distance of unfamiliar geometric shapes
accounted for zero variance in the estimates of their size, and vice versa.
The perception of size from the horizon ratio does not depend on the per-
ception of distance. This is an exemplary case of how to test whether per-
ception of one property depends on the explicit perception of another
property.
The horizon ratio holds whether the horizon is explicit (visible) or
implicit, specified by the limit of optical compression and the convergence
of ground texture and wall texture. People are highly accurate and precise
when estimating their own eye level (indicative of the perceived horizontal
or horizon) in the light, and only slightly less so in the dark (Stoper &
Cohen, 1986). When a minimal visual framework consisting of two
parallel lines or a rectangular box is pitched upward or downward, it
biases perceived eye level in the same direction by around 50%, demon-
strating both visual and gravitational influences (Matin & Fox, 1989;
Stoper & Cohen, 1989). In outdoor scenes, O’Shea and Ross (2007)
observed that sloping ground elicits a similar bias of 40% in perceived eye
level, although it saturates at +3° to +4° when looking at uphill slopes. As
the horizon ratio predicts, these manipulations of perceived eye level
produce corresponding biases in perceived object size (Matin & Fox, 1989;
Stoper & Bautista, 1992). Specifically, raising the perceived eye level
(hence, the implied horizon) by pitching the visual framework upward
reduces the judged vertical size of an object resting on the floor, and vice
versa.
Manipulating the observer’s effective eye height has similar effects on
size-related affordance judgments. Warren and Whang (1987) first showed
that reducing the visually specified eye height by raising a false floor
increased the perceived width of a doorway, so that a narrow aperture was
now judged to be passable. Mark (1987) likewise found an eye height
effect on the perceived vertical height that afforded sitting. Wraga (1999)
reported a similar influence of effective eye height on perceived vertical size
by using a false floor and by varying the eye height in a virtual reality (VR)
head-mounted display (Dixon, Wraga, Proffitt, & Williams, 2000). Note,
however, that the eye height manipulation only affected the perceived size
of objects between 0.2 and 2.5 eye heights tall (Wraga & Proffitt, 2000).
Relative Size and Distance: Ground Texture as an Intrinsic Scale
Gibson emphasized that the ground surface texture provides an intrinsic
scale for exocentric size and distance. If the surface texture is “stochasti-
cally regular,” then “equal amounts of texture” correspond to “equal
stretches of distance along the ground” everywhere in the scene. Thus, the
amount of texture covered by the base of an object provides an intrinsic
scale for size, and the amount of texture between two objects provides an
intrinsic scale for the distance interval between them. This is a potentially
powerful variable, for it provides a basis for both size and distance con-
stancy. For example, any objects that cover T texture units are the same
width, while an object that covers 2T units is twice as wide. This is nicely
illustrated by
Figure 9.3 (from Gibson 1979/2015), in which two cylinders resting on a
tiled floor look to be the same size, for the base of each covers the width of
one floor tile.
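The intrinsic-scale idea can be made concrete with a toy sketch (my own illustration; the element sizes are assumptions, and it presupposes texture elements of known physical size):

```python
# Toy sketch of ground texture as an intrinsic scale; element sizes are
# illustrative. With a homogeneous, isotropic texture, texture counts
# convert directly into sizes and distances anywhere in the scene.
def extent_from_texture(n_units, unit_size):
    """Physical extent covered by n_units texture elements."""
    return n_units * unit_size

# Two objects covering the same amount of texture are the same width.
w1 = extent_from_texture(3, 0.30)   # e.g., 0.30 m floor tiles
w2 = extent_from_texture(3, 0.30)

# With an anisotropic texture (e.g., bricks longer in depth than in
# width), equal texture counts no longer mean equal extents.
depth_interval = extent_from_texture(4, 0.60)     # four units in depth
frontal_interval = extent_from_texture(4, 0.30)   # four units frontally
```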
For such a scale to be invariant over translation and rotation in the
ground plane, satisfying a Euclidean metric, the texture elements must be
symmetric (isotropy) and have a constant size over the whole surface
(homogeneity). Ordinary textures, however, are often anisotropic (e.g.,
bricks, paving stones, wood grain), undermining comparisons in different
directions. Indeed, the floor tiles in Gibson’s own figure are anisotropic,
Figure 9.3 Ground texture as an intrinsic scale for relative exocentric size and dis-
tance. Objects that cover the same amount of ground texture are the
same width, assuming the texture is homogeneous across the surface.
Thus the two cylinders appear to be the same size. Equal amounts of
texture correspond to equal stretches of distance along the ground,
assuming the texture is isotropic. That is not the case in this figure, so
the depth interval of four texture units between the cylinders is not
equal to a frontal interval of four texture units.
Source: From Gibson (1979/2015), Figure 9.5. Copyright 2015. Reproduced by permission of
Taylor & Francis Group, LLC, a division of Informa plc.
such that an equal amount of texture corresponds to a larger stretch of dis-
tance in depth than in the frontal dimension (see Figure 9.3). The texture
scale hypothesis predicts that manipulating this anisotropy should affect
perceived exocentric distance. In addition, Gibson’s student W. P. Purdy
(1958) proved that the egocentric distance of an object from the observer
is specified by the optical size gradient of ground texture at the object’s
base. This hypothesis predicts that manipulating the size gradient should
affect perceived egocentric distance. To my knowledge, neither of these
experiments has been done.
If ground texture is an effective scale, perceived stretches of distance in
the open field should be quite accurate, precise, and invariant over changes
in viewing distance and direction. But as we will see, there are systematic
biases in distance perception. Note that ground texture can only provide a
scale when it is visually resolvable, and common textures (e.g., grass, sand,
gravel, asphalt) become indistinct at farther distances, where the optical
density surpasses the spatial frequency threshold (around 25 cycles/degree
in daylight). This implies that perceived size over large distances depends
on the horizon ratio, and perceived distance likely depends on other
information.
Figure 9.4 Distance perception: experimental tasks, representative data, and the results of numerical simulations based
on Equations 9.1 and 9.3. (a) Egocentric distance: blind walking task and verbal estimates. (b) Egocentric
aspect ratio: equilateral ‘L’ task yields linear depth compression. (c) Exocentric aspect ratio: equilateral ‘+’
task, ditto. (d) Egocentric bisection: accurate judgments. (e) Exocentric depth increments: marking off
equal intervals in depth yields nonlinear compression, simulated with Equation 9.4. Note: ‘o’ indicates
targets, ‘x’ indicates participant’s final position, arrows indicate adjustments.
Source: (a) Data from Knapp and Loomis (2004), Figure 3; (b) from Li et al. (2011), Figure 4. Adapted with permission from Springer
Nature; (c) from Loomis et al. (1992), Figure 5a. Adapted with permission from American Psychological Association; (d) from Bod-
enheimer et al. (2007), Figure 2; (e) from Gilinsky (1951), Figure 5.
Phillips, & Durgin, 2011). In other words, perceived egocentric dis-
tance is a linear function of physical distance, but compressed in depth,
with a slope of about 0.7.
3. Exocentric aspect ratio (Figure 9.4c). Well, perhaps exocentric dis-
tances between objects behave differently, thanks to the ground texture
scale. But if you attempt to adjust a depth interval between two targets
to match a frontal interval between two targets, forming an equilateral
‘+’ or ‘L,’ the depth interval is underestimated relative to the frontal
interval, increasingly so with distance, until it levels off around
30–40% at 6–10 m (Loomis et al., 1992; Loomis & Philbeck, 1999).
This presents the deepest paradox: one would think that if you can
blind walk accurately to each of these targets, you must be able to per-
ceive their spatial locations and the distances between them. Yet
physically-equal intervals in the open field are perceived as unequal
when viewed from different directions, thereby violating a Euclidean
metric for perceptual constancy (Foley et al., 2004; Toye, 1986;
Wagner, 1985).
4. Egocentric bisection (Figure 9.4d). Perhaps this is only so when com-
paring distance intervals in different directions (e.g., frontal vs depth
intervals). And happily, if you adjust a marker (Z1) to bisect the ego-
centric distance between you and a target (Z2) in the open field (setting
tan α1 = 2 tan α2), you are highly accurate and precise—up to 300 m!
(Bodenheimer et al., 2007; Lappin, Shelton, & Rieser, 2006; J. Purdy &
Gibson, 1955; Rieser et al., 1990; but see Gilinsky’s, 1951, two
observers).4
5. Exocentric depth increments (Figure 9.4e). On the other hand, if you
try to mark off a series of equal increments in depth, starting near your
feet, equal intervals look progressively smaller with distance, that is,
they appear nonlinearly compressed (Gilinsky, 1951; Ooi & He,
2007). This is apparent when viewing the dashed lines running down
the middle of a highway: equal-length dashes look increasingly
compressed with distance. Another way of saying this is that perceived
incremental egocentric distance is a negatively accelerating function of
physical distance, such as a hyperbolic curve or a power law with an
exponent <1 (compare Figure 9.4e with Figure 9.4b, center).
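Items 2 and 4 can both be reproduced from the ground-theory geometry of Figure 9.2. The sketch below uses the exaggerated-declination model (Z′ = E/tan(1.5α)); the eye height, target distances, and names are my choices:

```python
import math

E = 1.6  # eye height in metres (illustrative)

def declination(Z):
    """Declination angle (radians) of a ground point at distance Z."""
    return math.atan2(E, Z)

def perceived_distance(Z, gain=1.5):
    """Z' = E / tan(gain * alpha): an overestimated declination angle
    compresses perceived egocentric distance roughly linearly."""
    return E / math.tan(gain * declination(Z))

# Item 2: near-linear compression with a slope of roughly 0.7.
Zs = [3.0, 5.0, 10.0, 15.0, 20.0]
Zp = [perceived_distance(Z) for Z in Zs]
slope = (Zp[-1] - Zp[0]) / (Zs[-1] - Zs[0])

# Item 4: bisection is exact under the ground geometry, because
# tan(alpha) = E/Z, so setting tan(alpha1) = 2 tan(alpha2) places the
# marker at exactly half the target distance.
Z2 = 100.0
Z1 = E / (2 * math.tan(declination(Z2)))   # = Z2 / 2
```

For distant targets, tan(1.5α) ≈ 1.5α ≈ 1.5 E/Z, so Z′ ≈ Z/1.5, which is where the slope of about 0.7 comes from.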
Acknowledgments
Thanks to Jack Loomis, Jim Todd, Joe Lappin, Hal Sedgwick, Chris Hill,
Frank Durgin, John Rieser, Geoff Bingham, Fulvio Domini, and Jijiang He
for many happy years of head-banging over these problems. Any insights
herein must have originated with them, but of course they are not to blame
for the rest of it.
Notes
1. The view that direct perception requires a moving observer misinterprets
Chapter 9, for layout is also specified by static information within a given
context of constraint (Runeson, 1988).
2. With the lights on, performance was highly accurate under all viewing con-
ditions, for the eye-level target was mounted on a visible tripod that rested on
the visible ground.
3. This is a large literature, with results that depend on stimuli, procedure, and
test conditions. Rather than a full review, I offer my reading of what I regard
as instructive findings.
4. In indoor hallways, overestimates of the midpoint have been reported in some
experiments (Lappin et al., 2006), but not others (Bodenheimer et al., 2007).
5. There are two distinct questions here: the extrinsic geometry of visual space,
the geometric properties that are preserved by the mapping from physical to
visual space; and the intrinsic geometry of visual space, the geometric relations
among perceived properties. For the latter, see Battro, Netto, and Rozestraten
(1976); Todd, Oomes, Koenderink, and Kappers (2001).
6. This dissociation between the information for depth intervals and frontal inter-
vals differs from a dissociation between the processing of locations and extents
(Loomis, 2014; Loomis, Philbeck, & Zahorik, 2002), because the perception
of depth intervals and their endpoints are both based on exaggerated declina-
tion angles.
7. If one includes the exaggerated declination angle, so Z′/E = 1/tan(1.5(α + ε)),
raising the perceived horizon by just ε = 1.2° would also approximate the data.
8. In addition, the rate of optic flow from the ground plane maps to locomotor
distance (Beusmans, 1998; Waller & Richardson, 2008). So does the time-to-
contact variable tau: if tau specifies that the lamppost is 2 seconds away, and
tau during one step is 0.5 seconds, then the lamppost is 4 steps away (Warren,
2007).
9. The extra compression in VR is still not completely understood, see Willemsen,
Colton, Creem-Regehr, and Thompson (2009).
10. The mutual calibration hypothesis differs from Proffitt’s (2006) embodied per-
ception theory, which claims, “Perception is mutable to nonvisual influences …
The apparent dimensions of surface layout expand and contract with changes
in energetic cost” and “one’s purposes, physiological state, and emotions”
(p. 110). The evidence for this claim remains controversial. Mutual calibration
occurs on a slower timescale to keep perception aligned with the affordances of
layout, which do not depend on transient intentions, needs, fatigue, or social
and emotional factors.
10 Acting Is Perceiving
Experiments on Perception of
Motion in the World and
Movements of the Self, an Update
L. James Smart Jr., Justin A. Hassebrock,
and Max A. Teaford
Figure 10.1 Depiction of the moving room used in Stoffregen and Smart (1998)
and Smart, Stoffregen, and Bardy (2002). The room consisted of
3½ walls and a ceiling with the side walls on rails, so that the parti-
cipant stood on the laboratory floor.
180 L. James Smart Jr. et al.
Despite the usefulness of the swinging room paradigm, there were some
challenges that remained. One such challenge was that only the global
optic flow could be experimentally manipulated with most of the tradi-
tional physical room displays. This was problematic because outside of the
lab there is often local as well as global optic flow. Furthermore, the
realism of the environment was limited in such a paradigm, particularly in
terms of the types of motion that could be created (in most cases, it was
single axis motion). Gibson (1979/2015) noted that these challenges made
the study of visual kinesthesis harder than the study of event perception.
The need to produce a compelling optic flow to indicate a moving environ-
ment or movement through an environment was daunting to accomplish
with the technology and physical apparatus available at the time. A final
challenge that emerged from these constraints was that the direction of
influence was one-way. That is, room motion could be used to influence
personal motion, but not the converse. These challenges had the effect of
limiting the kinds of questions that could be asked, including the
following:
Figure 10.2 (a) Depiction of the virtual hallway stimuli (directional optic flow) used
by Warren, Kay, and Yilmaz (1996). (b) Depiction of large screen pro-
jection of optic flow (sinusoidal) used in Dijkstra, Schöner, and Gielen
(1994).
Figure 10.3 Set-up and depiction of study for Littman (2011). Participants viewed
a virtual version of the laboratory through a head-mounted display
(HMD).
more stable as they adapt. In other words, behavioral adaptation begins with
large and varied motion and eventually “settles” into a working pattern of
movement that takes into account the novel perception-action relationship
of the virtual environment. Littman was also able to show this progression
in the presence of a complex distortion (depicted in Figure 10.4, Panel III,
line F). In a similar fashion, Hartman (2018) was able to show that cali-
bration (adaptation) could occur in the presence of varying perturbations
(her manipulation can be represented by combining Panels II and III of
Figure 10.4). From a research standpoint this is promising because in VEs,
physics can be altered in a number of ways. Figure 10.4 depicts some poten-
tial ways in which this perception-action relationship can be altered, from
spatial distortions (Figure 10.4, Panels I–III) to temporal distortions (Figure
10.4, Panel IV). Theoretically it is useful in that it allows for testing different
forms of visual information that are needed to guide behavior (one of Gib-
son’s original motivations for understanding optical flow).
[Figure 10.4 axes: virtual head orientation (°yaw) plotted against actual
head orientation (°yaw) in Panels I–III; ratio of virtual/actual head
orientation (°yaw) plotted against time (sec) in Panel IV.]
Figure 10.4 Potential alteration of perception and action when using VE. Panel I:
spatial distortions, inversion (line B) and offset (line C). Panel II:
spatial gain distortions, negative gain (compression – line D) and
positive gain (expansion – line E). Panel III: complex mapping (line
F). Panel IV: temporal distortion (modulating gain over time – line G).
Line A in all panels refers to normal (real-world) mappings.
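The distortion mappings enumerated in the caption can be sketched as simple functions of actual head yaw. The sketch below is illustrative only: the parameter values (offset, gain, modulation period) are assumptions, not the values used in Littman (2011) or Hartman (2018), and the complex mapping of line F is omitted.

```python
import math

def virtual_yaw(actual_yaw, mapping="normal", t=0.0,
                offset=30.0, gain=0.5, period=5.0):
    """Map actual head yaw (degrees) to virtual head yaw under the
    distortion types depicted in Figure 10.4.

    "normal"    (line A): virtual mirrors actual
    "inversion" (line B): virtual is reversed
    "offset"    (line C): virtual is shifted by a constant
    "gain"      (lines D/E): compression (gain < 1) or expansion (gain > 1)
    "temporal"  (line G): the virtual/actual ratio is modulated over time t
    """
    if mapping == "normal":
        return actual_yaw
    if mapping == "inversion":
        return -actual_yaw
    if mapping == "offset":
        return actual_yaw + offset
    if mapping == "gain":
        return gain * actual_yaw
    if mapping == "temporal":
        ratio = 1.0 + 0.5 * math.sin(2 * math.pi * t / period)
        return ratio * actual_yaw
    raise ValueError(f"unknown mapping: {mapping}")
```

Hartman's (2018) varying perturbations could then be represented by composing the gain and temporal mappings, as the text notes in connection with Panels II and III.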
Smart et al. (2014) found that not only did motion sickness rates vary
across conditions, with almost no motion sickness in the sinusoidal con-
dition and significant sickness in the in-phase condition, but there were
also behavioral differences between conditions and generally between
participants who became motion sick and those who remained well.
Similar to the Littman (2011) virtual prism study, successful postural regu-
lation involved decreased magnitude of motion over time; however, this
was coupled with more variable motion strategies over time. Those who
became motion sick seemed to be disrupted by the optic flow (evidenced by
higher magnitudes/complexity of movement) and were unable to use the
optic flow to regain stability (evidenced by rigid movement strategies over
time). This suggested a difference in the ability to recognize and exploit
movement information to guide behavior. Interestingly, anecdotal reports
by participants who became motion sick in the closed-loop (coupled) con-
ditions seemed to indicate that they were unaware that they were influen-
cing the optic flow, again suggesting that failure to detect information that
Antecedents
Observations to the effect that objects that are presently out of sight persist
in experience can be found scattered in the 20th-century phenomenological
literature. One notable case in point is Merleau-Ponty’s (1962) treatment
in The Phenomenology of Perception of the perception of distance
or depth. Merleau-Ponty points out that “[t]raditional ideas are at one in
denying that depth is visible” and for that reason, it is assumed that “dis-
tance, like all other spatial relations, exists only for a subject who synthe-
sizes it and embraces it in thought” (p. 255). But Merleau-Ponty rejects
this account, claiming that depth, as when perceiving a three-dimensional
object, is as visible as breadth with a change in viewing position. In pas-
sages that made a deep impression on Gibson (pers. comm., 1977), he
writes that depth in that case is only “breadth seen from the side” (original
emphasis), and if I am unable to immediately perceive depth at this
moment it is because “I am simply badly placed to see it. I should see it if I
were in the position of a spectator looking on from the side” (Merleau-
Ponty, 1962, p. 255). And the side of the object that is presently out of
sight is experienced as still existing. If we adopt the perspective of perceiv-
ing as a process of detecting information with respect to a moving point of
observation, then depth is perceived as immediately as is breadth. Echoing
the remarks on the immediacy of voluminousness from William James
cited above, Merleau-Ponty continues that such considerations of depth
force us to “rediscover the primordial experience from which it springs; it
is, so to speak, the most ‘existential’ of all dimensions” (p. 256, emphasis
added).
Although Gibson does not cite Merleau-Ponty in connection with the
occluding edge, he does discuss in The Senses Considered as Perceptual
Systems (Gibson, 1966a) other instances where this phenomenon is antici-
pated. He notes that the Gestalt psychologist Koffka (1935) observed that
“he could ‘see’ the top of his table extending behind the book that lay on
it” (Gibson, 1966a, p. 204). Although Koffka attributed this phenomenon
to the dynamics of perceptual organization, Gibson avers: “insofar as his
head moved and the texture of the table was [optically] wiped and
unwiped by the edges of the book, he had information for perceiving the
table behind the book” (p. 204).
Revisiting “The Occluding Edge” 40 Years On 193
More germane to what he will later call the occluding edge, Gibson
(pp. 203–206) offers a somewhat lengthy treatment of Michotte (1963)
and his colleagues’ investigations of “kinetic occlusion.” Michotte’s
powerful demonstrations force a reconsideration of the very definition of
perception. They show that to regard perception as awareness of the
environment based on immediate sensory stimulation is too restrictive. Per-
ception instead is the awareness of the environment through the detection
of information, and information is most readily detected over time. In the
present case, that information is the gradual occlusion over time of a
surface at an edge. Such perceptual information specifies that the object is
going out of sight, not out of existence. Gibson (1966a) writes:
The length of a perceptual span does not seem to distinguish the per-
ception of adults from that of children, but I believe it is the ability to
find the structure, the embedded relations constituting subordinate
units or cluster, that make a difference. It is not just a stretching out
but a making of one unit out of many smaller units.
(p. 379)
How does orientation to the habitat and knowing how to reach places
come about for the perceiver? These questions also have to do with aware-
ness of features of the environment that are not immediately in sight, and
Gibson applies his analysis of dynamic occlusion to them.
Gibson refers to the portion of the environmental layout that can be
seen “from here” as a vista. It is an extended surface layout, and in most
cases it is cluttered with various features. The visible expanse that is the
vista is limited or foreshortened in various ways. Apart from the horizon
in extended, open regions, there are regularly large surfaces or a cluster
of features that act as visual screens obstructing the view in certain
directions. However, if one travels far enough within a vista, typically a
new vista gradually emerges from behind the visual obstruction. The
successive vista that was once not in sight progressively comes into
sight, while the previous vista goes out of sight behind. “To go from one
place to another involves the opening up of the vista ahead [in the
process of travel] and closing in of the vista behind” (Gibson,
1979/2015, p. 189).
200 Harry Heft
That portion of the path of travel where the new vista comes into view,
and reciprocally where the previous vista becomes obscured from view, is
a transition. Each transition is a reversible event in the same sense that the
progressive occlusion of an object is. A prior vista can be brought back
into view by reversing the direction of travel. The flow of information gen-
erated through way-finding then is a reversible event, and, like any event,
it is perceived over time. With this conceptualization in hand, learning a
route involves developing an awareness of a particular sequence of trans-
itions linking vistas. Way-finding is a perception-action process of opening
up successive vistas in the course of travel; and in the course of traveling a
familiar route, the perceiver is aware of what lies ahead.
Gibson continues by offering the even more radical proposal to the
effect that in the process of becoming attuned to the extended event
structure that is a path of travel, the individual develops an awareness of
the overall layout of that region of the environment. “When the vistas
have been put in order through locomotion,” Gibson writes, “the invari-
ant structure of the house, the town, or the whole habitat will be appre-
hended. The hidden and the unhidden become one environment”
(p. 189). This assertion should be understood as a logical extension of
Gibson’s more familiar explanation of the perception of invariant struc-
ture in shape perception. To explain how one perceives a table top as
rectangular when from most observation points it appears to be trapezoi-
dal, he writes:
While the invariant relations in this case may specify the table top as seen
“from all sides at once,” the invariant structure of the house or town that
is revealed through a succession of transitions and vistas may likewise
result in an awareness of that region of the layout from “everywhere at
once.”
It is true that there is a different optic array for each point of observa-
tion and that different observers must occupy different points at any
one time. But observers move, and the same path may be traveled by
any observer. If a set of observers move around, the same invariants
under transformations and occlusions will be available to all. To the
extent that the invariants are detected, all observers will perceive the
same world.
(1979/2015, pp. 190–191)
Conclusion
The observation that objects persist in awareness even when they are not
immediately in sight was not a new one when Kaplan and Gibson designed
their experiment on the occluding edge. What was new, however, was their
attention to the perceptual information that specified an object going out
of sight and yet persisting, and eventually the broader implications Gibson
drew from this analysis.
It is surely unconventional in mainstream perceptual theory––if not
thoroughly paradoxical––to claim that objects not immediately in sight
can be perceived. If perception is understood to be a process of converting
sensations into percepts, as standard approaches have it, then it follows
that an object that does not presently stimulate sensory receptors cannot
possibly be perceived. One of the problems with this stance is that it
assumes that the proper way to develop an account of perceiving is to
begin with what is taken to be the simplest units of analysis. Over a
century ago that strategy was reaffirmed for many by the discovery of
individual sensory receptors. As a result, functional considerations that
might have been brought to the forefront in the study of perception in the
wake of evolutionary theory took a back seat to anatomical considera-
tions. Rather than getting clear about how animals function in the
environment, how it is that animals stay in touch with the affordances of
an extended environment over time, and how perceptual systems support
those functions, the aim of much research since the days of Helmholtz has been
to explain how animals overcome the apparent limitations of their
biology. Given that history, it is ironic that a number of notable func-
tional discoveries at the biological level of analysis originated with atten-
tion to perceptual experience (phenomenology).
Ecological psychology has benefited immeasurably from attention to
phenomenology, from the analysis of optic flow to studies of affordances
(Heft, 2003), and some of its most important contributions to perceptual
theory writ large stem from that work. References to the character of per-
ceptual experience recur throughout the corpus of Gibson’s writings (e.g.,
Gibson, 1958, 1969, 1979/2015; Gibson & Crooks, 1938). His appeal to
phenomenology is especially evident in the films he created to demonstrate
his discoveries and proposals (Gibson, 1955, 1968; Gibson & Kaushall,
1973).
Among the perceptual phenomena examined by Gibson, dynamic occlu-
sion has arguably received the least attention in the years following the
publication of The Ecological Approach to Visual Perception. Its sweeping
implications for perceptual theory and for social theory have barely been
realized.
Notes
1. For a demonstration of this effect, see Gibson’s film, “The Change from Visible
to Invisible: A Study of Optical Transitions,” available at: www.youtube.com/
watch?v=1qQLtIICXoE
2. Imagining can also be with reference to occurrences “out of time” (fantasy).
12 Looking with the Head and Eyes
John M. Franchak
Figure 12.1 (a) Mobile eye tracker devised by Land (1992). (b) Simultaneous eye
and field of view recording for tracking eye gaze.
Source: From Land (1992). Reprinted by permission from Springer Nature.
Figure 12.2 (a) Lightweight, mobile adult eye tracker. (b) Mobile infant eye
tracker.
Source: (a) Photo courtesy of Jason Babcock, (b) Photo courtesy of Chaun Luo.
researchers to precisely measure how eye, head, hand, and body move-
ments are coordinated in everyday tasks. For example, Franchak and Yu
(2015) studied infants’ and caregivers’ naturalistic play to measure eye and
head alignment to toys while reaching. Adults adjusted the speed of their
reaches depending on visual alignment—when they aligned their eyes
directly on toys they reached more quickly but slowed their reaches when
toys were in the periphery. In contrast, infants did not systematically
coordinate eye movements with reaching speed. Matthis and colleagues
(Matthis, Yates, & Hayhoe, 2018) combined mobile eye tracking with full-
body inertial sensing to examine visual guidance of locomotion while
hiking outdoors and found that walkers altered their gaze allocation to
account for the different task demands associated with walking over
different terrains (e.g., rocky trail vs uniform dirt path).
Scoring where observers look is challenging for mobile eye tracking
research compared with stationary eye tracking research. With stationary
eye trackers, the locations of gaze targets in a stimulus are the same for
every participant. For example, calculating how often participants look at
a face in a photograph means defining which pixels comprise the face, and
then calculating how frequently participants’ gaze coordinates fall on those
pixels. In contrast, in mobile eye tracking the locations of gaze targets in
the field of view are unique to each participant, depending on how each
participant chose to orient the field of view by moving the body and head
(Franchak, 2017). For example, scoring how often infants look at care-
givers’ faces in a naturalistic interaction means scoring where faces are in
each infant’s field of view at every moment—each infant will have their
caregiver’s face in view in different locations at different times. Con-
sequently, researchers manually score where each individual participant
looked frame-by-frame. The time-intensive scoring means that sample sizes
in mobile eye tracking studies are typically smaller than in comparable sta-
tionary eye tracking studies and might also contribute to why the size of
the mobile eye tracking literature is still modest.
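The stationary-eye-tracking scoring described above (define which pixels comprise the target, then count the gaze samples that fall on them) amounts to a pixel-membership test. A minimal sketch, in which the function name and data layout are illustrative assumptions rather than any particular toolkit's API:

```python
def proportion_in_aoi(gaze_points, aoi_mask):
    """Fraction of gaze samples landing in an area of interest (AOI).

    gaze_points: iterable of (x, y) pixel coordinates, one per sample.
    aoi_mask: 2D boolean grid where aoi_mask[y][x] is True for pixels
    comprising the AOI (e.g., the face in a stimulus photograph).
    Samples falling outside the stimulus bounds count as misses.
    """
    hits = total = 0
    height, width = len(aoi_mask), len(aoi_mask[0])
    for x, y in gaze_points:
        total += 1
        if 0 <= x < width and 0 <= y < height and aoi_mask[y][x]:
            hits += 1
    return hits / total if total else 0.0
```

With mobile eye tracking, by contrast, no single `aoi_mask` applies: the mask would have to be redrawn for every participant at every frame, which is exactly the manual-scoring burden the text describes.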
One promising solution is automated computer vision detection of
targets of interest. For example, researchers have developed systems to
detect where faces (Frank, Simmons, Yurovsky, & Pusiol, 2013), hands
(Bambach, Franchak, Crandall, & Yu, 2014), and objects (Yu & Smith,
2013) are located in the field of view for each participant to automatically
score looking behavior. However, accuracy varies and is typically worse
compared with human coding because detection depends on visual clutter
in the scene and the discriminability of targets. Another solution is to use
eye trackers embedded in virtual reality headgears; manual coding of
looking is unnecessary because the location of gaze targets must be known
to render the virtual environment (Diaz, Cooper, Kit, & Hayhoe, 2013).
Yet another promising method is linking the mobile eye tracking reference
frame to that of a motion capture system to automatically detect gaze with
respect to body movements (Matthis et al., 2018).
Figure 12.3 Different stages of eye, head, and body rotations during a 90° shift of
gaze from one location in the world to the target “T.”
Source: From Land (2004). Reprinted by permission from Springer Nature.
[Figure panels: (a) Child. (b) Adult. (c) Crawling infant. (d) Walking infant.]
in infants’ field of view declines over the first two years of life (Jayaraman,
Fausey, & Smith, 2015). Indeed, young infants spend nearly 50% of the
time held up off the ground (Franchak, 2019), which provides a good
vantage point for face-looking (Kretch & Adolph, 2015). As infants learn
to sit, crawl, and walk, they spend more time down on the floor in posi-
tions that are less conducive to face looking.
Differences between postures place constraints on what infants can see,
but within each posture infants can choose how to orient their heads to
determine what is in view. When sitting across from caregivers and playing
with toys, 12-month-old infants try to keep both toys and faces in view
simultaneously, which results in neither type of target being well-centered
in view (Franchak et al., under review). By 24 months, infants center toys
in view at the expense of caregivers’ faces. This is true of face and toy loca-
tions both while those targets are fixated as well as when they are not;
Figure 12.5 shows the differences in the locations of faces and toys in
head-centered field of view of 12- and 24-month-old infants during fixa-
tions. Biasing toys in view might help infants control manual actions with
toys and deal with unpredictable movement of toys in the field of view.
These effects may be task dependent. Whereas 12-month-olds did not bias
their view to favor toys over faces in the seated play task, they did bias
toys over faces in a locomotor play task (Luo & Franchak, in preparation).
Possibly, viewpoint bias depends on the relative locations of different
targets in the environment.
Figure 12.5 Heat maps showing the frequency of face (top row) and toy (bottom
row) locations within the head-centered field of view for 12- and
24-month-old infants during fixations. White indicates low frequency
while increasingly dark shades of gray indicate higher frequency.
Dashed black circles indicate a 15° radius around the center of view.
For 12-month-olds, neither toys nor faces were often within the middle
of the field of view. For 24-month-olds, toys were frequently in the
center of view but faces were not.
Looking Ahead
To summarize, Gibson argued that studies of looking at photographs in
the laboratory cannot inform us about the real-life behavior of looking around
the world. The advent of mobile eye tracking allowed researchers to collect
empirical data that support this claim. First, naturalistic studies of eye
tracking across a variety of tasks show that people use the eyes and head
to select visual targets that support their goals and actions within a task;
physical appearance of stimuli bears little on where people look. Second,
awareness of the self in the visual world—as an agent who actively engages
with the environment and other social agents—leads to different decisions
about where to look in natural tasks compared with mediated studies of
looking at images or videos by passive observers. Third, the coordination
of eyes, head, and body to select where to look is flexible, task-specific,
and a ubiquitous part of natural behavior that cannot be studied in screen-
based tasks. What adults see depends on how they choose to orient the
nested visual system; what infants see depends on how they are able to
orient the nested visual system. Efficient visual exploration depends on
deciding how to coordinate visual exploration amidst the informational
and motor demands of other ongoing actions. It is important to note that
the studies cited in each of the above sections could easily have been
applied to support any of these three points; nearly every study of
naturalistic visual exploration shows the importance of task, self, and
whole-body exploration.
Given these limitations of screen-based tasks, which have been
consistently highlighted by 20 years of mobile eye-tracking research, it is
important to consider the role that screen-based tasks should play in visual
perception research as Gibson did 40 years ago. The advantage of screen-
based tasks is the ability to show multiple observers the identical stimulus
and to be able to manipulate the qualities of those stimuli. Psychophysics
and physiological studies of vision would be impossible without the level
of control provided by screen-based tasks. However, this comes at the
expense of generalization because observers cannot move their bodies or
interact with their surroundings. Undoubtedly, the results of screen-based
tasks generalize to sedentary activities like watching television and, to a
lesser extent, using a computer (even interacting with computer software
with a mouse or touchscreen is an active task). Beyond this, generalization
is not guaranteed. At the broadest level of analysis, screen-based studies
are concordant with natural tasks in showing that task factors outweigh
target appearance in influencing where people look (e.g., Smith & Mital,
2013). But at a more detailed level, comparisons between active and
passive observers find differences in visual exploration even when watching
the same “stimulus” (e.g., Foulsham et al., 2011).
More work is needed to test which aspects of screen-based tasks
generalize to natural tasks. Eye tracking integrated with virtual reality
provides a means by which researchers can study whole-body visual
exploration in three dimensions while maintaining experimental control
(and replicability) similar to that of screen-based tasks. For example, visual
search time is comparable in both 2D (screen) and 3D (virtual reality)
search tasks, but those in 3D learn spatial associations better and use fewer
fixations to complete the task (C.-L. Li, Pilar Aivar, Kit, Tong, & Hayhoe,
2016).
However, the laboratory and naturalistic paradigms are about more
than methodological differences. Widespread acceptance of the screen-
based paradigm is rooted in the theoretical commitment to what Gibson
termed the “sequence theory” of visual perception. In this view, visual
perception is based on a series of discrete retinal images, so studying
observers’ perception of images on a screen is an appropriate
methodological choice. But Gibson argues that this theory cannot account
for our perception of a visual world that is persistent and stable in both
space and time:
The error was to suppose in the first place that perception of the
environment is based on a sequence of discrete images. If it is based
instead on invariance in a flow of stimulation, the problem of
integration does not arise. There is no need to unify or combine
different pictures if the scene is in the sequence, specified by the
invariant structure that underlies the samples of the ambient array.
The problem of explaining the experience of … what I would now call
the surrounding environment is a false problem. The retinal image is
bounded, to be sure, and the foveal image has even smaller bounds,
but the ambient array is unbounded.
(pp. 221–222, emphasis in original)
Perception-Action Reciprocity
In Gibson’s (1979/2015) view, perception and action are interdependent:
“We must perceive in order to move, but we must also move in order to
perceive” (p. 213). The primary function of perception is to guide action
adaptively. Perceiving possibilities for action (or “affordances” in Gibson’s
terminology) requires humans and other animals to detect relations
between the self and the world (see Wagman, Chapter 8, in this volume).
Reciprocally, motor actions generate perceptual information. Exploratory
movements such as looking with the head and eyes or feeling with hands,
whiskers, or antennae are intended to “forage” for perceptual information
(see Franchak, Chapter 12, in this volume). Performatory actions (e.g.,
catching a ball), and spontaneous movements (e.g., fidgeting) are not
intended to generate perceptual information, but do so nonetheless.
In everyday activities, perception and action are seamlessly intertwined.
Tiny swaying movements while sitting or standing yield information about
postural stability and body position relative to the environment. Locomo-
tion generates information about the body’s dimensions, abilities, and loca-
tion relative to the layout of surfaces. Grasping an object produces
information about the acting limb relative to the acted-on object. More-
over, the coupling between perception and action creates an ongoing flow
of information, blurring the boundary between remembering and planning
(see Thomas, Riley, & Wagman, Chapter 14, this volume). Every move-
ment creates perceptual information about its consequences and about
what to do next. Thus, action ties perception to a history of the recent past
and provides a roadmap into the near future.
Gibson’s Ecological Approach to Locomotion 223
Basic Actions
Gibson highlighted four types of basic actions: (1) postural actions,
(2) exploratory actions, (3) locomotion, and (4) manipulation. Each type
plays a prominent role in his theory. Gibson devoted many pages, and in
the case of visual exploration and locomotion/manipulation, whole chap-
ters to their discussion. In Gibson’s treatment, the four types of basic
actions are interconnected. Posture and exploration, for example, support
and guide locomotion and manipulation.
Postural actions are the foundation for every other kind of action (see
Reed, 1982b). Posture provides a stable base for moving the torso, head,
and eyes during visual exploration and for moving the limbs during loco-
motion and manipulation. Postural actions also keep the animal oriented
to gravity and the medium (air, water, or ground) in or on which it moves.
Exploratory actions are movements intended to generate information
for perceptual systems (E. J. Gibson & Pick, 2000). Exploration can guide
upcoming actions, aid knowledge acquisition, or support playful activity
(E. J. Gibson, 1988). Turning the head creates motion parallax and optic
flow, and brings new parts of the self and environment into view. Wielding
an object or palpating a surface creates torques at the joints and deforma-
tion and stretching of flesh and skin, thereby revealing information about
object and surface properties.
Locomotion involves moving the whole body through the environment.
It can be accomplished with a tremendous variety of means. Trotting dogs
create forces against the ground with their limbs; swimming fish sweep
their tails back and forth through the water; and flying insects beat their
wings against the air (Dickinson, Farley, Full, Koehl, Kram, & Lehman,
2000). Even during a single observation, animals exhibit a variety of means
of locomotion (Adolph & Berger, 2015). Human infants, for example,
produce multiple patterns of interlimb coordination when crawling.
Manipulation involves acting on objects and surfaces in the environ-
ment. Humans normally do it with their hands, but when their hands are
occupied, adults can use a hip to bump open a door or their neck and chin
to hold a package. People without hands can learn to use their feet to
thread a needle, play the piano, light a cigarette, or make a sandwich.
Non-human animals use necks, beaks, and mouths to manipulate and
transport objects.
224 Karen E. Adolph et al.
In contrast to typical approaches (e.g., Schmidt & Lee, 2011), Gibson’s
ecological approach to locomotion and manipulation is not merely about
the well-studied human activities of walking and reaching. Instead,
Gibson’s theory is broad enough to cover a wide range of animals with a
wide range of perception-action systems performing the wide range of
behaviors that the species evolved to perform. His theory is robust enough
to explain perceptual control of action in a wide range of environments. It
applies to natural, uncultivated environments but also generalizes to the
designed world of manufactured artifacts and built environments—actions
like sitting on chairs, using binoculars, driving cars, landing planes, and
using tools such as hammers (see Pagano & Day, Chapter 3, in this
volume). However, because Gibson’s primary focus was on perceptual
control of functional action, his approach is less suited to explain many
popular topics in human perception research such as visual processing,
color perception, covert attention, face perception, and optical illusions.
Indeed, many classic and current perceptual phenomena—especially those
arising from artificial laboratory tasks—are not well handled by his theory.
Maintaining Balance
While upright, the body is always in motion. Even during quiet stance, the
body sways within the base of support. To maintain balance, a sway in
one direction must be met by a compensatory sway in the opposite direc-
tion. As Gibson supposed, optic flow is important for balance control.
When visual, haptic, vestibular, and muscle-joint information for body
sway are in conflict, visual information trumps the rest. Adults, for
example, adjust their standing posture in response to simulated optic
flow in a “moving room” (Lee & Lishman, 1975). Forward and back-
ward movements of the room’s side walls create a lamellar flow structure
in the visual periphery. The optic texture elements streaming in parallel
along the sides of the field of view simulate the visual information for
body sway (like the false perception of self-motion when an adjacent car
or train starts moving). In response, adults induce compensatory sways
in the opposite direction (see Shaw & Kinsella-Shaw, Chapter 6, in this
volume; Smart, Hasselbrock, & Teaford, Chapter 10, in this volume).
Older children (3–6 years) do likewise. Compensatory responses in tod-
dlers are so strong that they often step, stagger, and fall (for a review, see
Adolph & Berger, 2015). Although precrawlers do not respond to
peripheral lamellar flow in the moving room, after 15 days of
experimentally-boosted locomotor experience propelling themselves
around in baby go-carts, their responses are more similar to those of
infants who are independently mobile.
Of course, in everyday settings, vision is not the only source of informa-
tion for postural sway. As Gibson recognized, multiple sources of informa-
tion redundantly specify the body’s position in space. Indeed, merely the
light touch of a toddler’s hand resting on a horizontal surface provides
haptic information for postural stability (for a review, see Adolph &
Berger, 2015). Although touching a surface does not mechanically support
infants’ weight, it reduces postural sway, and walking experience improves
toddlers’ ability to benefit from a light touch.
Controlling Collision
In addition to lamellar flow, postural sway and locomotion also create
radial optic flow. As Gibson suggested, optic texture elements streaming
outward from a central point of expansion specify the direction of heading,
and can guide locomotion toward a goal, or around an obstacle (Warren,
1998). The rate of change in the expansion of optic flow specifies the time
to contact, so collisions can be softened or avoided (Lee, 2009). Gibson
pointed out that an approaching obstacle (e.g., a ball on a collision course
with the observer’s face) expands in the observer’s field of view. It also
hides more and more of the background vista until the obstacle fills the
field of view. In contrast, a receding obstacle (e.g., a car speeding away)
progressively reveals more and more of the background scene. Likewise, as
an observer approaches an obstacle, more and more of the background is
occluded, but as an observer approaches an aperture, more and more of
the background is revealed inside the opening.
228 Karen E. Adolph et al.
Long before infants are independently mobile, they distinguish an
approaching obstacle from an aperture. They blink their eyes and press
their heads backward in response to a looming obstacle but not in response
to a looming aperture (for a review, see Adolph & Berger, 2015). Infants
also take the path of the approaching obstacle into account and distinguish
objects on a head-on collision course from those that will pass safely to
one side (Schmuckler, Collimore, & Dannemiller, 2007). However,
younger infants use information about the size of the visual angle created
by the approaching object; they wait until the visual angle is a certain size
and as a consequence they blink too late to protect their eyes from objects
on a fast collision course (Kayed & van der Meer, 2007). In contrast, older
infants use information about the time to contact. Because this information
is available earlier in the object trajectory, they can respond to faster accel-
erations (see van der Meer & van der Weel, Chapter 7, in this volume).
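The time-to-contact idea invoked in this passage can be made concrete with a small numerical sketch. This is not from the chapter; it is the standard tau formulation associated with Lee, with illustrative numbers of my own choosing: tau is the optical angle of an approaching object divided by that angle's rate of expansion, and for a constant closing speed it approximates distance divided by speed.

```python
import math

# Hedged sketch of the time-to-contact ("tau") variable cited above
# (Lee, 2009). All numbers are illustrative, not from the chapter.
# For an object of physical size S at distance d, the optical angle is
# theta = 2 * atan(S / (2 * d)); tau = theta / theta_dot approximates
# the remaining time to contact, d / v, without the observer needing to
# know S, d, or v separately.

def optical_angle(size, distance):
    """Visual angle (radians) subtended by an object of the given size."""
    return 2 * math.atan(size / (2 * distance))

def tau(size, distance, speed, dt=1e-4):
    """Numerically estimate theta / theta_dot for a closing object."""
    theta_now = optical_angle(size, distance)
    theta_next = optical_angle(size, distance - speed * dt)
    theta_dot = (theta_next - theta_now) / dt
    return theta_now / theta_dot

# A 0.2 m ball, 5 m away, closing at 10 m/s, contacts in 5/10 = 0.5 s;
# tau recovers roughly that value from the optics alone:
print(round(tau(0.2, 5.0, 10.0), 2))
```

The same ratio is available in the optic array whatever the object's size, which is why older infants who use time-to-contact information can respond earlier than infants who wait for a fixed visual angle.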
After infants become mobile, they slow down and turn their bodies
while approaching apertures (for reviews, see Adolph & Berger, 2015;
Adolph & Robinson, 2015). But people of all ages and sizes, from infants
to elderly adults, attempt to squeeze through apertures too narrow to fit
their bodies. Pregnant women adjust their judgments to their growing
bellies, but they also attempt to navigate apertures that are slightly too
small. Apparently, the pull of attempting to fit outweighs the penalty of
entrapment. Unlike adults, however, infants repeatedly attempt to navigate
apertures so small that they can only wedge an arm or leg into the
opening. As Gibson observed:
Long before the child can discriminate one inch, or two, or three, he
can see the fit of the object to the pincer like action of the opposable
thumb. The child learns his scale of sizes as commensurate with his
body, not with a measuring stick.
(1979/2015, p. 224)
Acknowledgments
Work on this chapter was supported by the National Institute of Child
Health and Human Development (R01-HD033486 and R01-HD086034)
to Karen Adolph. We are grateful to Jennifer Rachwani and members of
the NYU infant action lab for their insightful comments.
14 Information and Its Detection
The Consequences of Gibson’s
Theory of Information Pickup
Brandon J. Thomas, Michael A. Riley, and
Jeffrey B. Wagman
Figure 14.1 Top: Apparatus and procedure for the overhead reaching task.
Average perceived/actual ratios when participants reported ability to
reach with stick while it was present, while it was absent, with remem-
bered stick length added to reach-ability without the stick (i.e., addi-
tive model), and reach-ability without the stick (right). Bottom:
Apparatus and procedure for the minimum pass-through-ability task.
Average perceived/actual ratios when participants reported ability to
pass through the aperture with stick while it was present, while it was
absent, with remembered stick length added to pass-through-ability
without the stick (i.e., additive model), and pass-through-ability
without the stick (right).
Source: From Thomas and Riley (2014). Adapted with permission from American Psycho-
logical Association.
Figure 14.2 (a) No perceptual intent condition and (b) perceptual intent condition of
Experiment 1. Left panels show 3-D plots of exploratory wielding in
a single trial. Right panels show recurrence plots for the y coordinate.
Source: From Riley et al. (2002). Reprinted with permission from Sage.
wielding-to-perceive a given property emerge as a low-dimensional dynam-
ical system that enslaves the component level systems.
The concept of perceptual systems continues to develop under the eco-
logical approach to perception and action. Perception is a coordinated
activity. The behavioral dynamics of perceptual systems reflect functional,
softly assembled couplings between information and movement, and
between animal and environment. These couplings and the resulting
dynamics are constrained by the particulars of a task and by the perceiver’s
intentions and purpose.
Conclusion
Most contemporary theories of psychology create an epistemic gap that
separates reality from experience. Ambiguous and passive stimulation of
sense receptors by meaningless physical variables must necessarily be
enriched by cognitive mediation to become meaningful to the agent. Gib-
son’s theory of information pickup, and his ecological approach more
broadly, sidestep this issue by making different assumptions about what is
perceived, how perception occurs, and why perception occurs. Gibson
posited the animal-environment system as the fundamental unit of analysis
and described perception of the environment in terms of ecological physics,
rather than in terms of the abstract physics of Newton and the imaginary
geometry of Euclid. He shifted the focus from sensory stimulation to
information in ambient energetic media and redefined the senses as percep-
tual systems that actively and continuously detect such information. Also,
he rescaled the time course of perception, beyond the presence or absence
of sensory stimulation to the boundaries of ecological events (see Blau et
al., 2013).
Since the release of Gibson’s (1979/2015) book, the theory of informa-
tion pickup has continued to be validated and expanded. Ecological
physics has withstood considerable empirical scrutiny. Perceptual systems
have been shown to exhibit anatomical and modality independence, and
advances in the understanding of perceptual systems as smart perceptual
instruments have occurred in tandem with advances in the understanding
of perception-action as the soft assembly of coordinative structures. Bur-
geoning work has begun to demonstrate the efficacy of the theory of
information pickup for understanding memory as well as other functions
of higher-order cognition. With some of the implications yet unexplored
and more still to be realized, Gibson’s ecological approach to psychology
continues to provide fruitful lines of scientific inquiry even after 40 years.
Part IV
Depiction
15 The Use and Uses of Depiction
Thomas A. Stoffregen
James Gibson cared deeply about theoretical issues relating to the nature,
creation, and perception of images, pictures, and other depictions. He wrote
about these issues throughout his career and devoted an entire section of his
most mature theoretical statement to depiction. In this chapter, I point out
that depictions have been of great significance in the behavioral sciences. I
note that behavioral science research continues to rely on measurements of
experience and behavior relative to depictions and that, therefore, theoretical
issues relating to the nature and status of depictions are important for our
evaluation of much current research. A related issue is whether observers
(correctly) perceive the fact of depiction, or (erroneously) believe that they
are observing “the real thing” (i.e., that which is depicted), rather than a
depiction. My discussion leads me to consider the affordances of animal-
depiction systems. I argue that animal-depiction systems and animal-
undepicted systems (to use an unwieldy term) have markedly different
affordances, and that both humans and non-humans readily differentiate
these affordances. I claim that the fact of depiction is specified, and that
exploratory activity generates information in the global array (Stoffregen &
Bardy, 2001; Stoffregen, Mantel, & Bardy, 2017) that permits direct percep-
tion of the affordances of animal-depiction systems.
Figure 15.1 A camera obscura, in which light enters through a hole. The figure
illustrates the projection of an image that could be viewed by a human
observer.
From Antiquity, the aperture through which light entered the camera
obscura was a hole (such devices still do exist; small versions are called
pinhole cameras; Wade & Finger, 2001). At the beginning of the 17th
century, Johannes Kepler fitted a camera obscura with a convex glass lens,
which had the effect of making the projected image both brighter and
sharper. Kepler’s camera + lens system bore a physical resemblance to the
(stationary) chambered eye, which has a lens that refracts and focuses
light, a chamber through which the light passes, and a “far wall”, the
retina, onto which an inverted image is projected (Figure 15.2). Like many
of his contemporaries, Kepler had extensive experience looking at the pro-
jected images inside cameras obscura. Combining this common experience
with the physical resemblance between the camera + lens and the cham-
bered eye, Kepler claimed that the chambered eye functions as a camera
obscura, and that perception of the illuminated environment consists of
seeing (or looking at) the image that exists on the retina (Lindberg, 1976).
Kepler’s theory (i.e., that the eye yields an image on the retina) was confirmed
by dissecting the eye of an ox and scraping the back until the tissue
was so thin that the projected image could be seen by a person looking at
the back of the eye (Gal & Chen-Morris, 2010; see Carello & Turvey,
Chapter 4, in this volume).
Kepler’s analysis was the origin of the “picture theory” of perception
(Gibson, 1979, pp. 52–55; Ronchi, 1957; Stoffregen, 2013; Wade &
Finger, 2001; cf. Cutting, 1988). James Gibson rejected the picture theory.
Figure 15.2 The chambered eye considered as a camera with a lens. This figure
was used by Descartes, and was inspired by Kepler (Lindberg,
1976). Note the human observer, who views the projected image.
Aesthetic Experience
This category comprises most of what we refer to as art: painting, sculpture,
photography, cinema, and other creations whose primary purpose is
to give rise to some type of aesthetic awareness (e.g., of “beauty”). Aes-
thetic art is functional only as part of an artist-observer system.
To Study Behavior
Participant-depiction systems afford a great deal of behavioral science
research. Behavioral scientists have been busily, creatively, industriously
exploiting this affordance for several centuries. Participant-environment
systems that do not include depictions afford different behavioral science
research. Whether, and how, these two types of affordances may overlap is
subject to empirical tests which, thus far, have been far too rare.
Looking at art is an ancient activity, giving rise to conscious experience
that can be profound. However, depictions have uses that extend far
beyond aesthetic appreciation. Animal-depiction systems have a very wide
range of affordances; affordances that are routinely actualized in ordinary
life. The utterly common exploitation of these diverse affordances suggests
that, in general, we accurately differentiate animal-depiction systems from
animal-environment systems that do not include depictions. In The Ecolo-
gical Approach to Perception and Action, accurate differentiation arises
from direct perception which, in turn, entails the claim that affordances
relating to depictions may be specified. It is to that topic that I now turn.
The global array is not ambient to the massless, geometric points of the
conventions of linear perspective (i.e., to pure kinematics); rather, it
intrinsically is ambient to and provides information about the full
dynamics (i.e., both kinematics and kinetics) of animal-environment
systems, including animal-depiction systems. The physical activity of
animals (including small, subtle movements, such as changes in gaze used
in viewing a painting) simultaneously (but differently) alters the structure
of multiple forms of ambient energy.
Specification of the animal-environment system in the global array has
important consequences for our ability to differentiate animal-depiction
systems from other types of animal-environment systems. As one example,
observer movement relative to a flat screen (e.g., a projection surface for
cinema or virtual reality) simultaneously alters the structure of optics and
of gravitoinertial force, thereby yielding patterns in the global array that
are different from patterns generated by observer movement relative to a
three-dimensional layout (Stoffregen, 1997). This point was made origin-
ally by Gibson (1966a, 1979/2015), and remains valid. The same point
applies, in slightly subtler form, to the consequences of observer movement
relative to three-dimensional depictions. Similarly, touching of depictions
(e.g., a painting of a baseball) yields relations between haptics and optics
that differ qualitatively from touching of the depicted object (e.g., a base-
ball). It can be argued that for active, unconstrained observers, depiction is
specified in the global array and, therefore, that it is always possible to dif-
ferentiate depiction from the corresponding physical reality (Stoffregen et
al., 2003). It is for this reason that depictions can be differentiated from
undepicted reality: The fact of depiction is specified. This argument applies
not only to depictions in the illuminated environment but also to depic-
tions in other single-energy arrays (e.g., acoustics, as in recorded or synthe-
sized sound, including speech). It applies equally to “multimodal”
simulations (see Figures 9 and 10 in Stoffregen et al., 2017). The fact that
experimental participants can perceive depictions (i.e., can recognize the
thing or activity that is being depicted) and can control depiction-based
systems does not imply that the information detected and used under
experimental conditions is the same as in the outside world.
A central premise of The Ecological Approach to Perception and Action
is that perceptual information is not imposed but, rather, is obtained. In
1966, Gibson devoted an entire chapter to this claim (1966a, Chapter 2),
and the obtaining of information was the principal subject of Part 3 of
Gibson (1979/2015). Our physical interaction with the environment causes
changes in the stimulation of receptors. The nature of changes in stimu-
lation is a reciprocal function of relations between the nature of an organ-
ism’s movement and the nature of the environment, that is, of the
animal-environment system.
One common use of skilled movements for the generation and obtaining
of information about affordances is in the exploratory activity of infants.
Infants use particular movements to differentiate the affordances for loco-
motion of surfaces (e.g., Adolph, Eppler, Marin, Wiese, & Clearfield,
2000). The literature on infants’ manipulation of handheld objects (e.g.,
Bushnell & Boudreau, 1993), can be re-interpreted in terms of exploratory
actions generating information about affordances of the infant-toy system,
as can recent studies relating infants’ control of gaze in the learning of new
words (Rader, 2018; Rader & Zukow-Goldring, 2012).
In adults, early qualitative examples include Solomon and Turvey
(1988), who showed analytically that active exploration of objects in iner-
tial space generates dynamic patterns of stimulation that are sufficient for
perception of an affordance for reaching. Riley, Wagman, Santana,
Carello, and Turvey (2002) reported that the patterns of movement
selected in manual wielding differed as a function of task instructions, that
is, as a function of which properties of the animal-environment system
participants were attempting to detect. Mantel et al. (2015; cf. Bingham &
Stassen, 1994), demonstrated analytically that particular types of embodied
movement generate patterns in the global array that carry information
about particular affordances. Specifically, they demonstrated that changes
in the optic array (such as might be presented to an otherwise stationary
observer) were intractably ambiguous with respect to the affordance that
participants were asked to perceive (i.e., whether a virtual object was
depicted as being within reach). That is, they showed that self-generated
exploratory activity was essential for the generation (and, therefore, the
pickup) of information in the global array.
Many classical illusions yield the intended conscious awareness only
when exploratory activity is restricted. The Ames room can be understood
as a three-dimensional depiction of an oddly-shaped room. The illusion
occurs when observers fail to differentiate the depiction from common,
rectilinear rooms (e.g., Runeson, 1988). The illusion works only when the
point of observation is fixed, whether through restraint of the head (e.g., a
chin rest), or of a camera. Movement of the point of observation rapidly
generates information about the actual geometry of the room, thereby
allowing observers to detect the depiction for what it is. The analysis of
Mantel et al. (2015) might be extended to the Ames room situation. We
could examine emergent patterns extending across optics and gravito
inertial force, generated by self-controlled motion of the head, that differ
as a function of the three-dimensional geometry of the displays. Such an
analysis could predict particular observer movements that should optimize
specification of the peculiar physical geometry of the physical Ames room
(cf. Riley et al., 2002). Similarly, in Mark, Balliett, Craver, Douglas, and
Fox (1990), the elimination of ordinary standing body sway was sufficient
to prevent participants both from detecting and from learning about changes
in affordances for sitting. Body sway simultaneously (but differently) alters
the position and movement of the body relative to the illuminated environ-
ment, the acoustic environment, the gravitoinertial force environment, and
so on, thereby generating higher-order, emergent patterns in the global
array that appear to be sufficient for the perception of maximum sitting
height. While looking at a painting, sway likely generates patterns in the
global array that are unique to the two-dimensional geometry of the paint-
ing. As has been shown with manual wielding, the complex dynamics of
body sway vary as a function of changes in the goals of perception (e.g.,
Palatinus, Kelty-Stephen, Kinsella-Shaw, Carello, & Turvey, 2014; cf.
Riley et al., 2002), and so I predict that task-specific variations in sway
likely optimize global array parameters that specify particular aspects of
the animal-painting system.
Conclusion
In his Chapter 15, Gibson (1979/2015) covered many topics, only some of
which are addressed in the present chapter. For example, Gibson wrote
about how children learn to draw (pp. 262–266), and about conventions
of drawing (pp. 273–277). Figure 15.6 satirizes some of these conventions
(cf. Gibson’s Figures 15.3, 15.5, and 16.1). The development of skills of
depiction, and conventions that guide the creation of depiction are
important issues. Gibson’s insights on these topics have been given insuffi-
cient consideration.
Consistent with his peers, Gibson (1979/2015) defined depiction in
terms external to the observer, and candidate definitions were evaluated in
terms of conscious experience, or “visual awareness.” I have reviewed the
practice, in behavioral science research, of using depictions as substitutes
for un-depicted reality which, historically, is based upon the claim that
Figure 15.6 The original caption reads: “Locating the vanishing point.” This
whimsical satire demonstrates that the conventions of linear perspective
are conventions, not laws.
Source: Plate from Macaulay (1978). Copyright ©1978 by David Macaulay. Reprinted by
permission of Houghton Mifflin Publishing Company. All rights reserved.
looking at the world is meaningfully related to looking at images. I have
suggested that an essential step in our understanding of depictions must be
to consider them in terms of the animal-environment system, with primary
emphasis on affordances that exist in such systems. I state that the
affordances of animal-environment systems that include depictions differ
from the affordances of animal-environment systems that do not include
depictions. My claims are developed from Gibson’s arguments that depic-
tions are differentiated from the corresponding, undepicted reality, and
that depictions have an uncertain relevance to general theories of percep-
tion and action. I do not propose that depictions can never be used in
experimental research on perception and action. Rather, I claim that the
fact of depiction is specified in global array patterns that are available to
research participants and that, depending upon the hypotheses being
tested, this fact may have consequences for the interpretation of experi-
mental data, and for any attempt to use such data to evaluate general the-
ories of perception-action. I have further suggested that conscious
awareness of depictions, while important, is a subset of a much larger class
of behaviors that are afforded in animal-depiction systems, and that a
general understanding of depictions will require us to examine the full
range of these affordances.
16 Revisiting Ecological Film Theory
Julia J. C. Blau
In the final chapter of his (1979/2015) text, Gibson laid out a plan for an
ecological theory of film perception. It should, he argued, be understood as
a presentation of a changing optic array. The information present in the
optic array displayed in a film is analogous to that encountered by a typical
observer in a natural scene: Overall optical expansion is generated by the
camera moving forward just as surely as it is generated by the organism
moving in the same way. As such, the audience would view these events as
if they were occupying the same position in space as the camera and have a
sense of being present in the depicted events.
However, Gibson (1979/2015) struggled to reconcile this egocentric
view with the inability on the part of the audience to in any way shape
those events. If the perceptual system is about moving to perceive, then
what do we say about an event that cannot be moved by the observer?
Moreover, he pointed out that “no one is ever wholly deceived” (p. 287)
into believing the events are actually happening in front of them––they do
not run from the cinematic monster (cf. Stoffregen, 1997), and they do not
feel as though their own head has turned when the camera has panned. I
believe it is possible to reconcile Gibson’s intuition––that the information
presented in a film is lawful––with the concerns about realness by drawing
some distinctions between perceiving depictions and perceiving depicted
events, as well as deepening the understanding of the technology involved.
Imagine that you are watching a film. The heroine draws her sword and
places the blade at the base of the villain’s throat; he grimaces but does not
back away; instead, the two glare angrily at each other. Assuming the film-
maker has done their job, you might be having a complex emotional
response to this moment. Her anger is righteous, and you empathize with
it. The villain probably deserves to die, but you are impressed by his
bravery and hope she shows mercy. Thunder rolls in the background, and
you feel a sense of dread.
There is a great deal to unpack here. First, the technology that produced
the film presents a depiction of events. That technology evolved, through
trial-and-error, to make the viewer perceive motion where there (techni-
cally) is none. The heroine draws her sword, and you perceive her arm
move through the action as seamlessly as if she were in front of you.
Second, the story is having an emotional impact on you, specifically of an
empathetic type. You fear for the villain’s life, even as you understand and
are moved by the heroine’s fury. That emotional impact has been deliber-
ately crafted through the careful selection of which events will be presented
and in what fashion. And finally, you have noticed that the weather in the
scene is stormy, and since storms have preceded violence several times in
the film, you sense that things are not looking good for the villain. The
thoughtful use of consistent visual and auditory symbols has given you a
deeper insight to the scene than you might have had otherwise.
Because of the broad applicability of the ecological approach—and,
more importantly, parsimony—I contend that the ecological theory of per-
ception can explain the psychological phenomena present in all three of
these cases.
Figure 16.1 (a) A camera receives light from a scene and focuses it, then (b) a spin-
ning shutter exposes each frame.
Each shutter revolution takes 1/24th of a second. When the shutter is not
covering the opening in the metal plate, the film is held still and exposed.
The light from the scene strikes part of the film that is positioned directly
behind the rectangular opening. When the film is later treated with a
different chemical, the light causes a chemical reaction that darkens that
portion of the film—the more light that strikes the film, the more it
darkens. In color films, three layers of photosensitive film are pressed
together. Each has chemicals that are sensitive to different parts of the light
spectrum: red, green, and blue. The red layer would respond to differing
levels of red light, for example; the process is otherwise the same.
When the shutter spins to cover the opening, the light no longer reaches
the film. Using perforations on the sides of the film, the gears in the camera
advance the film to the next section (called a frame) ready to be exposed
when the shutter uncovers the rectangle once more. In this way, the film
does not smoothly and continuously move through the camera; rather, it
stops and starts 24 times per second. Importantly, the shutter is open and
exposing the film for a non-instantaneous amount of time. The film is
receiving light for about 1/48th of a second, meaning each frame is sam-
pling the optic flow for that entire time. Any movement during that
window will be recorded as such (more on this below).
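The timing described in the last two paragraphs can be stated compactly. A minimal sketch, using the 24 fps and half-cycle shutter figures from the text (the function itself is illustrative, not part of any camera specification):

```python
# Minimal sketch of the exposure cycle described above: at 24 frames per
# second, each 1/24 s shutter revolution splits into an exposure window
# (shutter open, film held still) and an advance window (shutter closed,
# film pulled to the next frame).

FPS = 24                     # shutter revolutions (frames) per second
FRAME_PERIOD = 1 / FPS       # one full revolution: 1/24 s
EXPOSURE = FRAME_PERIOD / 2  # shutter open for half the cycle: 1/48 s

def exposure_windows(n_frames):
    """(start, end) times in seconds during which each frame receives
    light; any motion inside a window is recorded as optic flow."""
    return [(i * FRAME_PERIOD, i * FRAME_PERIOD + EXPOSURE)
            for i in range(n_frames)]

for start, end in exposure_windows(3):
    print(f"frame exposed from {start:.4f} s to {end:.4f} s")
```

The gaps between windows are when the film advances, which is why the strip stops and starts 24 times per second rather than moving continuously.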
The technology of film, film viewing, and filmmaking has changed––in
some ways drastically––since the publication of Gibson’s 1979 text. The
vast majority of independent filmmaking and even a large portion of pro-
fessionally made films are recorded by digital cameras instead of film. That
said, a digital camera works very similarly to a film camera. Instead of
photosensitive film, the light is focused on a computer chip which converts
the amount of light into a digital record (stored in binary: 1s and 0s). Instead
of a spinning shutter, the camera turns the sensitivity of the chip on and
off in the same pattern: record for 1/48th of a second, stop recording for
1/48th of a second, repeat. These data are then stored on either magnetic
tape (like you might find in a VHS tape) or (more commonly) in the hard
drive of the camera.
Figure 16.2 (a) The original scene (in grayscale); (b) the negative; (c) the positive
print.
Figure 16.3 (a) A bright light is projected through the positive print; (b) a spinning
shutter exposes then blocks each frame three times.
signal bouncing back and forth at a time) or interleaved (i.e., two signals
bouncing back and forth, one from the first image, one from the second
image). To make things more complicated, the TV typically presents 30
images per second1 but presents each image twice, once on each of the two
alternate lines.
Broadly, there are two important differences between screens (either TV
or computer) and projectors (film or digital) that will be at issue in our dis-
cussions of film perception. First, for screens there is no full darkness: the
screen is always projecting some light. Second, unlike the projector, there
is never a full image presented on screens––only one or two small lines.
the process by which the human brain retains an image for a fraction
of a second longer than the eye records it. You can observe this phe-
nomenon by quickly switching a light on and then off in a dark room.
Doing this, you should see an afterimage of the objects in the room.
(Barsam & Monahan, 2004/2016, pp. 47–48)
Leaving aside the conflation of retinal afterimage and cortical processes for
the moment, this passage is also supposed to be an explanation for why we
can see motion in a series of still photographs. Rather than pointing out
the fallacy in such an explanation––after all, Anderson and Anderson
(1993) have already provided a logical deconstruction of the topic––I will
instead present a more ecological take on the two concepts.
Flicker fusion is a physiological phenomenon, roughly equivalent to
making sure there is enough light in the room to stimulate the eye’s rods
and cones (i.e., above the absolute threshold). That is, it is not about
detecting information for affordances or events, rather it is about the pre-
sentation rate of light exceeding the minimum temporal threshold of our
retinal cells. While our perceptual system as a whole does not sample dis-
cretely (it samples the optic flow continuously), individual neurons are
limited in their ability to continuously fire. After being stimulated by light,
there is a refractory period during which––no matter how much stimu-
lation is presented to it––a retinal cell cannot fire. As long as light reaches
the cell before this refractory period is over, the cell cannot tell the differ-
ence between continuous and discontinuous light.
Flicker fusion happens at roughly 50 flashes per second, well below the 72
flashes per second used by film projection (Berry & Meister, 1998; Simonson
& Brozek, 1952). There is a flash of light, the retinal cell fires in response,
and by the time it is ready to fire again, more light has been presented. Or in
other words, the retinal cells are firing as often as they possibly can––with or
without constant presentation of light. When presented with continuous light,
the retinal ganglion cells do not fire all at once; some fire while others are in
their refractory period. When presented with flickering light, however, they
synchronize with the flicker (Berry, Warland, & Meister, 1997).
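The arithmetic in this paragraph can be put in one line. A back-of-the-envelope sketch using the figures quoted above (roughly 50 flashes per second for fusion, 72 for projection); the threshold is approximate and this is not a physiological model:

```python
# Rough sketch of the flicker-fusion arithmetic above: if a retinal cell
# needs a recovery interval before it can fire again, any flash rate
# whose inter-flash gap is shorter than that interval is
# indistinguishable from steady light. The 50 Hz threshold is the
# approximate figure quoted in the text, not a measured constant.

FUSION_RATE_HZ = 50.0          # approximate fusion threshold
RECOVERY = 1 / FUSION_RATE_HZ  # implied recovery time: 0.02 s

def looks_steady(flash_rate_hz):
    """True if flashes arrive faster than the cell can recover."""
    return (1 / flash_rate_hz) < RECOVERY

# Film projection shows each of 24 frames three times: 72 flashes/s.
print(looks_steady(24 * 3))  # above threshold: seen as steady light
print(looks_steady(24))      # a bare 24 Hz flicker would be visible
```

This is why projectors flash each frame three times: 24 flashes per second alone would sit well below the fusion threshold and the flicker would be seen.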
Perceiving motion while watching a series of purportedly still photo-
graphs is a far more interesting and far more––forgive the pun––illuminating
phenomenon. It is true that there is no actual motion on the screen. But to
be fair, there is no actual heroine, no sword, no villain either. However,
the information for all of these is presented. The traditional explanations
of persistence of vision make two (largely incorrect) assumptions. The first
is that the eye (or the cortex, depending on the theorist) is replicating––and
holding onto––an image to compare to the next incoming image. The
second is that because the images presented are individually still, all motion
perception must be an inference.
The first assumption is patently false. If a complete image were required
for persistence, then the television (or computer) screen would not work. A
full image is never presented, only a thin line (or two). Unless you are willing
to posit two different visual mechanisms––one for theater projection and one
for television––this cannot be. Additionally, the eye is never passive, even
when sitting purportedly still in a movie theater. If film motion perception
required point-by-point comparison of an incoming image to the one captured
just previously, those points would have to be positioned in the exact same
part of the retina: an impossible task for a continuously moving eye.
The second assumption would require that the shutter on a camera be
open for an instantaneous length of time. In reality, the shutter is open
for a fraction of a second, and the photosensitive film captures light
from that entire interval. If there is movement, optic flow will be
captured in the image (see Figure 16.5). Research demonstrates that even a
captured in the image (see Figure 16.5). Research demonstrates that even a
portion of the optic flow is sufficient to specify the motion of the self or of
objects in the scene (Bardy, Warren, & Kay, 1999). Why would perception
of motion in film be any different? Perceiving motion in film, then, is an
illusion in the truly Gibsonian sense of the word (Turvey, Shaw, Reed, &
Mace, 1981) in that motion is lawfully specified, and so we see motion.
The camera is sampling the optic flow that has to have been generated
lawfully. As Stoffregen (1997) explained:
If the optic array is a product of physical law, then its structure cannot
bear a paradoxical relation to physical reality, any more than physical
law can produce a paradox. Similarly, if films are created in accord-
ance with physical law, then they cannot structure optic arrays in ways
that are incompatible or paradoxical.
(p. 166)
Figure 16.5 (a) No blur specifies the ball (and camera) are still. (b) Global blur
(global optic flow) specifies the camera is moving. (c) Localized blur
(localized optic flow) specifies the ball is moving.
282 Julia J. C. Blau
The last part of this quote brings up an interesting point: Not all film or
television is produced in accordance with physical law. Digital animation,
computer generated imagery (CGI), stop-motion animation, and even the
humble cartoon are all presenting artificially-generated events that are not
necessarily subject to physical law.
Take stop-motion animation: small figures are made out of clay and
positioned in front of a still camera, which takes a photograph; the
figure is moved a tiny amount, and another photograph is taken. This
process repeats until the photographs can be strung together and presented
through a film projector. The clay figure appears to move!
As this technology developed, the filmmakers learned very quickly that
the movements of the clay figures appeared jerky, particularly when they
were supposed to be moving at higher speeds (Brostow & Essa, 2001). The
solution was to introduce artificial motion blur (see Figure 16.5), which
ecological psychologists would call optic flow. When filming a moving
subject, the slight trace of light left by that motion on film is lawfully gen-
erated and so motion is specified. When the motion is artificially generated,
however, the same cannot be said (the clay figure is still when its photo is
taken). To make the motion appear fluid, you have to specify that the char-
acter continues to move during the time that the shutter on the camera is
open, or in other words, you have to introduce blur (Andreev, 2010; Dai
& Wu, 2008; Kawagishi, Hatsuyama, & Kondo, 2003).
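The frame-blending idea can be sketched minimally. This is not Brostow and Essa's published method; it is a toy illustration in which a one-dimensional "frame" is a list of pixel intensities, the "figure" is a single bright pixel, and blur is synthesized by averaging the poses the figure would have occupied while a virtual shutter was open:

```python
# Sketch of artificial motion blur for stop-motion: the figure is still
# when photographed, so blur is synthesized by blending the poses it
# would have held at several sub-instants of a virtual shutter interval.

def render(position, width=12):
    """A 1-D frame: a single bright pixel at `position`."""
    frame = [0.0] * width
    frame[position] = 1.0
    return frame

def motion_blurred(pos_a, pos_b, width=12, samples=8):
    """Average the frames the camera would have seen while the virtual
    shutter was open and the figure moved from pos_a toward pos_b."""
    acc = [0.0] * width
    for i in range(samples):
        p = int(pos_a + (pos_b - pos_a) * i / samples)
        for x, v in enumerate(render(p, width)):
            acc[x] += v / samples
    return acc

sharp = render(3)               # figure photographed at rest
blurred = motion_blurred(3, 7)  # same figure, with synthetic flow toward pixel 7
print(blurred)                  # intensity smeared evenly across pixels 3..6
```

The sharp frame carries all of its intensity at one pixel, specifying a figure at rest; the blended frame smears that intensity along the path of motion, reintroducing the localized optic flow of Figure 16.5c.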
Editing
After the actors have completed their scene, but before the audience can
watch a finished film, editing takes place. Typically, scenes are filmed from
many different angles and at many different times, and the best bits are strung
together into a coherent narrative. Putting it so simply, however, may give
the false impression that this task is simple. Editing the movie often takes
far longer than filming it did (Singleton, 1991)—up to six months for films
without special effects and CGI, longer for those with them. Deciding what
to include, what to exclude, and even what is missing and needs to be re-
filmed is an incredibly difficult art.
Orientation to a Scene
The most fundamental question of editing is: Why does discontinuous film-
making (as opposed to continuous filmmaking, where the entire story is
captured in one shot) work at all? In our scene with the heroine and the
villain, the camera jumps instantaneously across a cut (see Figure 16.6).
Why does this not bother the audience? If we argue that the audience per-
ceives these events by virtue of the lawful relationship between the pre-
sented information and the events to be perceived, then the audience
should feel as though they have been instantaneously transported from
Figure 16.6 Typical editing structure. (a) Two shot (also known as an establishing
shot); (b) over-the-shoulder from the villain’s perspective; (c) over-the-
shoulder from the heroine’s perspective. The camera does not move
continuously from one position to the next, but rather makes a discon-
tinuous jump (at the edit point).
kneeling next to the villain, to standing next to the heroine—and yet, they
do not. In fact, viewers frequently fail to notice editing altogether (Ander-
son, 1996; Murch, 2001).
The ecological approach has an answer to this question: Editing works
because we are oriented in the Gibsonian sense. Murch (2001) is deeply
mistaken when he asserts that “[n]othing in our day-to-day experience
seems to prepare us for such a thing.” Orientation is fundamental to daily
perception and fairly clearly demonstrated in the way that films are edited.
The passage where Gibson (1979/2015) describes the process of orienta-
tion even sounds like it could be a page out of a film textbook:
When the vistas have been put in order by exploratory locomotion, the
invariant structure of the house, the town, or the whole habitat will be
apprehended. The hidden and the unhidden become one environment
… One is oriented to the environment. It is not so much having a
bird’s-eye view of the terrain as it is being everywhere at once. The
getting of a bird’s-eye view is helpful in being oriented, and the
explorer will look down from a high place if possible.
(p. 189)
The theory asserts that an observer can perceive the persisting layout
from other places than the one occupied at rest. This means that the
layout can be perceived from the position of another observer. The
common assertion, then, that “I can put myself in your position” has
meaning in ecological optics and is not a mere figure of speech. To
adopt the point of view of another person is not an advanced achieve-
ment of conceptual thought.
(Gibson, 1979/2015, p. 191)
In other words, because of the fully public nature of information, we are
capable of perceiving the events and affordances of other people (Creem-
Regehr, Gagnon, Geuss, & Stefanucci, 2013; Mark, 2007), even when
those people are presented in kinematic displays (e.g., Stoffregen, Gorday,
Sheng, & Flynn, 1999).
I agree with Stoffregen (1997; Chapter 15, in this volume) that what we
perceive when we watch a movie is information specifying a depiction. I
further argue that well-made films are a depiction of someone else’s
events—more specifically, a subset of someone else’s world line. A world
line (Kugler, Turvey, Carello, & Shaw, 1985) is a collection of events and
affordances that are meaningful to a specific person—a collection that,
taken together, constitutes the complete narrative of that person’s life.
At any given moment, an organism is surrounded by a broad array of
information specifying an impossibly large number of events and
affordances (see Shaw & Kinsella-Shaw, Chapter 6, in this volume;
Wagman, Chapter 8, in this volume). An organism does not attend to this
entire infinite set; rather, it selectively attends to those events directly
relevant to its current intentions and needs. As such, the selection of
attended-to events says something about the internal state of the person
attending to them (Shaw, McIntyre, & Mace, 1974). From this perspective,
event perception is a reflection of the organism (Bingham, 2000) or, for
these purposes, the character. If the villain—instead of paying attention to
the sword at his throat—is paying attention to the set of keys tied to the
heroine’s belt, that tells the audience something about the character’s state
of mind.
I assert that an immersive narrative is created by presenting a coherent
(and coherently nested) series of events and affordances, meaningful to a
particular entity, usually a character within the story. Admittedly, not all
films are told from the perspective of one person. When they are not, they
are often told from the perspective of a group of people or switch between
distinct subsets of characters. In such cases, the presented world line is the
events and affordances they share. Regardless, the events garner empathy
in precisely the same way that other people do in non-cinematic life: We
experience secondhand sadness in dramatic films and secondhand embar-
rassment in cringe comedy. We do not run from the monster that is, after
all, not threatening us, but we urge the character on screen to do so. We
understand that the events are theirs.
character’s world line (Figure 16.7). The editor (working with the director)
selects those events that will give us the desired insight into a character,
building empathy while advancing the plot (the narrative set).
The phrase “killing your babies” refers to times during the editing
process when the editor must cut out a beloved scene. This could be a
scene in which the acting was particularly good, or a shot was particularly
beautiful, or the dialogue was particularly amusing. These scenes will last
until the very last stage of editing, surviving cut after cut. But every time
the editor and director watch the movie, that scene brings the rhythm of
the movie to a screeching halt. They may not even be sure why: They just
know that it does, and so the scene is removed.
“Killing your babies” is an interesting phenomenon that offers insight
into the process of selecting which events to present in a film. From an
ecological perspective, I believe it is prompted by a need to fix incoherent
nesting. For example, an editor might cut a scene where the villain uses
technology that did not exist at the time of the movie (i.e., the scene vio-
lates the specified infinite set) or a scene between the captain of the hero-
ine’s guard and his wife (i.e., the scene violates the specified world line
because it is neither the heroine nor the villain’s event). Understanding the
last kind of incoherent nesting––violating the story set––is a more subtle
art, one that sets professional filmmakers apart from amateurs.
Which events need to be presented to specify a story? Trouble arises
whenever the filmmakers include an unnecessary event or exclude a neces-
sary one. The problem with the latter is obvious: If the audience does not
have enough information to understand the sequence of events, they will
be lost. The difficulty with the former is less obvious. After all, in non-
cinematic life, events irrelevant to our current needs are literally happening
all the time. Why would it be problematic in a cinematic context to show
too much?
The answer lies in the fact that the audience has an unspoken contract
with the filmmakers. The filmmaker promises to show all information
needed to specify the relevant events, and the audience promises to believe
that (Gibson, 1979/2015; Proffitt, 1977; Willats, 1990). Including an event
without specific relevance to that narrative leaves the audience frustrated.
Chekhov’s (1889) oft-used trope perhaps best exemplifies this concern:
“One must not put a loaded rifle on stage if it isn’t going to go off. It is
wrong to make promises you don’t mean to keep” (p. 163). Put another
way, if you include a gun and it does not go off, you have essentially put in
an open parenthesis (i.e., the beginning to an event) and never a closing
parenthesis (i.e., the end of the event). A gun makes this incompleteness
particularly obvious because it is such a salient object, but any event
without later relevance will have precisely the same effect on the audience.
If the event in question is not absolutely essential to the understanding of
the overarching narrative, it ought to be removed.
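The open-parenthesis analogy can even be written down as a toy "narrative linter." The scene list and event names below are invented for illustration; the point is only that an event opened and never closed is mechanically detectable, just as an unmatched parenthesis is:

```python
# Chekhov's-gun checker: flag any narrative event that is opened
# (a gun is shown) but never closed (it never goes off).
# Events and scenes below are invented for illustration.

def unresolved_events(scenes):
    """Given (action, event) pairs in story order, return events that are
    opened but never closed, in order of first appearance."""
    open_events = []
    for action, event in scenes:
        if action == "open":
            open_events.append(event)
        elif action == "close" and event in open_events:
            open_events.remove(event)
    return open_events

story = [
    ("open", "loaded rifle on the wall"),
    ("open", "keys on the heroine's belt"),
    ("close", "keys on the heroine's belt"),  # the villain steals them later
]
print(unresolved_events(story))  # → ['loaded rifle on the wall']
```

Any event left in the returned list is a promise the filmmakers made and did not keep.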
Sound Balancing
Once synchronized sound was perfected (in the late 1920s), an entire
branch of technical expertise was created: the sound editor (or later, sound
engineer or sound designer). These technicians quickly realized that simply
placing a microphone in the center of the room will not result in a sound-
track worth listening to. Films selectively present visual events to specify
the narrative; the same is true for auditory events.
In non-cinematic life, we selectively attend to relevant affordances speci-
fied by the auditory array. We will not notice the sound of the fan while
we are listening to our colleague speak. A microphone is not capable of
such selective attention, and a speaker will play the sound of the fan as
loudly as the sound of the words, making the latter inaudible. To replicate
the experience of viewing someone else’s events, sound must be balanced.
That is, it must have the most important sounds be the loudest and the
least important be the quietest.
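Balancing, so described, amounts to weighting each track by its narrative importance before summing. A minimal sketch, with invented tracks, sample values, and gain figures:

```python
# Sketch of sound balancing: scale each track by a gain reflecting its
# narrative importance, then sum the tracks sample-by-sample into one mix.
# Tracks, samples, and gains below are invented for illustration.

def balance_and_mix(tracks, gains):
    """Return the gain-weighted sum of equal-length sample lists."""
    length = len(next(iter(tracks.values())))
    mix = [0.0] * length
    for name, samples in tracks.items():
        for i, s in enumerate(samples):
            mix[i] += gains[name] * s
    return mix

tracks = {
    "dialogue": [0.5, -0.5, 0.5, -0.5],  # the colleague speaking
    "fan_hum":  [0.5, 0.5, -0.5, -0.5],  # equally loud at the microphone
}
gains = {"dialogue": 1.0, "fan_hum": 0.1}  # fan pushed far below the voice

print(balance_and_mix(tracks, gains))  # → [0.55, -0.45, 0.45, -0.55]
```

A microphone records both sources at equal level; the gains do the selective attending that our perceptual systems would otherwise do for free.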
Sound Effects
In addition to dialogue and music, the majority of films include sound
effects. Sound effects are sounds that are not recorded during filming but
are added later (in post-production). There are several different kinds of
sound effects, but here I would like to focus on background (sometimes
called ambience or walla) and Foley effects.
Background sound effects are the incidental sounds that surround us but
to which we do not attend. We attend to the sound of the villain pleading
for his life but not the sound of the birds chirping and the nearby guards
talking quietly to each other. Typically, the source of these sounds is either
not present (no birds are seen) or ambiguous (it is not clear which guards
are speaking). While proper sound balancing would ensure that these
sounds would be quiet, it would also make sure they were not absent. A
scene without background noise feels artificial, as if it were filmed in a
tin can.
Foley effects (named after Jack Foley, their inventor) are sounds that
link up with a specific event happening on the screen (e.g., a sword being
drawn from its scabbard, or rain falling). Oddly, not only are they not
recorded with the scene, they are often created by recording an entirely
different event. In actuality, a metal sword being drawn from a leather
scabbard makes very little noise. However, a film will feel wrong without a
noise to punctuate that moment, so a recording of a kitchen knife being
drawn across a hammer head will stand in. Carello, Wagman, and Turvey
(2005) suggest that Foley effects have some “shared properties” (p. 100)
with the sounds they are meant to mimic, and this may be so, but then
why would recording the actual sound (say, rain falling) create unrealistic
sounds (recorded rain sounds like popping), while recording an entirely
different event (salt falling on tinfoil) creates a more realistic sound?
Both background and Foley sound effects are somewhat puzzling from a
perceptual standpoint. Background effects are only noticeable when they
are absent; in a sense, they need to be there so they can disappear. Foley
effects often only sound realistic when they are recorded in a fully unrealis-
tic fashion. The psychology behind this is not well understood, and given
the profound effect they have on whether a film “works” or not, more
research is certainly warranted.
Conclusion
A great deal of work remains to be done to complete an ecological
accounting of film perception. I argue that the beginnings of that account
should start with the findings of filmmakers. Filmmakers are scientists,
whether they intend to be or not, and they have been performing experi-
ments on the human perceptual system for generations. As ecological psy-
chologists, we should at least attempt to understand their findings.
Whether in the context of perceiving depictions (see Stoffregen, Chapter
15, in this volume) or perceiving natural scenes, what “works” for film
offers insights into human perception—should we choose to listen.
Notes
1. Computer screen image presentation rates (called the refresh rate) are often far
higher, which is important for video game programmers and scientists trying to
present stimuli very quickly.
2. Examples from video games are far more common, likely because those allow
you to actually manipulate events. However, cut scenes (over which the player
has no control) almost exclusively switch to a third-person point of view.
3. I am using the literary terms “symbolism” and “symbols” quite broadly here.
The psychological principles described here would apply equally to any
related terms, so the distinctions are superfluous to this discussion. I chose to use
these terms (as opposed to terms used in psychological literature such as “sign”)
because these are the terms typically used in literature and film criticism.
4. This applies to other organisms as well, but filmmakers are not terribly con-
cerned with the ability of, say, a dog in this respect.
5. This was for a variety of reasons but mostly because of the difficulty in synchro-
nizing the different mechanisms being used for visual and auditory recording
and playback (Ulano, 2009) and quality of recording (Crafton, 1999).
References
Aanondsen, C. M., Van der Meer, A. L. H., Brubakk, A.-M., Evensen, K. A. I.,
Skranes, J. S., Myhr, G. E., & Van der Weel, F. R. (2007). Differentiating prospective control information for catching in at-risk and control adolescents.
Developmental Medicine and Child Neurology, 49, 112–116.
Adcock, C. (1990). James Turrell: The art of light and space. Berkeley, CA: Univer-
sity of California Press.
Adelson, E. H. (2001). On seeing stuff: The perception of materials by humans and
machines. In B. E. Rogowitz & T. N. Pappas (Eds.) Proceedings of the SPIE, Vol.
4299 Human Vision and Electronic Imaging VI (pp. 1–12). doi:10.1117/12.429489.
Adelson, E. H., & Bergen, J. R. (1991). The plenoptic function and the elements of
early vision. In M. Landy, & J. A. Movshon (Eds.), Computational models of
visual processing (pp. 3–20). Cambridge, MA: MIT Press.
Adolph, K. E. (2008). Learning to move. Current Directions in Psychological
Science, 17, 213–218.
Adolph, K. E., & Berger, S. E. (2015). Physical and motor development. In M. H.
Bornstein & M. E. Lamb (Eds.), Development science: An advanced textbook (7
ed., pp. 261–333). New York: Psychology Press.
Adolph, K. E., Eppler, M. A., & Gibson, E. J. (1993). Crawling versus walking
infants’ perception of affordances for locomotion over sloping surfaces. Child
Development, 64, 1158–1174.
Adolph, K. E., Eppler, M. A., Marin, L., Wiese, I. B., & Clearfield, M. W. (2000).
Exploration in the service of prospective control. Infant Behavior & Develop-
ment, 23, 441–460.
Adolph, K. E., & Hoch, J. E. (2019). Motor development: Embodied, embedded,
enculturated, and enabling. Annual Review of Psychology, 70, 26.1–26.24.
Adolph, K. E., & Joh, A. S. (2009). Multiple learning mechanisms in the develop-
ment of action. In A. Woodward & A. Needham (Eds.), Learning and the infant
mind (pp. 172–207). New York: Oxford University Press.
Adolph, K. E., & Kretch, K. S. (2012). Infants on the edge: Beyond the visual cliff.
In A. Slater & P. Quinn (Eds.), Developmental psychology: Revisiting the classic
studies. London: Sage Publications.
Adolph, K. E., & Kretch, K. S. (2015). Gibson’s theory of perceptual learning. In
H. Keller (Ed.), International encyclopedia of the social and behavioral sciences
(2 ed., vol. 10, pp. 127–134). New York: Elsevier.
Adolph, K. E., Kretch, K. S., & LoBue, V. (2014). Fear of heights in infants?
Current Directions in Psychological Science, 23, 60–66.
Adolph, K. E., & Robinson, S. R. (2015). Motor development. In L. Liben & U.
Muller (Eds.), Handbook of child psychology and developmental science (7th
ed., vol. 2 Cognitive Processes, pp. 113–157). New York: Wiley.
Agyei, S. B., Holth, M., Van der Weel, F. R., & Van der Meer, A. L. H. (2015).
Longitudinal study of perception of structured optic flow and random visual
motion in infants using high-density EEG. Developmental Science, 18, 436–451.
Agyei, S. B., Van der Weel, F. R., & Van der Meer, A. L. H. (2016a). Development
of visual motion perception for prospective control: Brain and behavioral studies
in infants. Frontiers in Psychology, 7, 100.
Agyei, S. B., Van der Weel, F. R., & Van der Meer, A. L. H. (2016b). Longitudinal
study of preterm and full-term infants: High density EEG analyses of cortical
activity in response to visual motion. Neuropsychologia, 84, 89–104.
Altenhoff, B. M., Pagano, C. C., Kil, I., & Burg, T. C. (2017). Haptic distance-to-
break in the presence of friction. Journal of Experimental Psychology: Human
Perception and Performance, 43, 231–244.
Andersen, G. J., & Braunstein, M. L. (1983). Dynamic occlusion in the perception
of rotation in depth. Perception & Psychophysics, 34, 356–362.
Anderson, J. D. (1996). The reality of illusion: An ecological approach to cognitive
film theory. Carbondale, IL: Southern Illinois University Press.
Anderson, J. D., & Anderson, B. (1993). The myth of persistence of vision revisited. Journal of Film and Video, 45, 3–12.
Anderson, R. C., Mather, J. A., & Wood, J. B. (2013). Octopus: The ocean’s intel-
ligent invertebrate. Portland, OR: Timber Press.
Andreev, D. (2010). Real-time frame rate up-conversion for video games: or how
to get from 30 to 60 fps for free. In ACM SIGGRAPH 2010 Talks, 16. ACM.
Aristotle. (1907). De anima. In R. D. Hicks (Ed.), Aristotle de anima with trans-
lation, introduction and notes. Cambridge, MA: Harvard University Press.
Atiyah, M. (2007). Duality in Mathematics and Physics, lecture notes from the
Institut de Matematica de la Universitat de Barcelona (p. 69). Barcelona: Univer-
sity of Barcelona.
Atkinson, J., & Braddick, O. (2007). Visual and visuocognitive development in
children born very prematurely. Progress in Brain Research, 164, 123–149.
Austad, H., & Van der Meer, A. L. H. (2007). Prospective dynamic balance control
in healthy children and adults. Experimental Brain Research, 181, 289–295.
Avant, L. (1965). Vision in the Ganzfeld. Psychological Bulletin, 64, 246–258.
Bache, C., Kopp, F., Springer, A., Stadler, W., Lindenberger, U., & Werkle-
Bergner, M. (2015). Rhythmic neural activity indicates the contribution of atten-
tion and memory to the processing of occluded movements in 10-month-old
infants. International Journal of Psychophysiology, 98, 201–212.
Baggs, E. (2015). A radical empiricist theory of speaking: Linguistic meaning
without conventions. Ecological Psychology, 27, 251–264.
Baggs, E. (2018). A psychology of the in between? Constructivist Foundations, 13,
395–397.
Baggs, E., & Chemero, A. (2018). Radical embodiment in two directions. Synthese.
doi:https://doi.org/10.1007/s11229-018-02020-9.
Bambach, S., Franchak, J. M., Crandall, D. J., & Yu, C. (2014). Detecting hands in
children’s egocentric views to understand embodied attention during social inter-
action. Proceedings of the 36th Annual Meeting of the Cognitive Science
Society, 36.
Bambach, S., Smith, L. B., Crandall, D. J., & Yu, C. (2016). Objects in the center:
How the infant’s body constrains infant scenes. Paper presented at the IEEE 6th
Joint International Conference on Development and Learning and Epigenetic
Robotics.
Barac-Cikoja, D., & Turvey, M. T. (1991). Perceiving aperture size by striking.
Journal of Experimental Psychology: Human Perception and Performance, 17,
330–346.
Bardy, B. G., Warren, W. H., & Kay, B. A. (1999). The role of central and periph-
eral vision in postural control during walking. Perception & Psychophysics, 61,
1356–1368.
Barker, R. G. (1968). Ecological Psychology. Stanford, CA: Stanford University
Press.
Barsam, R., & Monahan, D. (2004/2016). Looking at movies: An introduction to
film. (5th ed.). New York: W. W. Norton & Company.
Battro, A. M., Netto, S. P., & Rozestraten, R. J. A. (1976). Riemannian geometries
of variable curvature in visual space: Visual alleys, horopters, and triangles in big
open fields. Perception, 5, 9–23.
Baylor, D. A., Lamb, T. D., & Yau, K.-W. (1979). Responses of retinal rods to
single photons. Journal of Physiology, 288, 613–634.
Benedikt, M. (in preparation). Architecture beyond experience. University of Texas
at Austin.
Benedikt, M., & Burnham, C. (1985). Perceiving architectural space: From optic
arrays to isovists. In W. H. Warren, Jr. & R. E. Shaw (Eds.), Persistence and change: Proceedings of the first international conference on event perception (pp. 103–114). Mahwah, NJ: Lawrence Erlbaum Associates.
Bennett, K. B. (2017). Ecological interface design and system safety: One facet of
Rasmussen’s legacy. Applied Ergonomics, 59, 625–636.
Bennett, K. B., & Flach, J. M. (2011). Display and interface design: Subtle science,
exact art. London: Taylor & Francis.
Bentley, A. F. (1954/1975). The fiction of “retinal image.” In A. F. Bentley & S.
Ratner (Eds.), Inquiry into inquiries: Essays in social theory (pp. 268–285).
Boston, MA: Beacon Press.
Berger, C. C., Gonzalez-Franco, M., Ofek, E., & Hinckley, K. (2018). The uncanny
valley of haptics. Science Robotics, 3, eaar7010.
Bergson, H. (1889). Time and free will: An essay on the immediate data of con-
sciousness. New York: Dover Publications.
Bernstein, N. (1967). The coordination and regulation of movements. London: Per-
gamon.
Berry, M. J., II, & Meister, M. (1998). Refractoriness and neural precision. Journal of Neuroscience, 18, 2200–2211.
Berry, M. J., Warland, D. K., & Meister, M. (1997). The structure and precision of
retinal spike trains. Proceedings of the National Academy of Sciences of the
U.S.A., 94, 5411–5416.
Bertenthal, B. J., Longo, M. R., & Kenny, S. (2007). Phenomenal permanence and the
development of predictive tracking in infancy. Child Development, 78, 350–363.
Berthier, N. E., & Carrico, R. L. (2010). Visual information and object size in
infant reaching. Infant Behavior and Development, 33, 555–566.
Beusmans, J. M. H. (1998). Optic flow and the metric of the visual ground plane.
Vision Research, 38, 1153–1170.
Bian, Z., Braunstein, M. L., & Andersen, G. J. (2005). The ground dominance
effect in the perception of 3-D layout. Perception & Psychophysics, 67,
815–828.
Bingham, G. P. (1987). Dynamical systems and event perception: A working paper,
Part II. Perceiving Acting Workshop Review, 2, 4–7.
Bingham, G. P. (1988). Task-specific devices and the perceptual bottleneck. Human
Movement Science, 7, 225–264.
Bingham, G. P. (1995). Dynamics and the problem of visual event recognition. In
R. Port, & T. van Gelder (Eds.), Mind as motion: Dynamics, behavior and cog-
nition (pp. 403–448). Cambridge, MA: MIT Press.
Bingham, G. P. (2000). Events (like objects) are things, can have affordance prop-
erties, and can be perceived. Ecological Psychology, 12, 29–36.
Bingham, G. P., Bradley, A., Bailey, M., & Vinner, R. (2001). Accommodation,
occlusion, and disparity matching are used to guide reaching: A comparison of
actual versus virtual environments. Journal of Experimental Psychology: Human
Perception and Performance, 27, 1314–1344.
Bingham, G. P., Crowell, J. A., & Todd, J. T. (2004). Distortions of distance and
shape are not produced by a single continuous transformation of reach space.
Perception & Psychophysics, 66, 152–169.
Bingham, G. P., & Pagano, C. C. (1998). The necessity of a perception–action
approach to definite distance perception: Monocular distance perception to guide
reaching. Journal of Experimental Psychology: Human Perception and Perform-
ance, 24, 145–168.
Bingham, G. P., Pan, J. S., & Mon-Williams, M. A. (2014). Calibration is both
functional and anatomical. Journal of Experimental Psychology: Human Percep-
tion and Performance, 40, 61–70.
Bingham, G. P., Rosenblum, L., & Schmidt, R. (1995). Dynamics and the orienta-
tion of kinematic forms in visual event recognition. Journal of Experimental
Psychology: Human Perception and Performance, 21, 1473–1493.
Bingham, G. P., Schmidt, R. C., & Rosenblum, L. D. (1989). Hefting for a
maximum distance throw: A smart perceptual mechanism. Journal of Experi-
mental Psychology: Human Perception and Performance, 15, 507–528.
Bingham, G. P., & Stassen, M. G. (1994). Monocular egocentric distance informa-
tion generated by head movement. Ecological Psychology, 6, 219–238.
Bingham, G. P., & Wickelgren, E. (2008). Events and actions as dynamically
molded spatiotemporal objects: A critique of the motor theory of biological
motion perception. In T. Shipley & J. Zacks (Eds.), Understanding events:
From perception to action (pp. 255–286). New York: Oxford University
Press.
Birmingham, E., Bischof, W. F., & Kingstone, A. (2008). Gaze selection in complex
social scenes. Visual Cognition, 16, 341–355.
Bjork, D. W. (1983). The compromised scientist: William James in the develop-
ment of American psychology. New York: Columbia University Press.
Blau, J. J. C., Petrusz, S., & Carello, C. (2013). Fractal structure of event segmentation: Lessons from reel and real events. Ecological Psychology, 25, 81–101.
Bodenheimer, B., Meng, J., Wu, H., Narasimham, G., Rump, B., McNamara, T.
P., … Rieser, J. J. (2007). Distance estimation in virtual and real environments
using bisection. Paper presented at the 4th Symposium on Applied Perception in
Graphics and Visualization.
Boeddeker, N., Dittmar, L., Stürzl, W., & Egelhaaf, M. (2010). The fine structure
of honeybee head and body yaw movements in a homing task. Proceedings of
the Royal Society of London B: Biological Sciences, 277, 1899–1906.
Boppré, M., Vane-Wright, R. I., & Wickler, W. (2017). A hypothesis to explain
accuracy of wasp resemblances. Ecology and Evolution, 7, 73–81.
Boring, E. G. (1942). Sensation and perception in the history of experimental
psychology. New York: Appleton-Century-Crofts, Inc.
Borji, A., & Itti, L. (2013). State-of-the-art in visual attention modeling. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 35, 185–207.
Borst, C., Flach, J. M., & Ellerbroek, J. (2015). Beyond ecological interface design:
Lessons from concerns and misconceptions. IEEE: Transactions on Systems,
Man, and Cybernetics, 45, 164–175.
Bourgeois, J., & Coello, Y. (2012). Effect of visuomotor calibration and uncer-
tainty on the perception of peripersonal space. Attention, Perception, & Psycho-
physics, 74, 1268–1283.
Boynton, R. M. (1974). The visual system: Environmental information. In E. C.
Carterette & M. Friedman (Eds.). Handbook of perception (pp. 285–308). New
York: Academic Press.
Branch, G. M. (1979). Aggression by limpets against invertebrate predators.
Animal Behaviour, 27, 408–410.
Braunstein, M. L., Andersen, G. J., & Riefer, D. M. (1982). The use of occlusion to
resolve ambiguity in parallel projections. Perception & Psychophysics, 31, 261–267.
Bremner, J. G., Johnson, S. P., Slater, A. M., Mason, U., Foster, K., Cheshire, A.,
& Spring, J. (2005). Conditions for young children’s perception of object trajectories. Child Development, 76, 1029–1043.
Bremner, J. G., Slater, A. M., & Johnson, S. P. (2015). Perception of object persist-
ence: The origins of object permanence in infancy. Child Development Perspec-
tives, 9, 7–13.
Bridson, R. (2016). Fluid simulation for computer graphics (2nd ed.). New York:
CRC Press.
Bridson, R., & Batty, C. (2010). Computational physics in film. Science, 330,
1756–1757.
Brostow, G. J., & Essa, I. (2001). Image-based motion blur for stop motion anima-
tion. In Proceedings of the 28th Annual Conference on Computer Graphics and
Interactive Techniques (pp. 561–566). ACM.
Bruggeman, H., Zosh, W., & Warren, W. H. (2007). Optic flow drives human
visuo-locomotor adaptation. Current Biology, 17, 2035–2040.
Bruineberg, J., Chemero, A., & Rietveld, E. (2018). General ecological information
supports engagement with affordances for ‘higher’ cognition. Synthese, 1–21.
Bruineberg, J., & Rietveld, E. (2014). Self-organization, free energy minimization,
and optimal grip on a field of affordances. Frontiers in Human Neuroscience,
8, 599.
Brunswik, E. (1956). Perception and the representative design of psychological
experiments. Berkeley, CA: University of California Press.
Bullmore, E., & Sporns, O. (2009). Complex brain networks: Graph theoretical
analysis of structural and functional systems. Nature Reviews Neuroscience, 10,
186–198.
Burns, M., & Hajdukiewicz, J. R. (2004). Ecological interface design. New York:
CRC Press.
296 References
Burton, G. (1992). Nonvisual judgment of the crossability of path gaps. Journal of
Experimental Psychology: Human Perception and Performance, 18, 698–713.
Burton, G. (1993). Non-neural extensions of haptic sensitivity. Ecological Psych-
ology, 5, 105–124.
Burton, G., & Cyr, J. (2008). Movements of the cane prior to locomotion
judgments: The informer fallacy and the training fallacy versus the role of
expectation. In O. Y. Chebykin, G. Z. Bedny, & W. Karwowski (Eds.), Ergonomics
and psychology: Developments in theory and practice. New York: CRC Press.
Bushnell, E. W., & Boudreau, J. P. (1993). Motor development and the mind: The
potential role of motor abilities as a determinant of aspects of perceptual devel-
opment. Child Development, 64, 1005–1021.
Buswell, G. T. (1935). How people look at pictures: A study of the psychology of
perception in art. Chicago: University of Chicago Press.
Cabrera, F., Sanabria, F., Jiménez, Á. A., & Covarrubias, P. (2013). An affordance
analysis of unconditioned lever pressing in rats and hamsters. Behavioural Pro-
cesses, 92, 36–46.
Campbell, J. (1817). On vision. Annals of Philosophy, 10, 17–29.
Campos, J. J., Anderson, D. I., Barbu-Roth, M. A., Hubbard, E. M., Hertenstein,
M. J., & Witherington, D. (2000). Travel broadens the mind. Infancy, 1,
149–219.
Cardinali, L., Frassinetti, F., Brozzoli, C., Urquizar, C., Roy, A. C., & Farne, A.
(2009). Tool-use induces morphological updating of the body schema. Current
Biology, 19, R478–R479.
Carello, C., Anderson, K. L., & Kunkler-Peck, A. J. (1998). Perception of object
length by sound. Psychological Science, 9, 211–214.
Carello, C., Fitzpatrick, P., Domaniewicz, I., Chan, T., & Turvey, M. T. (1992).
Effortful touch with minimal movement. Journal of Experimental Psychology:
Human Perception and Performance, 18, 290–302.
Carello, C., Fitzpatrick, P., & Turvey, M. T. (1992). Haptic probing: Perceiving
the length of a probe and the distance of a surface probed. Perception & Psycho-
physics, 51, 580–598.
Carello, C., Grosofsky, A., Reichel, F. D., Solomon, H. Y., & Turvey, M. T.
(1989). Visually perceiving what is reachable. Ecological Psychology, 1, 27–54.
Carello, C., & Turvey, M. T. (2017). Useful dimensions of haptic perception: 50
years after the senses considered as perceptual systems. Ecological Psychology,
29, 95–121.
Carello, C., Wagman, J. B., & Turvey, M. T. (2005). Acoustic specification of
object properties. In J. Anderson & B. Anderson (Eds.), Moving image theory:
Ecological considerations (pp. 79–104). Carbondale, IL: Southern Illinois Univer-
sity Press.
Cassirer, E. (1944). The concept of group and the theory of perception. Philosophy
and Phenomenological Research: A Quarterly Journal, 5, 1–36.
Cesari, P., Formenti, F., & Olivato, P. (2003). A common perceptual parameter for
stair climbing for children, young and old adults. Human Movement Science, 22,
111–124.
Cesari, P., & Newell, K. M. (2000). Body-scaled transitions in human grip configu-
rations. Journal of Experimental Psychology: Human Perception and Perform-
ance, 26, 1657–1668.
Chapman, S. (1968). Catching a baseball. American Journal of Physics, 36, 868–870.
Chebat, D. R., Maidenbaum, S., & Amedi, A. (2015). Navigation using sensory
substitution in real and virtual mazes. PloS ONE, 10, e0126307.
Chebat, D. R., Rainville, C., Kupers, R., & Ptito, M. (2007). Tactile–‘visual’ acuity
of the tongue in early blind individuals. Neuroreport, 18, 1901–1904.
Chekhov, A. (1889). Personal letter to playwright Aleksandr Semenovich Lazarev.
In L. Goldberg (1976). Russian literature in the nineteenth century: Essays. Jeru-
salem: Magnes Press, Hebrew University.
Chemero, A. (2003). An outline of a theory of affordances. Ecological Psychology,
15, 181–195.
Chemero, A. (2009). Radical embodied cognitive science. Cambridge, MA: MIT
Press.
Chemero, A., & Turvey, M. T. (2007). Complexity, hypersets, and the ecological
perspective on perception-action. Biological Theory, 2, 23–36.
Chen, Y. P., Keen, R., Rosander, K., & von Hofsten, C. (2010). Movement plan-
ning reflects skill level and age changes in toddlers. Child Development, 81,
1846–1858.
Chippindale, C., & Taçon, P. S. C. (1998). The archaeology of rock-art. New York:
Cambridge University Press.
Chrastil, E. R., & Warren, W. H. (2014). Does the human odometer use an extrin-
sic or intrinsic metric? Attention, Perception, & Psychophysics, 76, 230–246.
Claxton, L. J., Keen, R., & McCarty, M. E. (2003). Evidence of motor planning in
infant reaching behavior. Psychological Science, 14, 354–356.
Coats, R. O., Pan, J. S., & Bingham, G. P. (2014). Perturbation of perceptual units
reveals dominance hierarchy in cross calibration. Journal of Experimental Psych-
ology: Human Perception and Performance, 40, 328–341.
Cole, W. G., Chan, G. L. Y., Vereijken, B., & Adolph, K. E. (2013). Perceiving
affordances for different motor skills. Experimental Brain Research, 225, 309–319.
Comalli, D. M., Franchak, J., Char, A., & Adolph, K. (2013). Ledge and wedge:
Younger and older adults’ perception of action possibilities. Experimental Brain
Research, 228, 183–192.
Comalli, D. M., Keen, R., Abraham, E., Foo, V. J., Lee, M. H., & Adolph, K. E.
(2016). The development of tool use: Planning for end-state comfort. Develop-
mental Psychology, 52, 1878–1892.
Comalli, D. M., Persand, D., & Adolph, K. E. (2017). Motor decisions are not
black and white: Selecting actions in the “gray zone”. Experimental Brain
Research, 235, 1793–1807.
Cook, H. E., Hassebrock, J. A., & Smart, L. J. (2018). Responding to other peo-
ple’s posture: Visually induced motion sickness from naturally generated optic
flow. Frontiers in Psychology, 9(1901), 1–9. doi:10.3389/fpsyg.2018.01901.
Cordovil, R., Santos, C., & Barreiros, J. (2012). Perceiving children’s behavior and
reaching limits in a risk environment. Journal of Experimental Child Psychology,
111, 319–330.
Costall, A. (1995). Socializing affordances. Theory & Psychology, 5, 467–481.
Costall, A. (2012). Canonical affordances in context. AVANT. Pismo Awangardy
Filozoficzno-Naukowej, 85–93.
Cottingham, J., Stoothoff, R., & Murdoch, D. (1985). The philosophical writings
of Descartes, Vol. 1: Optics. Cambridge: Cambridge University Press.
Crafton, D. (1999). The talkies: American cinema’s transition to sound, 1926–1931
(Vol. 4). Oakland, CA: University of California Press.
Creem-Regehr, S. H., Gagnon, K. T., Geuss, M. N., & Stefanucci, J. K. (2013).
Relating spatial perspective taking to the perception of other’s affordances: Pro-
viding a foundation for predicting the future behavior of others. Frontiers in
Human Neuroscience, 7, 596.
Crombie, A. C. (1996). Science, art and nature in medieval and modern thought.
London: Hambledon Press.
Cutting, J. E. (1986). Perception with an eye for motion. Cambridge, MA: MIT
Press.
Cutting, J. E. (1988). Affine distortions of pictorial space: Some predictions for
Goldstein (1987) that La Gournerie (1859) might have made. Journal of Experi-
mental Psychology: Human Perception & Performance, 14, 305–311.
d’Abro, A. (1951). The rise of the new physics: Its mathematical and physical the-
ories (2nd ed.). New York: Dover.
Dai, S., & Wu, Y. (2008). Motion from blur. In 2008 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR) (pp. 1–8). IEEE.
Dainoff, M. J. (2008). Ecological ergonomics. In O. Y. Chebykin, G. Z. Bedny, &
W. Karwowski (Eds.), Ergonomics and psychology: Developments in theory and
practice (pp. 1897–1900). Boca Raton, FL: CRC Press.
Dainoff, M. J., & Mark, L. S. (2001). Affordances. In W. Karwowski (Ed.),
International encyclopedia of human factors (Vol. II, pp. 3–28). London:
Routledge.
Dainoff, M., & Wagman, J. B. (2004). Implications of dynamic touch for human
factors/ergonomics: Contributions from ecological psychology. In Proceedings of
the Human Factors and Ergonomics Society Annual Meeting, 48, 1319–1320.
Davies, R. (1985). What’s bred in the bone. New York: Viking.
Day, B., Ebrahimi, E., Hartman, L. S., Pagano, C. C., & Babu, S. V. (2017). Cali-
bration to tool use during visually-guided reaching. Acta Psychologica, 181,
27–39.
Day, B., Ebrahimi, E., Hartman, L. S., Pagano, C. C., Robb, A. C., & Babu, S. V.
(2019). Examining the effects of altered avatars on perception-action in virtual
reality. Journal of Experimental Psychology: Applied, 25(1). doi:10.1037/
xap0000192.
Delagnes, A., & Roche, H. (2005). Late Pliocene hominid knapping skills: The case
of Lokalalei 2C, West Turkana, Kenya. Journal of Human Evolution, 48,
435–472.
Dennett, D. (2017, January 1). What scientific term or concept ought to be more
widely known? Edge. Retrieved from www.edge.org/response-detail/27002.
De Wit, M. M., De Vries, S., Van der Kamp, J., & Withagen, R. (2017).
Affordances and neuroscience: Steps towards a successful marriage. Neuro-
science & Biobehavioral Reviews, 80, 622–629.
Diaz, G., Cooper, J., Kit, D., & Hayhoe, M. M. (2013). Real-time recording and
classification of eye movements in an immersive virtual environment. Journal of
Vision, 13, 1–14.
Dibble, H. L., & Pelcin, A. (1995). The effect of hammer mass and velocity on
flake mass. Journal of Archaeological Science, 22, 429–439.
Dibble, H. L., & Režek, Ž. (2009). Introducing a new experimental design for con-
trolled studies of flake formation: Results for exterior platform angle, platform
depth, angle of blow, velocity, and force. Journal of Archaeological Science, 36,
1945–1954.
Dickinson, M. H., Farley, C. T., Full, R. J., Koehl, M. A. R., Kram, R., & Lehman,
S. (2000). How animals move: An integrative view. Science, 288, 100–106.
Dijkstra, T. M. H., Schöner, G., & Gielen, C. C. A. M. (1994). Temporal stability
of the action-perception cycle for postural control in a moving visual environ-
ment. Experimental Brain Research, 97, 477–486.
Di Paolo, E., Buhrmann, T., & Barandiaran, X. (2017). Sensorimotor life: An enac-
tive proposal. Oxford: Oxford University Press.
Dixon, M. W., Wraga, M. J., Proffitt, D. R., & Williams, G. C. (2000). Eye
height scaling of absolute size in immersive and nonimmersive displays. Journal
of Experimental Psychology: Human Perception and Performance, 26,
582–593.
Dolezal, H. (1982). Living in a world transformed: Perceptual and performatory
adaptation to visual distortion. Cambridge, MA: Academic Press.
Donald, M. (1991). The origins of modern mind: Three stages in the evolution of
culture and cognition. Cambridge, MA: Harvard University Press.
Dotov, D. G., de Wit, M. M., & Nie, L. (2012). Understanding affordances:
History and contemporary development of Gibson’s central concept. AVANT.
Pismo Awangardy Filozoficzno-Naukowej, 28–39.
Dotov, D. G., Nie, L., & Chemero, A. (2010). A demonstration of the transition
from ready-to-hand to unready-to-hand. PLoS ONE, 5, e9433.
Durgin, F. H., & Li, Z. (2011). Perceptual scale expansion: An efficient angular
coding strategy for locomotor space. Attention, Perception, & Psychophysics,
73, 1856–1870.
Effken, J. A. (2006). Improving clinical decision making through ecological inter-
faces. Ecological Psychology, 18, 283–318.
Effken, J. A., Kim, N.-G., & Shaw, R. E. (1997). Making the constraints visible:
Testing the ecological approach to interface design. Ergonomics, 40, 1–27.
Effken, J. A., Loeb, R. G., Kang, Y., & Lin, Z-C. (2008). Clinical information dis-
plays to improve ICU outcomes. International Journal of Medical Informatics,
77, 765–777.
Einhäuser, W., Schumann, F., Bardins, S., Bartl, K., Böning, G., Schneider, E., &
König, P. (2007). Human eye-head co-ordination in natural exploration.
Network: Computation in Neural Systems, 18, 267–297.
Einstein, A. (1905/1923). Zur Elektrodynamik bewegter Körper [On the electro-
dynamics of moving bodies]. Annalen der Physik, 17, 891–921. (English trans-
lation by W. Perrett and G. B. Jeffery, 1923.)
Epstein, W. (1966). Perceived depth as a function of relative height under three
background conditions. Journal of Experimental Psychology, 72, 335–338.
Fagard, J., Spelke, E. S., & von Hofsten, C. (2009). Reaching and grasping a
moving object in 6-, 8-, and 10-month-old infants: Laterality and performance.
Infant Behavior and Development, 32, 137–146.
Fajen, B. R. (2005a). Calibration, information, and control strategies for braking
to avoid a collision. Journal of Experimental Psychology: Human Perception and
Performance, 31, 480–501.
Fajen, B. R. (2005b). Perceiving possibilities for action: On the necessity of cali-
bration and perceptual learning for the visual guidance of action. Perception, 34,
717–740.
Fajen, B. R. (2007). Affordance-based control of visually guided action. Ecological
Psychology, 19, 383–410.
Fajen, B. R. (2008). Perceptual learning and the visual control of braking. Percep-
tion & Psychophysics, 70, 1117–1129.
Fajen, B. R., Riley, M. A., & Turvey, M. T. (2009). Information, affordances, and
the control of action in sport. International Journal of Sport Psychology, 40,
79–107.
Fajen, B. R., & Warren, W. H. (2003). Behavioral dynamics of steering, obstacle
avoidance, and route selection. Journal of Experimental Psychology: Human
Perception and Performance, 29, 343–362. doi:10.1037/0096-1523.29.2.343.
Fajen, B. R., & Warren, W. H. (2004). Visual guidance of intercepting a moving
target on foot. Perception, 33, 689–715. doi:10.1068/p5236.
Favela, L. H., Riley, M. A., Shockley, K., & Chemero, A. (2018). Perceptually
equivalent judgments made visually and by haptic sensory-substitution devices.
Ecological Psychology. https://doi.org/10.1080/10407413.2018.1473712.
Feria, C. S., Braunstein, M. L., & Andersen, G. J. (2003). Judging distance across
texture discontinuities. Perception, 32, 1423–1440.
Ferwerda, J. A. (2001). Elements of early vision for computer graphics. IEEE Com-
puter Graphics and Applications, 21, 22–33.
Feynman, R. (1985). Surely you’re joking Mr. Feynman! Adventures of a curious
character. New York: W. W. Norton & Company.
Field, D. J. (1987). Relations between the statistics of natural images and the
response properties of cortical cells. Journal of the Optical Society of America A,
4, 2379–2394.
Field, G. D., & Rieke, F. (2002). Mechanisms regulating variability of the single
photon responses of mammalian rod photoreceptors. Neuron, 35, 733–747.
Fitch, H. L., Tuller, B., & Turvey, M. T. (1982). The Bernstein perspective: III.
Tuning of coordinative structures with special reference to perception. In J. A. S.
Kelso (Ed.), Human motor behavior (pp. 271–281). New York: Routledge.
Fitzpatrick, P., Carello, C., Schmidt, R. C., & Corey, D. (1994). Haptic and visual
perception of an affordance for upright posture. Ecological Psychology, 6,
265–287.
Flach, J. M. (1990). The ecology of human-machine systems I: Introduction. Eco-
logical Psychology, 2, 191–205.
Flach, J. M., Hancock, P., Caird, J., & Vicente, K. (1995). Global perspectives on
the ecology of human-machine systems. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Flach, J. M., Reynolds, P., Cao, C., & Staffell, T. (2017). Engineering representa-
tions to support evidence-based clinical practice. Proceedings of the 2017 Inter-
national Symposium on Human Factors and Ergonomics in Health Care, 6,
66–73.
Fodor, J. A. (1981). Representations: Philosophical essays on the foundations of
cognitive science. Cambridge, MA: The MIT Press.
Foley, J. M., Ribeiro-Filho, N. P., & DaSilva, J. A. (2004). Visual perception of
extent and the geometry of visual space. Vision Research, 44, 147–156.
Foulsham, T., Chapman, C., Nasiopoulos, E., & Kingstone, A. (2014). Top-down
and bottom-up aspects of active search in a real-world environment. Canadian
Journal of Experimental Psychology, 68, 8–19.
Foulsham, T., Walker, E., & Kingstone, A. (2011). The where, what, and when of
gaze allocation in the lab and the natural environment. Vision Research, 51,
1920–1931.
Franchak, J. M. (2017). Using head-mounted eye tracking to study development. In
B. Hopkins, E. Geangu, & S. Linkenauger (Eds.), The Cambridge encyclopedia
of child development (2nd ed., pp. 113–116). Cambridge: Cambridge University
Press.
Franchak, J. M. (2019). Changing opportunities for learning in everyday life: Infant
body position over the first year. Infancy, 24, 187–209.
Franchak, J. M., & Adolph, K. E. (2010). Visually guided navigation: Head-
mounted eye-tracking of natural locomotion in children and adults. Vision
Research, 50, 2766–2774.
Franchak, J. M., & Adolph, K. E. (2014). Gut estimates: Pregnant women adapt to
changing possibilities for squeezing through doorways. Attention, Perception, &
Psychophysics, 76, 460–472.
Franchak, J. M., Celano, E. C., & Adolph, K. E. (2012). Perception of passage
through openings depends on the size of the body in motion. Experimental Brain
Research, 223, 301–310.
Franchak, J. M., Heeger, D. J., Hasson, U., & Adolph, K. E. (2016). Free-viewing
gaze behavior in infants and adults. Infancy, 21, 262–287.
Franchak, J. M., Kretch, K. S., & Adolph, K. E. (2018). See and be seen: Infant-
caregiver social looking during locomotor free play. Developmental Science, 21,
e12626.
Franchak, J. M., Kretch, K. S., Soska, K. C., & Adolph, K. E. (2011). Head-
mounted eye tracking: A new method to describe infant looking. Child Develop-
ment, 82, 1738–1750.
Franchak, J. M., Kretch, K. S., Soska, K. C., Babcock, J. S., & Adolph, K. E.
(2010). Head-mounted eye-tracking in infants’ natural interactions: A new
method. Paper presented at the Proceedings of the 2010 Symposium on Eye
Tracking Research and Applications, Austin, TX.
Franchak, J. M., Smith, L. B., & Yu, C. (under review). Developmental changes in
how head orientation structures infants’ visual experiences.
Franchak, J. M., van der Zalm, D. J., & Adolph, K. E. (2010). Learning by doing: Action
performance facilitates affordance perception. Vision Research, 50, 2758–2765.
Franchak, J. M., & Yu, C. (2015). Visual-motor coordination in natural reaching
of young children and adults. Proceedings of the 37th Annual Meeting of the
Cognitive Science Society. Austin, TX: Cognitive Science Society.
Frank, M. C., Simmons, K., Yurovsky, D., & Pusiol, G. (2013). Developmental
and postural changes in children’s visual access to faces. In M. Knauff, M.
Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual
Meeting of the Cognitive Science Society (pp. 454–459). Austin, TX: Cognitive
Science Society.
Frank, M. C., Vul, E., & Johnson, S. P. (2009). Development of infants’ attention
to faces during the first year. Cognition, 110, 160–170.
Fukusima, S. S., Loomis, J. M., & Da Silva, J. A. (1997). Visual perception of ego-
centric distance as assessed by triangulation. Journal of Experimental Psych-
ology: Human Perception and Performance, 23, 86–100.
Fultot, M., Nie, L., & Carello, C. (2016). Perception-action mutuality obviates
mental construction. Constructivist Foundations, 11, 298–307.
Gal, O., & Chen-Morris, R. (2010). Baroque optics and the disappearance of the
observer: From Kepler’s optics to Descartes’ doubt. Journal of the History of
Ideas, 71, 191–217.
Geduld, H. M. (1975). The birth of the talkies: From Edison to Jolson. Blooming-
ton, IN: Indiana University Press.
Geisler, W. S. (2008). Visual perception and the statistical properties of natural
scenes. Annual Review of Psychology, 59, 167–192.
Gelfand, I. M. (1989). Two archetypes in the psychology of man: The lecture notes
for Kyoto Prize. Retrieved from www.israelmgelfand.com/talks/two_archetypes.
pdf.
Gesell, A. L. (1928). Infancy and human growth. New York: Macmillan.
Gibson, E. J. (1963). Perceptual learning. Annual Review of Psychology, 14,
29–56.
Gibson, E. J. (1969). Principles of perceptual learning and development. New
York: Appleton-Century-Crofts.
Gibson, E. J. (1988). Exploratory behavior in the development of perceiving, acting
and the acquiring of knowledge. Annual Review of Psychology, 39, 1–41.
Gibson, E. J. (1991). An odyssey in learning and perception. Cambridge, MA: MIT
Press.
Gibson, E. J. (1994). An odyssey in learning and perception. Cambridge, MA: MIT
Press.
Gibson, E. J., & Pick, A. D. (2000). An ecological approach to perceptual learning
and development. New York: Oxford University Press.
Gibson, E. J., Riccio, G., Schmuckler, M. A., Stoffregen, T. A., Rosenberg, D., &
Taormina, J. (1987). Detection of the traversability of surfaces by crawling and
walking infants. Journal of Experimental Psychology: Human Perception and
Performance, 13, 533–544.
Gibson, E. J., & Walk, R. D. (1960). The “visual cliff”. Scientific American, 202,
64–71.
Gibson, J. J. (Ed.) (1947). Motion picture testing and research. Report No. 7. Army
Air Forces Aviation Psychology Program Research Reports.
Gibson, J. J. (1950a). Perception of the visual world. Boston: Houghton Mifflin.
Gibson, J. J. (1950b). The perception of visual surfaces. The American Journal of
Psychology, 63, 367–384.
Gibson, J. J. (1951). What is a form? Psychological Review, 58, 403–412.
Gibson, J. J. (1954). The visual perception of objective motion and subjective
movement. Psychological Review, 61, 304–314.
Gibson, J. J. (1955). Optical motions and transformations as stimuli for visual per-
ception. State College, PA: Psychological Cinema Register.
Gibson, J. J. (1957). Optical motions and transformations as stimuli for visual
perception. Psychological Review, 64, 288–294.
Gibson, J. J. (1958). Visually controlled locomotion and visual orientation in
animals. British Journal of Psychology, 49, 182–194.
Gibson, J. J. (1959). Perception as a function of stimulation. In S. Koch (Ed.),
Psychology: A study of a science (Vol. I, pp. 456–473). New York: McGraw-
Hill.
Gibson, J. J. (1960a). The concept of the stimulus in psychology. American Psy-
chologist, 15, 694–703.
Gibson, J. J. (1960b). The information contained in light. Acta Psychologica, 17,
23–30.
Gibson, J. J. (1961a). The contribution of experimental psychology to the
formulation of the problem of safety: A brief for basic research. In L. W.
Mayo (Ed.),
Behavioral approaches to accident research. New York: Association for the Aid
of Crippled Children.
Gibson, J. J. (1961b). Ecological optics. Vision Research, 1, 253–262.
Gibson, J. J. (1963). The useful dimensions of sensitivity. American Psychologist,
18, 1–15.
Gibson, J. J. (1966a). The senses considered as perceptual systems. Boston, MA:
Houghton Mifflin.
Gibson, J. J. (1966b). The problem of temporal order in stimulation and percep-
tion. The Journal of Psychology, 62, 141–149.
Gibson, J. J. (1967a). New reasons for realism. Synthese, 17, 162–172.
Gibson, J. J. (1967b). For new book (“Useful Vision”?). Unpublished manuscript,
James J. Gibson papers, #14–23–1832. Division of Rare and Manuscript Collec-
tions, Cornell University Library.
Gibson, J. J. (1967c). Outline of an “uncompromising” new book on Vision (Note
for the Introduction). Unpublished manuscript, James J. Gibson papers,
#14–23–1832. Division of Rare and Manuscript Collections, Cornell University
Library.
Gibson, J. J. (1969). The change from visible to invisible: A study of optical trans-
itions. State College, PA: Psychological Cinema Register.
Gibson, J. J. (1971a). Everyday visual perception (tentative outline). Unpublished
manuscript, James J. Gibson papers, #14–23–1832. Division of Rare and Manu-
script Collections, Cornell University Library.
Gibson, J. J. (1971b). Do we ever see light? Unpublished manuscript, retrieved
from www.trincoll.edu/depts/ecopsyc/perils/.
Gibson, J. J. (1971c). On the distinction between objects and substances. Unpub-
lished manuscript, retrieved from www.trincoll.edu/depts/ecopsyc/perils/.
Gibson, J. J. (1971d). A preliminary list of postulates for an ecological psychology.
Unpublished manuscript, James J. Gibson papers, #14–23–1832. Division of
Rare and Manuscript Collections, Cornell University Library.
Gibson, J. J. (1971e). The environment to be perceived: Part I of an ecological
approach to visual perception. Unpublished manuscript, James J. Gibson papers,
#14–23–1832. Division of Rare and Manuscript Collections, Cornell University
Library.
Gibson, J. J. (1971f). Nov 4 discussion of Part I of MS. Unpublished manuscript,
James J. Gibson papers, #14–23–1832. Division of Rare and Manuscript Collec-
tions, Cornell University Library.
Gibson, J. J. (1971g). The information available in pictures. Leonardo, 4, 27–35.
Gibson, J. J. (1973). The consequence of assuming that there is a persisting
environmental layout along with a changing environmental layout. Unpublished
manuscript, James J. Gibson papers, #14–23–1832. Division of Rare and Manu-
script Collections, Cornell University Library.
Gibson, J. J. (1974). A note on ecological optics. In E. C. Carterette & M.
Friedman (Eds.), Handbook of perception (pp. 309–312). New York: Academic
Press.
Gibson, J. J. (1975). Events are perceivable but time is not. In J. T. Fraser & N. Law-
rence (Eds.), The study of time II (pp. 295–301). New York: Springer-Verlag.
Gibson, J. J. (1977). A tentative formula for the perception of persistence and
awareness of reality. Unpublished manuscript, December 1977. Available at
https://commons.trincoll.edu/purpleperils/1972-1979/a-tentative-formula-
for-the-perception-of-persistence-and-awareness-of-reality/.
Gibson, J. J. (1978a). The theory of further scrutiny. Unpublished manuscript,
James J. Gibson papers, #14–23–1832. Division of Rare and Manuscript Collec-
tions, Cornell University Library.
Gibson, J. J. (1978b). The perceiving of hidden surfaces. In P. Machamer & R.
Turnbull (Eds.), Studies in perception. Columbus, OH: Ohio State University.
Gibson, J. J. (1979/2015). The ecological approach to visual perception: Classic
edition. New York: Psychology Press. (Originally published in 1979.)
Gibson, J. J. (1982a). Notes on affordances. In E. Reed & R. Jones (Eds.), Reasons
for realism: Selected essays of James J. Gibson (pp. 401–418). Hillsdale, NJ:
Lawrence Erlbaum Associates.
Gibson, J. J. (1982b). The myth of passive perception: A reply to Richards. In E.
Reed & R. Jones (Eds.), Reasons for realism: Selected essays of James J. Gibson
(pp. 397–400). Hillsdale, NJ: Lawrence Erlbaum Associates.
Gibson, J. J. (1982c). A history of the ideas behind ecological optics: Introductory
remarks at the workshop on ecological optics. In E. Reed & R. Jones (Eds.),
Reasons for realism: Selected essays of James J. Gibson (pp. 90–101). Mahwah,
NJ: Lawrence Erlbaum Associates, Inc.
Gibson, J. J., & Crooks, L. E. (1938). A theoretical field analysis of automobile
driving. American Journal of Psychology, 51, 453–471.
Gibson, J. J., & Gibson, E. J. (1955). Perceptual learning: Differentiation or enrich-
ment? Psychological Review, 62, 32–41.
Gibson, J. J., & Gibson, E. J. (1957). Continuous perspective transformations and
the perception of rigid motion. Journal of Experimental Psychology, 54,
129–138.
Gibson, J. J., Kaplan, G., Reynolds, H., & Wheeler, K. (1969). The change from
visible to invisible: A study of optical transitions. Perception & Psychophysics,
5, 113–116.
Gibson, J. J., & Kaushall, P. (1973). Reversible and irreversible events. State
College, PA: Psychological Cinema Register.
Gibson, J. J., Olum, P., & Rosenblatt, F. (1955). Parallax and perspective during
aircraft landings. The American Journal of Psychology, 68, 372–385.
Gibson, J. J., Purdy, J., & Lawrence, L. (1955). A method of controlling stimu-
lation for the study of space perception: The optical tunnel. Journal of Experi-
mental Psychology, 50, 1–14.
Gibson, J. J., Smith, O., Steinschneider, A., & Johnson, C. (1957). The relative
accuracy of visual perception of motion during fixation and pursuit. The
American Journal of Psychology, 70, 64–68.
Gibson, J. J., & Waddell, D. (1952). Homogeneous retinal stimulation and visual
perception. American Journal of Psychology, 65, 263–270.
Gilchrist, A. L. (2018). To compute lightness, illumination is not estimated, it is
held constant. Journal of Experimental Psychology: Human Perception and Per-
formance, 44, 1258–1267.
Gilinsky, A. S. (1951). Perceived size and distance in visual space. Psychological
Review, 58, 460–482.
Goffman, E. (1974). Frame analysis. New York: Harper & Row.
Golonka, S. (2015). Laws and conventions in language-related behaviors. Ecolo-
gical Psychology, 27, 236–250.
Gombrich, E. H. (1972). Art and illusion (2nd ed.). Princeton, NJ: Princeton
University Press.
Gomer, J. A., Dash, C., Moore, K. S., & Pagano, C. C. (2009). Using radial
outflow to provide depth information during teleoperation. Presence: Teleopera-
tors and Virtual Environments, 18, 304–320.
Guitton, D., & Volle, M. (1987). Gaze control in humans: Eye-head coordination
during orienting movements to targets within and beyond the oculomotor range.
Journal of Neurophysiology, 58, 427–459.
Haber, R. N. (1983). The impending demise of the icon: A critique of the concept
of iconic storage in visual information processing. Behavioral and Brain Sciences,
6, 1–11.
Haber, R. N., & Levin, C. A. (2001). The independence of size perception and dis-
tance perception. Perception & Psychophysics, 63, 1140–1152.
Hajdukiewicz, J. R., & Vicente, K. J. (2004). What does computer-mediated
control of a thermal-hydraulic system have to do with moving your jaw to
speak? Evidence for synergies in process control. Ecological Psychology, 16,
255–285.
Haken, H. (1977). Synergetics: An introduction. Nonequilibrium phase trans-
itions and self-organization in physics, chemistry, and biology. Berlin: Springer-
Verlag.
Hancock, P., Flach, J., Caird, J., & Vicente, K. (1995). Local applications of the
ecological approach to human-machine systems. Hillsdale, NJ: Lawrence
Erlbaum Associates.
Hancock, P. A., & Manser, M. P. (1997). Time-to-contact: More than tau alone.
Ecological Psychology, 9, 265–297.
Hanke, W. (2014). Natural hydrodynamic stimuli. In H. Bleckmann et al. (Eds.),
Flow sensing in air and water. Berlin: Springer-Verlag.
Hanke, W., Wieskotten, S., Marshall, C., & Dehnhardt, G. (2013). Hydrodynamic
perception in true seals (Phocidae) and eared seals (Otariidae). Journal of Com-
parative Physiology A, 199, 421–440.
Hardy, L. H., Rand, G., & Rittler, M. C. (1951). Investigation of visual space: The
Blumenfeld alleys. AMA Archives of Ophthalmology, 45, 53–63.
Harmand, S., Lewis, J. E., Feibel, C. S., Lepre, C. J., Prat, S., Lenoble, A. et al.
(2015). 3.3-million-year-old stone tools from Lomekwi 3, West Turkana, Kenya.
Nature, 521, 310–315.
Harris, C. S. (1965). Perceptual adaptation to inverted, reversed, and displaced
vision. Psychological Review, 72, 419–444.
Harrison, S. J., & Turvey, M. T. (2010). Place learning by mechanical contact.
Journal of Experimental Biology, 213, 1436–1442.
Hartman, A. M. (1994). Cloudy skies: Information for perception of objects near
horizons. Ecological Psychology, 6, 289–296.
Hartman, L. S. (2018). Perception-action system calibration in the presence of
stable and unstable perceptual perturbations. All Dissertations, 2144. Retrieved
from https://tigerprints.clemson.edu/all_dissertations/2144
Hartman, L. S., Kil, I., Pagano, C. C., & Burg, T. C. (2016). Investigating haptic
distance-to-break using linear and nonlinear materials in a simulated minimally
invasive surgery task. Ergonomics, 59, 1171–1181.
Hay, J. C. (1966). Optical motions and space perception: An extension of Gibson’s
analysis. Psychological Review, 73, 550–565.
Hayhoe, M. M., Shrivastava, A., Mruczek, R. E. B., & Pelz, J. B. (2003). Visual
memory and motor planning in a natural task. Journal of Vision, 3, 49–63.
306 References
He, Z. J., Wu, B., Ooi, T. L., Yarbrough, G., & Wu, J. (2004). Judging egocentric dis-
tance on the ground: Occlusion and surface integration. Perception, 33, 789–806.
Head, H. (1920). Studies in neurology, Vol. 2. London: Oxford University Press.
Hecht, H., & Savelsbergh, G. (2004). Time-to-contact. Amsterdam: Elsevier Science.
Hecht, S., Shlaer, S., & Pirenne, M. H. (1942). Energy, quanta, and vision.
Journal of General Physiology, 25, 819–840.
Heft, H. (1983). Way‑finding as the perception of information over time. Popula-
tion and Environment: Behavioral and Social Issues, 6, 133–150.
Heft, H. (2001). Ecological psychology in context: James Gibson, Roger Barker,
and the legacy of William James’s radical empiricism. Mahwah, NJ: Lawrence
Erlbaum Associates.
Heft, H. (2002). Restoring naturalism to James’s epistemology: A belated reply to
Miller & Bode. Transactions of the Charles S. Peirce Society, 38, 557–580.
Heft, H. (2003). Affordances, dynamic experience, and the challenge of reification.
Ecological Psychology, 15, 149–180.
Heft, H. (2007). The social constitution of perceiver-environment reciprocity. Eco-
logical Psychology, 19, 85–105.
Heft, H. (2017). Perceptual information of “an entirely different order”: The “cul-
tural environment” in The Senses Considered as Perceptual Systems. Ecological
Psychology, 29, 122–145.
Heft, H. (2018). Places: Widening the scope of an ecological approach to perception–
action with an emphasis on child development. Ecological Psychology, 30, 99–123.
Heft, H., & Nasar, J. (2000). Evaluating environmental scenes using dynamic
versus static displays. Environment & Behavior, 32, 301–322.
Held, R., & Hein, A. (1963). Movement-produced stimulation in the development
of visually guided behaviour. Journal of Comparative and Physiological Psych-
ology, 56, 872–876.
Helmholtz, H. von (1866/2000). Handbuch der physiologischen Optik, Dritter
Band. (J. P. C. Southall, Trans., as Helmholtz’s treatise on physiological optics,
Vol. III, pp. 91–92). Bristol: Thoemmes.
Hettinger, L. J., & Riccio, G. E. (1992). Visually induced motion sickness in virtual
environments. Presence: Teleoperators and Virtual Environments, 1, 306–310.
Heyser, C. J., & Chemero, A. (2012). Novel object exploration in mice: Not all
objects are created equal. Behavioural Processes, 89, 232–238.
Higuchi, T., Cinelli, M. E., Greig, M. A., & Patla, A. E. (2006). Locomotion
through apertures when wider space for locomotion is necessary: Adaptation to
artificially altered bodily states. Experimental Brain Research, 175, 50–59.
Higuchi, T., Murai, G., Kijima, A., Seya, Y., Wagman, J. B., & Imanaka, K.
(2011). Athletic experience influences shoulder rotations when running through
apertures. Human Movement Science, 30, 534–549.
Higuchi, T., Takada, H., Matsuura, Y., & Imanaka, K. (2004). Visual estimation
of spatial requirements for locomotion in novice wheelchair users. Journal of
Experimental Psychology: Applied, 10, 55–66.
Hillier, B. (1996). Space is the machine: A configural theory of architecture. Cam-
bridge: Cambridge University Press.
Hochberg, J. E. (1968). Perception (2nd ed.). Englewood Cliffs, NJ: Prentice Hall.
Hockney, D. (2001). Secret knowledge: Rediscovering the lost techniques of the
old masters. New York: Penguin Putnam.
Holt, E. B. (1914). The concept of consciousness. New York: Macmillan.
Holt, E. B. (1915). The Freudian wish and its place in ethics. New York:
Henry Holt.
Holth, M., Van der Meer, A. L. H., & Van der Weel, F. R. (2013). Combining
findings from gaze and electroencephalography recordings to study timing in a
visual tracking task. NeuroReport, 24, 968–972.
Hsu, J. (2019). Machines on mission possible. Nature Machine Intelligence, 1,
124–127.
Hutto, D. D. & Myin, E. (2017). Evolving enactivism: Basic minds meet content.
Cambridge, MA: MIT Press.
Ingber, D. E. (2005). Mechanical control of tissue growth: Function follows form. Pro-
ceedings of the National Academy of Sciences of the U.S.A., 102, 11571–11572.
Ingber, D. E. (2006). Cellular mechanotransduction: Putting all the pieces together
again. FASEB Journal, 20, 811–827.
Ingle, D. (1973). Spontaneous shape discrimination by frogs during unconditioned
escape behaviour. Physiological Psychology, 1, 71–73.
Iodice, P., Scuderi, N., Saggini, R., & Pezzulo, G. (2015). Multiple timescales of
body schema reorganization due to plastic surgery. Human Movement Science,
42, 54–70.
Ittelson, W. H. (1996). Visual perception of markings. Psychonomic Bulletin &
Review, 3, 171–187.
Itti, L., & Koch, C. (2001). Computational modelling of visual attention. Nature
Reviews Neuroscience, 2, 194–203.
Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention
for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 20, 1254–1259.
Iwamoto, M., Ueyama, D., & Kobayashi, R. (2014). The advantage of mucus for
adhesive locomotion in gastropods. Journal of Theoretical Biology, 353, 133–141.
Jacobs, D. M., & Michaels, C. F. (2001). Individual differences and the use of non-
specifying variables in learning to perceive distance and size: Comments on McCon-
nell, Muchisky, and Bingham (1998). Perception & Psychophysics, 63, 563–571.
Jacobs, D. M., & Michaels, C. F. (2007). Direct learning. Ecological Psychology,
19, 321–349.
Jacobs, D. M., Runeson, S., & Michaels, C. F. (2001). Learning to visually perceive
the relative mass of colliding balls in globally and locally constrained task ecolo-
gies. Journal of Experimental Psychology: Human Perception and Performance,
27, 1019–1038.
Jacobs, D. M., Silva, P. L., & Calvo, J. (2009). An empirical illustration and
formalization of the theory of direct learning: The muscle-based perception of
kinetic properties. Ecological Psychology, 21, 245–289.
James, W. (1890). The principles of psychology. New York: Henry Holt and
Company.
James, W. (1892). The stream of consciousness. In Psychology, Chapter XI. Cleve-
land and New York: World Publishing Company. Retrieved April 26, 2018 from
http://psychclassics.yorku.ca/James/jimmy11.htm.
James, W. (1904). A world of pure experience. Journal of Philosophy, Psychology,
and Scientific Methods, 1, 533–543.
Jayaraman, S., Fausey, C. M., & Smith, L. B. (2015). The faces in infant-
perspective scenes change over the first year of life. PLoS ONE. doi:10.1371/
journal.pone.0123780.
Jayne, B. C., & Riley, M. A. (2007). Scaling of the axial morphology and gap-
bridging ability of the brown tree snake, Boiga irregularis. Journal of Experi-
mental Biology, 210, 1148–1160.
Jenison, R. L. (1997). On acoustic information for motion. Ecological Psychology,
9, 131–151.
Jiang, Y., O’Neal, E. E., Yon, J. P., Franzen, L., Rahimian, P., Plumert, J. M., &
Kearney, J. (2018). Acting together: Joint pedestrian road crossing in an immer-
sive virtual environment. ACM Transactions on Applied Perception, 15(2),
Article 8.
Jiménez, Á. A., Sanabria, F., & Cabrera, F. (2017). The effect of lever height on the
microstructure of operant behavior. Behavioural Processes, 140, 181–189.
Joh, A. S., Adolph, K. E., Narayanan, P. J., & Dietz, V. A. (2007). Gauging possib-
ilities for action based on friction underfoot. Journal of Experimental Psych-
ology: Human Perception and Performance, 33, 1145–1157.
Johansson, G. (1964). Perception of motion and changing form. Scandinavian
Journal of Psychology, 5, 181–208.
Johansson, G. (1973). Visual perception of biological motion and a model for its
analysis. Perception and Psychophysics, 14, 201–211.
Johansson, G., von Hofsten, C., & Jansson, G. (1980). Event perception. Annual
Review of Psychology, 31, 27–63.
Johnson, S. P., Amso, D., & Slemmer, J. A. (2003). Development of object con-
cepts in infancy: Evidence for early learning in an eye-tracking paradigm.
PNAS, 100, 10568–10573.
Latash, M. L. (2008). Synergy. New York: Oxford University Press.
Latash, M. L., Scholz, J. P., & Schöner, G. (2002). Motor control strategies
revealed in the structure of motor variability. Exercise and Sport Sciences
Reviews, 30, 26–31.
Kandel, E. R., Schwartz, J. H., Jessell, T. M., Siegelbaum, S. A., & Hudspeth, A. J.
(Eds.). (2013). Principles of neural science. New York: McGraw-Hill Education.
Kaplan, G. (1969). Kinetic disruption of optical texture: The perception of depth at
an edge. Perception & Psychophysics, 6, 193–198.
Katz, D. (1935). The world of colour. (R. B. MacLeod & C. W. Fox, Trans.)
London: Kegan Paul, Trench, Trubner & Co.
Kawagishi, Y., Hatsuyama, K., & Kondo, K. (2003, July). Cartoon blur: Nonpho-
torealistic motion blur. In Computer Graphics International, 2003. Proceedings
(pp. 276–281). IEEE.
Kawashima, W., Hatake, K., Kudo, R., Nakanishi, M., Tamaki, S. et al. (2015).
Estimating the time after death on the basis of corneal opacity. Journal of For-
ensic Research, 6(1), 1–5.
Kaye, D., LeBrecht, J., & Budries, D. (2016). Sound and music for the theatre: The
art and technique of design (4th ed.). New York: Routledge.
Kayed, N. S., Farstad, H., & Van der Meer, A. L. H. (2008). Preterm infants’
timing strategies to optical collisions. Early Human Development, 84,
381–388.
Kayed, N. S., & Van der Meer, A. L. H. (2000). Timing strategies used in defensive
blinking to optical collisions in 5- to 7-month-old infants. Infant Behavior &
Development, 23, 253–270.
Kayed, N. S., & Van der Meer, A. L. H. (2007). Infants’ timing strategies to optical
collisions: A longitudinal study. Infant Behavior & Development, 30, 50–59.
Kayed, N. S., & Van der Meer, A. L. H. (2009). A longitudinal study of prospec-
tive control in catching by full-term and preterm infants. Experimental Brain
Research, 194, 245–258.
Kellman, P. J., Garrigan, P. B., & Shipley, T. F. (2005). Object interpolation in
three dimensions. Psychological Review, 112, 586–609.
Kellman, P. J., & Shipley, T. F. (1992). Visual interpolation in object perception.
Current Directions in Psychological Science, 1, 193–199.
Kelly, J. W., Cherep, L. A., & Siegel, Z. D. (2017). Perceived space in the HTC
Vive. ACM Transactions on Applied Perception, 15, 1–17.
Kepler, J. (1604/1996). Astronomiae Pars Optica (The optical part of astronomy).
In A. C. Crombie (1996). Science, art and nature in medieval and modern
thought. London: Hambledon Press.
Kim, N. G. (2008). Dynamic occlusion and optical flow from corrugated surfaces.
Ecological Psychology, 20, 209–239.
Kim, N. G., Effken, J., & Shaw, R. (1995). Perceiving persistence under change
and over structure. Ecological Psychology, 7, 217–256.
Kim, N. G., & Kim, H. (2017). Schizophrenia: An impairment in the capacity to per-
ceive affordances. Frontiers in Psychology, 8. doi:10.3389/fpsyg.2017.01052.
Kim, S., Carello, C., & Turvey, M. T. (2016). Size and distance are perceived inde-
pendently in an optical tunnel: Evidence for direct perception. Vision Research,
125, 1–11.
Kinsella-Shaw, J. M., Shaw, B., & Turvey, M. T. (1992). Perceiving ‘walk-on-able’
slopes. Ecological Psychology, 4, 223–239.
Kirschfeld, K. (2017). How we perceive our own retina. Proceedings of the Royal
Society, B, 284, 1904–1909.
Klein, F. C. (1872/2008). A comparative review of recent researches in geometry.
arXiv preprint arXiv:0807.3161.
Knapp, J. M., & Loomis, J. M. (2004). Limited field of view of head-mounted dis-
plays is not the cause of distance underestimation in virtual environments. Pres-
ence: Teleoperators & Virtual Environments, 13, 572–577.
Koenderink, J. J. (1986). Optic flow. Vision Research, 26, 161–180.
Koenderink, J. J., van Doorn, A. J., Kappers, A. M. L., & Todd, J. T. (2001).
Ambiguity and the ‘mental eye’ in pictorial relief. Perception, 30, 431–448.
Koenig, D., & Hofer, H. (2011). The absolute threshold of cone vision. Journal of
Vision, 11, 1–24.
Koffka, K. (1935). The principles of Gestalt psychology. New York: Harcourt-Brace.
Kohler, I. (1964). The formation and transformation of the perceptual world.
Psychological Issues, 3, 1–173.
Konczak, J., Meeuwsen, H. J., & Cress, M. E. (1992). Changing affordances in
stair climbing: The perception of maximum climbability in young and older
adults. Journal of Experimental Psychology: Human Perception and Perform-
ance, 18, 691–697.
Kretch, K. S., & Adolph, K. E. (2013). Cliff or step? Posture-specific learning at the
edge of a drop-off. Child Development, 84, 226–240.
Kretch, K. S., & Adolph, K. E. (2015). Active vision in passive locomotion: Real-
world free viewing in infants and adults. Developmental Science, 18, 736–750.
doi:10.1111/desc.12251.
Kretch, K. S., & Adolph, K. E. (2017). The organization of exploratory behaviors
in infant locomotor planning. Developmental Science, 20, e12421.
Kretch, K. S., Franchak, J. M., & Adolph, K. E. (2014). Crawling and walking
infants see the world differently. Child Development. doi:10.1111/cdev.12206.
Kugler, P. N., Turvey, M. T., Carello, C., & Shaw, R. E. (1985). The physics of
controlled collisions: A reverie about locomotion. In Persistence and change:
Proceedings of the First International Conference on Event Perception (pp.
195–222). Hillsdale, NJ: Erlbaum.
Kull, K. (2009). Umwelt and modelling. In P. Cobley (Ed.), The Routledge hand-
book of semiotics (pp. 43–56). New York: Routledge.
Kumin, L., Lazar, J., Feng, J. H., Wentz, B., & Ekedebe, N. (2012). A usability
evaluation of workplace-related tasks on a multi-touch tablet computer by adults
with Down syndrome. Journal of Usability Studies, 7, 118–142.
Laidlaw, K. E. W., Foulsham, T., Kuhn, G., & Kingstone, A. (2011). Potential
social interactions are important to social attention. Proceedings of the National
Academy of Sciences of the U.S.A., 108, 5548–5553.
Lanczos, C. (1970). The variational principles of mechanics (4th ed.). Mineola,
NY: Dover Publications.
Land, M. F. (1992). Predictable eye-head coordination during driving. Nature,
359, 318–320.
Land, M. F. (2004). The coordination of rotations of the eyes, head and trunk in
saccadic turns produced in natural situations. Experimental Brain Research, 159,
151–160.
Land, M. F. (2006). Eye movements and the control of actions in everyday life.
Progress in Retinal and Eye Research, 25, 296–324.
Land, M. F., & Fernald, R. D. (1992). The evolution of eyes. Annual Review of
Neuroscience, 15, 1–29.
Land, M. F., & Furneaux, S. (1997). The knowledge base of the oculomotor
system. Philosophical Transactions of the Royal Society of London B, 352,
1231–1239.
Land, M. F., & Lee, D. N. (1994). Where we look when we steer. Nature, 369,
742–744.
Land, M. F., Mennie, N., & Rusted, J. (1999). The roles of vision and eye move-
ments in the control of activities of daily living. Perception, 28, 1311–1328.
Land, M. F., & Nilsson, D-E. (2006). General-purpose and special-purpose visual
systems. In E. Warrant & D-E. Nilsson (Eds.) Invertebrate vision (pp. 167–210).
Cambridge: Cambridge University Press.
Langewiesche, W. (1944). Stick and rudder. New York: McGraw-Hill.
Lappin, J. S. (2016). Identifying spatiotemporal information. In J. W. Houpt & L.
M. Blaha (Eds.), Mathematical models of perception and cognition (Vol. I, pp.
121–165). London: Psychology Press.
Lappin, J. S., Shelton, A. L., & Rieser, J. J. (2006). Environmental context influ-
ences visually perceived distance. Attention, Perception, & Psychophysics, 68,
571–581.
Lashley, K. S. (1922). Studies of cerebral function in learning: IV. Vicarious func-
tion after destruction of the visual areas. American Journal of Physiology, 59,
44–71.
Lee, D. N. (1976). A theory of visual control of braking based on information
about time-to-collision. Perception, 5, 437–459.
Lee, D. N. (1980). The optic flow-field: The foundation of vision. Philosophical
Transactions of the Royal Society of London B, 290, 169–179.
Lee, D. N. (1993). Body-environment coupling. In U. Neisser (Ed.), The perceived
self: Ecological and interpersonal sources of self-knowledge (pp. 43–67). Cam-
bridge, MA: Cambridge University Press.
Lee, D. N. (1998). Guiding movement by coupling taus. Ecological Psychology,
10, 221–250.
Lee, D. N. (2009). General tau theory: Evolution to date. Perception, 38, 837–858.
Lee, D. N., & Aronson, E. (1974). Visual proprioceptive control of standing in
human infants. Perception & Psychophysics, 15, 529–532.
Lee, D. N., & Lishman, J. R. (1975). Visual proprioceptive control of stance.
Journal of Human Movement Studies, 1, 87–95.
Lee, D. N., & Reddish, P. (1981). Plummeting gannets: A paradigm for ecological
optics. Nature, 293, 293–294.
Lee, D. N., Van der Weel, F. R., Hitchcock, T., Matejowsky, E., & Pettigrew, J. D.
(1992). Common principle of guidance by echolocation and vision. Journal of
Comparative Physiology A, 171, 563–571.
Lee, D. N., Young, D. S., & McLaughlin, C. M. (1984). A roadside simulation of
road crossing for children. Ergonomics, 27, 1271–1281.
Lee, Y., Lee, S., Carello, C., & Turvey, M. T. (2012). An archer’s perceived form
scales the “hitableness” of archery targets. Journal of Experimental Psychology:
Human Perception and Performance, 38, 1125–1131.
Le Poidevin, R. (2019). The experience and perception of time. In E. N. Zalta
(Ed.), The Stanford encyclopedia of philosophy (Summer 2019 ed.). Retrieved from
https://plato.stanford.edu/archives/sum2019/entries/time-experience/
Leudar, I., & Costall, A. (2009). Introduction: Against ‘theory of mind.’ In I.
Leudar & A. Costall (Eds.), Against theory of mind (pp. 1–15). Basingstoke: Pal-
grave Macmillan.
Li, C.-L., Pilar Aivar, M., Kit, D. M., Tong, M. H., & Hayhoe, M. M. (2016).
Memory and visual search in naturalistic 2D and 3D environments. Journal of
Vision, 16, 1–20.
Li, D., Babcock, J., & Parkhurst, D. J. (2006). openEyes: A low-cost head-mounted
eye-tracking solution. Proceedings of the 2006 Symposium on Eye Tracking
Research and Applications.
Li, Z., & Durgin, F. H. (2010). Perceived slant of binocularly viewed large-scale surfaces:
A common model from explicit and implicit measures. Journal of Vision, 10, 13–13.
Li, Z., Phillips, J., & Durgin, F. H. (2011). The underestimation of egocentric dis-
tance: Evidence from frontal matching tasks. Attention, Perception, & Psycho-
physics, 73, 2205–2217.
Lindberg, D. C. (1968). The theory of pinhole images from antiquity to the thir-
teenth century. Archive for History of Exact Sciences, 5, 154–176.
Lindberg, D. C. (1976). Theories of vision from Al-Kindi to Kepler. Chicago:
University of Chicago Press.
Lishman, J. R., & Lee, D. N. (1973). The autonomy of visual kinaesthesis. Percep-
tion, 2, 287–294.
Littman, E. (2011). Adaptation to simultaneous multi-dimensional distortions.
(electronic thesis or dissertation). Miami University, OH. Retrieved from https://
etd.ohiolink.edu/.
Littman, E. M., Otten, E. W., & Smart, L. J. (2010). Consequence of self versus
externally generated visual motion on postural regulation. Ecological Psych-
ology, 22, 150–167.
Lockman, J. J., Fears, N. E., & Jung, W. P. (2018). The development of object
fitting: The dynamics of spatial coordination (Vol. 55). Amsterdam: Elsevier.
Lombardo, T. J. (1987). The reciprocity of perceiver and environment. Hillsdale,
NJ: Lawrence Erlbaum Associates.
Long, L. O., Pagano, C. C., Singapogu, R. B., & Burg, T. C. (2016). Surgeons’ per-
ception of soft tissue constraints and distance-to-break in a simulated minimally
invasive surgery task. Proceedings of the Human Factors and Ergonomics
Society 2016 Annual Meeting (pp. 1598–1602). Washington, DC, September
19–23, 2016.
Longo, M. R., & Haggard, P. (2010). An implicit body representation underlying
human position sense. Proceedings of the National Academy of Sciences of the
U.S.A., 107, 11727–11732.
Loomis, J. M. (2014). Three theories for reconciling the linearity of egocentric dis-
tance perception with distortion of shape on the ground plane. Psychology &
Neuroscience, 7, 245–251.
Loomis, J. M. (2016). Presence in virtual reality and everyday life: Immersion
within a world of representation. Presence: Teleoperators and Virtual Environ-
ments, 25, 169–174.
Loomis, J. M., DaSilva, J. A., Fujita, N., & Fukusima, S. S. (1992). Visual space
perception and visually directed action. Journal of Experimental Psychology:
Human Perception and Performance, 18, 906–921.
Loomis, J. M., & Knapp, J. M. (2003). Visual perception of egocentric distance in
real and virtual environments. In L. J. Hettinger & M. W. Haas (Eds.), Virtual
and adaptive environments (pp. 21–46). Mahwah, NJ: Erlbaum.
Loomis, J. M., & Philbeck, J. W. (1999). Is the anisotropy of perceived 3-D shape
invariant across scale? Perception & Psychophysics, 61, 397–402.
Loomis, J. M., & Philbeck, J. W. (2008). Measuring spatial perception with
spatial updating and action. In R. L. Klatzky, B. MacWhinney, & M. Behr-
mann (Eds.), Embodiment, ego-space, and action (pp. 1–43). New York:
Psychology Press.
Loomis, J. M., Philbeck, J. W., & Zahorik, P. (2002). Dissociation between loca-
tion and shape in visual space. Journal of Experimental Psychology: Human Per-
ception and Performance, 28, 1202–1212.
Luck, S. J. (2005). An introduction to the event-related potential technique. Cam-
bridge, MA: MIT Press.
Luneburg, R. K. (1947). Mathematical analysis of binocular vision. Princeton, NJ:
Princeton University Press.
Luo, C., & Franchak, J. M. (in preparation). Infant eye-head alignment to objects
and faces during free play.
Mace, W. M. (1977). James J. Gibson’s strategy for perceiving: Ask not what’s
inside your head, but what your head’s inside of. In R. E. Shaw & J. Bransford
(Eds.), Perceiving, acting, and knowing (pp. 43–65). Hillsdale, NJ: Lawrence
Erlbaum Associates, Inc.
Maidenbaum, S., Hanassy, S., Abboud, S., Buchs, G., Chebat, D. R., Levy-Tzedek,
S., & Amedi, A. (2014). The “EyeCane”, a new electronic travel aid for the
blind: Technology, behavior & swift learning. Restorative Neurology and Neuro-
science, 32, 813–824.
Makeig, S., Gramann, K., Jung, T. P., Sejnowski, T. J., & Poizner, H. (2009). Linking
brain, mind and behavior. International Journal of Psychophysiology, 73, 95–100.
Makin, A. D. J. (2018). The common rate control account of prediction motion.
Psychonomic Bulletin & Review, 25, 1784–1797.
Mangalam, M., Wagman, J. B., & Newell, K. M. (2018). Temperature influences
perception of the length of a wielded object via effortful touch. Experimental
Brain Research, 236, 505–516.
Mantel, B., Hoppenot, P., & Colle, P. (2012). Perceiving for acting with teleoper-
ated robots: Ecological principles to human–robot interaction design. IEEE
Transactions on Systems, Man, and Cybernetics: Part A: Systems and Humans,
42, 1460–1475.
Mantel, B., Stoffregen, T. A., Campbell, A., & Bardy, B. G. (2015). Exploratory
movement generates higher-order information that is sufficient for accurate per-
ception of scaled egocentric distance. PLoS ONE, 10, e0120025.
Maravita, A., & Iriki, A. (2004). Tools for the body (schema). Trends in Cogni-
tive Sciences, 8, 79–86.
Marino, C. J., & Mahan, R. P. (2005). Configural displays can improve nutrition-
related decisions: An application of the proximity compatibility principle.
Human Factors, 47, 121–130.
Mark, L. S. (1987). Eyeheight-scaled information about affordances: A study of
sitting and stair climbing. Journal of Experimental Psychology: Human Percep-
tion and Performance, 13, 361–370.
Mark, L. S. (2007). Perceiving the actions of other people. Ecological Psychology,
19, 107–136.
Mark, L. S., Balliett, J. A., Craver, K. D., Douglas, S. D., & Fox, T. (1990). What
an actor must do in order to perceive the affordance for sitting. Ecological
Psychology, 2, 325–366.
Mark, L. S., Nemeth, K., Gardner, D., Dainoff, M. J., Paasche, J., Duffy, M., &
Grandt, K. (1997). Postural dynamics and the preferred critical boundary for vis-
ually guided reaching. Journal of Experimental Psychology: Human Perception
and Performance, 23, 1365–1379.
Mark, L. S., Shaw, R., & Pittenger, J. (1988). Natural constraints, scales of ana-
lysis, and information for the perception of growing faces. In T. Alley (Ed.),
Social and applied aspects of perceiving faces (pp. 11–49). Hillsdale, NJ:
Erlbaum.
Marr, D. (1982). Vision. San Francisco, CA: W. H. Freeman.
Marsh, K. L., Richardson, M. J., Baron, R. M., & Schmidt, R. C. (2006). Contrasting
approaches to perceiving and acting with others. Ecological Psychology, 18, 1–38.
Marratto, S. L. (2012). The intercorporeal self: Merleau-Ponty and subjectivity.
Albany, NY: SUNY Press.
Marshall, W. E. (2018). Understanding international road safety disparities: Why
is Australia so much safer than the United States? Accident Analysis & Preven-
tion, 111, 251–265.
Matin, L., & Fox, C. R. (1989). Visually perceived eye level and perceived eleva-
tion of objects: Linearly additive influences from visual field pitch and gravity.
Vision Research, 29, 315–324.
Matthis, J. S., & Fajen, B. R. (2014). Visual control of foot placement when
walking over complex terrain. Journal of Experimental Psychology: Human Per-
ception and Performance, 40, 106–115.
Matthis, J. S., Yates, J. L., & Hayhoe, M. M. (2018). Gaze and the control of foot
placement when walking in natural terrain. Current Biology, 28, 1224–1233.
Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and cognition: The realiza-
tion of the living. London: D. Reidel.
McCann, B. C., Hayhoe, M. M., & Geisler, W. S. (2018). Contributions of monoc-
ular and binocular cues to distance discrimination in natural scenes. Journal of
Vision, 18(4):12, doi:10.1167/18.4.12.
McCarty, M. E., Clifton, R. K., & Collard, R. R. (1999). Problem solving in infancy:
The emergence of an action plan. Developmental Psychology, 35, 1091–1101.
Macaulay, D. (1978). Great moments in architecture. Boston: Houghton
Mifflin Co.
McGann, M. (2014). Enacting a social ecology: Radically embodied intersubjectiv-
ity. Frontiers in Psychology, 5, 1321.
McPherron, S. P., Alemseged, Z., Marean, C. W., Wynn, J. G., Reed, D., Geraads,
D., et al. (2010). Evidence for stone-tool-assisted consumption of animal tissues
before 3.39 million years ago at Dikika, Ethiopia. Nature, 466, 857–860.
Meng, J. C., & Sedgwick, H. A. (2001). Distance perception mediated through
nested contact relations among surfaces. Perception & Psychophysics, 63, 1–15.
Meng, J. C., & Sedgwick, H. A. (2002). Distance perception across spatial discon-
tinuities. Perception & Psychophysics, 64, 1–14.
Merleau-Ponty, M. (1962). The phenomenology of perception. London: Routledge
& Kegan Paul.
Messing, R., & Durgin, F. H. (2005). Distance perception and the visual horizon in
head-mounted displays. ACM Transactions on Applied Perception (TAP), 2,
234–250.
Metzger, W. (1930). Optische Untersuchungen im Ganzfeld II [Optical investiga-
tions in the whole field]. Psychologische Forschung, 13, 6–29.
Michaels, C. F. (2003). Affordances: Four points of debate. Ecological Psychology,
15, 135–148.
Michaels, C. F., & Carello, C. (1981). Direct perception. Englewood Cliffs, NJ:
Prentice Hall.
Michaels, C. F., & de Vries, M. M. (1998). Higher order and lower order variables
in the visual perception of relative pulling force. Journal of Experimental Psych-
ology: Human Perception and Performance, 24, 526–546.
Michotte, A. (1963). The perception of causality (T. R. Miles & E. Miles, Trans.).
London: Methuen.
Michotte, A., Thinès, G. O., & Crabbé, G. (1964). Les compléments amodaux des
structures perceptives. Louvain: Institut de psychologie de l’Université de Louvain.
Mohler, B. J., Creem-Regehr, S. H., & Thompson, W. B. (2006). The influence of
feedback on egocentric distance judgments in real and virtual environments. Pro-
ceedings of the 3rd Symposium on Applied Perception in Graphics and Visuali-
zation (pp. 9–14). ACM.
Momtahan, K., & Burns, C. M. (2004). Applications of ecological interface design
in supporting the nursing process. Journal of Healthcare Information Manage-
ment, 18, 74–82.
Mon-Williams, M., & Bingham, G. P. (2007). Calibrating reach distance to visual
targets. Journal of Experimental Psychology: Human Perception and Perform-
ance, 33, 645–656.
Moore, M. K., Borton, R., & Darby, B. L. (1978). Visual tracking in young infants:
Evidence for object identity or object permanence? Journal of Experimental
Child Psychology, 25, 183–198.
Moore, M. K., & Meltzoff, A. N. (1999). New findings on object permanence: A
developmental difference between two types of occlusion. British Journal of
Developmental Psychology, 17, 563–584.
Munafo, J., Diedrick, M., & Stoffregen, T. A. (2017). The virtual reality
head‑mounted display Oculus Rift induces motion sickness and is sexist in its
effects. Experimental Brain Research, 235, 889–901.
Murch, W. (2001). In the blink of an eye: A perspective on film editing. Holly-
wood, CA: Silman-James Press.
Nakayama, K. (1994). James J. Gibson: An appreciation. Psychological Review,
101, 329–335.
Neisser, U. (1967). Cognitive psychology. Englewood Cliffs, NJ: Prentice-Hall.
Neisser, U. (1976). Cognition and reality. San Francisco: W. H. Freeman.
Newton, I. (1704/1952). Opticks or a treatise of the reflections, refractions, inflec-
tions & colors of light. New York: Dover Publications.
Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., & Hanrahan, P. (2005).
Light field photography with a hand-held plenoptic camera. Stanford Tech
Report CTSR 2005–02.
Ni, R., Braunstein, M. L., & Andersen, G. J. (2004). Perception of scene layout
from optical contact, shadows, and motion. Perception, 33, 1305–1318.
Nixon, M. S., & Aguado, A. S. (2012). Feature extraction & image processing for
computer vision. Amsterdam: Elsevier.
Noether, E. (1918). Invariante Variationsprobleme [Invariant variation problems].
Nachrichten von der Königlichen Gesellschaft der Wissenschaften zu Göttingen,
Mathematisch-physikalische Klasse, 235–257. English reprint: arXiv:physics/
0503066. doi:10.1080/00411457108231446.
Nonaka, T. (2012). What exists in the environment that motivates the emergence,
transmission, and sophistication of tool use? Behavioral and Brain Sciences, 35,
233–234.
Nonaka, T., & Bril, B. (2014). Fractal dynamics in dexterous tool use: The case of
hammering behavior of bead craftsmen. Journal of Experimental Psychology:
Human Perception and Performance, 40, 218–231.
Nonaka, T., Bril, B., & Rein, R. (2010). How do stone knappers predict and
control the outcome of flaking? Implications for understanding early stone tool
technology. Journal of Human Evolution, 59, 155–167.
Norman, D. (2013). The design of everyday things. (Rev. ed.). New York: Basic
Books.
Norman, J. F., Adkins, O. C., Dowell, C. J., Shain, L. M., Hoyng, S. C., &
Kinnard, J. D. (2017). The visual perception of distance ratios outdoors. Atten-
tion, Perception & Psychophysics, 79, 1195–1203.
Norman, J. F., Crabtree, C. E., Clayton, A. M., & Norman, H. F. (2005). The per-
ception of distances and spatial relationships in natural outdoor environments.
Perception, 34, 1315–1324.
Norman, J. F., Todd, J. T., Perotti, V. J., & Tittle, J. S. (1996). The visual percep-
tion of three-dimensional length. Journal of Experimental Psychology: Human
Perception and Performance, 22, 173–186.
Ono, H., Rogers, B. J., Ohmi, M., & Ono, M. E. (1988). Dynamic occlusion and motion parallax in depth perception. Perception, 17, 255–266.
316 References
Ooi, T. L., & He, Z. J. (2007). A distance judgment function based on space perception mechanisms: Revisiting Gilinsky’s (1951) equation. Psychological Review, 114, 441–454.
Ooi, T. L., Wu, B., & He, Z. J. (2001). Distance determined by the angular decli-
nation below the horizon. Nature, 414, 197–200.
Ooi, T. L., Wu, B., & He, Z. J. (2006). Perceptual space in the dark affected by the
intrinsic bias of the visual system. Perception, 35(5), 605–624.
Oomen, B. S., Smith, R. M., & Stahl, J. S. (2004). The influence of future gaze ori-
entation upon eye-head coupling during saccades. Experimental Brain Research,
155, 9–18.
O’Shea, R. P., & Ross, H. E. (2007). Judgments of visually perceived eye level
(VPEL) in outdoor scenes: Effects of slope and height. Perception, 36,
1168–1178.
Oudejans, R. R., Michaels, C. F., Bakker, F. C., & Dolné, M. A. (1996). The relev-
ance of action in perceiving affordances: Perception of catchableness of fly balls.
Journal of Experimental Psychology: Human Perception and Performance, 22,
879–891.
Pagano, C. C., & Bingham, G. P. (1998). Comparing measures of monocular dis-
tance perception: Verbal and reaching errors are not correlated. Journal of
Experimental Psychology: Human Perception and Performance, 24, 1037–1051.
Pagano, C. C., Grutzmacher, R. P., & Jenkins, J. C. (2001). Comparing verbal and
reaching responses to visually perceived egocentric distances. Ecological Psych-
ology, 13, 197–226.
Pagano, C. C., & Turvey, M. T. (1998). Eigenvectors of the inertia tensor and per-
ceiving the orientation of limbs and objects. Journal of Applied Biomechanics,
14, 331–359.
Palatinus, Z., Kelty-Stephen, D., Kinsella-Shaw, J., Carello, C., & Turvey, M. T.
(2014). Haptic perceptual intent in quiet standing affects multifractal scaling of
postural fluctuations. Journal of Experimental Psychology: Human Perception
& Performance, 40, 1808–1818.
Palmer, E. M., Kellman, P. J., & Shipley, T. F. (2006). A theory of dynamic
occluded and illusory object perception. Journal of Experimental Psychology:
General, 135, 513–541.
Palmer, S. E. (1999). Vision science. Cambridge, MA: MIT Press.
Pan, J. S., Bingham, N., & Bingham, G. P. (2013). Embodied memory: Effective
and stable perception by combining flow and image structure. Journal of Experi-
mental Psychology: Human Perception and Performance, 39, 1638–1651.
Pan, J. S., Bingham, N., & Bingham, G. P. (2017). Embodied memory allows
accurate and stable perception of hidden objects despite orientation change.
Journal of Experimental Psychology: Human Perception and Performance, 43,
1343–1358.
Pan, J. S., Bingham, N., Chen, C., & Bingham, G. P. (2017). Breaking camouflage
and detecting targets require optic flow and image structure information. Applied
Optics, 56, 6410–6418.
Pan, J. S., Coats, R. O., & Bingham, G. P. (2014). Calibration is action specific but
perturbation of perceptual units is not. Journal of Experimental Psychology:
Human Perception and Performance, 40, 404–415.
Passos, P., Cordovil, R., Fernandes, O., & Barreiros, J. (2012). Perceiving
affordances in rugby union. Journal of Sports Sciences, 30, 1175–1182.
Patla, A. E., & Vickers, J. N. (1997). Where and when do we look as we approach
and step over an obstacle in the travel path? Neuroreport, 8, 3661–3665.
Peirce, C. S. (1955). The fixation of belief. In J. Buchler (Ed.), Philosophical writ-
ings of Peirce. New York: Dover Publications.
Pelegrin, J. (2005). Remarks about archaeological techniques and methods of knap-
ping: Elements of a cognitive approach to stone knapping. In V. Roux & B. Bril
(Eds.), Stone knapping: The necessary conditions for a uniquely hominin behav-
ior (pp. 35–52). Cambridge: McDonald Institute for Archaeological Research.
Pelz, J. B., & Canosa, R. (2001). Oculomotor behavior and perceptual strategies in
complex tasks. Vision Research, 41, 3587–3596.
Pelz, J. B., Hayhoe, M. M., & Loeber, R. (2001). The coordination of eye, head,
and hand movements in a natural task. Experimental Brain Research, 139,
266–277.
Petrusz, S. C., & Turvey, M. T. (2010). On the distinctive features of ecological
laws. Ecological Psychology, 22, 44–68.
Philbeck, J. W., & Loomis, J. M. (1997). Comparison of two indicators of per-
ceived egocentric distance under full-cue and reduced-cue conditions. Journal of
Experimental Psychology: Human Perception and Performance, 23, 72–85.
Philbeck, J. W., Loomis, J. M., & Beall, A. C. (1997). Visually perceived location is
an invariant in the control of action. Perception and Psychophysics, 59,
601–612.
Pijpers, J. R., Oudejans, R. R., & Bakker, F. C. (2007). Changes in the perception
of action possibilities while climbing to fatigue on a climbing wall. Journal of
Sports Sciences, 25, 97–110.
Pittenger, J., & Shaw, R. (1975). Aging faces as viscal-elastic events: Implications
for a theory of non-rigid shape perception. Journal of Experimental Psychology: Human Perception and Performance, 1, 374–382.
Plumert, J. M., Kearney, J. K., & Cremer, J. F. (2004). Children’s perception of
gap affordances: Bicycling across traffic-filled intersections in an immersive
virtual environment. Child Development, 75, 1243–1253.
Powys, V., Taylor, H., & Probets, C. (2013). A little flute music: Mimicry, memory,
and narrativity. Environmental Humanities, 3, 43–70.
Prieske, B., Withagen, R., Smith, J., & Zaal, F. T. (2015). Affordances in a simple
playscape: Are children attracted to challenging affordances? Journal of
Environmental Psychology, 41, 101–111.
Profeta, V. L., & Turvey, M. T. (2018). Bernstein’s levels of movement construc-
tion: A contemporary perspective. Human Movement Science, 57, 111–133.
Proffitt, D. R. (1977). Demonstrations to investigate the meaning of everyday
experience (Doctoral dissertation, ProQuest Information & Learning).
Proffitt, D. R. (2006). Embodied perception and the economy of action. Perspec-
tives on Psychological Science, 1, 110–122.
Proffitt, D. R., & Linkenauger, S. A. (2013). Perception viewed as a phenotypic
expression. In W. Prinz, M. Beisert, & A. Herwig (Eds.), Action science: Founda-
tions of an emerging discipline. Cambridge, MA: MIT Press.
Purdy, J., & Gibson, E. J. (1955). Distance judgment by the method of fractiona-
tion. Journal of Experimental Psychology, 50(6), 374.
Purdy, W. P. (1958). The hypothesis of psychophysical correspondence in space
perception. Dissertation Abstracts International, 19, 1454–1455. (University
Microfilms No. 1458–5594.)
Rachwani, J., Golenia, L., Herzberg, O., & Adolph, K. E. (in press). Postural, visual,
and manual coordination in the development of prehension. Child Development.
Rader, N. de V. (2018). Uniting Jimmy and Jackie: Foundation for a research program
in developmental ecological psychology. Ecological Psychology, 30, 129–145.
Rader, N. de V., & Zukow-Goldring, P. (2012). Caregivers’ gestures direct infant
attention during early word learning: The importance of dynamic synchrony.
Language Sciences, 34, 559–568.
Ramenzoni, V. C., Davis, T. J., Riley, M. A., & Shockley, K. (2010). Perceiving
action boundaries: Learning effects in perceiving maximum jumping-reach
affordances. Attention, Perception, & Psychophysics, 72, 1110–1119.
Ramenzoni, V. C., Riley, M. A., Davis, T., Shockley, K., & Armstrong, R. (2008).
Tuning in to another person’s action capabilities: Perceiving maximal jumping-
reach height from walking kinematics. Journal of Experimental Psychology:
Human Perception & Performance, 34, 919–928.
Ramenzoni, V. C., Riley, M. A., Shockley, K., & Davis, T. (2008). An information-
based approach to action understanding. Cognition, 106, 1059–1070.
Rasouli, O., Stensdotter, A.-K., & Van der Meer, A. L. H. (2016). TauG-guidance
of dynamic balance control during gait initiation in patients with chronic fatigue
syndrome and fibromyalgia. Clinical Biomechanics, 37, 147–152.
Reed, E. S. (1982a). Darwin’s earthworms: A case study in evolutionary psych-
ology. Behaviorism, 10, 165–185.
Reed, E. S. (1982b). An outline of a theory of action systems. Journal of Motor
Behavior, 14, 98–134.
Reed, E. S. (1988). James J. Gibson and the psychology of perception. New Haven,
CT: Yale University Press.
Reed, E. S. (1996a). The necessity of experience. New Haven, CT: Yale University
Press.
Reed, E. S. (1996b). Encountering the world: Toward an ecological psychology.
New York: Oxford University Press.
Regia-Corte, T., & Wagman, J. B. (2008). Perception of affordances for standing
on an inclined surface depends on height of center of mass. Experimental Brain
Research, 191, 25–35.
Režek, Ž., Dibble, H. L., McPherron, S. P., Braun, D. R., & Lin, S. C. (2018). Two
million years of flaking stone and the evolutionary efficiency of stone tool tech-
nology. Nature Ecology & Evolution, 2, 628–633.
Riccio, G. E., & Stoffregen, T. A. (1991). An ecological theory of motion sickness
and postural instability. Ecological Psychology, 3, 195–240.
Richardson, A. R., & Waller, D. (2007). Interaction with an immersive virtual
environment corrects users’ distance estimates. Human Factors, 49, 507–517.
Richardson, M. J., Marsh, K. L., & Baron, R. M. (2007). Judging and actualizing
intrapersonal and interpersonal affordances. Journal of Experimental Psych-
ology: Human Perception and Performance, 33, 845–859.
Richardson, M. J., Marsh, K. L., & Schmidt, R. C. (2010). Challenging the egocen-
tric view of coordinated perceiving, acting, and knowing. In B. Mesquita, L. F.
Barrett, & E. Smith (Eds.), The mind in context (pp. 307–333). New York: The
Guilford Press.
Rieffel, J. A., Valero-Cuevas, F. J., & Lipson, H. (2010). Morphological communi-
cation: Exploiting coupled dynamics in a complex mechanical structure to
achieve locomotion. Journal of the Royal Society Interface, 7, 613–621.
Riegler, A. (2005). The constructivist challenge. Constructivist Foundations, 1, 1–8.
Rieser, J. J., Ashmead, D. H., Talor, C. R., & Youngquist, G. A. (1990). Visual
perception and the guidance of locomotion without vision to previously seen
targets. Perception, 19, 675–689.
Rieser, J. J., Pick, H. L., Ashmead, D. H., & Garing, A. E. (1995). Calibration of
human locomotion and models of perceptual-motor organization. Journal of
Experimental Psychology: Human Perception and Performance, 21, 480–497.
Rietveld, E. (2016). Situating the embodied mind in a landscape of standing
affordances for living without chairs: Materializing a philosophical worldview.
Sports Medicine, 46, 927–932.
Rietveld, E., & Kiverstein, J. (2014). A rich landscape of affordances. Ecological
Psychology, 26, 325–352.
Riley, M. A., Richardson, M. J., Shockley, K., & Ramenzoni, V. C. (2011). Inter-
personal synergies. Frontiers in Psychology, 2, 38.
Riley, M. A., Wagman, J. B., Santana, M. V., Carello, C., & Turvey, M. T. (2002).
Perceptual behavior: Recurrence analysis of a haptic exploratory procedure. Per-
ception, 31, 481–510.
Roche, H. (2005). From simple flaking to shaping: Stone-knapping evolution
among early hominins. In V. Roux & B. Bril (Eds.), Stone knapping: The neces-
sary conditions for a uniquely hominin behavior (pp. 35–52). Cambridge:
McDonald Institute for Archaeological Research.
Roche, H., Blumenschine, R. J., & Shea, J. J. (2009). Origins and adaptations of
early Homo: What archeology tells us. In F. E. Grine, J. G. Fleagle, & R. E.
Leakey (Eds.), The first humans: Origin and early evolution of the genus homo
(pp. 135–150). New York: Springer.
Roche, H., Delagnes, A., Brugal, J. P., Feibel, C., Kibunjia, M., Mourre, V., &
Texier, P. J. (1999). Early hominid stone tool production and technical skill 2.34
Myr ago in West Turkana, Kenya. Nature, 399, 57–60.
Rodegerdts, L., Bansen, J., Tiesler, C., Knudsen, J., Myers, E., Johnson, M., et al.
(2010). Roundabouts: An informational guide (2nd ed.). Washington, DC:
Transportation Research Board.
Rogers, B. J. (1984). Dynamic occlusion, motion parallax, and the perception of
3-D surfaces, Perception, 13, A46.
Rogers, B. J., & Graham, M. E. (1983). Dynamic occlusion in the perception of
depth structure. Perception, 12, A15.
Rogers, S. (1996). The horizon-ratio relation as information for relative size in pic-
tures. Perception & Psychophysics, 58, 142–152.
Rogoff, B. (1995). The cultural nature of human development. New York: Oxford
University Press.
Rolnick, A., & Bles, W. (1989). Performance and well-being under tilting con-
ditions: The effects of visual reference and artificial horizon. Aviation, Space, and
Environmental Medicine, 60, 779–785.
Ronchi, V. (1957). Optics: The science of vision (E. Rosen, Trans.). New York:
New York University Press. (Original work published in 1955).
Rosander, K., & von Hofsten, C. (2004). Infants’ emerging ability to represent
occluded object motion, Cognition, 91, 1–22.
Rosenbaum, D. A. (2010). Walking down memory lane: Where walkers look as
they descend stairs provides hints about how they control their walking behav-
ior. The American Journal of Psychology, 122, 425–430.
Rosenblum, L. D. (2010). See what I’m saying: The extraordinary power of our
five senses. New York: W. W. Norton.
Rothkopf, C. A., Ballard, D. H., & Hayhoe, M. M. (2007). Task and context
determine where you look. Journal of Vision, 7, 1–20.
Rothwell, J., & Atlas, R. [Directors] (2010). Sour Grapes. Screenplay by Al
Morrow. London: Met Film Production.
Ruda, R., Livitz, G., Riesen, G., & Mingolla, E. (2015). Computational modeling
of depth ordering in occlusion through accretion or deletion of texture. Journal
of Vision, 15(9), 1–23. doi:10.1167/15.9.20.
Ruderman, D. L. (1994). The statistics of natural images. Computation in Neural
Systems, 5, 517–548.
Runeson, S. (1977). On the possibility of “smart” perceptual mechanisms. Scan-
dinavian Journal of Psychology, 18, 172–179.
Runeson, S. (1988). The distorted room illusion, equivalent configurations, and the
specificity of static optic arrays. Journal of Experimental Psychology: Human
Perception and Performance, 14, 295–304.
Runeson, S., & Frykholm, G. (1983). Kinematic specification of dynamics as an
informational basis for person-and-action perception: Expectation, gender recog-
nition, and deceptive intention. Journal of Experimental Psychology: General,
112, 585–615.
Runeson, S., & Vedeler, D. (1993). The indispensability of pre-collision kinematics in
the visual perception of relative mass. Perception & Psychophysics, 53, 617–633.
Russell, M. K., & Turvey, M. T. (1999). Auditory perception of unimpeded
passage. Ecological Psychology, 11, 175–188.
Saab Affiches. (2018, April 17). Retrieved from http://saabactu.blogspot.
com/2010/08/saab-affiches-saab-vs-bahaus.html.
Sakitt, B. (1972). Counting every quantum. Journal of Physiology, 223, 131–150.
Scarfe, P., & Glennerster, A. (2015). Using high-fidelity virtual reality to study per-
ception in freely moving observers. Journal of Vision, 15(9), 1–11.
doi:10.1167/15.9.3.
Schiff, W. (1965). Perception of impending collision: A study of visually directed
avoidant behavior. Psychological Monographs: General and Applied, 79, 1–26.
Schiff, W., Caviness, J. A., & Gibson, J. J. (1962). Persistent fear responses in
rhesus monkeys to the optical stimulus of “looming”. Science, 136, 982–983.
Schmidt, R. A., & Lee, T. D. (2011). Motor control and learning: A behavioral
emphasis (5th ed.). Champaign, IL: Human Kinetics.
Schmuckler, M. A., Collimore, L. M., & Dannemiller, J. L. (2007). Infants’ reac-
tions to object collision on hit and miss trajectories. Infancy, 12, 105–118.
Scholl, B. J., & Pylyshyn, Z. W. (1999). Tracking multiple items through occlusion:
Clues to visual objecthood. Cognitive Psychology, 38, 259–290.
Sedgwick, H. A. (1973). The visible horizon: A potential source of visual information for the perception of size and distance (Doctoral dissertation, Cornell University).
Sedgwick, H. A. (1986). Space perception. In K. R. Boff, L. Kaufman, & J. P.
Thomas (Eds.), Handbook of perception and human performance, Vol. 1:
Sensory processes and perception (pp. 21.21–21.57). New York: Wiley.
Seifert, L., Cordier, R., Orth, D., Courtine, Y., & Croft, J. L. (2017). Role of route
previewing strategies on climbing fluency and exploratory movements. PloS
ONE, 12(4), e0176306.
Sekuler, A., & Palmer, S. (1992). Perception of partly occluded objects: A microge-
netic analysis. Journal of Experimental Psychology: General, 121, 95–111.
Sekuler, R., & Blake, R. (2002). Perception. New York: McGraw-Hill.
Semaw, S., Renne, P., Harris, J. W., Feibel, C. S., Bernor, R. L., Fesseha, N., &
Mowbray, K. (1997). 2.5-million-year-old stone tools from Gona, Ethiopia.
Nature, 385, 333–336.
Shannon, C., & Weaver, W. (1949). The mathematical theory of information.
Urbana, IL: University of Illinois Press.
Shapiro, M. A., & McDonald, D. G. (1992). I’m not a real doctor, but I play one
in virtual reality: Implications of virtual reality for judgments about reality.
Journal of Communication, 42, 94–114.
Shaw, R. E. (2002). Theoretical hubris and the willingness to be radical. Ecological
Psychology, 14, 235–247.
Shaw, R. E., & Bransford, J. (Eds.) (1977). Perceiving, acting, and knowing: Toward an ecological psychology. Hillsdale, NJ: Lawrence Erlbaum Associates.
Shaw, R. E., Flascher, O., & Mace, W. M. (1996). Dimensions of event perception.
In W. Prinz & B. Bridgeman (Eds.) Handbook of perception and action (Vol. 1,
pp. 345–395). London: Academic Press.
Shaw, R., & Kinsella-Shaw, J. (1988). Ecological mechanics: A physical geometry
for intentional constraints. Human Movement Science, 7, 155–200.
Shaw, R. E., & Kinsella-Shaw, J. M. (2007). Could optical ‘pushes’ be inertial
forces? A geometro-dynamical hypothesis. Ecological Psychology, 19, 305–320.
Shaw, R. E., Kinsella-Shaw, J. M., & Mace, W. M. (2019). Affordance types and
tokens: Are Gibson’s affordances trustworthy? Ecological Psychology, 31, 49–75.
Shaw, R. E., & Mace, W. M. (2005). The value of oriented geometry for ecological
psychology and moving image art. In J. Anderson & B. Anderson (Eds.). Moving
image theory: Ecological considerations. (pp. 38–48). Carbondale, IL: Southern
Illinois University Press.
Shaw, R. E., Mark, L. S., Jenkins, H., & Mingolla, E. (1982). A dynamic geometry
for predicting growth of gross craniofacial morphology. In A. Dixon & B. Sarnat
(Eds.), Factors and mechanisms influencing bone growth (pp. 423–431). New
York: Liss.
Shaw, R. E., & McIntyre, M. (1974). Algoristic foundations to cognitive psychology. In W. Weimer & D. Palermo (Eds.), Cognition and the symbolic processes. Hillsdale, NJ: Lawrence Erlbaum Associates.
Shaw, R. E., McIntyre, M., & Mace, W. (1974). The role of symmetry in event
perception. In R. B. Macleod & H. L. Pick, Jr. (Eds.), Perception: Essays in
honor of James J. Gibson. (pp. 276–310). Ithaca, NY: Cornell University Press.
Shaw, R. E., & Pittenger, J. B. (1977). Perceiving the face of change in changing
faces: Implications for a theory of object perception. In R. Shaw & J. Bransford
(Eds.) Perceiving, acting and knowing: Toward an ecological psychology (pp.
103–132). Hillsdale, NJ: Lawrence Erlbaum Associates.
Shaw, R. E., & Pittenger, J. B. (1978). On perceiving change. In H. Pick & E. Salt-
zman (Eds.), Modes of perceiving and processing information. (pp. 187–204)
Hillsdale, NJ: Lawrence Erlbaum Associates.
Shaw, R. E., Turvey, M. T., & Mace, W. M. (1982). Ecological psychology: The
consequence of a commitment to realism. In W. Weimer & D. Palermo (Eds.),
Cognition and the symbolic processes (Vol. II, pp. 59–226). Hillsdale, NJ: Law-
rence Erlbaum Associates.
Shimamura, A. P., & Prinzmetal, W. (1999). The Mystery Spot illusion and its
relation to other visual illusions. Psychological Science, 10, 501–507.
Shipley, T. F., & Kellman, P. J. (1992). Perception of partly occluded objects and
illusory figures: Evidence for an identity hypothesis. Journal of Experimental
Psychology: Human Perception and Performance, 18, 106–120.
Siegel, Z. D., & Kelly, J. W. (2017). Walking through a virtual environment
improves perceived size within and beyond the walked space. Attention, Percep-
tion, & Psychophysics, 79, 39–44.
Simonson, E., & Brozek, J. (1952). Flicker fusion frequency: Background and
applications. Physiological Reviews, 32, 349–378.
Sinai, M. J., Ooi, T. L., & He, Z. J. (1998). Terrain influences the accurate judg-
ment of distance. Nature, 395, 497–500.
Singleton, R. S. (1991). Film scheduling, or, how long will it take to shoot your
movie? New York: Lone Eagle.
Skinner, B. F. (1977). The experimental analysis of operant behavior. Annals of the
New York Academy of Sciences, 291, 374–385.
Slinning, R., Rutherford, C., & Van der Meer, A. L. H. (2018). Perception of
occlusion of moving objects in young infants: A high-density EEG study. Poster
presented at ICIS Philadelphia 2018, 21st Biennial International Congress on
Infant Studies.
Smart Jr, L. J., Otten, E. W., Strang, A. J., Littman, E. M., & Cook, H. E. (2014).
Influence of complexity and coupling of optic flow on visually induced motion
sickness. Ecological Psychology, 26, 301–324.
Smart, L. J., Stoffregen, T. A., & Bardy, B. G. (2002). Visually induced motion
sickness predicted by postural instability. Human Factors, 44, 451–465.
Smeets, J. B. J., Hayhoe, M. M., & Ballard, D. H. (1996). Goal-directed arm move-
ments change eye-head coordination. Experimental Brain Research, 109, 434–440.
Smith, T. J., & Mital, P. K. (2013). Attentional synchrony and the influence of
viewing task on gaze behavior in static and dynamic scenes. Journal of Vision,
13, 1–24.
Smitsman, A. W., & Corbetta, D. (2010). Action in infancy: Perspectives, concepts,
and challenges. In J. G. Bremner & T. D. Wachs (Eds.), The Wiley-Blackwell
handbook of infant development (2nd ed., Vol. 1, pp. 167–203). Chichester:
Wiley-Blackwell.
Solomon, H. Y., & Turvey, M. T. (1988). Haptically perceiving the distances
reachable with hand-held objects. Journal of Experimental Psychology: Human
Perception & Performance, 14, 404–427.
Sonoda, K., Asakura, A., Minoura, M., Elwood, R. W., & Gunji, Y. P. (2012). Hermit
crabs perceive the extent of their virtual bodies. Biology Letters, 8, 495–497.
Spencer, L. M., & Van der Meer, A. L. H. (2012). TauG-guidance of dynamic balance
control during gait initiation across adulthood. Gait & Posture, 36, 523–526.
Sperling, G. (1960). The information available in brief visual presentations. Psycho-
logical Monographs: General and Applied, 74, 1–29.
Sporrel, K., Caljouw, S. R., & Withagen, R. (2017). Gap-crossing behavior in a
standardized and a nonstandardized jumping stone configuration. PloS ONE,
12(5), e0176165.
Sposito, A., Bolognini, N., Vallar, G., & Maravita, A. (2012). Extension of per-
ceived arm length following tool-use: Clues to plasticity of body metrics. Neu-
ropsychologia, 50, 2187–2194.
Spröte, P., & Fleming, R. W. (2016). Bent out of shape: The visual inference of
non-rigid shape transformations applied to objects. Vision Research, 126,
330–346.
Stephen, D. G., Arzamarski, R., & Michaels, C. F. (2010). The role of fractality in
perceptual learning: Exploration in dynamic touch. Journal of Experimental
Psychology: Human Perception and Performance, 36, 1161–1173.
Stepp, N., & Turvey, M. T. (2010). On strong anticipation. Cognitive Systems
Research, 11, 148–164.
Sterelny, K. (2012). The evolved apprentice: How evolution made humans unique.
Cambridge, MA: MIT Press.
Stoffregen, T. A. (1985). Flow structure versus retinal location in the optical
control of stance. Journal of Experimental Psychology: Human Perception and
Performance, 11, 554–565.
Stoffregen, T. A. (1993). “Natural”, “real”, and the use of non-physical displays in
perception-action research. International Society for Ecological Psychology
Newsletter, 6, 4–9.
Stoffregen, T. A. (1997). Filming the world: An essay review of Anderson’s The
reality of illusion. Ecological Psychology, 9, 161–177.
Stoffregen, T. A. (2000a). Affordances and events. Ecological Psychology,
12, 1–28.
Stoffregen, T. A. (2000b). Affordances and events: Theory and research. Ecological
Psychology, 12, 93–107 (reply to commentaries).
Stoffregen, T. A. (2003a). Affordances as properties of the animal-environment
system. Ecological Psychology, 15, 115–134.
Stoffregen, T. A. (2003b). Affordances are enough: Reply to Chemero et al. (2003).
Ecological Psychology, 15, 29–36.
Stoffregen, T. A. (2004). Breadth and limits of the affordance concept. Ecological
Psychology, 16, 79–85.
Stoffregen, T. A. (2013). On the physical origins of inverted optic images. Ecolo-
gical Psychology, 25, 369–382.
Stoffregen, T. A., & Bardy, B. G. (2001). On specification and the senses. Behavi-
oral and Brain Sciences, 24, 195–213.
Stoffregen, T. A., Bardy, B. G., & Mantel, B. (2006). Affordances in the design of
enactive systems. Virtual Reality, 10, 4–10.
Stoffregen, T. A., Bardy, B. G., Merhi, O., & Oullier, O. (2004). Postural responses
to two technologies for generating optical flow. Presence, 13, 601–615.
Stoffregen, T. A., Bardy, B. G., Smart, L. J., & Pagulayan, R. J. (2003). On the
nature and evaluation of fidelity in virtual environments. In L. J. Hettinger & M.
W. Haas (Eds.), Virtual and adaptive environments: Applications, implications,
and human performance issues (pp. 111–128). Mahwah, NJ: Lawrence Erlbaum
Associates.
Stoffregen, T. A., Gorday, K. M., Sheng, Y.-Y., & Flynn, S. B. (1999). Perceiving
affordances for another person’s actions. Journal of Experimental Psychology:
Human Perception and Performance, 25, 120–136.
Stoffregen, T. A., & Mantel, B. (2015). Exploratory movement and affordances in
design. Artificial Intelligence for Engineering Design, Analysis and Manufac-
turing, 29, 257–265.
Stoffregen, T. A., Mantel, B., & Bardy, B. G. (2017). The senses considered as one
perceptual system. Ecological Psychology, 29, 165–197.
Stoffregen, T. A., & Smart Jr, L. J. (1998). Postural instability precedes motion
sickness. Brain Research Bulletin, 47, 437–448.
Stoffregen, T. A., Yang, C. M., Giveans, M. R., Flanagan, M., & Bardy, B. G.
(2009). Movement in the perception of an affordance for wheelchair locomotion.
Ecological Psychology, 21, 1–36.
Stoper, A. E., & Bautista, A. (1992). Apparent height as a function of pitched
environment and task. Paper presented at the Association for Research in Vision
and Ophthalmology, Sarasota, FL.
Stoper, A. E., & Cohen, M. M. (1986). Judgments of eye level in light and in dark-
ness. Perception & Psychophysics, 40, 311–316.
Stoper, A. E., & Cohen, M. M. (1989). Effect of structured visual environments on
apparent eye level. Perception & Psychophysics, 46, 469–475.
Stratton, G. M. (1897). Vision without inversion of the retinal image. Psycho-
logical Review, 4, 463–481.
Swanson, L. W. (2015). Neuroanatomical terminology. New York: Oxford Univer-
sity Press.
Tamboer, J. W. I. (1988). Images of the body underlying concepts of action. In O.
G. Meijer & K. Roth (Eds.), Complex movement behaviour: The motor-action
controversy (pp. 439–462). Amsterdam: Elsevier Science Publishers.
Tanrikulu, O., Froyen, V., Feldman, J., & Singh, M. (2018). When is accreting/
deleting texture seen as in front? Interpretation of depth from texture motion.
Perception, 47, 694–721.
Tatler, B. W. (2007). The central fixation bias in scene viewing: Selecting an
optimal viewing position independently of motor biases and image feature distri-
butions. Journal of Vision, 7, 1–17. doi:10.1167/7.14.4.
Tatler, B. W., Hayhoe, M. M., Land, M. F., & Ballard, D. H. (2011). Eye guidance
in natural vision: Reinterpreting salience. Journal of Vision, 11, 1–23.
Tatler, B. W., Hirose, Y., Finnegan, S. K., Pievilainen, R., Kirtley, C., & Kennedy,
A. (2013). Priorities for selection and representation in natural tasks. Philo-
sophical Transactions of the Royal Society B, 368, 20130066.
Tatler, B. W., & Land, M. F. (2015). Everyday visual attention. In J. M. Fawcett,
E. F. Risko, & A. Kingstone (Eds.), The handbook of attention. Cambridge, MA:
MIT Press.
Texier, P.-J. (1995). The Oldowan assemblage from NY 18 site at Nyabusosi
(Toro-Uganda). Comptes Rendus de l’Académie des Sciences, Paris, 320, II a,
647–653.
‘t Hart, B. M., & Einhauser, W. (2012). Mind the step: Complementary effects of
an implicit task on eye and head movements in real-life gaze allocation. Experi-
mental Brain Research, 223, 233–249.
Thomas, B. J., Hawkins, M. M., & Nalepka, P. (2017). Perceiver as polar planime-
ter: Direct perception of jumping, reaching, and jump-reaching affordances for
the self and others. Psychological Research, 82, 655–674.
Thomas, B. J., & Riley, M. A. (2014). Remembered affordances reflect the funda-
mentally action-relevant, context-specific nature of visual perception. Journal of
Experimental Psychology: Human Perception and Performance, 40, 2361–2371.
Thomas, B. J., & Riley, M. A. (2015). The selection and usage of information for
perceiving and remembering intended and unintended object properties. Journal
of Experimental Psychology: Human Perception and Performance, 41, 807–815.
doi:10.1037/xhp0000050.
Thomas, B. J., Wagman, J. B., Hawkins, M., Havens, M., & Riley, M. A. (2017).
The independent perceptual calibration of action-neutral and-referential environ-
mental properties. Perception, 46, 586–604.
Thompson, E. (2007). Mind in life: Biology, phenomenology, and the sciences of
mind. Cambridge, MA: Harvard University Press.
Tinsley, J. M., Molodtsov, M. I., Prevedel, R., Wartmann, D., Espigule-Pons, J.,
Lauwers, M., & Vaziri, A. (2016). Direct detection of a single photon by
humans. Nature Communications, 7(12172), 1–9.
Todd, J. T. (1982). Visual information about rigid and nonrigid motion: A geo-
metric analysis. Journal of Experimental Psychology: Human Perception and
Performance, 8, 238–252.
Todd, J. T., Christensen, J. T., & Guckes, K. C. (2010). Are discrimination thresh-
olds a valid measure of variance for judgments of slant from texture? Journal of
Vision, 10(2), 1–18.
Todd, J. T., Oomes, A. H. J., Koenderink, J. J., & Kappers, A. M. L. (2001). On
the affine structure of perceptual space. Psychological Science, 12, 191–196.
Todd, J. T., Thaler, L., & Dijkstra, T. M. H. (2005). The effects of field of view on
the perception of 3D slant from texture. Vision Research, 45, 1501–1517.
Torenvliet, G. (2003). We can’t afford it! Interactions, 10, 13–17.
Toye, R. C. (1986). The effect of viewing position on the perceived layout of space.
Perception & Psychophysics, 40, 85–92.
Tufte, E. R. (1983). The visual display of quantitative information. Cheshire, CT:
Graphics Press.
Turner, A., Doxa, M., O’Sullivan, D., & Penn, A. (2001). From isovists to visibility
graphs: A methodology for the analysis of architectural space. Environment and
Planning B: Planning and Design, 28, 103–121.
Turvey, M. T. (1992). Affordances and prospective control: An outline of the onto-
logy. Ecological Psychology, 4, 173–187.
Turvey, M. T. (2004). Space (and its perception): The first and final frontier. Eco-
logical Psychology, 16, 25–29.
Turvey, M. T. (2013). Ecological perspective on perception-action: What kind of
science does it entail? In W. Prinz, M. Beisert, & A. Herwig (Eds.), Action science:
Foundations of an emerging discipline (pp. 139–170). Cambridge, MA: MIT Press.
Turvey, M. T. (2015). Quantum-like issues at nature’s ecological scale (the scale of
organisms and their environments). Mind and Matter, 13, 7–44.
Turvey, M. T. (2019). Lectures on perception: An ecological perspective. New
York: Routledge.
Turvey, M. T., & Fonseca, S. T. (2014). The medium of haptic perception: A
tensegrity hypothesis. Journal of Motor Behavior, 46, 143–187.
Turvey, M. T., Romaniak-Gross, C., Isenhower, R. W., Arzamarski, R., Harrison,
S., & Carello, C. (2009). Human odometer is gait-symmetry specific. Proceed-
ings of the Royal Society of London B: Biological Sciences, 276, 4309–4314.
Turvey, M. T., Shaw, R. E., Reed, E. S., & Mace, W. M. (1981). Ecological laws
of perceiving and acting: In reply to Fodor and Pylyshyn (1981). Cognition, 9,
237–304.
Ulano, M. (2009). Moving pictures that talk: Part 1: How is it possible? CAS
Journal. Retrieved July 3, 2018 from www.filmsound.org/ulano/talkies.htm.
326 References
Van Braeckel, K., Butcher, P. R., Geuze, R. H., Van Duijn, M. A. J., Bos, A., &
Bouma, A. (2008). Less efficient elementary visuomotor processes in 7- to
10-year-old preterm-born children without cerebral palsy: An indication of
impaired dorsal stream processes. Neuropsychology, 22, 755–764.
Van der Meer, A. L. H. (1997a). Keeping the arm in the limelight: Advanced visual
control of arm movements in neonates. European Journal of Paediatric
Neurology, 4, 103–108.
Van der Meer, A. L. H. (1997b). Visual guidance of passing under a barrier. Early
Development and Parenting, 6, 149–157.
Van der Meer, A. L. H., Fallet, G., & Van der Weel, F. R. (2008). Perception of struc-
tured optic flow and random visual motion in infants and adults: A high-density
EEG study. Experimental Brain Research, 186, 493–502.
Van der Meer, A. L. H., Holden, G., & Van der Weel, F. R. (2005). Coordination
of sucking, swallowing, and breathing in healthy newborns. Journal of Pediatrics
and Neonatology, 2, 69–72.
Van der Meer, A. L. H., Ramstad, M., & Van der Weel, F. R. (2008). Choosing
the shortest way to mum: Auditory guided rotation in 6- to 9-month-old infants.
Infant Behavior and Development, 31, 207–216.
Van der Meer, A. L. H., Svantesson, M., & Van der Weel, F. R. (2012). Longitud-
inal study of looming in infants with high-density EEG. Developmental Neuro-
science, 34, 488–501. doi:10.1159/000345154.
Van der Meer, A. L. H., & Van der Weel, F. R. (1995). Move yourself, baby!
Perceptuo-motor development from a continuous perspective. In P. Rochat (Ed.),
The self in infancy: Theory and research (pp. 257–275). Amsterdam: Elsevier
Science Publishers.
Van der Meer, A. L. H., & Van der Weel, F. R. (2011). Auditory guided arm and
whole body movements in young infants. In P. Strumello (Ed.), Advances in
sound localization (pp. 297–314). Vienna: InTech.
Van der Meer, A. L. H., & Van der Weel, F. R. (2013). Combining findings from
gaze and electroencephalography recordings to study timing in a visual tracking
task. NeuroReport, 24, 968–972.
Van der Meer, A. L. H., Van der Weel, F. R., & Lee, D. N. (1994). Prospective
control in catching by infants. Perception, 23, 287–302.
Van der Meer, A. L. H., Van der Weel, F. R., & Lee, D. N. (1995a). The functional
significance of arm movements in neonates. Science, 267, 693–695.
Van der Meer, A. L. H., Van der Weel, F. R., & Lee, D. N. (1996). Lifting weights
in neonates: Developing visual control of reaching. Scandinavian Journal of
Psychology, 37, 424–436.
Van der Meer, A. L. H., Van der Weel, F. R., Lee, D. N., Laing, I. A., & Lin, J.-P.
(1995b). Development of prospective control of catching moving objects in
preterm at-risk infants. Developmental Medicine and Child Neurology, 37,
145–158.
Van der Velden, H. A. (1946). The number of quanta necessary for the perception
of light in the human eye. Ophthalmologica, 111, 321–331.
Van der Weel, F. R., Craig, C. M., & Van der Meer, A. L. H. (2007). The rate of
change of tau. In G.-J. Pepping & M. A. Grealy (Eds.), Closing the gap: The sci-
entific writings of David N. Lee (pp. 305–365). Mahwah, NJ: Lawrence Erlbaum
Associates.
Van der Weel, F. R., & Van der Meer, A. L. H. (2009). Seeing it coming: Infants’
brain responses to looming danger. Naturwissenschaften, 96, 1385–1391.
doi:10.1007/s00114-009-0585-y.
Van der Weel, F. R., & Van der Meer, A. L. H. (2019). Infants’ brain responses to
looming danger: Degeneracy of neural connectivity patterns. Ecological Psych-
ology (in press; special issue on “Gibsonian neuroscience”).
Van der Weel, F. R., Van der Meer, A. L. H., & Lee, D. N. (1991). Effect of task
on movement control in cerebral palsy: Implications for assessment and therapy.
Developmental Medicine and Child Neurology, 33, 419–426.
Van der Weel, F. R., Van der Meer, A. L. H., & Lee, D. N. (1996). Measuring dys-
function of basic movement control in cerebral palsy. Human Movement Science,
15, 253–283.
Van Dijk, L., & Rietveld, E. (2017). Foregrounding sociomaterial practice in our
understanding of affordances: The skilled intentionality framework. Frontiers in
Psychology, 7, 1969.
Van Dijk, L., Withagen, R., & Bongers, R. M. (2015). Information without
content: A Gibsonian reply to enactivists’ worries. Cognition, 134, 210–214.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive
science and human experience. Cambridge, MA: MIT Press.
Vaz, D. V., Silva, P. L., Mancini, M. C., Carello, C., & Kinsella-Shaw, J. (2017).
Towards an ecologically grounded functional practice in rehabilitation. Human
Movement Science, 52, 117–132.
Vicente, K. J. (2002). Ecological interface design: Progress and challenges. Human
Factors, 44, 62–78.
Vicente, K. J., Kada-Bekhaled, K., Hillel, G., Cassano, A., & Orser, B. A. (2003). Pro-
gramming errors contribute to death from patient-controlled analgesia: Case report
and estimate of probability. Canadian Journal of Anesthesiology, 50, 328–332.
Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II:
Mediating direct perception in complex work domains. Ecological Psychology,
2, 207–249.
Vilhelmsen, K., Agyei, S. B., Van der Weel, F. R., & Van der Meer, A. L. H.
(2018). A high-density EEG study of differentiation between two speeds and
directions of simulated optic flow in adults and infants. Psychophysiology.
doi:10.1111/psyp.13281.
Vilhelmsen, K., Van der Weel, F. R., & Van der Meer, A. L. H. (2015). A high-
density EEG study of differences between three high speeds of simulated forward
motion from optic flow in adult participants. Frontiers in Systems Neuroscience,
9, 146.
Villard, S. J., Flanagan, M. B., Albanese, G. M., & Stoffregen, T. A. (2008). Pos-
tural instability and motion sickness in a virtual moving room. Human Factors,
50, 332–345.
Viswanathan, G. M., Da Luz, M. G., Raposo, E. P., & Stanley, H. E. (2011). The
physics of foraging: An introduction to random searches and biological encoun-
ters. Cambridge: Cambridge University Press.
Volcic, R., Fantoni, C., Caudek, C., Assad, J. A., & Domini, F. (2013). Visuomo-
tor adaptation changes stereoscopic depth perception and tactile discrimination.
Journal of Neuroscience, 33, 17081–17088.
Von Hofsten, C. (1983). Catching skills in infancy. Journal of Experimental Psych-
ology: Human Perception and Performance, 9, 75–85.
Von Hofsten, C. (2003). On the development of perception and action. In K. J.
Connolly & J. Valsiner (Eds.), Handbook of developmental psychology (pp.
114–140). London: Sage.
Von Hofsten, C., Feng, Q., & Spelke, E. (2000). Object representation and predic-
tive action in infancy. Developmental Science, 3, 193–205.
Von Hornbostel, E. M. (1927). The unity of the senses. Psyche, 7, 83–89.
Von Uexküll, J. (1992). A stroll through the worlds of animals and men: A picture
book of invisible worlds. Semiotica, 89, 319–391. (Original work published 1934.)
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological
processes. Cambridge, MA: Harvard University Press.
Wade, N. J., & Finger, S. (2001). The eye as an optical instrument: From camera
obscura to Helmholtz’s perspective. Perception, 30, 1157–1177.
Wagman, J. B. (2010). What is responsible for the emergence of order and pattern
in psychological systems? Journal of Theoretical and Philosophical Psychology,
30, 32–50.
Wagman, J. B., & Abney, D. H. (2012). Transfer of recalibration from audition to
touch: Modality independence as a special case of anatomical independence.
Journal of Experimental Psychology: Human Perception and Performance, 38,
589–602.
Wagman, J. B., & Abney, D. H. (2013). Is calibration of the perception of length
modality-independent? Attention, Perception, & Psychophysics, 75, 824–829.
Wagman, J. B., Bai, J., & Smith, P. J. (2016). Nesting in perception of affordances
for stepping and leaping. Attention, Perception, & Psychophysics, 78, 1771–1780.
Wagman, J. B., Caputo, S. E., & Stoffregen, T. A. (2016a). Hierarchical nesting of
affordances in a tool use task. Journal of Experimental Psychology: Human Per-
ception & Performance, 42, 1627–1642.
Wagman, J. B., & Carello, C. (2001). Affordances and inertial constraints on tool
use. Ecological Psychology, 13, 173–195.
Wagman, J. B., & Chemero, A. (2014). The end of the debate over extended cogni-
tion. In T. Solymosi & J. Shook (Eds.), Neuroscience, neurophilosophy, and
pragmatism: Understanding brains at work in the world (pp. 105–124). Basing-
stoke: Palgrave Macmillan.
Wagman, J. B., & Day, B. M. (2014). Changes in context and perception of
maximum reaching height. Perception, 43, 129–144.
Wagman, J. B., & Hajnal, A. (2014a). Task specificity and anatomical independ-
ence in perception of properties by means of a wielded object. Journal of Experi-
mental Psychology: Human Perception and Performance, 40, 2372–2391.
Wagman, J. B., & Hajnal, A. (2016). Use your head! Perception of action possibil-
ities by means of an object attached to the head. Experimental Brain Research,
234, 829–836.
Wagman, J. B., Langley, M. D., & Farmer-Dougan, V. (2017). Doggone
affordances: Canine perception of affordances for reaching. Psychonomic Bul-
letin & Review, 24, 1097–1103.
Wagman, J. B., & Malek, E. A. (2008). Perception of affordances for walking
under a barrier from proximal and distal points of observation. Ecological
Psychology, 20, 65–83.
Wagman, J. B., & Malek, E. A. (2009). Geometric, kinetic-kinematic, and inten-
tional constraints influence willingness to pass under a barrier. Experimental
Psychology, 56, 409–417.
Wagman, J. B., & Miller, D. B. (2003). Nested reciprocities: The organism-
environment system in perception-action and development. Developmental Psy-
chobiology, 42, 317–334.
Wagman, J. B., & Morgan, L. L. (2010). Nested prospectivity in perception: Per-
ceived maximum reaching height reflects anticipated changes in reaching ability.
Psychonomic Bulletin & Review, 17, 905–909.
Wagman, J. B., Shockley, K., Riley, M. A., & Turvey, M. T. (2001). Attunement,
calibration, and exploration in fast haptic perceptual learning. Journal of Motor
Behavior, 33, 323–327.
Wagman, J. B., Stoffregen, T. A., Bai, J., & Schloesser, D. S. (2018). Perceiving
nested affordances for another person’s actions. The Quarterly Journal of
Experimental Psychology, 71, 790–799.
Wagman, J. B., Thomas, B. J., McBride, D. M., & Day, B. M. (2013). Perception
of maximum reaching height when the means of reaching are no longer in view.
Ecological Psychology, 25, 63–80.
Wagner, M. (1985). The metric of visual space. Perception & Psychophysics, 38,
483–495.
Wagner, M. (2006). The geometries of visual space. New York: Psychology Press.
Wallach, H., & O’Connell, D. N. (1953). The kinetic depth effect. Journal of
Experimental Psychology, 45, 205–217.
Wallach, H., & O’Leary, A. (1982). Slope of regard as a distance cue. Perception
& Psychophysics, 31, 145–148.
Waller, D., & Richardson, A. R. (2008). Correcting distance estimates by interact-
ing with immersive virtual environments: Effects of task and available sensory
information. Journal of Experimental Psychology: Applied, 14, 61–72.
Walter, H., Wagman, J. B., Stergiou, N., Erkmen, N., & Stoffregen, T. A. (2017).
Dynamic perception of dynamic affordances: Walking on a ship at sea. Experi-
mental Brain Research, 235, 517–524.
Wang, S., Baillargeon, R., & Paterson, S. (2005). Detecting continuity violations in
infancy: A new account and new evidence for covering and tube events. Cogni-
tion, 95, 129–173.
Warren, W. H. (1984). Perceiving affordances: Visual guidance of stair climbing.
Journal of Experimental Psychology: Human Perception and Performance, 10,
683–703.
Warren, W. H. (1995). Constructing an econiche. In J. Flach, P. Hancock, J. Caird,
& K. Vicente (Eds.), Global perspectives on the ecology of human-machine
systems. Hillsdale, NJ: Lawrence Erlbaum Associates.
Warren, W. H. (1998). Visually controlled locomotion: 40 years later. Ecological
Psychology, 10(3–4), 177–219.
Warren, W. H. (2005). Direct perception: The view from here. Philosophical
Topics, 33, 335–361.
Warren, W. H. (2006). The dynamics of perception and action. Psychological
Review, 113, 358–389.
Warren, W. H. (2007). Action-scaled information. In G. J. Pepping & M. L. Grealy
(Eds.), Closing the gap: The scientific writings of David N. Lee (pp. 253–268).
Mahwah, NJ: Erlbaum.
Warren, W. H., Kay, B. A., & Yilmaz, E. H. (1996). Visual control of posture
during walking: Functional specificity. Journal of Experimental Psychology:
Human Perception and Performance, 22, 818–838.
Warren, W. H., Kay, B. A., Zosh, W. D., Duchon, A. P., & Sahuc, S. (2001). Optic
flow is used to control human walking. Nature Neuroscience, 4, 213–216.
Warren, W. H., Rothman, D. B., Schnapp, B. H., & Ericson, J. D. (2017). Worm-
holes in virtual space: From cognitive maps to cognitive graphs. Cognition, 166,
152–163.
Warren, W. H., & Shaw, R. E. (1985a). Events and encounters as units of analysis
for ecological psychology. In W. H. Warren & R. E. Shaw (Eds.), Persistence
and change: Proceedings of the First International Conference on Event Percep-
tion. (pp. 1–27). Hillsdale, NJ: Lawrence Erlbaum Associates.
Warren, W. H., & Shaw, R. E. (Eds.) (1985b). Persistence and change: Proceedings
of the First International Conference on Event Perception. Hillsdale, NJ:
Lawrence Erlbaum Associates.
Warren, W. H., & Whang, S. (1987). Visual guidance of walking through aper-
tures: Body-scaled information for affordances. Journal of Experimental Psych-
ology: Human Perception and Performance, 13, 371–383.
Weast, J. A., Shockley, K., & Riley, M. A. (2011). The influence of athletic experi-
ence and kinematic information on skill-relevant affordance perception. The
Quarterly Journal of Experimental Psychology, 64, 689–706.
Weast, J. A., Walton, A., Chandler, B. C., Shockley, K., & Riley, M. A. (2014).
Essential kinematic information, athletic experience, and affordance perception
for others. Psychonomic Bulletin & Review, 21, 823–829.
Welch, R. B., Choe, C. S., & Heinrich, D. R. (1974). Evidence for a three-
component model of prism adaptation. Journal of Experimental Psychology,
103, 700–705.
Weschler, L. (2009). Seeing is forgetting the name of the thing one sees. Berkeley,
CA: University of California Press. (Original work published 1982.)
Weyl, H. (1952). Symmetry. Princeton, NJ: Princeton University Press.
Wheeler, J. (2000). Geons, black holes, and quantum foam (Rev. ed.). New York:
W. W. Norton & Company.
White, E., Shockley, K., & Riley, M. A. (2013). Multimodally specified energy
expenditure and action-based distance judgments. Psychonomic Bulletin &
Review, 20, 1371–1377.
Willats, J. (1990). The draughtsman’s contract: How an artist creates an image. In
H. Barlow, C. Blakemore, & M. Weston-Smith (Eds.), Images and understanding
(pp. 235–255). Cambridge: Cambridge University Press.
Willemsen, P., Colton, M. B., Creem-Regehr, S. H., & Thompson, W. B. (2009).
The effects of head-mounted display mechanical properties and field of view on
distance judgments in virtual environments. ACM Transactions on Applied Per-
ception (TAP), 6(2), 8.
Withagen, R. (2004). The pickup of nonspecifying variables does not entail indirect
perception. Ecological Psychology, 16, 237–253.
Withagen, R., Araújo, D., & de Poel, H. J. (2017). Inviting affordances and agency.
New Ideas in Psychology, 45, 11–18.
Withagen, R., & Chemero, A. (2009). Naturalizing perception: Developing the
Gibsonian approach to perception along evolutionary lines. Theory & Psych-
ology, 19, 363–389.
Withagen, R., de Poel, H. J., Araújo, D., & Pepping, G. J. (2012). Affordances can
invite behavior: Reconsidering the relationship between affordances and agency.
New Ideas in Psychology, 30, 250–258.
References 331
Withagen, R., & Michaels, C. F. (2005a). The role of feedback information for
calibration and attunement in perceiving length by dynamic touch. Journal of
Experimental Psychology: Human Perception and Performance, 31, 1379–1390.
Withagen, R., & Michaels, C. F. (2005b). On ecological conceptualizations of per-
ceptual systems and action systems. Theory & Psychology, 15, 603–620.
Withagen, R., & van Wermeskerken, M. (2009). Individual differences in learning
to perceive length by dynamic touch: Evidence for variation in perceptual learn-
ing capacities. Perception & Psychophysics, 71, 64–75.
Withagen, R., & van Wermeskerken, M. (2010). The role of affordances in the
evolutionary process reconsidered: A niche construction perspective. Theory &
Psychology, 20, 489–510.
Witt, J. K. (2011). Action’s effect on perception. Current Directions in Psycho-
logical Science, 20, 201–206.
Witt, J. K., Linkenauger, S. A., Bakdash, J. Z., & Proffitt, D. R. (2008). Putting to
a bigger hole: Golf performance relates to perceived size. Psychonomic Bulletin
& Review, 15, 581–585.
Witt, J. K., Proffitt, D. R., & Epstein, W. (2010). When and how are spatial per-
ceptions scaled? Journal of Experimental Psychology: Human Perception and
Performance, 36, 1153–1160.
Witt, J. K., & Riley, M. A. (2014). Discovering your inner Gibson: Reconciling
action-specific and ecological approaches to perception–action. Psychonomic
Bulletin & Review, 21, 1353–1370.
Wraga, M. J. (1999). The role of eye height in perceiving affordances and object
dimensions. Perception & Psychophysics, 61, 490–507.
Wraga, M. J., & Proffitt, D. R. (2000). Mapping the zone of eye-height utility for
seated and standing observers. Perception, 29, 1361–1383.
Wu, B., He, Z. J., & Ooi, T. L. (2007). Inaccurate representation of the ground
surface beyond a texture boundary. Perception, 36, 703–721.
Wu, B., Ooi, T. L., & He, Z. J. (2004). Perceiving distance accurately by a direc-
tional process of integrating ground information. Nature, 428, 73–77.
Wunsch, K., Henning, A., Aschersleben, G., & Weigelt, M. (2013). A systematic
review of the end-state comfort effect in normally developing children and in
children with developmental disorders. Journal of Motor Learning and Develop-
ment, 1, 59–76.
Xiao, B., Bi, W., Jia, X., Wei, H., & Adelson, E. H. (2016). Can you see what you
feel? Color and folding properties affect visual–tactile material discrimination of
fabrics. Journal of Vision, 16(3), 34. doi:10.1167/16.3.34.
Yarbus, A. L. (1967). Eye movements and vision. New York: Plenum.
Yasuda, M., Wagman, J. B., & Higuchi, T. (2014). Can perception of aperture passa-
bility be improved immediately after practice in actual passage? Dissociation between
walking and wheelchair use. Experimental Brain Research, 232, 753–764.
Yellott, J. I. Jr., Wandell, B. A., & Cornsweet, T. N. (2011). The beginnings of
visual perception: The retinal image and its initial encoding. In Handbook of
physiology: The nervous system. New York: Wiley Online Library.
Yonas, A., Goldsmith, L. T., & Hallstrom, J. L. (1978). Development of sensitivity
to information provided by cast shadows in pictures. Perception, 7, 333–341.
Yoonesi, A., & Baker, C. L. (2013). Depth perception from dynamic occlusion in
motion parallax: Roles of expansion-compression versus accretion-deletion.
Journal of Vision, 13, 1–6.
Yu, C., & Smith, L. B. (2013). Joint attention without gaze following: Human
infants and their parents coordinate visual attention to objects through eye-hand
coordination. PLoS ONE, 8, e79659. doi:10.1371/journal.pone.0079659.
Yu, Y., Bardy, B. G., & Stoffregen, T. A. (2010). Influences of head and torso
movement before and during affordance perception. Journal of Motor Behavior,
43, 45–54.
Zaal, F. T. J. M., & Bootsma, R. J. (2011). Virtual reality as a tool for the study of
perception-action: The case of running to catch fly balls. Presence: Teleoperators
& Virtual Environments, 20, 93–103.
Zaal, F. T. J. M., & Michaels, C. F. (2003). The information for catching fly balls:
Judging and intercepting virtual balls in a CAVE. Journal of Experimental Psych-
ology: Human Perception and Performance, 29, 537–555.
Zajonc, A. (1993). Catching the light: The entwined history of light and mind.
New York: Bantam Books.
Zhao, H., & Warren, W. H. (2015). On-line and model-based approaches to the
visual control of action. Vision Research, 110, 190–202.
Zhu, Q., & Bingham, G. P. (2010). Learning to perceive the affordance for long-
distance throwing: Smart mechanism or function learning? Journal of Experi-
mental Psychology: Human Perception & Performance, 36, 862–875.
Zhu, Q., & Bingham, G. P. (2011). Human readiness to throw: The size–weight
illusion is not an illusion when picking the best objects to throw. Evolution and
Human Behavior, 32, 288–293.
Index