Physics Avoidance
Essays in Conceptual Strategy
MARK WILSON
Great Clarendon Street, Oxford, OX2 6DP,
United Kingdom
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide. Oxford is a registered trade mark of
Oxford University Press in the UK and in certain other countries.
© Mark Wilson 2017
The moral rights of the author have been asserted
First Edition published in 2017
Impression: 1
All rights reserved. No part of this publication may be reproduced, stored in
a retrieval system, or transmitted, in any form or by any means, without the
prior permission in writing of Oxford University Press, or as expressly permitted
by law, by licence or under terms agreed with the appropriate reprographics
rights organization. Enquiries concerning reproduction outside the scope of the
above should be sent to the Rights Department, Oxford University Press, at the
address above
You must not circulate this work in any other form
and you must impose this same condition on any acquirer
Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America
British Library Cataloguing in Publication Data
Data available
Library of Congress Control Number: 2017942155
ISBN 978–0–19–880347–8
Printed and bound by
CPI Group (UK) Ltd, Croydon, CR0 4YY
Links to third party websites are provided by Oxford in good faith and
for information only. Oxford disclaims any responsibility for the materials
contained in any third party website referenced in this work.
To Winston and Kathleen
1. Rudolf Magnus, Goethe as a Scientist, Heinz Norden, trans. (New York: Henry Shuman, 1949), pp. 55–6.
SUMMARY CONTENTS
2. PHYSICS AVOIDANCE 51
Appendix 1: Initial/Boundary Condition Mimics 90
Appendix 2: Constraints and the Physics of Mechanism 94
Despite its title, this book does not “avoid physics” in the sense of ignoring the topic altogether. Rather it concerns itself with various stratagems for evading lengthy and unreliable patterns of inference that we might otherwise confront in science. Here is our basic prototype of “avoidance.” Wobbly Jack and Jill are perched on the top of their proverbial hill, from which they will soon tumble. Where are they likely to land? In theory, we might painfully calculate their downward trajectories bump by bump until we reach a final landing place, but doing so requires an accurate assessment of hillside topography, the elastic coefficients pertinent to children, etc., as well as a formidable computer. Such reasoning pathways are notoriously prone to error, and great care needs to be taken to insure that one of our children will not become launched into computational interstellar space through a round-off error when they hit a rock. In contrast, consider the policy of looking for the lowest points of ground at the foot of the hill; that is where they will both eventually tumble, we can assuredly decide. Although we cannot fully augur at which of these lowest points they will be found, the predictions we can otherwise obtain through avoidance reasoning are extremely reliable and easy to obtain. In the standard jargon of applied mathematics, we have eschewed calculation of an evolutionary development (their detailed but hard to compute trajectories down the hill) in favor of an estimation of eventual equilibrium position (the locations where they can remain stationary). The character of the reasoning employed differs considerably between the two cases, and I’ll say that these two modes exemplify different forms of explanatory architecture.
[figure: avoiding physics]
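The contrast between evolutionary development and equilibrium estimation can be caricatured in a few lines of code. The terrain profile and both routines below are toy inventions for illustration, not anything taken from the text; they merely dramatize why locating possible resting points is cheaper and more robust than simulating the full tumble.

```python
# Toy rendering of the Jack-and-Jill contrast: step-by-step descent
# (evolutionary development) versus simply locating the lowest points
# of the terrain (equilibrium estimation). All numbers are invented.

heights = [5.0, 3.2, 4.1, 1.0, 2.5, 0.7, 3.9]   # hillside height samples

def equilibrium_candidates(h):
    """Physics avoidance: local minima, where a tumbling body can rest."""
    n = len(h)
    return [i for i in range(n)
            if (i == 0 or h[i] < h[i - 1]) and (i == n - 1 or h[i] < h[i + 1])]

def simulate_tumble(h, start):
    """The expensive route: move downhill one cell at a time until stuck."""
    i = start
    while True:
        lower = [j for j in (i - 1, i + 1) if 0 <= j < len(h) and h[j] < h[i]]
        if not lower:
            return i                     # nowhere lower to go: at rest
        i = min(lower, key=lambda j: h[j])

# Whatever the detailed trajectory, the simulation can only terminate
# at one of the cheaply computed equilibrium candidates.
candidates = equilibrium_candidates(heights)
assert simulate_tumble(heights, 0) in candidates
```

The equilibrium routine cannot say *which* resting point will be reached from a given start, exactly as the text notes, but it delivers the full set of possibilities without any error-prone trajectory integration.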
In these contexts, the phrase physics avoidance signals the exploitation of a strategic
opportunity that allows us to resolve important physical questions in a practical manner.
My central purpose is to alert readers to the characteristic symptomatology of some of
these distinctive reasoning patterns.
Contemporary philosophical thinking blurs many of these architectural ingredients
together, wrapping distinct edifices within a poorly diagnosed envelope that I call Theory
T thinking (in tribute to the numerous T’s and T’’s that populate these abstracted
discussions). As a result, substantially different forms of explanatory motif get stuffed
xii Preface and Acknowledgments
1. Insofar as I can determine, these same T-laden propensities have been perpetuated within the sinews of possible world thinking, as championed by the late David Lewis. See Essay 6 for more on this theme.
the restrictive tenets of which I complain obtain their ersatz credentials. Sometimes the
best way to combat pseudo-scientific prejudice is simply to evoke concrete scientific fact.
That, essentially, is my project in these essays. Unfortunately, the practices that most
effectively reveal the deficiencies of Theory T thinking are subtle in their internal
contours, and the sledding within some of these essays may sometimes get a bit
bumpy. Some of these expository jolts merely reflect poor writing on my part, but
other intricacies appear simply because nature has made it so—the paths of reasoning it
tolerates with respect to our physical surroundings are crooked and cannot be made
straight. Indeed, a significant part of my narrative task lies in explaining why easy things
often rely upon tricky rationales. Without qualms, baffled readers should skim lightly
over any turgid passages that impede their advance, for I think that my central morals
can be adequately grasped without mastering every kink in every example that I present.
A recurring thread running throughout the book is the necessity of approaching
conceptual innovation in a manner that is biologically plausible and computationally
feasible. We are, after all, creatures of relatively small brain, and the physical world casts
descriptive opportunities our way that are varied in strategic character with little regard
for our inferential limitations. Often the key to adaptive success is to find ways to re-engineer a familiar reasoning scheme A into a fresh routine that addresses task B in a swift and relatively painless manner. The results often provide significant computational advance cloaked in conceptual muddle, for the strategies that underpin policy B can be quite different from those that rationalize A. We can discover a worthy routine while misdiagnosing its underlying rationale, just as we can nimbly execute card tricks without understanding why they succeed (the latter often requires deep mathematics).
[figure: parlor tricks with reasoning]
Many of these basic issues are outlined in a preliminary manner within Essay 1, but many readers may prefer jumping directly into the waters of any of the subsequent essays, for immediate swimming is often more appealing than long disquisitions on dry land. Every essay in this volume began life as a self-sustained lecture, and every one can be read, ad libitum, apart from the others. Doing so entails some degree of repetition and cross-referencing across chapters, but the results are preferable, I think, to my previous literary production (Wandering Significance2), which blossomed into a pink enormity through an overly earnest attempt to thread a linear narrative through very variegated territory. Virtually nothing appearing here has been published before.3
That is not to say that the themes explored across these essays do not align with one
another fairly tightly, and most of them are organized around one of the many puzzles
2. Mark Wilson, Wandering Significance (Oxford: Oxford University Press, 2006).
3. Portions of Essay 5 appeared in “Stitching Together a Language for Science,” Analytic Philosophy 56(4) (2015).
that emerge when we attempt to explicate how the loose bundle of descriptive techniques called “classical physics” applies so effectively to the macroscopic world about us. In my initial plottings, I had expected to frame this collection around a long survey article entitled “What is ‘Classical Physics’ Anyway?”, portions of which have been published elsewhere.4 I eventually decided that this preface wasn’t necessary. However, it is within the applicational contexts of traditional classical mechanics that I first recognized the importance of asymptotic connections across characteristic scales (Essays 1 and 5) and the conceptual confusions that often follow in their wake.
[figure: roadmap to the book ahead]
In brief, here is a summary of contents.
Essay 1 (“Pragmatics’ Place at the Table”) argues for the importance of investigative
context as an important vehicle for reasoning compression, supporting the varied
explanatory architectures that allow us to capture the physical world in tractable
terms. This essay recapitulates the central argument of Wandering Significance in
variant terms.
Essay 2 (“Physics Avoidance”) distinguishes some fundamental classes of explanatory
architecture following the discernments of applied mathematics. In my opinion, descrip-
tive philosophy of science would be greatly improved if these diagnostic tools were
more widely employed. These distinctions will frequently reappear in other essays.
Essay 3 (“From the Bending of Beams to the Problem of Free Will”) outlines how
Leibniz’s strange metaphysical views stem from mathematical concerns intimately
linked to the methodological considerations of Essay 5. His insights have proved helpful
to me in other essays and have guided me to a better appreciation of what “metaphysics”
should be about.
Essay 4 (“Two Cheers for Anti-Atomism”) is another historically oriented chapter
focusing upon Pierre Duhem’s methodological insights with respect to classificatory
words such as “temperature.” This essay is long and detailed because Duhem directs our
attention to a complicated range of subtle facts about the applicational range of thermal
vocabulary. To its detriment, contemporary philosophical thinking with respect to
linguistic reference seems largely unaware of considerations of this type, so providing
a somewhat lengthy résumé of Duhem’s concerns strikes me as a worthy expositional
project. The conclusions reached are strongly complementary with the themes of other chapters. Again, this essay is intended as a tribute to another major influence upon my thinking.
4. Robert Batterman, ed., Oxford Handbook of the Philosophy of Physics (New York: Oxford University Press, 2013).
Essay 5 (“The Greediness of Scales”) outlines the working architectures of modern
multiscalar modeling techniques, in a manner that extends the discussion of investigative
context from Essay 1. Concrete organizational proposals of a multiscalar character help
us recognize the distortions and vagaries endemic within standard Theory T thinking.
Essay 6 (“Believers in the Land of Glory”) criticizes contemporary metaphysics’ tacit
reliance upon the coarse categories of Theory T thinking, which has misdirected
philosophical attention away from the puzzles of applied mathematical technique that
originally concerned Leibniz and other historical authorities.
Essay 7 (“Is There Life in Possible Worlds?”) examines the topsy-turvy manner in
which thinking about causation in contemporary “possible world” terms inverts the
practical considerations that rationalize appeals to counterfactual possibility within
effective science. The essay also emphasizes the role of reliability within any “science”
worthy of the name. A possible link to Wittgenstein’s later thought is sketched as well.
Essay 8 (“Semantic Mimicry”) supplies my central exemplar of how “easy things rely
upon tricky supportive rationales.” It concentrates upon the long-vexed problems of
describing a continuous material in a coherent way. This is probably the most technically demanding of the essays.
Essay 9 (“A Second Pilgrim’s Progress”) criticizes contemporary formulations of
“naturalism” from the perspective of applied mathematics, especially in regard to
mistaken presumptions about the role that set theory plays within its deliberations.
Although these assorted essays emphasize the many varieties of strategic architecture
that science and common sense utilize in addressing nature’s descriptive challenges,
I write as neither a pluralist nor an anti-realist.5 Instead, I see myself as reviving older
strains of skeptical concern with respect to our abilities to reason thoroughly about our
surroundings in mathematical ways. John Locke writes:
Reason, though it penetrates into the depths of the seas and earth, elevates our thoughts as
high as the stars and leads us through the vast spaces and large rooms of this mighty fabric,
yet it comes far short of the real extent of even corporeal being, and there are many
instances where it fails us.6
5. e.g. in the vein of Nancy Cartwright, How The Laws of Physics Lie (Oxford: Oxford University Press, 1983). Although Cartwright’s writings correctly highlight inferential patterns to which philosophers should pay greater attention, she generally supplies them with a gloss diametrically opposite to that I would favor. For example, she correctly observes that productive inferential pathways leading to useful empirical conclusions often hinge upon the order in which various limits are taken. But she then characterizes these behaviors as the typical “put up jobs” that physicists supply when they deceptively pretend that they have captured the “dappled world” within a unified physical framework. In contrast, I regard those same limit-sensitive behaviors as characteristic of the asymptotic techniques for which applied mathematicians have developed compelling rationales for their frequent appearances within inferential practice. The notion that there is anything “deceptive” or “idealized” about their invocation strikes me as altogether misguided. This is not to say that many mysteries about how asymptotic techniques store data still remain. But these merely strike me as agenda items for further research, not grounds for philosophical despair.
6. John Locke, An Essay Concerning Human Understanding (Oxford: Oxford University Press, 1979), p. 681.
At bottom, mathematics’ abilities to capture nature’s unfolding processes within its own
conceptual terms rest upon its capacities for supplying algorithms that can fill out pieces
of graph paper in a numerical manner, in hope of paralleling natural process in a helpful
fashion. Prosecuted naïvely, we already know that such efforts are doomed to failure, because, inter alia, nature presents us with lots of manifolds that can’t be smoothly mapped to flat pieces of paper (vide the problems of graphing a goose’s flight over a curvaceous earth discussed in Essay 9). In this fashion, our most basic modes of effective mathematical reasoning must fall “short of the real extent” of natural process. But this is hardly the end of the story, for clever mathematicians have developed many sophisticated strategies for stringing together patchworks of numerical approximation that allow us to feel that we have nonetheless “understood nature” in a fully satisfactory manner, despite the algorithmic limitations upon our concrete reasoning capacities. Central to most varieties of this liberation from computational shortsightedness are the tools of set theory and the differential calculus.
[figure: John Locke]
But these appeals frequently introduce descriptive exaggerations of the sort highlighted as “the greediness of scales” problem in Essay 5. At present, mathematicians repair these descriptive lapses through corrective doses of asymptotic interconnection, following Oliver Heaviside’s maxim:
It is said that every bane has its antidote and some amateur biologists have declared that
the antidote is to be found near its bane.7
In this vein, applied mathematics has developed wonderfully illuminating explanations
for why our lot as limited creatures of nature is doomed to perpetual challenge, as well
as suggesting a wide array of clever reasoning architectures to work around these
obstacles. This evaluation of our computational predicament is not pluralist in the
usual philosopher’s sense; it merely supplies straightforward scientific explanations of
the diverse descriptive demands that make more impatient thinkers presume that we
should become pluralists or anti-realists. So the methodological moral I extract is: be
more patient and do not fall prey to extreme philosophical remedies.
But how far can we advance through these devious methods of conceptual repair and
what will the fabric of affected science look like in the future? I don’t know, and I doubt
that anyone else alive can supply reliable appraisals of this prognostic cast. Other
philosophers, however, believe that they have somehow established an Archimedean
perch from which this futuristic feat can be accomplished. Insofar as I can determine,
these illusions of confidence largely trace to our unfortunate heritage of Theory
T thinking, a simplistic methodology in which mathematical science allegedly supplies “processes” that parallel nature’s own in a tidily isomorphic fashion. In one fell swoop, many of the detailed descriptive concerns that animated the great metaphysical writers of the past get removed from the docket of philosophical attention. I hope that these essays may assist in reversing these premature exclusions.
7. Oliver Heaviside, Electromagnetic Theory, Vol. II (New York: Chelsea, 1971), pp. 31–2. The original context is explained in Essay 5.
When Locke wondered about our capacity for understanding “the real extent of
corporeal being,” he worried skeptically about our ability to capture the varied phenomena encountered amongst ordinary solids like wood and steel (cohesion, elasticity,
fracture, transmission of coherent work) within a unified and coherent conceptual
package. Most of my own examples focus upon the same congeries of descriptive
concern, but with the advantages of modern hindsight. Material science has made
great advances with respect to the behaviors of solids in the centuries since Locke
wrote, and important aspects of these historical developments have involved deep and
subtle revisions in how scientists think about the concepts and the capacities of applied
mathematics. We can now render “corporeal being” a far greater degree of representational justice than Locke anticipated, but we achieve this feat by catching the relevant
behaviors within more complex forms of conceptual netting than he could have possibly
envisioned. Insofar as I can see, a considerable percentage of contemporary philosophers
remain oblivious to these modernizing alterations and attempt to cobble by with
classical conceptions of “concept” very much like Locke’s own.8 A suitable corrective
is simply to return to Locke’s original physical concerns and ponder them afresh in light
of what we now know. Such, essentially, is the task of these essays.
Yet, as Locke and the other great British essayists and poets of his time emphasized,
we forever remain limited creatures of nature and can only provisionally evaluate our
capacities for understanding the world around us based upon the practical advances we
find ourselves able to make. Philosophers cannot serve as prophets any more effectively
than our fellow captives of nature, and we should not pretend otherwise. Neither
conceptual analysis nor transcendental musings nor a priori intuition can free us from
these empirical limitations. At best, we can assemble provisional progress reports with
respect to how matters presently appear. But doing so competently requires that we
attend to the complexities of explanatory landscape in a more detailed manner than is
common within philosophy today.
Such musings have led to an interest in the multiscalar techniques of modern
computing, and I was fortunate to have my friend Bob Batterman, independently
studying these topics, rejoin me on the Pittsburgh philosophy faculty a few years ago.
The fruits of unceasing conversations with Bob appear on every page of this book, even
when it is not evident to the untrained eye. I am further grateful to Bob for securing a
grant from the John Templeton Foundation that supported me as I struggled to
complete this fulsome manuscript.
A second central influence arrived with another import to our Allegheny banks, Jim
Woodward, whose insightful work on causation has greatly enhanced my recognition of
8. See Wandering Significance, Chapter 3, for more on the “classical picture of concepts.”
the linguistic adjustments that open unexpected pathways from one form of computational architecture to another. Essay 6’s musings on “the early a priori” start here.
I have presented much of this material in a wide range of forums over several years,
and I fear that I cannot recall all of the kind folks who have offered me helpful advice.
But some of them are: James Allen, Jeremy Avigad, Gordon Belot, Robert Brandom,
Julia Bursten, A. W. Carus, Kathleen Cook, Josh Eisenthal, Sébastien Gandon, Brice
Halimi, Jeremy Heis, Jennifer Jhun, Jeff King, Peter Koellner, Michael Liston, John
MacFarlane, Alan Nelson, Jill North, Meghan Page, Kenneth Pearce, Christopher
Pincock, Mark Richard, Tom Ricketts, Robert Schwartz, Erica Shumener, Sheldon
Smith, Katie Tabb, and Jim Weatherall. As usual, I again thank my most faithful
respondents, Michael Friedman, Anil Gupta, Pen Maddy, George Wilson, as well as
my indulgent editor, Peter Momtchiloff.
Pittsburgh, 2016
1
PRAGMATICS’ PLACE AT THE TABLE
Lessons from Multiscalar Science
[O]ur common stock of words embodies all the distinctions men have found worth
drawing, and the connections they have found worth drawing, in the lifetimes of
many generations: these surely are likely to be more numerous, more sound, since they
have stood up to the long test of the survival of the fittest and more subtle, at least in
all ordinary and reasonably practical matters, than any that you or I are likely to
think up in our arm-chairs of an afternoon—the most favored alternative method.
J. L. Austin1
(i)
1. “A Plea for Excuses” in Philosophical Papers (Oxford: Oxford University Press, 1961), p. 130.
2 Pragmatics’ Place at the Table
Warning! This essay will shift rather dramatically between the uncomfortably lofty and
the disconcertingly nitty-gritty, for it will eventually discuss the behavior of granite and
steel in more detail than is customary within a philosophical essay.2
Let us begin with some of the uncomfortably lofty issues. Consider Wittgenstein’s
old slogan: “Meaning is use.” Jaunty aphorisms such as this serve as philosophy’s form of
Rorschach test: their explication reveals more about the psychology of their interpreter
than they encode in their own right. With that understanding, I’ll develop my own gloss
on its content here.
But let me first explain why I have positioned the present essay first in this collection.
At heart, I consider myself a philosopher of language, interested in the question, “Why
will an evolving descriptive practice sometimes fall into strange forms of conceptual
confusion, despite the fact that its practitioners have not recognized that they have made
any obvious kind of blunder?” The richest traditions of twentieth-century common sense
philosophy (epitomized by J. L. Austin) maintained that these anomalies arise because
effective real-life usage is controlled by subtle contextual factors that we often overlook.
But the unsystematic character of the discussions provided by Austin’s school strikes most modern readers as too haphazard to qualify as a plausible account of linguistic
practice. How could one ever learn a language as irregularly framed as that?
It turns out, however, that contextual dependencies of the sort highlighted by the
common sense philosophers are also active within a properly operating physical science.
Recent progress in the multiscalar modeling of complex materials (such as steel or
granite) stems from the realization that scientific success can be achieved only if the
descriptive vocabularies utilized are subjected to a carefully monitored set of contextual
restrictions, not wholly unlike the specialized “usages” of the common sense school.
These modern computational recipes factor a complicated problem into a nested set of
discrete investigations, which supply a collection of localized contexts that embedded
terminologies can subsequently exploit with great efficiency. I believe that the divisions
of linguistic labor exemplified within these modeling schemes supply us with excellent
illustrations of how the loose appeals to “usage” characteristic of the ordinary language
school can be formalized in a precise and systematically learnable manner. The purpose
of this chapter is to explicate the basic nature of these structural advantages.
But let us first rehearse the considerations that have led many contemporary
philosophers to reject the appeals to “use” characteristic of the old British school. In
this vein, consider Jerry Fodor’s brusque objection to the notion that pragmatically
inflected usage can play any significant role within the story of language:
Thought about the world is prior to thought about how to change the world. Accordingly,
knowing that is prior to knowing how. Descartes was right and Ryle was wrong. Why,
after all these years, does one still have to repeat these things?3
2. John MacFarlane once introduced me as follows: “There are some at the University of Pittsburgh who remain loyal to the heritage of Wilfrid Sellars, but Wilson is loyal to its heritage of steel.”
3. Jerry A. Fodor, LOT 2: The Language of Thought Revisited (Oxford: Oxford University Press, 2008), p. 14. A more exacting treatment of allied themes can be found in Jeffrey King and Jason Stanley, “Semantics, Pragmatics and the Role of Semantic Content” in Zoltán Gendler Szabó, ed., Semantics versus Pragmatics (New York: Oxford University Press, 2005).
4. The suggestions of the so-called free enrichment school strike me as helpful, but as too irregularly framed to address our parsing concerns fully.
Unlike Mrs. Antony, most of us monitor our ongoing progress in purposive reasoning as we think things through (my talk of “registers” captures the mental machinery we employ to keep track of our investigative intentions). We communicate these projects to others through locutions such as: “Let’s see what happens if we give an opponent the benefit of the doubt,” “Let’s see what happens if we enclose the triangle between two parallel lines,” “Let’s first catalog the possible escape routes,” etc. And we terminate these same stretches of investigative search by remarks such as “We’ve now found that our opponent’s views lead inexorably to a contradiction, hence . . . ,” “Accordingly, we learn that these two areas are forced to be equal,” “All of these routes ultimately trace to a hacienda on old US 99, so let’s send some troopers out there,” etc. Such are the guiding locutions of language management that are absent within Mrs. Antony’s aimless disquisitions. We learn the rudiments of these communicative structures at the same time as we acquire the other significant aspects of language. When Mother teaches us the meaning of “steel,” she simultaneously instructs us in strategies for reasoning about such materials in a rudimentary way, including the mastery of discourse directives such as “Let’s consider what happens when we look at the metal under a microscope.” I claim that many aspects of classificatory reference can be adequately understood only through recognizing the profound semantic contributions supplied by these compartmentalized registrations of strategic context.
[figure: parental instruction]
Allied forms of mental structuring have been actively studied within cognitive
science, sometimes with important carry-overs into linguistics proper, but Fodor’s
generative concerns do not appear to have been directly confronted in these terms.
But the present essay will not make its case for “use” in this exact vein, but will argue instead from the perspective of effective language design: if we lacked tools for keeping track of our shifting investigative purposes, we would find ourselves unable to reason about many common forms of everyday circumstance, due to their intractable complexities.5 We are not supernatural intellects; we forever remain the evolved descendants of humble hunter-gatherers, who must cobble together and redirect our modest computational inheritance in the pursuit of more sophisticated objectives. Philosophers often proceed on the presumption that we possess bigger brains and inferential skills than we do, able to juggle descriptive parameters and computational processes far beyond our actual capacities. But with a limited stock of words and smallish brains, we must forever seek roundabout strategies that allow us to handle the extremely large range of challenges that we confront within science and everyday practice (such workaround tactics are called “strategies of physics avoidance” in Essay 2). The multiscalar methods we shall survey in this essay beautifully illustrate how descriptive
5. Some philosophers like to argue by transcendental deduction but I prefer brute engineering necessity.
(ii)
Before we look into a typical scientific example, let us first remind ourselves of how
swiftly we shift our investigative focus within our everyday reasonings. Suppose that we
wish to rob a certain bank in Montrose, California. General wisdom in the design of
geographical information systems recommends that we divide our problem into investigative stages to be addressed sequentially: First ask, “What is the shortest route to the
6. It is worth remarking that we humans seldom execute this optimality search as a computer would, but prefer drawing a conventional map on a piece of paper and scanning it with our eyes. Why? My guess is that we have inherited excellent “geometrical fact estimators” from our hunter-gatherer ancestors originally devised for computing preferred paths across a landscape. We have learned to redirect these skills with respect to shortest route estimation by substituting a conventional map for a natural environment. Homo sapiens’ remarkable capacities for the plastic redirection of pre-set routines reoccur within Lesson VI below. The general consideration that possibility spaces need to be kept small and targeted comprises the central theme of Essay 7.
7. In an allied manner, we rarely notice the edits in a Hollywood movie like The Incredible Shrinking Man, despite the fact that they frequently shift perspective radically.
8. 6,291,988 bytes versus 428,587 bytes in the representations I originally consulted. By the time you see them, dear reader, the photos will have become subjected to further forms of compression and may have once again become equal!
process. Compressive schemes instead ask an initial series of broad questions Q1, which
then incite a follow-up round of finer grained questions Q2 depending upon the response
to Q1, and so on through a nested array of further queries Q3, Q4, etc.9 To keep track of
these interdependencies, we often list the questions posed within a segregated front-end
register (Q1, Q2, Q3, . . . ), followed by an enumeration of their respective answers
(A1, A2, A3, . . . ).
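For readers who like to see such schemes in miniature, here is a toy Python sketch (an editorial illustration with invented questions, not drawn from the JPEG standard): a front-end register of broad questions whose coarse answers determine which finer-grained follow-ups ever get posed.

```python
# A toy "multiple register" compression scheme: broad questions are asked
# first, and each answer selects which finer-grained follow-ups get posed.

def interrogate(question_tree, oracle):
    """Walk a nested question tree, posing follow-ups only where the
    coarse answer warrants them. Returns two parallel registers: the
    questions posed (front end) and their answers (back end)."""
    questions, answers = [], []

    def visit(node):
        q = node["q"]
        a = oracle(q)
        questions.append(q)
        answers.append(a)
        # finer questions depend on the coarse answer just received
        for child in node.get("followups", {}).get(a, []):
            visit(child)

    for root in question_tree:
        visit(root)
    return questions, answers

# A miniature image-like example: only "detailed" halves get refined.
tree = [{"q": "Is the upper half detailed?",
         "followups": {"yes": [{"q": "Upper-left shade?"},
                               {"q": "Upper-right shade?"}]}},
        {"q": "Is the lower half detailed?",
         "followups": {"yes": [{"q": "Lower-left shade?"},
                               {"q": "Lower-right shade?"}]}}]

oracle = {"Is the upper half detailed?": "yes",
          "Is the lower half detailed?": "no",
          "Upper-left shade?": "dark",
          "Upper-right shade?": "light"}.get

qs, ans = interrogate(tree, oracle)
print(qs)   # only four of the six potential questions are ever posed
print(ans)
```

The compression lies in the questions never asked: because the lower half reports “no,” its two finer follow-ups stay unposed, just as a JPEG declines to refine regions that its coarse pass reports as featureless.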
Representational tactics of this general nature can be called multiple register schemes,
in analogy to the practice of writing polyphonic music in separate staves, such as
the three clefs in which organ music is written. Here the lowest pedal clef captures
the slowly altering harmonic environment (tonic, dominant, relative minor, . . . )
against which the faster moving melodies of the two upper registers unfold. Having established a firm chordal context in this divided manner, the higher registers can vary rather freely in their melodic contents, trusting to the fact that the parallel bass line articulates the harmonic environment that allows auditors to keep track of the piece’s musical movements.
[figure: multiple register scheme]
In our how-to-rob-a-bank circumstances, we plotted out our schemes in two distinct
stages (topographic versus piloting) and observed that questions like “Where’s
Cazzette’s Children’s Store?” are answered differently according to the investigative
“pedal register” currently active. More formally, experts in Geographical Information
Systems (GIS) have learned that storing geographical data differently within distinct
registers greatly shortens computational endeavor.10
But my favorite exemplars of the manner in which we exploit contextual registers
within our everyday thinking are provided by the directive instructions we employ to
convey shifts in investigative focus, such as:
Let’s now see what happens if we give our opponent the benefit of the doubt
or
In order to reach general conclusions about politics, let’s let “Hugo” and “Jill” respectively
designate typical Republican and Democratic voters.
9. More sophisticated “wavelet” approaches to data compression employ nested sequences of contextually adapted questions in subtler manners than conventional JPEGs (which are largely based upon the discrete cosine transform).
10. I. Heywood, S. Cornelius, and S. Carver, An Introduction to Geographical Information Systems (Englewood Cliffs: Prentice Hall, 2006).
In doing so, we locate our unfolding ruminations within a shifting array of contextual
controls that might be called “investigative moods.” In the examples at hand, we alert
our audience to the fact that we plan to argue by reductio or that we intend to
substitute generic names for explicit quantifications. These announcements of investi-
gative mood reduce syntactic complexity11 and allow for significant reasoning compres-
sion as well—the technique gets a lot of pesky quantifiers, modal operators, and scope
restrictions out of the way so that the central combinatorial aspects of the reasoning can
move ahead in a simpler manner. Indeed, the Fitch lines employed within a standard
natural deduction system in elementary logic mark these stretches of investigative
mood explicitly (for interested readers, I outline the technique in Appendix 2). This
network of contextual markers keeps track of our shifting reasoning tactics in the same
manner as the pedal clef of an organ score captures the shifting harmonic underpinnings
of the music. I believe that much of our everyday reasoning relies upon various forms
of these “pedal register” compressions. Present-day philosophy of language could
become more supple if its practitioners more warmly appreciated the substantive
reductions in syntactic complexity achievable through various policies of contextual
localization.
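A small Python sketch (an editorial illustration, not the system of Appendix 2) of this bookkeeping: each claim is tagged with the “investigative mood” in force when it was entered, so that material established inside a supposition cannot leak outside it, in the spirit of Fitch lines.

```python
# Claims are stamped with the assumptions in force at entry; a claim
# stands unconditionally only if it was entered under no assumptions.

class Ledger:
    def __init__(self):
        self.stack = []    # assumptions ("moods") currently in force
        self.claims = []   # (claim, assumptions-at-entry)

    def suppose(self, assumption):       # open a Fitch-style subproof
        self.stack.append(assumption)

    def note(self, claim):               # record a claim in the current mood
        self.claims.append((claim, tuple(self.stack)))

    def discharge(self):                 # close the innermost subproof
        return self.stack.pop()

    def holds_outright(self, claim):
        # true only for claims entered with the assumption stack empty
        return any(c == claim and deps == () for c, deps in self.claims)

led = Ledger()
led.note("Hugo is a Republican")
led.suppose("Karen runs again")
led.note("Hugo votes for Karen")         # holds only inside the supposition
led.discharge()

print(led.holds_outright("Hugo is a Republican"))  # True
print(led.holds_outright("Hugo votes for Karen"))  # False: its mood lapsed
```

The stack of suppositions plays exactly the role of the pedal register: a slowly shifting context against which the quicker claims are entered and interpreted.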
Accordingly, pragmatics (in the robust sense of practical objectives) gains its “place at
the table” through the manners in which our registrations of investigative strategy
enhance our other linguistic capacities, with respect to both data and reasoning com-
pression. To be sure, these considerations will scarcely exhaust all of the ways in which
the rough slogan “meaning is use” captures valuable insights into the workings of
language. Pace Fodor, however, the strategic factors upon which we will focus are
totally systematic in their contours, and their rudiments can be acquired within our
earliest days of linguistic instruction.
(iii)
Let us now turn to the extraordinary tactical compressions that have been achieved
within modern engineering through the deft employment of reasoning architectures
of a multiscalar design. Approached head-on, complex physical scenarios frequently involve far too many descriptive variables, arranged across a wide spectrum of size and time scales, to submit to straightforward reasoning procedures. Clever
forms of “physics avoidance” workaround are needed to reduce these registration
11. Drawing upon Appendix 2, “Hugo and Jill agree in their liking for Mary” is a lot shorter than “Every male Republican who has married some female Democrat that he likes is such that both of them will probably like whatever presidential candidate appears in the second election cycle from now who has been endorsed by some previous Democrat president admired by the female, provided that the candidate endorsed by that same president in the electoral cycle before has not run a second time.”
12. “They’re the rollingest babies on the New York Central Line”—Son House.
[figure: Incredible Shrinking Steel]
13. As discussed in Essay 4, Pierre Duhem attempted to develop a thermomechanical framework that could handle a wide range of complex materials in a single-level fashion, but he recognized that hysteresis effects represented a particular challenge to this program. Indeed, modern modelers have obtained better results through the use of multiscalar schemes as sketched here (although often employing a greater number of RVE sub-models than I cite). From a mathematical point of view, we must continually reset boundary conditions as we traverse this energetic cascade in a manner appropriate to the RVE sub-model before us. At the locomotive level, our initial specification of load can be quite crude: 175 tons over a span of 100 feet. But more refined and localized estimates are required at the RVE stages below this level. Multiscalar techniques extract these new conditions from local estimates calculated within the RVE modeling one scale length higher. The upshot is often a hierarchy of internally linked equilibrium boundary value problems, with no reliance upon initial values anywhere.
14. The phrase comes from J. T. Oden. See Essay 5 for references.
attempts to describe the molecular structure of granite in detail and will quickly founder
upon the shoals of scalar tyranny. In contrast, a multiscalar scheme begins at the highest
scale level by computing internal stresses across the entire granite block by employing a
simple continuum physics modeling using the two elastic parameters E and G. It
reassesses these initial estimates only as the lower RVE sub-models dictate. The upshot
is a computational methodology that is neither purely “bottom-up” nor purely “top-
down” in its descriptive policies.
To obtain their initial stress estimates, these schemes generally employ a finite
element modeling of the type discussed in Essay 8, in which the submerged block is
decomposed into a network of little cells at the RVE scale size of the granite grain
(details don’t matter here). The scheme then scours these results for the cells that display the highest levels of local stress and shifts to a more realistic sub-model in which the same
little cell is treated as a laminate composed of different minerals in a manner appropriate to granite grain. In the previous stage of modeling, this region was described as a smeared-out continuous blob lacking interior structure. But what will happen to our
little RVE laminate if it is placed within a stress environment of the same strength as the
region received within our original E and G modeling? Well, it will either shear
elastically in a manner that conforms to our original upper-scale prediction, or it will
recrystallize into gneiss. If the first happens, we can conclude that our initial assignment
of internal stresses is consistent with its smaller-scale details. If not, we must send a
corrective message back to our higher-scale modeling, demanding that our original
E and G modeling equations be replaced in the affected region with a new set of
formulas that utilize the anisotropic parameters appropriate to gneiss. After these
upper-scale corrections have been made, we must recompute the entire block’s distri-
bution of stress in light of these adjustments. We obtain a new macroscopic allocation of
stresses that must be checked once again for consistency with our localized laminate sub-
models. These successive stages of macroscopic estimation → microscopic correction
may need to cycle through many iterations before an overall self-consistent modeling
assignment is reached (where the answers to all consistency tests come out “it’s okay”).
The accompanying diagram illustrates the basic computational architecture behind this
investigative technique. The comparison between distinct RVE results involves a
technique called homogenization that will emerge as vitally important in the consider-
ations ahead.
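The cycle just described can be caricatured in a few lines of Python. Everything here is invented for illustration (the numbers, the threshold, and the two stand-in models); a real scheme would put a finite-element solver and a laminate RVE sub-model in their places.

```python
# Iterate-until-consistent architecture: crude upper-scale stress
# estimates, lower-scale RVE checks on the worst cells, and corrective
# messages that force a macroscopic recomputation.

THRESHOLD = 1.2               # "warning symptom": stress worth a closer look
loads = [1.0, 1.3, 1.7, 1.1]  # invented initial load on each macro cell

def macro_stresses(material):
    # Stand-in for the upper-scale E-and-G continuum computation:
    # converted (stiffer) gneiss cells shed a little load onto the rest.
    n_gneiss = material.count("gneiss")
    return [0.9 if m == "gneiss" else loads[i] + 0.1 * n_gneiss
            for i, m in enumerate(material)]

def rve_check(stress):
    # Stand-in for the laminate sub-model: under sufficiently high
    # stress the grain recrystallizes instead of shearing elastically.
    return "recrystallize" if stress > 1.5 else "ok"

material = ["granite"] * 4
while True:
    stresses = macro_stresses(material)         # macroscopic estimation
    corrections = [i for i, m in enumerate(material)
                   if m == "granite" and stresses[i] > THRESHOLD
                   and rve_check(stresses[i]) == "recrystallize"]
    if not corrections:     # every consistency test answers "it's okay"
        break
    for i in corrections:   # microscopic correction sent back upstairs
        material[i] = "gneiss"

print(material)   # only the overstressed cell has converted
```

Note the architecture of the loop: the detailed lower-scale check is triggered only where the warning symptom fires, and a single corrective message forces the entire upper-scale distribution to be recomputed before the consistency tests run again.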
As previously noted, these divisions of linguistic labor support the tremendous
advances in reasoning compression that have revolutionized our capacity to simulate
complex materials on a computer. From whence do these advantageous compressions
spring? Partially, from the top-down driven character of our computations:15 we calculate
15. Strictly speaking, our procedure qualifies as neither a purist top-down nor a purist bottom-up methodology, but as a hybrid compromise, in light of its repeated checks for self-consistency. Traditionally, top-down (Cauchy, Stokes, Green) and bottom-up (Navier) approaches to elasticity were regarded as competing (vide the nineteenth-century controversies over rari- and multi-constant treatments recounted in A. E. H. Love, A Treatise on the Mathematical Theory of Elasticity (New York: Dover, 1944)). Our multiscalar techniques bridge this divide in an enlightening manner.
detailed effects upon a small scale only after some warning symptom (such as high stress
values) triggers an alarm. But the reasoning efficiencies also derive from the fact that our
RVE modelings are all of equilibrium character, a methodology that allows us to ignore
many forms of temporary small-scale effect (exploitation of equilibrium represents one of
Essay 2’s chief exemplars of “physics avoidance”). At first blush, policies of successive
approximation may appear computationally extravagant due to the many repetitions
required before the scheme stabilizes upon acceptable answers. But this appearance is
deceptive; the compressive advantages of equilibrium reasoning far overshadow these
mild computational costs.
To reap these considerable advantages, we must keep track of localized investigative
context. As noted previously, syntactic claims often appear within distinct portions of
our calculations that flat out contradict one another if interpreted in a naïve manner.
And these divergencies persist even after our calculations stabilize upon a final answer.
Through their reliance upon differential equations (see Essay 5), our various RVE sub-
models generally describe the same interior regions in syntactically inconsistent ways.
For example, on a macroscopic RVE scale, a small region R of our rock will be described
as “uniformly granite” (in contrast to “partially converted to gneiss”) whereas our
laminate modeling may characterize this same R as not uniform at all, but as an
16. The structural logic behind this interscalar descriptive interdependence can be more easily appreciated if we consider the details of homogenization more closely. Our lower-scale RVE sub-models are generic: they supply prototypes for the grain structure within granite without claiming any specific knowledge that feldspar is truly found within the target rock’s region R, rather than quartz, hornblende, or mica. In reasoning within this sub-model, we ask, “If region R starts with a certain statistical array of minerals, will their relative positions substantially shift within a higher stress regime S?” But the estimate of S upon which this appraisal rests stems from our higher-scale examination. So a characterization of feldspar content is “correct” within a lower RVE sub-model only if S “correctly” characterizes the rock’s distribution of stresses with respect to a higher-scale evaluation.
(iv)
Lesson I: Effective descriptive policies encode significant descriptive information within their
computational architectures.
An intriguing feature of a well-selected multiscalar modeling lies in the fact that a good deal of descriptive content is encoded within the architecture of the enveloping computational scheme (that is, within the rules whereby data is shuttled from one local registration to another), rather than becoming overtly captured within any of its component sub-models (which only report upon the “dominant events” witnessed at the pertinent scale size). To this end, the RVE divisions within a multiscalar modeling are often calibrated to correspond to the descriptive opportunities offered within nature itself. What do I mean by a “descriptive opportunity” (a phrase that will reverberate often in the pages ahead)? Answer: physical circumstances whose dominant ranges of variation can be adequately captured in a smallish number of descriptive parameters and where questions of significant interest can be addressed through feasible calculation. All of the characteristic behaviors we assigned to steel or granite at different RVE levels supply “opportunities” of this character; a multiscalar modeling needs to link them together in a fruitful manner.
[figure: mimicking physical hierarchy]
Let me borrow a simple exemplar of a “descriptive opportunity” from Essay 8.
Computing the local stresses induced by a heavy locomotive within a girder bridge is
a complicated task, even if we ignore the lower-scale hysteresis. Nineteenth-century
structural engineers realized that, with respect to various bridges of a specialized design
(a so-called determinate structure), the stress problem can be reduced to a very simple
reckoning that considers only the forces and moments acting at each joint. But the
availability of this reduced variable opportunity depends upon a delicate balancing act
within the structure. Notice that the right-hand side of the determinate bridge illustrated is free to slide horizontally (symbolized by the little wheels placed beneath that end). If we alter these arrangements by attaching the frame firmly to its pier, our “determinate bridge opportunity” vanishes, and calculating the stresses within our firmly anchored bridge becomes considerably more difficult and highly prone to computational error. Such uncertainties make it unwise to build indeterminate bridges unless considerable care is taken, so we don’t find many of these along our railroads and highways.
[figure: a reasoning opportunity]
Why does this happen? Let’s examine the problem schematically, from a mathematical
point of view. With respect to a generic bridge, we theoretically possess equations that
express our macroscopic variables m1, m2, . . . in terms of the microscopic stresses s1, s2, . . .
m1 = f1 (s1, s2, . . . , sn)
:
m27 = f27 (s1, s2, . . . , sn)
But in the special circumstances of our first bridge, these equations can be solved for the
stresses:
s1 = g1 (m1, m2, . . . , m27)
:
sn = gn (m1, m2, . . . , m27)
But fixing the right-hand pier of our bridge amounts
to adding an extra equation to our collection that
spoils its solvability (the equation set is said to be
over-constrained). From this perspective, a computa-
tional opportunity resembles the baby bear’s well-
adjusted porridge: the natural conditions required to
support easy-to-compute answers must join together
in a perfectly matched “sweet spot.”
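A minimal numerical sketch, with invented coefficients, of the contrast just drawn: a square system of equations inverts cleanly, while pinning down one additional quantity leaves the enlarged set unsatisfiable.

```python
# Two macroscopic measurements m1, m2 determine two member stresses
# s1, s2 exactly (the "determinate" case); a third imposed constraint
# over-determines the system.

def solve_2x2(a, b, c, d, m1, m2):
    """Cramer's rule for  a*s1 + b*s2 = m1,  c*s1 + d*s2 = m2."""
    det = a * d - b * c
    s1 = (m1 * d - b * m2) / det
    s2 = (a * m2 - m1 * c) / det
    return s1, s2

# Determinate case: as many equations as unknowns, nonzero determinant.
s1, s2 = solve_2x2(1.0, 1.0, 1.0, -1.0, m1=10.0, m2=2.0)
print(s1, s2)        # s1 = 6.0, s2 = 4.0

# "Fixing the pier" adds a third equation, say s1 = 5.0.  Substituting
# that value back into the first equation no longer balances:
residual = 1.0 * 5.0 + 1.0 * s2 - 10.0
print(residual)      # nonzero: the over-constrained set has no exact solution
```

The “sweet spot” of the determinate bridge is exactly the condition that the equation count and the unknown count match with a nonvanishing determinant; one extra constraint, and only approximate least-squares compromises remain.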
Collective behaviors that emerge when the boundaries of a target system protect its internal regions from environmental complexities often give rise to exceptional descriptive opportunities. A classic illustration (much discussed in the essays ahead) is provided by the violin string: by restricting its movements tightly at bridge and nut, we trap internal energy within distinct overtone units (fundamental, octave, 12th, etc.). This energetic entrapment then supports an excellent vocabulary for describing the string’s complex wigglings in computationally manageable terms (we merely need to evaluate the energy allocated to each partial overtone). But if excessive play is permitted at bridge or nut, this profitable descriptive opportunity vanishes.
[figure: endpoint induced opportunity]
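The overtone vocabulary can be made concrete in a short sketch (the triangular pluck shape and the scalings are my own illustrative choices, not from the text): decompose a clamped string’s displacement into sine modes and tally the energy assigned to each.

```python
# With the string clamped at both ends, any displacement profile
# decomposes into sine modes; a handful of mode energies then summarizes
# the string's wiggling.

from math import sin, pi

N = 200
x = [i / N for i in range(1, N)]                    # interior sample points
pluck = [xi if xi < 0.5 else 1 - xi for xi in x]    # triangular pluck shape

def mode_amplitude(profile, k):
    # Fourier sine coefficient b_k, approximating 2 * integral of
    # profile(x) * sin(k*pi*x) over [0, 1] by a Riemann sum
    return 2 * sum(p * sin(k * pi * xi) for p, xi in zip(profile, x)) / N

# energy in mode k is proportional to (k * b_k)^2, up to shared constants
energies = [(k * mode_amplitude(pluck, k)) ** 2 for k in (1, 2, 3, 4)]
print(energies)   # fundamental dominates; even overtones nearly absent
```

Four numbers now do descriptive duty for a continuum of possible wiggles; this is the compression that the tight clamping at bridge and nut makes available.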
Through their misapplications of the mathematician’s term “boundary condition,”
philosophers of science have failed to recognize the special alignments between interiors
and boundaries that generate significant descriptive opportunities of this character. In
general, a multiscalar reasoning scheme will succeed only if its component parts capture
the special descriptive opportunities that nature itself directly provides. In this manner,
the specific RVE levels cited within a multiscalar modeling typically mimic the hierarchical layers of “dominant behavior” that can be empirically identified within the target
material, simply through inspecting its complexities under a microscope. If we don’t
imitate nature in this manner, our reasoning schemes are likely to generate mere
computational mush.
Accordingly, in framing a good architecture for answering questions about nature, we
should anchor our syntax to the physical locales that supply the best opportunities for
ready computation and work outward from these locales to conquer the rest of the
nearby terrain, rather as climbers initially anchor their ropes to the firmest portions of
rock outcropping. In this sense, the very plotting of a computational scheme can reflect a
fair amount of descriptive information about a target system whose contents do not
otherwise appear within the localized statements generated within its sundry sub-
models. In this fashion, an integrated architecture can prove greater than the sum of
its descriptive parts.
(v)
Lesson II: Reasoning complexity can be greatly reduced by concentrating upon dominant
effects within each sub-model.
The astonishing reasoning compressions characteristic of multiscalar schemes trace to the fact that each of their RVE sub-models is assigned the comparatively circumscribed duty of capturing only the central physical processes normally witnessed at its characteristic scale length, leaving the task of capturing rare or minor effects to other RVE choices. Within these divisions of descriptive labor, none of the component layers of sub-modeling attempts to “get everything right”; each localized model renders descriptive justice only to the dominant behaviors it normally encounters. Each sub-modeling declares, “I’m only trying to develop a narrative appropriate to the events usually witnessed on my length scale, which I articulate in a language especially calibrated to such occurrences. But I’m happy to accept correction missives from my companion sub-models who focus upon other choices of RVE scale.” In a moment, we’ll look more closely at how each localized modeling receives these corrective missives from other units.
[figure: dominant behavior]
In our granite/gneiss diagram above, the various feedback arrows linking its sub-model calculations together implement this concentrate-upon-dominant-behaviors stratagem. The full scheme doesn’t neglect minor effects altogether, but it defers their
that emerge when we investigate our colliding balls on a more refined temporal scale.
On the rare occasions when such internal details become important, we must, speaking
in the jargon of the physicist, “open up the suppressed degrees of freedom” within the
balls and allow their surfaces to flex geometrically in very complicated ways. But plainly
we shouldn’t wish to explore this Pandora’s box of difficult and unreliable computations
unless they are absolutely required. In normal billiard ball circumstances we almost
never need to do this (due to their carefully engineered toughness). With baseballs, it’s
another story.
For reasons that elude me, academic philosophers have manifested little interest in
the practical concerns of how we actually manage to reason about the universe before us
in a tractable manner. Instead, they often brush aside our sophisticated architectures as unworthy of notice: “Oh, we don’t really need to worry about such complexities because, in principle, we might always describe our material in single-level, molecular terms.” Well, in actuality, we can’t, courtesy of strong tyranny-of-scales limitations. How should we evaluate these blithe dismissals?
There are two aspects to this question. First, these “in principle” appeals strongly rely
upon simplistic Theory T assumptions with respect to the syntactic format that future
“fundamental science” will someday assume, for it is clear that no “theory” developed to
date has satisfied those precepts. I regard such presumptions as rashly speculative, but
will defer further discussion to Essay 6. Second, one of our chief obligations as
philosophers is to describe the working methodologies of real-life science correctly,
before we launch into whimsies concerning futuristic completions. Otherwise, we court
the dangers wittily characterized by Margaret Cavendish long ago:
[M]ost men in these latter times busy themselves more with other worlds than with this they
live in, which to me seems strange, unless they could find some art that would carry them
into those celestial worlds, which I doubt will never be.17
We shall later find that many celebrated conceptual puzzles owe their origins to the
complicated physics avoidance policies that we exploit within our most effective
descriptive practices. If we ignore the operations of these subtle stratagems, we deprive
ourselves of the diagnostic tools required to unravel many classic philosophical
difficulties.
I sometimes wish that word processors might administer a mild electric shock to
authors whenever they type the phrase “in principle,” for it’s generally a signal that
important issues have been lightly brushed aside. Many academic philosophers have
become firmly convinced that they “know, in principle, how science works” without
having, in actuality, looked at much of it. The point is not that great gobs of scientific
17. Margaret Cavendish, Observations upon Experimental Philosophy (Cambridge: Cambridge University Press, 2001), p. 4.
knowledge will significantly improve our philosophizing (this is not true at all), but
unchallenged quantities of faulty folklore can significantly inhibit it.
Operating together in harness, Fodor-type assumptions with respect to “semantics”
and unmonitored characterizations of “how fundamental science works” have generated
an intellectual landscape in which the vital utilities of pragmatically inflected linguistic
usage are rashly dismissed as “uninteresting.”
(vi)
Lesson III: Sub-models should exchange data through homogenized instructions, not via
unfiltered exchanges of localized description.
This observation may represent the most important moral to be extracted from our studies. One of the most interesting aspects of multiscalar methodology resides in the manner in which various scales send corrective messages to one another, as when our grain-level sub-model informs the higher-scale modeling that it should adjust its elasticity parameters. The official name for this communication technique is homogenization, but how does it work? Once again we will want these homogenizations to mimic the real-life relationships encountered in nature between RVE scales.
[figure: communication through homogenization]
In normal circumstances, how do the events transpiring on one characteristic scale “see” the events occurring on a lower RVE level? Answer: in a blurry way, as coherent changes mixed together with a fair amount of signal noise. Within the range of everyday experience, the
macroscopic behaviors of granite are determined largely by its two elastic parameters (E
and G), providing building engineers with an admirable descriptive opportunity that
they exploit ably in planning a stone construction. However, when pressures and
temperatures become extreme, these standard rules will fail due to minute recrystal-
lization events arising on the scale of the component grain. From a macroscopic vantage
point, we “see” these alterations only as a change in gross behavior: the metamorphosed
rock now responds to pushes and pulls according to more complicated rules (that we can
capture with a wider array of elastic parameters).
These are the interscalar relationships we want to mimic within our multiscalar
computations. If our laminate sub-model flexes in a manner that is consistent with our
upper scale E and G expectations, fine; no correction in our upper-scale reasoning is
required. But if our sub-model recrystallizes within its new stress environment, our
upper-scale calculations must take this revised datum into consideration. So the correct-
ive messages we send across scales should constitute reports on whether normal
dominant behavior patterns have broken down or not.
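A toy sketch of this filtered message-passing, with invented data and thresholds: the lower scale ships upward only a coarse effective parameter and a verdict on whether the dominant-behavior rules still hold, never its detailed strain field.

```python
# Homogenization as a filtered message: the fine-scale strain field is
# compressed into an effective parameter plus a breakdown flag, which is
# all the upper scale ever "sees".

def homogenize(grain_strains):
    """Compress a fine-scale strain field into an upper-scale message."""
    mean = sum(grain_strains) / len(grain_strains)
    spread = max(grain_strains) - min(grain_strains)
    # too incoherent for the ordinary E-and-G rules to remain in force?
    broke_down = spread > 0.5 * abs(mean)
    return {"effective_strain": mean,
            "dominant_behavior_ok": not broke_down}

r1 = homogenize([0.10, 0.11, 0.09, 0.10])  # coherent: ordinary rules hold
r2 = homogenize([0.10, 0.55, 0.02, 0.40])  # incoherent: send a correction
print(r1)
print(r2)
```

The design point, on the essay’s telling, is that this filtering is not mere “averaging”: the message reports on whether the dominant behavior pattern has broken down, directly modeling how a higher scale blurrily registers lower-scale commotion.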
18. Philosophers often characterize these filtering techniques loosely as undifferentiated “averaging” and wrongly conceive of them as quasi-logical operations, rather than as direct models of real-world interscalar influence.
macroscale modeling with which we start (this is the same difficulty that we explicated
with respect to “uniformly granite” and “mixed feldspar and quartz” above). These
attributions are dependent upon one another, and we should not regard the lower-scale
RVE descriptions as “accurate” and their homogenized surrogates as “approximate.”
These entangled modes of description work together to produce a greatly improved
form of collective modeling effort (in union there is strength).
We tacitly implement similar divisions of descriptive labor within our everyday
thinking about material behavior on different length scales, without engaging in sophis-
ticated homogenization calculations. Instead, we intuitively estimate what lower-scale
events should “look like” when viewed from a higher-scale perspective. In this vein, we
remark that lower-scale recrystallization within our rock will “appear as” a behavioral
change in isotropy upon a higher scale and that trapped collections of dislocations will
“appear as” an increased brittleness in a piece of steel. In fact, we commonly employ
both causal and semantic vocabulary to capture these important forms of interscalar
relationship. We say that the usual rules of granite behavior no longer “remain true”
when lower-scale recrystallization sets in and that heavy locomotive traffic “causes” the
dislocations to pile up on a minute scale. And so on. In these employments, the words
“true” and “cause” both function as significant devices of language management. These
considerations bring us to our next lesson.
(vii)
Lesson IV: Usages that employ contextual registers aligned with investigative purpose require special linguistic locutions to help listeners follow their altering contexts correctly and to allow for the learning of novel patterns of cognitive architecture.
As the investigative architectures within our reasonings become more elaborate, we
require special vocabularies that allow us to articulate their structural subdivisions and
to teach these computational policies to novices. Words that function in this manner
I have called devices of language management, for we employ terminologies of this
character when we need to indicate where a specific stretch of reasoning pattern begins
and where it ends.19 With respect to our JPEG example, each question Qi within its
front-end registration governs a fixed span of back-end answers Ai. In a logical setting
(these examples come from Appendix 2), we mark our shifts in investigative policy with
imperatives that either mark the beginning of a search:
(a) Let’s consider a typical Republican, call him “Hugo”
(b) There are two cases we must consider: will Karen run again or will she not?
19. As we’ll later see, few words purely perform these functions; words like “cause” carry a good deal of context-sensitive descriptive information as well.
or its termination:
(c) Summing up, Hugo and Jill are likely to agree in their presidential likes and
dislikes.
In an analogous manner, the varying stages within a stretch of multiscalar reasoning can
be highlighted as follows:
(d) Let us now convert our recrystallization data into terms that our higher order
model can understand.
(e) Let us repeat these “checking for consistency” steps over and over until we
reach stable results across the scales.
Note that most of these management imperatives are neither “true” nor “false” in any
straightforward sense, but are merely “appropriate” (or not) to the reasoning tasks at
hand. But we must employ structural markers of these types when we introduce an
unfamiliar reasoning routine to a friend.
As noted at the end of section (vi), the utilities of instruction (d) can also be captured
in “causal” or “semantic” vocabulary, viz.:
(d’) Let us now ask what upper scale effects will be caused by the lower scale
recrystallization.
(d”) Let us now consider what becomes true on a higher length scale as granite
recrystallizes.
Note that we also employ these same management tools to keep different pools of
descriptive endeavor cleanly separated. We warn our pupils, “Never assume that data established within one branch of an argument by cases continues to hold in the other cases.” Or: “The stresses computed within a laminate sub-modeling can’t be
entered freely into an upper-scale treatment without undergoing significant homogen-
ization filtering.” As these management tools enter the story of language in significant
ways, conventional philosophical distinctions between “semantics” and “pragmatics”
break down, for many sentences will not obtain firm truth-values until their placement
within some enveloping scheme of practical endeavor becomes settled (I’ll supply
examples in our next lesson). In this fashion, “use” (in the sense of “reasoning to
practical purpose”) sometimes precedes “semantics” (in the sense of “adequate attach-
ment to exterior circumstance”), Fodor’s blunt asseverations to the contrary.20
20
Historically, the old-fashioned term for what I call an “investigative architecture” is heuristics: the art of assembling
reasoning ingredients towards a collective purpose. For many years the merits of investigative heuristics have been
neglected within philosophically motivated studies of logical reasoning (although not in the study of computer-assisted
proof) for various formal reasons. In the beginning this was not so; the central reasoning structures within Boole’s original
symbolic logic were fashioned with heuristic considerations strongly in view (see Wandering Significance, pp. 522–8).
These contextual dependencies were expunged from later expositions of logic largely because they made certain forms of
mathematical investigation cumbersome (e.g. checking for correctness by Padoa’s principle; such matters are more
efficiently addressed in an axiomatic format). But the summum bonum of the systematic mathematician does not always
coincide with the virtues of efficient reasoning. As a result, investigative heuristics remain a vital structural aspect of real
life linguistic practice. Observe that the query “What’s the logic of that?” is generally a request for heuristics.
Matters are complicated further by the fact that terms like “cause” and “true” serve
both “managerial” and “data reportage” purposes, depending upon their contextual
positioning within a discourse.21 These multitasking behaviors have generated a good
deal of conceptual confusion within philosophy, as we’ll see when we investigate
“cause”’s tergiversating behaviors in other essays. Conventional divisions between
“semantics” and “pragmatics” do not strike me as capturing the subtleties of real-life
linguistic usage adequately, partially through their neglect of the shaping hands of
shifting investigative strategy.
(viii)
Lesson V: Descriptive terms may not acquire firm referential connections to the physical world
until their employment has been situated within a specific investigative context.
As long as localized word employments do not leach from one RVE level to another, it
serves as a source of great linguistic efficiency if words are allowed to adapt to local data
registration requirements in a context-sensitive manner. Substantial data compression can
be achieved through semantic adaptation, where migrating terms are allowed to register
physical information in manners suited to the altered RVE regimes in which the words are
now located, rather than imitating their performances within former habitats. We have
already observed that the continuum physics models employed at most levels within a
multiscalar modeling generally classify the tiny portions of matter before them in inconsistent ways. But the resulting syntactic clashes don’t generate inferential difficulties
because the various scales communicate with one another only in homogenized manners.
This scale-based isolation permits a considerable reduction in descriptive vocabulary due
to the fact that a word like “force” can adjust its precise referential focus to suit the RVE
unit in which it presently finds itself. Considered upon a conventional laboratory scale
21 Both words act as a Moses that first guides its entourage into the pastures of an investigative architecture and then
takes up residence therein, enjoying the life of a prosperous dairy farmer. “Cause” ’s inconstant behaviors in this mode are
discussed in Essay 2, but allied remarks apply to “true” as well.
As noted earlier, a common ordinary reasoning analog to the applied mathematician’s “homogenization techniques” is
that of evaluating which sentences X will hold true at higher RVE level H if facts Y obtain at lower-scale level
L. Multiscalar modelings commonly employ the same classical mechanics’ vocabularies with contextually divergent
references at different scale levels. Accordingly, the claim “if X holds at scale L, then the sentence ‘X’ will be true at level
H” often fails—it is a non-trivial matter how “X” should be properly homogenized when its application is lifted to a new
RVE level. In such circumstances, the Tarski scheme “ ‘X’ is true if and only if X” will not represent a “truism” in either of
its inferential directions; such transitions can prove outright mistaken when homogenization barriers are traversed. Anil
Gupta’s well-known “revision theory” has long stressed “true” ’s role in serving as a critical evaluator of a body of syntactic
assertion; I believe that the significant “language management” tasks the word serves with respect to a multiscalar
architecture underscore Gupta’s central lessons significantly (Anil Gupta and Nuel Belnap, The Revision Theory of Truth
(Cambridge, MA: MIT Press, 1993)). Gupta further observes that the two conditionals within the “if and only if” of the Tarski
formula represent perfectly ordinary material conditionals, but that they serve to lift and lower assertions from one
contextual level to another. Una Stojnić’s “One’s Modus Ponens . . . : Modality, Coherence and Logic” (Philosophy and
Phenomenological Research, forthcoming) draws related morals based upon the recognition that the discourse parsing of
conditionals requires a systematic understanding of contextual signals.