




Published by
World Scientific Publishing Europe Ltd.
57 Shelton Street, Covent Garden, London WC2H 9HE
Head office: 5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601

Library of Congress Cataloging-in-Publication Data


Names: Doria, Francisco Antonio, editor.
Title: The limits of mathematical modeling in the social sciences :
the significance of Gödel’s incompleteness phenomenon / edited by
Francisco Antonio Doria (Universidade Federal do Rio de Janeiro, Brazil).
Description: New Jersey : World Scientific, 2017.
Identifiers: LCCN 2016056723 | ISBN 9781786343154 (hc : alk. paper)
Subjects: LCSH: Gödel’s theorem. | Incompleteness theorems. |
Social prediction--Mathematical models.
Classification: LCC QA9.65 .L55 2017 | DDC 300.1/5118--dc23
LC record available at https://lccn.loc.gov/2016056723

British Library Cataloguing-in-Publication Data


A catalogue record for this book is available from the British Library.

Copyright © 2017 by World Scientific Publishing Europe Ltd.


All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means,
electronic or mechanical, including photocopying, recording or any information storage and retrieval
system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance
Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy
is not required from the publisher.

Desk Editors: V. Vishnu Mohan/Mary Simpson

Typeset by Stallion Press


Email: enquiries@stallionpress.com

Printed in Singapore



May 11, 2017 17:46 The Limits of Mathematical Modeling in the Social Sciences 9in x 6in b2809-fm page v

Dedicated to Newton Carneiro Affonso da Costa, friend and mentor.



Introduction

The synthesis of the calculus of n-variables and of n-dimensional geometry
is the basis of what Seldon once called
"my little algebra of humanity"...
(Encyclopedia Galactica)

(apud I. Asimov, Second Foundation)

Hari Seldon is a fictional character created by Isaac Asimov for his
Foundation series of novels. A highly unlikely character, we may
add: a politician with mathematical skills, who developed the
discipline of psycho-history, or the mathematical description of social
processes and of the evolution of history.
Asimov later tells us a bit more about the workings of psycho-history:
it's basically a nonlinear dynamical system, and, in later books from
the Foundation series, he incorporates into Seldon's creation ideas
from chaos theory.
Chaos theory? Back to the real world, then. In 1963 E. Lorenz
published his famous paper on deterministic chaos, "Deterministic
Nonperiodic Flow." The title itself is an understatement, or perhaps
a discreet disguise: Lorenz discusses a system of three autonomous
differential equations with just two second-degree nonlinearities —
a system that exhibits very complex, apparently chaotic behavior,
behavior that hinders any prediction of its time evolution.
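Lorenz's observation is easy to reproduce numerically. The sketch below is our illustration (not Lorenz's own computation): it integrates his three equations, with the classical parameters σ = 10, ρ = 28, β = 8/3, using a fourth-order Runge–Kutta stepper, and follows two trajectories whose initial conditions differ by one part in a billion.

```python
# Toy integration of the Lorenz system; parameter values are the
# classical ones, step sizes and initial data are our choices.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(s):
    x, y, z = s
    return (SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z)

def rk4_step(f, s, dt):
    # One fourth-order Runge-Kutta step for the system s' = f(s)
    k1 = f(s)
    k2 = f(tuple(v + 0.5 * dt * k for v, k in zip(s, k1)))
    k3 = f(tuple(v + 0.5 * dt * k for v, k in zip(s, k2)))
    k4 = f(tuple(v + dt * k for v, k in zip(s, k3)))
    return tuple(v + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(s, k1, k2, k3, k4))

def evolve(s, steps, dt=0.01):
    for _ in range(steps):
        s = rk4_step(lorenz, s, dt)
    return s

a = evolve((1.0, 1.0, 1.0), 3000)           # integrate to t = 30
b = evolve((1.0, 1.0, 1.0 + 1e-9), 3000)    # same, nudged by 10^-9
gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
# gap is now of macroscopic size: the 10^-9 nudge has been amplified
# by many orders of magnitude, which is what frustrates forecasting
```

Determinism survives, but any uncertainty in the initial data, however tiny, swamps the forecast after a modest time span.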
Social systems have long been modeled by autonomous differential
equations with low-degree nonlinearities, as in competition models
conceived out of the Lotka–Volterra equations. Those particular
equations are nonlinear but predictable; that is, with Lotka–Volterra
systems at hand, we know that nonlinearities aren't sufficient for a
dynamical system to exhibit chaotic behavior. (But they are a
necessary condition.) So, there arose the question of an algorithmic
procedure to anticipate whether a nonlinear system would exhibit
chaotic behavior.
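The contrast with the Lorenz system can be made concrete. The classical two-species Lotka–Volterra system conserves the quantity V(x, y) = δx − γ ln x + βy − α ln y along every exact orbit, so trajectories are closed curves around the equilibrium rather than chaotic wanderings. A small numerical sketch (our illustration, with arbitrary unit parameters):

```python
import math

# Illustrative unit parameters (our choice): x' = x - xy, y' = xy - y
ALPHA = BETA = GAMMA = DELTA = 1.0

def field(s):
    x, y = s
    return (ALPHA * x - BETA * x * y, DELTA * x * y - GAMMA * y)

def rk4_step(s, dt):
    # One fourth-order Runge-Kutta step
    k1 = field(s)
    k2 = field(tuple(v + 0.5 * dt * k for v, k in zip(s, k1)))
    k3 = field(tuple(v + 0.5 * dt * k for v, k in zip(s, k2)))
    k4 = field(tuple(v + dt * k for v, k in zip(s, k3)))
    return tuple(v + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(s, k1, k2, k3, k4))

def invariant(x, y):
    # V is constant along exact orbits of the Lotka-Volterra system
    return DELTA * x - GAMMA * math.log(x) + BETA * y - ALPHA * math.log(y)

s = (2.0, 1.0)
v0 = invariant(*s)
for _ in range(4000):            # integrate to t = 20 with dt = 0.005
    s = rk4_step(s, 0.005)
drift = abs(invariant(*s) - v0)  # stays tiny: the orbit is periodic
```

The orbit never leaves the level curve V = v0 (up to integration error): nonlinear, yet perfectly predictable.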
The nonexistence of such an algorithm was proved, under very
general conditions, by da Costa and Doria in 1991. So, not only may
nonlinear systems turn out to be unpredictable; we cannot even say
(for arbitrary systems and conditions) whether they will be unpredictable
or not! Moreover, there will be systems which are chaotic in one model
for mathematics, and perfectly predictable in another model.
Here the Gödel phenomenon enters the picture. If we could make
calculations with infinite precision, we would be able to predict with all
required sharpness the future behavior of chaotic systems. But there
will be situations where none of these calculations is possible.
So, we have a higher-order difficulty here — one that wasn't
anticipated by our fictional sage, Hari Seldon. Chaotic systems are
deterministic: therefore, given infinitely precise calculations, all future
behavior can be predicted. However, when the Gödel phenomenon
enters the game, such calculations may be impossible simply because
no algorithm exists to perform them (and the standard concept of
mathematical proof subsumes an algorithmic construction).
A. Lewis and V. Velupillai, among others, were pioneers in the
discussion of undecidability and incompleteness in mathematical eco-
nomics. Now we know that, for example, equilibrium prices in effi-
cient markets are noncomputable, in the general case (this follows
from a 1995 result by Lewis). In 1998, Tsuji, da Costa, and Doria
entered the fray with their result on the incompleteness of game
theory — a result valid for a wide range of axiomatizations of Nash
games and analogous theories.
The reason for such a pervasive presence of Gödel-like phenomena in
mathematical economics is simple: they follow from a Rice-like theorem
that can be proved for the language of classical analysis, the very
language used by mathematical economists when exhibiting their wares.

So, undecidability and incompleteness creep up everywhere in
economics and in the social sciences, and seem to hinder the predictive
character of mathematics in those theories.
Of course, one can make predictions with the help of mathematical
tools. A very general theorem in that respect can be proved as a
spinoff of the Tubular Neighborhood Theorem (which asserts, roughly,
that a neighborhood without singularities of a smooth vector field
'looks' like an n-dimensional tube). So, chaotic deterministic systems
may have nicely behaved neighborhoods.
Also, undecidability doesn't mean total ignorance about a mathematical
object: it only forbids a general algorithm — a general recipe — for a
whole class of problems; ad hoc solutions to particular instances may
be possible, of course. But science is made of general laws, of general
computational procedures, and our question is: are such general laws
possible in the (mathematical) social sciences? If so, how useful are
these general procedures? Do they tell us something really interesting
about the social world, as much as a system's Lagrangian tells us a
lot about the system's physical behavior?
The chapters collected in the present volume attempt to evaluate
the impact of Gödel incompleteness and algorithmic undecidability
in the mathematics of economics and of the social sciences.
The opening chapter is a long text by F. A. Doria, which gives an
overview of the consequences of the already mentioned extension of
Rice's theorem to the language of analysis, and hence for phenomena
described with the help of that same language.
The next chapter is a wonderful piece by G. J. Chaitin. Chaitin,
like a masterful magician, cradles the development of mathematics
in a metaphysical bed, while explaining to us how undecidability —
uncomputability — causes mathematics to be Gödel-incomplete.
Now keep in mind that mathematics in the social sciences has
one task: to compute the future, to predict future events out of data
from the present. Can we do it? The remaining chapters in the book
tackle this issue.
First we have a chapter by G. Becchio. Becchio presents to us a
few ideas of Karl Menger, which relate his own theoretical vision to
the (then recent) development of logic by Peano, and later by Russell
and by the Polish school, which ventured into nonclassical logics.
Becchio is almost exhaustive in her treatment of the subject, which is
quite unexpected, as most orthodox economics today rests on a
strictly classical logical language.
Then R. Koppl enters the field. Koppl examines crashes and tur-
moil in the economic landscape of today and discusses their pre-
dictability in the light of the Gödel phenomenon. The interesting
aspect of Koppl’s chapter is the fact that he boldly relates our con-
crete economic scenario with the rarefied vistas of metamathematics
that bear on the language of theoretical economics. And I invite the
reader to enjoy the beautiful conclusion of his chapter, an elegant
pastiche of a tale by Jorge Luis Borges.
D. J. Dean and E. Elliott have contributed two chapters to this
volume. Both are detailed surveys of the usages of mathematics in
the social sciences. The first one looks at our main question from the
viewpoint of complex systems, a "traditional" way of dealing with it.
The second one considers our main problem, that is, the predictability
of events in the social sciences given the Gödel phenomenon.
(I won't say more because there is an easter egg in their texts, and I
don't want to give it away.)
Finally, S. Al-Suwailem closes our book with the question: Is
Economics a Science? In addressing this question, he explores an
interesting link between logical consistency and financial instability
via conservation laws. He argues that ignoring the meta-mathematics
of economic models might lead to misuse and, perhaps, invalidation
of these models. For the models to be reliable, and economics to
become a proper science, economic theory must circumvent the logical
paradoxes arising from Neoclassical assumptions. Gödel’s Theorem —
surprisingly? — then would pave the way for real science. I must also
thank Dr. S. Al-Suwailem for his prompt and efficient help with the
editorial chores of this book.

Francisco Antonio Doria


October 2016

About the Authors

Giandomenica Becchio is Assistant Professor in Economics (University
of Torino). Her research field is the history of economics and
methodology. She has authored several publications on the Austrian
school (Carl Menger, Karl Menger, and F. Hayek), Karl Polanyi,
liberalism and economics, and feminist economics. Recently, she
has published a book on the relationship between neoliberalism and
neoclassical economics (Routledge).

Gregory J. Chaitin is best known for his development of algorithmic
information theory and his discovery and proof of an incompleteness
theorem (the Gödel–Chaitin theorem, 1971). Other results are the
construction of the Omega number, which codes the halting problem
in computer science, and the discovery of a family of exponential
Diophantine equations, coded by a numerical parameter n, some of
whose properties are randomly distributed among the values of n.
His recent work has to do with metabiology.
Chaitin is presently an Invited Professor at the Alberto Luiz
Coimbra Institute, a graduate school of engineering in Rio.

Denis J. Dean is Professor of Geography and Geospatial Information
Science and Dean of the School of Economic, Political and Policy
Sciences at the University of Texas at Dallas. His research is focused
on the development of new spatial optimization approaches and
techniques, frequently applying concepts and ideas from machine
learning to spatial optimization problems.


Euel Elliott is Professor of Public Policy and Political Economy in
the School of Economic, Political and Policy Sciences at the University
of Texas at Dallas. He also serves as Associate Dean for Undergraduate
Education for the School. His research interests include
electoral behavior and various areas of public policy, including envi-
ronmental, science, and technology policy. He is particularly inter-
ested in issues relating to complex adaptive systems and their rela-
tionship to economic, political, and social dynamics.
Roger Koppl is Professor of Finance in the Whitman School of
Management of Syracuse University and a faculty fellow in the Uni-
versity’s Forensic and National Security Sciences Institute. He has
held visiting positions with Copenhagen Business School, New York
University, George Mason University, the University of Vaasa, and the
Max Planck Institute of Economics. Koppl's Erdős number is 3.
Sami Al-Suwailem is currently Head of Financial Product Develop-
ment Centre at the Islamic Research and Training Institute (IRTI),
the Islamic Development Bank Group, Jeddah, Saudi Arabia. He
holds a Ph.D. in economics from Washington University in St. Louis,
MO, USA.
He is the first recipient of Washington University’s Hyman
Minsky Prize for Distinguished Performance in Macroeconomics.
Al-Suwailem is a member of the Brazilian Academy of Philosophy.
Newton Carneiro Affonso da Costa is a Brazilian mathematician
whose career has centered on the development and application of
nonclassical logics; his major work has been the creation and
development of paraconsistent logics. He also proved (with F. A. Doria)
several undecidability and incompleteness theorems in physics and in
mathematical economics. He is Professor Emeritus of Mathematics
at Unicamp (São Paulo, Brazil), an invited professor of philosophy
at the Federal University of Santa Catarina and a member of the
Brazilian Academy of Philosophy.

Francisco Antonio Doria is Professor Emeritus of Communications
at Rio's Federal University and a member of the Brazilian Academy
of Philosophy.
N. da Costa and Doria proved in 1990 that chaos theory is unde-
cidable; they also showed (in 1994) that the so-called Arnold decision
problems for dynamical systems were undecidable. A similar result
has been proved by those authors for the theory of Nash games.



Contents

Introduction vii

About the Authors xi

Chapter 1  Axiomatics, the Social Sciences, and the Gödel Phenomenon: A Toolkit (Francisco Antonio Doria)  1

Chapter 2  The Perfect Language (Gregory J. Chaitin)  91

Chapter 3  Logics and Formal Ethics in Karl Menger (Giandomenica Becchio)  111

Chapter 4  Shocked Disbelief (Roger Koppl)  133

Chapter 5  Complex Systems, Decision-Making and Computability: A Social Science Perspective (Denis J. Dean and Euel Elliott)  159

Chapter 6  Is Classical Logic Enough? Applications of Nonstandard Logic to the Social Sciences (Denis J. Dean and Euel Elliott)  183

Chapter 7  Is Economics a Science? A Gödelian Perspective (Sami Al-Suwailem)  207

Index  267

Chapter 1

Axiomatics, the Social Sciences, and
the Gödel Phenomenon: A Toolkit∗

Francisco Antonio Doria


Advanced Studies Research Group, HCTE
Fuzzy Sets Laboratory
Mathematical Economics Group, Production Engineering Program
COPPE, UFRJ, P.O. Box 68507, 21945–972 Rio RJ, Brazil
fadoria63@gmail.com

1. Introduction
Gödel published his remarkable incompleteness theorems in 1931 (see
Gödel, 1931). Gödel's reasoning was immediately recognized as correct,
even if surprising, and several researchers then asked about its scope:
since Gödel's argument exhibited an undecidable sentence that didn't
quite reflect everyday mathematical facts (see below), there was some
hope that undecidable sentences might be confined to a very
pathological realm within arithmetic theory or its extensions.
Alas, this proved not to be true.


∗Partially supported by CNPq, Philosophy Section, grant no. 4339819902073398;
the author is a member of the Brazilian Academy of Philosophy. This text is part
of the research efforts of the Advanced Studies Group, Production Engineering
Program, at Coppe–UFRJ and of the Logic Group, HCTE–UFRJ. We thank
Profs. A. V. Assumpção, R. Bartholo, C. A. Cosenza, S. Fuks (in memoriam),
S. Jurkiewicz, R. Kubrusly, M. Gomes, and F. Zamberlan for support.


A road map for this chapter


We present here a kind of primer on Gödel incompleteness, both in
pure mathematics and in some domains of applied mathematics. It
will be seen that undecidability and incompleteness exist everywhere
in "reasonable" theories, so to speak, and that they may affect innocent-
looking, quite ordinary mathematical questions.
Moreover, it is possible that some big open questions might turn
out to be undecidable — might turn out to be Gödelian specimens,
we may say again — in strong axiomatic frameworks. Among those
are Goldbach's conjecture and the P vs. NP question (which may
turn out to be independent even from a very rich axiom system such
as ZFC; see Ben-David and Halevy, n.d., and da Costa and Doria,
2016).
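An aside on why such independence is conceivable: Goldbach's conjecture is a statement each instance of which is mechanically checkable, while the universal claim resists any finite amount of checking. A toy verifier (ours, purely illustrative):

```python
def is_prime(k):
    # Naive trial division: fine for this toy illustration
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def goldbach_witness(n):
    """Return primes (p, q) with p + q = n for an even n > 2, else None.
    A single None would refute the conjecture; no finite run of this
    search can confirm it, which is why the conjecture could, in
    principle, be undecidable in a fixed axiomatic framework."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None
```

Every even number we try yields a witness; the gap between "checked so far" and "proved for all" is exactly where the Gödel phenomenon can hide.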
Since we require an axiomatic background in order to have
incompleteness, we will discuss here techniques for the axiomatization
of scientific theories that have mathematics as their main language.
The axiomatization techniques we describe are based on a suggestion
by P. Suppes, and provide a quite straightforward procedure to
axiomatize theories in physics and also in mathematical economics,
mathematical ecology, and so on. Out of that, we will construct examples
of relevant undecidable sentences in those theories.
Actually this chapter is a survey of a few results that have been
explored by N. da Costa and the author for over 30 years.

Gödel's 1931 paper on the incompleteness of arithmetic and related systems
Gödel’s great 1931 paper, “On formally undecidable sentences of
Principia Mathematica and related systems, I” (see Gödel, 1931)
has two main results. Suppose that the axiomatic theory we consider
contains “enough arithmetic”1 :

1 Lots of handwaving here! But anyway, we require arithmetic in classical
predicate logic, plus the trichotomy axiom.

(1) If the axiomatic theory we consider is consistent, then there is
in it a sentence which can neither be proved nor disproved (always
within the axiomatic framework we have chosen). However, that
sentence can easily be seen to be true in the standard model for
arithmetic.
(2) If the same theory is consistent, then we cannot (within it) prove
a sentence that asserts the theory's consistency.

Gödel's undecidable sentence is weird and doesn't seem to have
an everyday mathematical meaning. It is constructed as follows:

• He first shows that the sentence "ξ cannot be proved" can be
formalized within his axiomatic framework.
• Then he diagonalizes it. The sentence he obtains is interpreted as
"I cannot be proved." The sentence is true, but — as said —
doesn't have an obvious everyday mathematical meaning.
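The diagonal trick behind "I cannot be proved" has a faithful computational analogue: a program that prints its own text. A minimal Python sketch (our illustration, not part of Gödel's argument) builds such a self-reproducing fixed point by applying a template to a description of itself:

```python
import io
import contextlib

# A template that, applied to a quotation of itself, reproduces the
# whole program; this mirrors the diagonalization step.
src = "src = %r; print(src %% src)"
quine = src % src   # the text of a program that prints exactly itself

# Running that program reproduces its own source: the computational
# analogue of a sentence that talks about its own proof status.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(quine)
assert buf.getvalue().strip() == quine
```

Just as the quine mentions itself without containing an infinite regress, Gödel's sentence refers to its own unprovability through a coded description, not through literal self-inclusion.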

The second incompleteness theorem can be seen as a kind of
corollary to the first one. Briefly: a theory based on classical
first-order logic is consistent if and only if it doesn't prove a
contradiction. It proves a contradiction if and only if it proves all its
sentences. However, an incomplete theory fails to prove at least two
of its sentences, say ξ and ¬ξ. (For a more detailed discussion see
Chaitin et al. (2011).)

A summary of the chapter


We give in the present chapter an intuitive presentation of the ideas
involved, list some of the questions that were shown to be undecidable
with the authors’ techniques and apply them in detail to systems in
economics and ecology. Those applications show that Gödel incom-
pleteness may be one of the chief hindrances (besides nonlinearity) in
the prediction of the future behavior of those systems in our current
formal representations for social phenomena.
So, our knowledge about society may also have computational,
predictive limits imposed by the Gödel phenomenon.

We discuss two techniques to construct examples of the Gödel
phenomenon in scientific disciplines that have mathematics as their
main language. We call the first technique — or the first collection of
tricks — Playing Games with the Theta Function. The second technique
is discussed afterwards and originally arose out of computer science;
it may be called The Monster in the Belly of Complexity Theory.

Part I. Playing games with the theta function


We have here a long, detailed, conceptual discussion on undecidabil-
ity and incompleteness. Then we get to hard facts. Section 11 presents
the concepts and ideas involved in our incompleteness proofs. That
presentation sketches previous contributions to those questions and
relates them to our work. Section 16 describes in nontechnical detail
our contribution to several open questions in mathematical physics
and related areas. Section 17 applies our results to economics and to
ecological models. In Section 15, we review the formal background
for our results, that is, the theory of Suppes predicates, and state
without proof our main undecidability and incompleteness theorems,
while Section 20 gives details about the ecological models we use in
Section 17.
Well, and the θ function? Wait, please.
The present chapter may be looked upon as lending strong
support to Suppes’ contention that there is no essential difference
between “a piece of pure mathematics and a piece of theoretical sci-
ence.” We show that Gödel-like phenomena occur everywhere and
in rather intuitive contexts within the language of classical analysis.
They therefore necessarily occur within any theory where the under-
lying language is that of classical analysis. As Suppes (1988) remarks:
It is difficult to predict the future of axiomatic methods in the
empirical sciences. There are signs, at least, that the large gap
that presently separates the methods used in physics from those
in mathematics is beginning to close. In any case, axiomatic meth-
ods are now widely used in foundational investigation of particular
sciences, as well as in the pursuit of certain general questions of
methodology, especially those concerning probability and induc-
tion. The use of such methods permits us to bring to the philosophy
of science the standards of rigor and clarity that are very much an
accepted part of the closely related discipline of logic.2

2 I would also like to point out that a related, albeit more refined, approach to
the axiomatization of the empirical sciences can be found in the book by Balzer
et al. (1987).


The main point in our exposition is: physics, both classical and
quantum, is here seen as an outcome, or as an extension of classical
mechanics.3 The Lagrangian and Hamiltonian formalisms, for sys-
tems of particles and then for fields, are seen as a basic, underlying
construct that specializes to the several theories considered. A course
in theoretical physics usually starts from an exposition of the
Lagrangian and Hamiltonian (the so-called analytico-canonical) for-
malisms, how they lead to a general formal treatment of field
theories, and then one applies those formalisms to electromag-
netic theory, to Schrödinger’s quantum mechanics — which is
obtained out of geometrical optics and the eikonal equation, which
in turn arise from Hamilton–Jacobi theory — and gravitation and
gauge fields, which grow out of the techniques used in the for-
malism of electromagnetic theory. Here we use a variant of this
approach.
We stress that this chapter is intended as an overview of the
results obtained by N. da Costa and the author in the search
of undecidability and incompleteness — the so-called Gödel phe-
nomenon — in physics and in other mathematized domains. We
present here the main “abstract” details (the construction of the
many θ functions, which code the halting function in computer sci-
ence and beyond) and then their use in the construction of sev-
eral examples of, let us say, Gödelian behavior in physics and
beyond.

A note on sources for this chapter


Our sources are the texts listed in the bibliography, and we will
liberally use several papers of ours, and quote from them. Among
them, we take results from a widely circulated preprint (Doria, 2014)
as well as three published texts (Chaitin et al., 2011; da Costa and
Doria, 1994b, 2007).

3 This is the actual way most courses in theoretical physics are taught.

2. Axiom systems: mathematics


We will restrict here our attention to two axiom systems for math-
ematics, Peano Arithmetic (PA) and Zermelo–Fraenkel set theory
with the Axiom of Choice (ZFC). PA will be mainly used when we
are dealing with the concept of computation and its consequences;
and we will require ZFC in order to axiomatize the kind of math-
ematics that is used by professional mathematicians. However, our
arguments are valid for recursive extensions of these theories, that
is, extensions whose theorems are also recursively enumerable.

A first look at Gödel incompleteness


Axiomatic systems like Zermelo–Fraenkel set theory, or Peano arith-
metic, can be formulated as computer programs that list (i.e., recur-
sively enumerate) all theorems of the theory. These theories are
machines that produce theorems — the sentences which are valid
in the theory. So, suppose that S is one such axiomatic theory. Sup-
pose that S is able to talk about computer programs, that is, we can
talk about partial recursive functions in the language of S. We are
interested in the recursive functions that are total, that is, which are
defined for all natural numbers 0, 1, 2, . . ..
Then we try listing (i.e., we try to recursively enumerate) all
S-total recursive functions, that is, those recursive functions that S
can recognize as total, or better, which S can prove to be total. This is
the starting point of our argument, which stems from Kleene (1936):

• We need two preliminary suppositions: first, the axiomatic system S
is supposed to be consistent (i.e., it doesn't prove a contradiction
such as, e.g., 0 = 1). Also, S must be sound, that is, S doesn't
prove sentences that are false in the standard interpretation for
arithmetic.
• Start the program that lists the theorems of S.
• Pick up those theorems that say: "function f is total and
computable."
• Out of that, we can build another list, f0, f1, f2, ..., of S-total
computable functions (functions that are proved as such in S),

together with their values:

f0(0), f0(1), f0(2), f0(3), ...
f1(0), f1(1), f1(2), f1(3), ...
f2(0), f2(1), f2(2), f2(3), ...
f3(0), f3(1), f3(2), f3(3), ...
...

• Now define a function F:

F(0) = f0(0) + 1
F(1) = f1(1) + 1
F(2) = f2(2) + 1
...

• F is different from f0 at value 0, from f1 at 1, from f2 at 2, and
so on.
We can now conclude our reasoning. The f0, f1, f2, ... functions
are said to be provably total in our theory S, as they are proved to
be total functions and appear as such in the listing of the theory's
theorems. However, F cannot be provably total in S, since it differs
at least once from each function we have listed. Yet F is obviously
computable and total in the standard model for arithmetic, and given
programs for the computation of f0, f1, f2, ... we can compute F too.
So the sentence "F is total" cannot be proved in our theory.
Also, if we suppose that the theory is sound, that is, that it
doesn't prove false facts, then the sentence "F isn't total" cannot
be proved either, as F is clearly total in the so-called standard model
for arithmetic. Therefore, it is an undecidable sentence within our
theory S.
Ladies and gentlemen, "F is total" and "F isn't total" are examples
of the Gödel incompleteness phenomenon in S: they are sentences
that can neither be proved nor disproved within S. And because of

the soundness of our theory, "F is total" is, we may say, naïvely true
in the standard interpretation for the arithmetic of S.4
We call “F is total” and “F isn’t total” undecidable sentences
in S. This example is quite simple, and has an obvious mathematical
meaning: it talks about computer programs and their domains. So,
Gödel incompleteness does matter, after all.
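The diagonal construction can be seen in miniature. In the toy sketch below (ours; in the real argument the list f0, f1, f2, ... is produced by enumerating the theorems of S of the form "f is total and computable"), a short list of total functions stands in for the enumeration, and the diagonal function F visibly escapes it:

```python
# Stand-ins for the S-total computable functions f0, f1, f2, ...
# (a toy finite list; the real list is recursively enumerated)
fs = [
    lambda n: n,          # f0
    lambda n: n * n,      # f1
    lambda n: 2 ** n,     # f2
]

def F(n):
    """Kleene's diagonal function: F(n) = fn(n) + 1."""
    return fs[n](n) + 1

# F differs from each fn at the argument n, so F cannot occur in the
# list, even though it is clearly computable and total given the list.
for n, fn in enumerate(fs):
    assert F(n) == fn(n) + 1 and F(n) != fn(n)
```

The same one-line diagonal step, applied to the genuine enumeration of S's provably total functions, produces the undecidable sentence "F is total."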

A first example
We can present here a first example of incompleteness that directly
stems from the metamathematical properties of F. Consider the BGS
set S = {(Mm, |x|^F(n) + F(n)) : n = 0, 1, 2, ...}, which has the
following properties, among many others of interest (see the discussion
in the Belly sections for details):
• It is a set of Turing machines Mm, each bound by a clock that stops
it after |x|^F(n) + F(n) computation steps, where x is the input to
the machine and |x| the binary length of x.
• S is a set of poly Turing machines in the standard model.
• The sentence "S is a set of poly Turing machines" is true of the
standard model for the arithmetic portion of theory S.
• "S is a set of poly Turing machines" is undecidable in theory S.
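The clock construction can be sketched in a few lines (a toy model of ours: the real BGS machines are clocked Turing machines, and the exponent is F(n) rather than a fixed k). A machine is modeled as a generator that yields once per computation step; the clock cuts it off after |x|^k + k steps, so whether the clocked family is polynomial hinges entirely on the arithmetic behavior of the exponent:

```python
def clock(machine, k):
    """Wrap a step-by-step machine with a |x|**k + k step budget.
    In the BGS set the fixed k is replaced by F(n), which is why the
    polynomiality of the family inherits the undecidability of
    "F is total"."""
    def run(x):
        budget = max(len(x), 1) ** k + k
        last = None
        gen = machine(x)
        for _ in range(budget):
            try:
                last = next(gen)       # one computation step
            except StopIteration:
                return last            # halted within the budget
        return None                    # the clock rang first
    return run

def parity(x):
    """Toy machine: |x| + 1 steps; the final yield is the answer."""
    ones = 0
    for ch in x:
        ones += (ch == '1')
        yield None
    yield ones % 2 == 0

fast = clock(parity, 2)     # budget |x|^2 + 2: ample for this machine
tight = clock(parity, 0)    # budget 1 step: the clock always wins
```

Whatever the wrapped machine does, the clocked version runs in time |x|^k + k by construction; the subtlety in the BGS set is only whether the exponent is provably defined for every n.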

3. The Gödel phenomenon in physics and in other
mathematized sciences
Now, are there Gödel undecidable sentences in physics? In mathe-
matical economics? Yes.
In order to look for undecidable sentences in physics, one must
axiomatize the theories where our formal sentences are cradled. This
is the gist of a remarkable question formulated at the very end of the
19th century, Hilbert’s Sixth Problem. That seems to be a formidable
obstacle; let’s take a look at Hilbert’s formulation of it.

4
There are examples of theories like S where one cannot find a “natural” inter-
pretation like the one in our example.

Hilbert’s Sixth Problem


The Mathematical Treatment of the Axioms of Physics.
The investigations on the foundations of geometry suggest the
problem: to treat in the same manner, by means of axioms, those
physical sciences in which mathematics plays an important part; in
the first rank are the theory of probability and mechanics.
As to the axioms of the theory of probabilities, it seems to me to
be desirable that their logical investigation be accompanied by a
rigorous and satisfactory development of the method of mean values
in mathematical physics, and in particular in the kinetic theory of
gases.
Important investigations by physicists on the foundations of
mechanics are at hand; I refer to the writings of Mach. . . , Hertz. . . ,
Boltzmann. . . , and Volkmann. . . It is therefore very desirable that
the discussion of the foundations of mechanics be taken up by
mathematicians also. Thus Boltzmann’s work on the principles
of mechanics suggests the problem of developing mathematically
the limiting processes, there merely indicated, which lead from the
atomistic view to the laws of continua. Conversely, one might try
to derive the laws of motion of rigid bodies by a limiting process
from a system of axioms depending upon the idea of continuously
varying conditions on a material filling all space continuously, these
conditions being defined by parameters. For the question as to the
equivalence of different systems of axioms is always of great theo-
retical interest.
If geometry is to serve as a model for the treatment of physical
axioms, we shall try first by a small number of axioms to include
as large a class as possible of physical phenomena, and then by
adjoining new axioms to arrive gradually at the more special theo-
ries. At the same time, Lie’s principle of subdivision can perhaps be
derived from the profound theory of infinite transformation groups.
The mathematician will have also to take account not only of those
theories coming close to reality, but also, as in geometry, of all
logically possible theories. We must be always alert to obtain a
complete survey of all conclusions derivable from the system of
axioms assumed.

Further, the mathematician has the duty to test exactly in each
instance whether the new axioms are compatible with the previ-
ous ones. The physicist, as his theories develop, often finds himself
forced by the results of his experiments to make new hypotheses,
while he depends, with respect to the compatibility of the new
hypotheses with the old axioms, solely upon these experiments or
upon a certain physical intuition, a practice that in the rigorously
logical building up of a theory is not admissible. The desired proof
of the compatibility of all assumptions seems to me also of impor-
tance, because the effort to obtain such a proof always forces us
most effectively to an exact formulation of the axioms.

There are two main questions in Hilbert’s Sixth Problem:


• To give an axiom system for the whole of physics;
• To show that the axiom systems we formulate are consistent.
We already know that, if PA is included in our axiomatics, then
by Gödel’s second incompleteness theorem we cannot prove the
consistency of our axiomatized theories for physics.
But how about incompleteness? There is a folklore conjecture
that tries to connect quantum physics and incompleteness; one won-
ders whether there is a relation between Gödel incompleteness and
Heisenberg’s uncertainty principle. If one had a strict equivalence
here, a consequence of that conjecture would be the nonexistence
of metamathematical phenomena such as undecidability and incom-
pleteness at the classical level, in physics.
Yet we show here that such a conjecture is false. Classical
physics — which is usually taken to be the realm of determinism —
is as marred by the Gödel phenomenon as its quantum counterpart
(da Costa and Doria, 1991a,b).

4. Physics as an archetype for the mathematized sciences
Physics stands out as a kind of road map, or better, as a kind of
archetype, for the development of a mathematical backbone in sci-
ences that go from mathematical economics to mathematical ecology
and to the theory of social systems. Actually, most of physics can be
axiomatized in a straightforward way, as there is a standard and
unified formalism that stands behind every physical theory since the
18th century.
I mean the so-called analytico-canonical formalism. Its starting
point is:
A theory in physics is a representation of the analytico-canonical
formalism. Its dynamical laws are derived from the variational prin-
ciples:
\[
\delta \int_a^b L\,dt = 0,
\]
or
\[
\delta \int_{\mathrm{Domain}} \mathcal{L}\,d\sigma = 0,
\]
where L is a Lagrangian and \(\mathcal{L}\) is a Lagrangian density.

This is Hamilton’s Principle; the integral acted upon by the
variational operator δ is the Action Integral. Hamilton’s Principle has a
simple interpretation; it is a kind of least effort principle. Physical
systems move through the easiest path or paths. And effort is mea-
sured by L or L, which is, let us say, a kind of “free energy” available
for a system to use in its motion.
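For a single degree of freedom, Hamilton's Principle yields the Euler–Lagrange equation; the following sketch (our illustration, not part of the original text) makes the "least effort" reading concrete:

```latex
% Hamilton's Principle for one degree of freedom q(t):
%   \delta \int_a^b L(q, \dot q)\, dt = 0.
% Requiring the action to be stationary under variations vanishing at the
% endpoints gives the Euler--Lagrange equation:
\[
\frac{d}{dt}\,\frac{\partial L}{\partial \dot q}
  \;-\; \frac{\partial L}{\partial q} \;=\; 0.
\]
% Example: for the harmonic oscillator,
% L = \tfrac12 m\dot q^2 - \tfrac12 k q^2, this yields
% m\ddot q = -kq, Newton's law for a linear spring.
```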

5. Axiomatics for physics: preliminary steps


What can we know about the world through a formal language?
Which are the limitations imposed on our empirical, everyday knowl-
edge when we try to describe the things around us with the help of
a formalized language?
We show that strong enough formal languages exhibit the Gödel
phenomenon, that is to say, they have undecidable sentences.5 But,

5
We stress that consistency of the underlying axiomatic apparatus must be
assumed throughout this chapter.

again, can we find undecidable sentences that make meaningful
assertions about the world within reasonable axiomatics for the
empirical sciences?

“Meaningful” undecidable sentences


Let us now quote a specific query, the decision problem for chaotic
dynamical systems (Hirsch, 1985):
An interesting example of chaos — in several senses — is pro-
vided by the celebrated Lorenz System. [. . . ] This is an extreme
simplification of a system arising in hydrodynamics. By computer
simulation Lorenz found that trajectories seem to wander back and
forth between two particular stationary states, in a random, unpre-
dictable way. Trajectories which start out very close together even-
tually diverge, with no relationship between long run behaviors.

But this type of chaotic behavior has not been proved. As far as I
am aware, practically nothing has been proved about this particular
system. Guckenheimer and Williams proved that there do indeed
exist many systems which exhibit this kind of dynamics, in a rig-
orous sense; but it has not been proved that Lorenz’s system is one
of them. It is of no particular importance to answer this question;
but the lack of an answer is a sharp challenge to dynamicists, and
considering the attention paid to this system, it is something of a
scandal.

The Lorenz system is an example of (unverified) chaotic dynamics;
most trajectories do not tend to stationary or periodic orbits, and
this feature is persistent under small perturbations. Such systems
abound in models of hydrodynamics, mechanics and many biolog-
ical systems. On the other hand experience (and some theorems)
show that many interesting systems can be expected to be non-
chaotic: most chemical reactions go to completion; most ecological
systems do not oscillate unpredictably; the solar system behaves
fairly regularly. In purely mathematical systems we expect heat
equations to have convergent solutions, and similarly for a single
hyperbolic conservation law, a single reaction–diffusion equation,
or a gradient vectorfield.

A major challenge to mathematicians is to determine which dynamical
systems are chaotic or not. Ideally one should be able to tell
from the form of the differential equations. The Lorenz system illus-
trates how difficult this can be.

In 1990 da Costa and Doria showed that there is no general
algorithm for the solution of Hirsch’s decision problem, no matter
which definition for chaos is adopted (da Costa and Doria, 1991a,b).
That result led to several undecidability and incompleteness results
in dynamical systems theory; all stem from Gödel’s original incom-
pleteness theorem for arithmetic through another of the Hilbert prob-
lems, the 10th Problem. Actually such examples of incompleteness
and undecidability all stem from a very general Rice-like theorem
proved by da Costa and Doria (1991a). However, in order to have
incompleteness in physics, we must have an axiomatic framework.
How do we proceed?
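The logical mechanism behind such undecidability results can be conveyed by a toy reduction (our own illustration, much cruder than the actual da Costa–Doria construction): from any program p one builds a dynamical map that behaves chaotically only if p halts, so a general decision procedure for chaos would decide the halting problem.

```python
# Toy reduction sketch (illustrative only): from a halting test for a
# program p, build a map f_p that stays quiescent while p runs and becomes
# the chaotic logistic map once p halts. Deciding "is f_p chaotic?" for
# arbitrary p would therefore decide the halting problem.

def make_system(halts_within):
    """halts_within(n): simulate program p for n steps, report if it halted.
    Returns the parametrized map f_p(x, n)."""
    def f_p(x, n):
        if halts_within(n):
            return 4.0 * x * (1.0 - x)   # chaotic regime (logistic map)
        return x                          # quiescent regime (identity)
    return f_p

halting_prog = make_system(lambda n: n >= 3)   # a "program" halting at step 3
looping_prog = make_system(lambda n: False)    # a "program" that never halts

quiet = [looping_prog(0.3, n) for n in range(5)]
lively = [halting_prog(0.3, n) for n in range(5)]
```

If the looping "program" never halts, the associated map is the identity forever; an algorithm deciding between the two behaviors for every p is impossible.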

6. Axiomatics for physics: guidelines


Let us now look closely at a few examples. From da Costa and Doria
(2007), we have the following data.

Axiomatics for classical mechanics: preliminary data


The first efforts toward a unification of mechanics are to be found in
Lagrange’s Traité de Mécanique Analytique (1811) and in Hamilton’s
results.

• Hertz is the author of the first unified, mathematically
well-developed presentation of classical mechanics in the late 1800s,
in a nearly contemporary mathematical language. His last book,
The Principles of Mechanics, published in 1894, advances many
ideas that will later resurface not just in 20th century analytical
mechanics, but also in general relativity (see Hertz, 1956).
• Half a century later, in 1949, we have two major developments
in the field: first C. Lanczos publishes The Variational Principles
of Mechanics, a brilliant mathematical essay (see Lanczos, 1977)
that, for the first time, presents classical mechanics from the unified
viewpoint of differential geometry and Riemannian geometry.
Concepts like kinetic energy or Coriolis force are made into geo-
metrical constructs (respectively, Riemannian metric and affine
connection); several formal parallels between mechanical formal-
ism and that of general relativity are established.
However, the style of Lanczos’ essay is still that of late 19th and
early 20th century mathematics, and is very much influenced by
the traditional, tensor-oriented, local-coordinate-oriented
presentations of general relativity.
• Then: new and (loosely speaking) higher-order mathematical con-
structs appear when Steenrod’s results on fiber bundles and Ehres-
mann’s concepts of connection and connection forms on principal
fiber bundles are gradually applied to mechanics; those concepts
go back to the late 1930s and early 1940s, and make their way into
the mathematical formulations of mechanics in the late 1950s.
• Folklore has it that the use of symplectic geometry in mechanics first
arose in 1960 when a top-ranking unnamed mathematician6 cir-
culated a letter among colleagues, which formulated Hamiltonian
mechanics as a theory of flows over symplectic manifolds, that is, a
Hamiltonian flow is a flow that keeps invariant the symplectic form
on a given symplectic manifold. The symplectic manifold was the
old phase space; invariance of the symplectic form directly led to
Hamilton’s equations, to Liouville’s theorem on the incompressibil-
ity of the phase fluid, and to the well-known Poincaré integrals —
and here the advantage of a compact formalism was made clear, as
the old, computational, very cumbersome proof for the Poincaré
invariants was replaced by an elegant two-line, strictly geomet-
rical proof.
High points in this direction are Sternberg’s lectures
(see Sternberg, 1964), MacLane’s monograph (see MacLane, 1968)
and then the great Abraham–Marsden (1978) treatise, Founda-
tions of Mechanics.
• Again one had at that moment a physical theory fully placed
within the domain of a rigorous (albeit intuitive) mathematical

6
Said to be Richard Palais.

framework, as in the case of electromagnetism, gauge field theory,
and general relativity. So, the path was open for an axiomatic
treatment.

For electromagnetism
The first conceptually unified view of electromagnetic theory is given
in Maxwell’s treatise, dated 1873 (for a facsimile of the 1891 edition
see Maxwell (1954)).

• Maxwell’s treatment was given a more compact notation by
J. Willard Gibbs with the help of vector notation.
• A sort of renewed presentation of Maxwell’s main conceptual lines
appears in the treatise by Sir James Jeans (1925).
• Then there is Stratton’s (1941) textbook with its well-known list
of difficult problems.
• And then Jackson’s (1962) book, still the main textbook in the
1970s and 1980s.

When one looks at the way electromagnetic theory is presented
in these books one sees that:

• The mathematical framework is calculus — the so-called advanced
calculus, plus some knowledge of ordinary and partial differential
equations — and linear algebra.
• Presentation of the theory’s kernel becomes more and more com-
pact; its climax is the use of covariant notation for the Maxwell
equations.
• However, covariant notation only appears as a development out of
the set of Maxwell equations in the traditional Gibbsian “gradient-
divergence-rotational” vector notation.
• Finally, the Maxwell equations are shown to be derived from a
variational principle, out of a Lagrangian density.

So, the main trend observed in the presentation of electromagnetic
theory is: the field equations for electromagnetic theory are in
each case presented as a small set of coordinate–independent equa-
tions with a very synthetic notation system. When we need to do
actual computations, we fall back into the framework of classical,
19th-century analysis, since for particular cases (actual, real-world,
situations), the field equations open up, in general, to complicated,
quite cumbersome differential equations to be solved by mostly tra-
ditional techniques.
A reliable reference for the early history of electromagnetism
(even if theoretically very heterodox) is O’Rahilly’s (1965) text.

General relativity and gravitation


The field equations for gravitation we use today, that is, the Einstein
field equations, were already born in a compact, coordinate-independent
form (1915/1916) (see Einstein, n.d.). The Einstein gravitational field
equations can also be derived from a variational principle whose
Lagrangian density is \(\sqrt{-g}\,R\), where R is the pseudo-Riemannian
curvature scalar with respect to g, the four-dimensional metric tensor
with signature +2.

• We find in Einstein’s original presentation an explicit striving for a
different kind of unification, that of a conceptual unification of all
domains of physics. A unified formalism at that moment meant
that one derived all different fields from a single, unified, funda-
mental field. That basic field then “naturally” splits up into the
several component fields, very much like, or in search of an
analogy to, the situation uncovered by Maxwell in electromag-
netism, where the electric and the magnetic fields are different
concrete aspects of the same underlying unified electromagnetic
field.
• This trend starts with Weyl’s (1968) theory in 1918 just after
Einstein’s introduction in 1915 of his gravitation theory, and cul-
minates in Einstein’s beautiful, elegant, but physically unsound
unified theory of the nonsymmetric field (1946; see Einstein, 1967).
• On the other hand, Weyl’s ideas lead to developments that appear
in the treatise by Corson (1953), and which arrive at the gauge
field equations, or Yang–Mills equations (1954), which were for
the first time examined in depth by Utiyama (1956).
• An apparently different approach appears in the Kaluza–Klein
unified theories. Originally unpromising and clumsy-looking, the
blueprint for these theories goes back to Kaluza (1921) and then
to Klein (1926); see Tonnelat (1965).
In its original form, the Kaluza–Klein theory is basically the
same as Einstein’s gravitation theory over a five-dimensional man-
ifold, with several artificial-looking constraints placed on the fifth
dimension; that extra dimension is associated with the electromag-
netic field.
• The unpleasantness of having to deal with extraneous conditions
that do not arise out of the theory itself was elegantly avoided
when A. Trautman in the late 1960s and then later Cho (1975),
showed that the usual family of Kaluza–Klein-like theories arises
out of a simile of Einstein’s theory over a principal fiber bundle on
spacetime with a semi-simple Lie group G as the fiber.
Einstein’s Lagrangian density over the principal fiber bundle
endowed with its natural metric tensor splits up as Einstein’s usual
gravitational Lagrangian density with the so-called cosmological
term plus an interacting gauge field Lagrangian density; depend-
ing on the group G one gets electromagnetic theory, isospin theory,
and so on. The cosmological constant arises in the Cho–Trautman
model out of the Lie group’s structure constants, and thus gives a
possible geometrical meaning to its interpretation as dark energy.
• Here, conceptual unification and formal unification go hand in
hand, but, in order to do so, we must add some higher-order objects
(principal fiber bundles and the associated spaces, plus connections
and connection forms) to get our more compact, unified treatment
of gravitation together with gauge fields, which subsume the elec-
tromagnetic field.

We are but a step away from a rigorous axiomatic treatment.

From classical to quantum mechanics


Quantum mechanics has always been snugly cradled in the classical
theory, at least when considered by theoretical and mathematical
physicists, far from the cloudy popular misconceptions that have
surrounded the domain since its inception in the late 1920s. The
Bohr–Sommerfeld quantization conditions in the first, “old,” quan-
tum theory, arise from the well-known Delaunay conditions in
celestial mechanics; so much for the old quantum theory. The
new, or Schrödinger–Heisenberg–Dirac quantum mechanics is nearly
empirical in its inception (see van der Waerden, 1968), but when
Schrödinger and Dirac appear on stage (see Dirac, 1967), we clearly
see that the theory’s conceptual roots and formalism arise out of
classical mechanics. Schrödinger’s wave equation is a kind of rein-
terpretation of the eikonal equation in geometrical optics, which, in
turn, is a consequence of the Hamilton–Jacobi equation; the Dirac
commutators and Heisenberg’s motion equations are new avatars of
well-known equations in the classical theory that involve Poisson
brackets. We can also look at the motion equations:
\[
\frac{dG}{dt} = \frac{\partial G}{\partial t} + \{H, G\}
\]
as the definition of a partial connection given by the Hamiltonian H
on a manifold.
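For a time-independent observable this motion equation can be checked numerically; below is a small sketch (our example, with a harmonic-oscillator Hamiltonian and the Poisson-bracket convention chosen to match the sign in the text's equation):

```python
# Numerical check (our sketch): for H = (p^2 + q^2)/2 and the time-independent
# observable G = q*p, the derivative of G along the Hamiltonian flow equals
# the Poisson bracket {H, G}, here taken with the convention
# {H, G} = (dH/dp)(dG/dq) - (dH/dq)(dG/dp).

def flow_step(q, p, dt):
    """One 4th-order Runge-Kutta step of Hamilton's equations q' = p, p' = -q."""
    def deriv(q, p):
        return p, -q
    k1 = deriv(q, p)
    k2 = deriv(q + dt / 2 * k1[0], p + dt / 2 * k1[1])
    k3 = deriv(q + dt / 2 * k2[0], p + dt / 2 * k2[1])
    k4 = deriv(q + dt * k3[0], p + dt * k3[1])
    q += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    p += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return q, p

def G(q, p):
    return q * p

def poisson_H_G(q, p):
    # dH/dp = p, dH/dq = q; dG/dq = p, dG/dp = q  =>  {H, G} = p*p - q*q
    return p * p - q * q

q0, p0, dt = 1.0, 0.5, 1e-5
q1, p1 = flow_step(q0, p0, dt)
numeric_dGdt = (G(q1, p1) - G(q0, p0)) / dt
```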
A surprising technical development stems from the efforts by
Wightman to place quantum mechanics and the second-quantization
theories on a firm mathematical ground. The starting point here was
von Neumann’s view in the early 1930s that quantum mechanics
was a linear dynamical theory of operators on some Hilbert space.
The Murray–von Neumann theory of what we now know as
von Neumann algebras (1936), later expanded to the theory of
C*-algebras, allowed a group of researchers to frame several quantum-
theoretic constructions in a purely algebraic way. Its realization in
actual situations is given by a quantum state that induces a partic-
ular representation for the system (representation is here taken as
the meaning used in group theory). This is the so-called Gelfand–
Naimark–Segal construction (see Emch, 1972).
The C*-algebra approach covers many aspects of quantum
field theory and is again framed within a rigorous, albeit intuitive
mathematical background. It also exhibits some metamathematical
phenomena, since the existence of some very general representations
for C*-algebras is dependent on the full axiom of choice.
To sum it up: physics has strived for conceptual unification during
the 20th century. This unification was attained in the domains we
just described through a least-effort principle (Hamilton’s Principle)
applied to some kind of basic field, the Lagrangian or Lagrangian
density, from which all known fields should be derived.
Most of physics is already placed on a firm mathematical ground,
so that a strict axiomatic treatment of the main physical theories is
possible. Still, there are mathematically uncertain procedures that
are part of the everyday activity of the theoretical physicist, like
Feynman integration — but in this particular example, we can take
Feynman’s technique as an algorithm for the generation of a series
of Feynman diagrams, that is, as a strictly symbolic computational
procedure. Other theoretical physics constructs that do not have a
clear mathematical formulation (e.g., Boltzmann’s H–theorem) can
perhaps be approached in a similar way, as when we obtain formal
series expansions out of the entropy integral, while one waits for a
sound mathematical formulation of it.

7. Suppes predicates
Suppes predicates give us a simple way of axiomatizing empirical the-
ories within set theory; one simply defines a set-theoretic predicate
that formally characterizes the empirical theory:
In the first place, it may be well to say something more about the
slogan “To axiomatize a theory is to define a set–theoretical pred-
icate.” It may not be entirely clear what is meant by the phrase
“set–theoretical predicate.” Such a predicate is simply a predicate
that can be defined within set theory in a completely formal way.
For a set theory based only on the primitive predicate of mem-
bership, “∈” in the usual notation, this means that ultimately any
set-theoretical predicate can be defined solely in terms of member-
ship.
...

It is one of the theses of this book that there is no theoretical way of
drawing a sharp distinction between a piece of pure mathematics
and a piece of theoretical science. The set-theoretical definition
of the theory of mechanics, the theory of thermodynamics, and a
theory of learning, to give three rather disparate examples, are on
all fours with the definitions of the purely mathematical theories
of groups, rings, fields, etc.

(See Suppes, 1988)
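Suppes's slogan can be made concrete for the simplest of his examples, the theory of groups. Below is a minimal sketch (ours) in which the "predicate" is a Boolean test on a finite structure ⟨G, ∘⟩:

```python
# "To axiomatize a theory is to define a set-theoretical predicate": for
# group theory, the predicate holds of a structure <G, op> exactly when
# closure, associativity, an identity, and inverses obtain. Finite sketch:

from itertools import product

def is_group(G, op):
    if any(op(a, b) not in G for a, b in product(G, G)):
        return False                                          # closure
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(G, G, G)):
        return False                                          # associativity
    units = [e for e in G if all(op(e, a) == a == op(a, e) for a in G)]
    if not units:
        return False                                          # identity
    e = units[0]
    return all(any(op(a, b) == e for b in G) for a in G)      # inverses

Z3 = {0, 1, 2}
addition_mod_3 = is_group(Z3, lambda a, b: (a + b) % 3)   # a genuine group
taking_minimum = is_group(Z3, lambda a, b: min(a, b))     # fails: no inverses
```

The analogy Suppes draws is that mechanics or thermodynamics are axiomatized by predicates of exactly this kind, only over richer set-theoretic structures.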

Construction of Suppes predicates


References are da Costa and Chuaqui (1988) and Suppes (1967,
1988). In the present version, a Suppes predicate is a conjunction
of two pieces. The first one gives the mathematical setting for the
objects we are going to deal with, constructed through set-theoretic
operations out of known objects in the set-theoretic universe, while
the second component adds the dynamical equations that rule the
process (they act as a kind of postulate for our domain). For examples
see da Costa and Doria (1992a, b) and da Costa et al. (1990).

8. Axiomatics for physics: the main ideas


The usual formal7 treatment for physics goes as follows: one writes
down a Lagrangian or a Lagrangian density for the phenomena we
are interested in, and then uses the variational principle as a kind
of algorithmic procedure to derive the Euler–Lagrange equations,
which give us the dynamics of the system. The variational principle
also allows us to obtain a conservation-law, symmetry-dependent
interpretation of interaction, as in the case of the introduction of
gauge fields out of symmetry conditions imposed on some field (see
Corson, 1953; Utiyama, 1956).

7
We will proceed in an informal way, and leave to the archetypical interested
reader the toil and trouble of translating everything that we have done into a
fully formal, rigorous treatment of our presentation.

We take a slightly different approach here. We describe the arena
where physics happens — phase space, spacetime, fibered spaces —
and add the dynamics through a Dirac-like equation.
Our results are not intended as a complete, all-encompassing,
axiomatics for the whole of physics: there are many interesting areas
in physics with uncertain mathematical procedures at the moment,
such as statistical mechanics or quantum field theory, and the present
framework may be adequate for them. But we may confidently say
that our axiomatization covers the whole of classical mechanics, clas-
sical field theory, and first-quantized quantum mechanics.
We follow the usual mathematical notation here. We use Suppes
predicates as our main tool. As said, a Suppes predicate is essentially
a set-theoretical conjunction with two parts:
• The first conjunct describes the mathematical objects we use in our
theory (spacetime, vectors, tensors, and so on).
• Then, the second conjunct gives the dynamics for the theory. It
may be given as a variational principle, or (as we may present it
here) as an explicit set of differential equations.
More precisely, the species of structures of essentially all main
classical physical theories can be formulated as particular dynamical
systems derived out of the triple P = ⟨X, G, µ⟩, where X is a
topological space, G is a topological group, and µ is a measure on a
set of finite rank over X ∪ G; it is easy to put P in the form of a
species of structures.
Thus we can say that the mathematical structures of physics
arise out of the geometry of a topological space X. More precisely,
physical objects are (roughly) the elements of X that
• exhibit invariance properties with respect to the action of G.
(Actually the main species of structures in “classical” theories can
be obtained out of two objects, a differentiable finite-dimensional
real Hausdorff manifold M and a finite-dimensional Lie group G.)
• are “generic” with respect to the measure µ for X.
This means that we deal with objects of probability 1. So, we only deal
with “typical” objects, not the “exceptional” ones.

That condition isn’t always used, we must note, but in any case the
measure µ allows us to identify the exceptional situations in any
construction.
Let’s now give all due details:

Definition 8.1. The species of structures of a classical physical
theory is given by the 9-tuple
Σ = ⟨M, G, P, F, A, I, G, B, ∇ϕ = ι⟩,
which is thus described:
which is thus described:
(1) The Ground Structures. M, G, where M is a finite-dimensional
real differentiable manifold and G is a finite-dimensional Lie
group.
(2) The Intermediate Sets. A fixed principal fiber bundle P (M, G)
over M with G as its fiber plus several associated tensor and
exterior bundles.
(3) The Derived Field Spaces. Potential space A, field space F and
the current or source space I. A, F, and I are spaces (in gen-
eral, manifolds) of cross-sections of the bundles that appear as
intermediate sets in our construction.
(4) Axiomatic Restrictions on the Fields. The dynamical rule ∇ϕ = ι
and the relation ϕ = d(α)α between a field ϕ ∈ F and its poten-
tial α ∈ A, together with the corresponding boundary conditions
B. Here d(α) denotes a covariant exterior derivative with respect
to the connection form α, and ∇ a covariant Dirac-like operator.
As an alternative, we may give the dynamics through a varia-
tional principle. The advantage (in that case) is that motion (or
whatever is described by our theory’s equations) is seen to arise
out of a kind of “least effort principle.”8
(5) The Symmetry Group. G ⊆ Diff(M) ⊗ G′, where Diff(M) is the
group of diffeomorphisms of M and G′ the group of gauge
transformations of the principal bundle P.

8
Not always so, as a variational principle only gives us an extremal behavior,
which can be a maximum or a minimum.

(6) The Space of Physically Distinguishable Fields. If K is one of the
F, A, or I field manifolds, then the space of physically distinct
fields is K/G.
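The shape of Definition 8.1 can be transcribed as a plain record; the following sketch (our illustration, with informal string placeholders anticipating the Maxwellian example treated later) just fixes the nine slots:

```python
# The 9-tuple of Definition 8.1 transcribed as a record. The field values
# are informal string placeholders for the Maxwellian instance (our sketch);
# nothing here is computed, only the shape of the species of structures.

from dataclasses import dataclass, fields

@dataclass(frozen=True)
class ClassicalTheory:
    M: str          # ground manifold
    G: str          # ground Lie group
    P: str          # principal fiber bundle P(M, G)
    F: str          # field space
    A: str          # potential space
    I: str          # current/source space
    sym: str        # symmetry group inside Diff(M) x gauge group
    B: str          # boundary conditions
    dynamics: str   # Dirac-like dynamical rule

maxwell = ClassicalTheory(
    M="R^4 with the Minkowski metric",
    G="the circle group S^1 (i.e., U(1))",
    P="the trivial bundle M x S^1",
    F="electromagnetic field tensors",
    A="electromagnetic potentials",
    I="current vectorfields",
    sym="Lorentz-Poincare group plus gauge transformations",
    B="boundary conditions fixing the field out of the source",
    dynamics="nabla phi = iota",
)

slot_count = len(fields(maxwell))
```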

(In more sophisticated analyses, we can replace our concept of theory
with a more refined one. Actually, in the theory of science, we proceed
as in the practice of science itself by means of better and better
approximations. However, for the goals of the present work, our
concept of empirical theory is enough.)

9. Axiomatics for physics: examples


Again, we base our exposition on da Costa and Doria (2007). What we
understand as the classical portion of physics up to the level of first-
quantized theories easily fits into the previous scheme. We discuss in
detail several examples: Maxwellian theory, Hamiltonian mechanics,
general relativity, and classical gauge field theory.

Maxwell’s electromagnetic theory


Let M = R^4 with some differential structure, exotic or standard.
Anyway, physics may be seen as a local phenomenon, and so the
global properties of the underlying spacetime may be unimportant.9
Thus consider the standard case. Let us endow M with the Carte-
sian coordination induced from its product structure, and let η =
diag(−1, +1, +1, +1) be the symmetric constant metric Minkowskian
tensor on M .
Then M is Minkowski spacetime, the physical arena where we
do special relativity theory. As it is well known, out of the linear
transformations that keep invariant tensor η, we obtain the well-
known relativistic contraction and dilation phenomena.
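The invariance statement can be checked by direct computation; here is a small numeric sketch (ours) verifying that a boost Λ along the x-axis satisfies Λᵀ η Λ = η for η = diag(−1, +1, +1, +1):

```python
# Numeric check (our sketch): a Lorentz boost along x with velocity v
# (in units of c) preserves the Minkowski tensor eta = diag(-1, +1, +1, +1),
# i.e. Lambda^T eta Lambda = eta.

import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(col) for col in zip(*A)]

eta = [[(-1.0 if i == 0 else 1.0) if i == j else 0.0 for j in range(4)]
       for i in range(4)]

v = 0.6                              # sample velocity (our choice)
g = 1.0 / math.sqrt(1.0 - v * v)     # Lorentz factor gamma
boost = [[g, -g * v, 0.0, 0.0],
         [-g * v, g, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0, 1.0]]

check = matmul(transpose(boost), matmul(eta, boost))
```

The transformations with this property form the Lorentz group; adding translations gives the Poincaré group that appears below as the symmetry group of the Maxwell equations.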
We use standard physics notation. If Fµν (x) are the components
of the electromagnetic field, that is, a differentiable covariant 2-tensor

9
Not always, as the exotic underlying structure of spacetime may be seen as
generated by some energy–momentum tensor.

field on M , µ, ν = 0, 1, 2, 3, then Maxwell’s equations are:


∂µ F µν = j ν ,
∂µ Fνρ + ∂ρ Fµν + ∂ν Fρµ = 0.
The contravariant vectorfield whose components are given by the
set of four smooth functions j µ (x) on M is the current that serves
as source for Maxwell’s field Fµν . (We allow piecewise differentiable
functions to account for shock-wave-like solutions.)
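As an illustrative aside (no code belongs to the formal development), the following Python sketch checks numerically that a field tensor Fµν = ∂µAν − ∂νAµ derived from a potential automatically satisfies the homogeneous (cyclic) Maxwell equation above. The particular potential Aµ used is an arbitrary smooth choice made up for the test, not a physically meaningful field.

```python
import math

# Check, by central finite differences, that if F_{mu nu} = d_mu A_nu -
# d_nu A_mu then the cyclic identity d_mu F_{nu rho} + d_rho F_{mu nu}
# + d_nu F_{rho mu} = 0 holds automatically.  The potential A below is
# an arbitrary smooth choice, not a physical field.

H = 1e-3  # finite-difference step

def A(mu, x):
    """Components A_mu(x) of some smooth 4-potential."""
    t, x1, x2, x3 = x
    return [math.sin(x1) * x2, t * x3, math.cos(t) + x1 * x3, x2 * x2][mu]

def d(mu, f, x):
    """Central finite difference of f along coordinate mu."""
    xp, xm = list(x), list(x)
    xp[mu] += H
    xm[mu] -= H
    return (f(xp) - f(xm)) / (2 * H)

def F(mu, nu, x):
    """Field tensor F_{mu nu} = d_mu A_nu - d_nu A_mu."""
    return d(mu, lambda y: A(nu, y), x) - d(nu, lambda y: A(mu, y), x)

def cyclic(mu, nu, rho, x):
    """Left-hand side of the homogeneous Maxwell equation."""
    return (d(mu, lambda y: F(nu, rho, y), x)
            + d(rho, lambda y: F(mu, nu, y), x)
            + d(nu, lambda y: F(rho, mu, y), x))

point = [0.3, 1.1, -0.7, 0.5]
residual = max(abs(cyclic(m, n, r, point))
               for m in range(4) for n in range(4) for r in range(4))
print(residual)  # essentially zero, up to floating-point rounding
```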
It is known that Maxwell’s equations are equivalent to the Dirac-
like set
∇ϕ = ι,
where
ϕ = (1/2)Fµν γ µν ,
and
ι = jµ γ µ ,

∇ = γ ρ ∂ρ ,
(where the {γ µ : µ = 0, 1, 2, 3} are the Dirac gamma matrices with
respect to η, that is, they satisfy the anticommutation rules γ µ γ ν +
γ ν γ µ = 2η µν ). Those equation systems are to be understood together
with boundary conditions that specify a particular field tensor Fµν
“out of” the source j ν (see Doria, 1977). Here γ µν = (1/2)[γ µ , γ ν ],
where brackets denote the commutator.
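The anticommutation rules just quoted can be verified numerically. The sketch below uses i times the standard Dirac-representation matrices — one possible concrete choice, adjusted so that the (−, +, +, +) signature used here comes out right:

```python
# Numerical check of gamma^mu gamma^nu + gamma^nu gamma^mu = 2 eta^{mu nu} I
# for eta = diag(-1, +1, +1, +1).  The matrices are i times the standard
# Dirac-representation gammas -- one possible choice for this signature.

ETA = [-1, 1, 1, 1]
I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
S = [  # Pauli matrices
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]

def block(a, b, c, d):
    """Assemble a 4x4 matrix from four 2x2 blocks."""
    return [a[r] + b[r] for r in range(2)] + [c[r] + d[r] for r in range(2)]

def scale(s, m):
    return [[s * x for x in row] for row in m]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(4)] for i in range(4)]

gamma = [scale(1j, block(I2, Z2, Z2, scale(-1, I2)))]  # gamma^0
for s in S:                                            # gamma^1..gamma^3
    gamma.append(scale(1j, block(Z2, s, scale(-1, s), Z2)))

ok = True
for mu in range(4):
    for nu in range(4):
        anti = add(matmul(gamma[mu], gamma[nu]), matmul(gamma[nu], gamma[mu]))
        want = 2 * ETA[mu] if mu == nu else 0
        for i in range(4):
            for j in range(4):
                expect = want if i == j else 0
                ok = ok and abs(anti[i][j] - expect) < 1e-12
print(ok)  # True
```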
The symmetry group of the Maxwell field equations is the
Lorentz–Poincaré group, which acts on Minkowski space M and, in an induced way, on objects defined over M . However, since we are
interested in complex solutions for the Maxwell system, we must find
a reasonable way of introducing complex objects in our formulation.
One may formalize the Maxwellian system as a gauge field. We sketch
the usual formulation: again we start from M = R4 , η, and con-
struct the trivial circle bundle P = M × S 1 over M , since Maxwell’s
field is the gauge field of the circle group S 1 (usually written in that
respect as U(1)). We form the set E of bundles associated with P whose fibers are finite-dimensional vectorspaces. The set of physical
fields in our theory is obtained out of some of the bundles in E: the set
of electromagnetic field tensors is a set of cross-sections of the bundle
F = Λ2 ⊗s1 (M ) of all s1 -valued 2-forms on M , where s1 is the group’s
Lie algebra. To be more precise, the set of all electromagnetic fields
is F ⊂ C k (F ), if we are dealing with C k cross-sections (actually a
submanifold in the usual C k topology due to the closure condition
dF = 0).
Finally, we have two group actions on F: the first one is the Lorentz–Poincaré action L, which is part of the action of the diffeomorphisms of M ; then we have the (here trivial) action of the group G′ of gauge transformations of P when acting on the field manifold F. As is well known, that action is not trivial in the non-Abelian case. Anyway, it always has a nontrivial action on the space A of all gauge potentials for the fields in F. Therefore, we take as our symmetry group G the product L ⊗ G′ of the (allowed) symmetries of M and the symmetries of the principal bundle P .

We must also add the spaces A of potentials and of currents, I, as structures derived from M and S 1 . Both spaces have the same underlying topological structure; they differ in the way the group G′ of gauge transformations acts upon them. We obtain I = Λ1 ⊗ s1 (M ) and A = I = C k (I). Notice that I/G′ = I while A/G′ ≠ A.
Therefore, we can say that the 9-tuple

⟨M, S 1 , P, F, A, G, I, B, ∇ϕ = ι⟩,

where M is Minkowski space and B is a set of boundary conditions for our field equations ∇ϕ = ι, represents the species of mathematical structures of a Maxwellian electromagnetic field, where P , F, and G
are derived from M and S 1 . The Dirac-like equation

∇ϕ = ι

should be seen as an axiomatic restriction on our objects; the boundary conditions B are (i) a set of derived species of structures from M and S 1 , since, as we are dealing with Cauchy conditions, we must specify a local or global spacelike hypersurface C in M , to which (ii) we add sentences of the form ∀x ∈ C, f (x) = f0 (x), where f0 is a set of (fixed) functions and the f are adequate restrictions of the field functions and equations to C.

Consistency of the added axioms


Loosely speaking, it suffices to get a specific example of an electro-
magnetic field and see that it satisfies the preceding formal construc-
tions. That applies to the next examples too.

Hamiltonian mechanics
Hamiltonian mechanics is here seen as the dynamics of the “Hamil-
tonian fluid” (see Abraham and Marsden, 1978; Lanczos, 1977). Our
ground structure for mechanics starts out of basic sets, which are a
2n-dimensional real smooth manifold, and the real symplectic group
Sp(2n, R). Phase spaces in Hamiltonian mechanics are symplectic
manifolds: even-dimensional manifolds like M endowed with a sym-
plectic form, that is, a nondegenerate closed 2-form Ω on M . The
imposition of that form can be seen as the choice of a reduction of
the linear bundle L(M ) to a fixed principal bundle P (M, Sp(2n, R));
however, given one such reduction, it does not automatically follow
that the induced 2-form on M is a closed form.
All other objects are constructed in about the same way as in
the preceding example. However, we must show that we still have
here a Dirac-like equation as the dynamical axiom for the species of
structures of mechanics. Hamilton’s equations are
iX Ω = −dh,
where iX denotes the interior product with respect to the vector-
field X over M , and h is the Hamiltonian function. That equation is
(locally, at least) equivalent to
LX Ω = 0,
or
d(iX Ω) = 0,
where LX is the Lie derivative with respect to X. The condition dϕ = 0, with ϕ = iX Ω, is the degenerate Dirac-like equation for Hamiltonian mechanics. We don't get a full Dirac-like operator ∇ = d + δ
because M , seen as a symplectic manifold, doesn’t have a canonical
metrical structure, so that we cannot define (through the Hodge dual)
a canonical divergence δ dual to d. The group that acts on M with its
symplectic form is the group of canonical transformations; it is a sub-
group of the group of diffeomorphisms of M so that symplectic forms
are mapped onto symplectic forms under a canonical transformation.
We can take as “potential space” the space of all Hamiltonians on
M (which is a rather simple function space), and as “field space” the
space of all “Hamiltonian fields” of the form iX Ω.
Of course, we can directly use a variational principle here for the
dynamics, and start the axiomatics out of the Lagrangian formula-
tion.10
Interpretations are immediate: h is the system’s Hamiltonian,
which (given some simple conditions) can be seen as the system’s
total energy. Invariance of the symplectic form by the Lie derivative
with respect to a Hamiltonian flow is equivalent both to Poincaré’s
integral invariant theorem and to Liouville’s theorem — just as a
flavor of the way our treatment handles well-known concepts and
results in mechanics.
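A minimal numerical illustration of these claims, for the harmonic oscillator h = (q² + p²)/2: the Hamiltonian flow preserves the symplectic form (Liouville's theorem, in one degree of freedom) and conserves h. The initial data and evolution time below are arbitrary choices for the test.

```python
import math

# For h = (q**2 + p**2)/2 with Omega = [[0, 1], [-1, 0]] on R^2, check
# that the flow's Jacobian M satisfies the symplectic condition
# M^T Omega M = Omega, and that h is conserved along the flow.

OMEGA = [[0, 1], [-1, 0]]

def h(q, p):
    return 0.5 * (q * q + p * p)

def flow(q, p, t):
    """Exact time-t solution of dq/dt = p, dp/dt = -q."""
    c, s = math.cos(t), math.sin(t)
    return q * c + p * s, -q * s + p * c

def jacobian(t):
    """Jacobian of the flow map; the flow is linear, so this is exact."""
    c, s = math.cos(t), math.sin(t)
    return [[c, s], [-s, c]]

def mm(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = 0.73
M = jacobian(t)
MT = [[M[j][i] for j in range(2)] for i in range(2)]
pullback = mm(mm(MT, OMEGA), M)           # M^T Omega M
symplectic = all(abs(pullback[i][j] - OMEGA[i][j]) < 1e-12
                 for i in range(2) for j in range(2))

q1, p1 = flow(0.4, -1.2, t)
conserved = abs(h(q1, p1) - h(0.4, -1.2)) < 1e-12
print(symplectic, conserved)  # True True
```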

General relativity
General relativity is a theory of gravitation that interprets this basic
force as originating in the pseudo-Riemannian structure of spacetime.
That is to say, in general relativity, we start from a spacetime mani-
fold (a four-dimensional, real, adequately smooth manifold),11 which
is endowed with a pseudo-Riemannian metric tensor. Gravitational
effects originate in that tensor.

10
There are several delicate points here, as the Lagrangian we start from should
be a hyperregular Lagrangian; see Abraham and Marsden (1978).
11
Exotic manifolds are allowed here.

Given any four-dimensional, noncompact, real, differentiable manifold M , we can endow it with an infinite set of different,
nonequivalent pseudo-Riemannian metric tensors with a Lorentzian
signature (that is, − + ++). That set is uncountable and has the
power of the continuum. (By nonequivalent metric tensors, we mean
the following: form the set of all such metric tensors and factor it
by the group of diffeomorphisms of M ; we get a set that has the
cardinality of the continuum. Each element of the quotient set is a
different gravitational field for M .)
Therefore, neither the underlying structure of M as a topologi-
cal manifold, nor its differentiable structure determines a particular
pseudo-Riemannian metric tensor, that is, a specific gravitational
field. From the strictly geometrical viewpoint, when we choose a
particular metric tensor g of Lorentzian signature, we determine a
g-dependent reduction of the general linear tensor bundle over M to
one of its pseudo-orthogonal bundles. The relation

g → (g-dependent reduction of the linear bundle to a pseudo-orthogonal bundle)

is one-to-one.
We now follow our recipe:
• We take as basic sets a four-dimensional real differentiable mani-
fold of class C k , 1 ≤ k ≤ +∞, and the Lorentz pseudo-orthogonal
group O(3, 1).
• We form the principal linear bundle L(M ) over M ; that structure
is solely derived from M , as it arises from the covariance properties
of the tangent bundle over M . From L(M ), we fix a reduction of
the bundle group L(M ) → P (M, O(3, 1)), where P (M, O(3, 1)) is
the principal fiber bundle over M with the O(3, 1) group as its
fiber.
Those will be our derived sets. We, therefore, inductively define a
Lorentzian metric tensor g on M and get the couple ⟨M, g⟩, which is spacetime.
(Notice that the general relativity spacetime arises quite naturally
out of the interplay between the theory’s “general covariance”
aspects, which appear in L(M ), and — as we will see in the next section — its "gauge-theoretic features", which are clear in
P (M, O(3, 1)).)
• Field spaces are:
— The first is the set (actually a manifold, with a natural differ-
entiable structure) of all pseudo-Riemannian metric tensors,
M ⊂ C k (∨2 T∗ (M )),

where C k (∨2 T∗ (M )) is the bundle of all C k symmetric covariant 2-tensors over M .
— Also out of M and out of adequate associated bundles we get
A, the bundle of all Christoffel connections over M , and F, the
bundle of all Riemann–Christoffel curvature tensors over M .
• We need the space of source fields, I, that includes energy–
momentum tensors, and arise out of adequate associated tensor
bundles over M .
• G is the group of C k -diffeomorphisms of M .
• If K is any of the field spaces above, then K/G is the space of
physically distinct fields.
• Finally, the dynamics are given by Einstein’s equations (there is
also a Dirac-like formulation for those, first proposed by R. Penrose
in 1960 as a neutrino-like equation; see Doria (1975)).
The quotient K/G is how we distinguish concrete, physically diverse fields since, for covariant theories, any two fields related by an element of G "are" the "same" field.
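As a concrete, if lower-dimensional, illustration of the field space A of Christoffel connections determined by a metric: the sketch below computes, by finite differences, the Christoffel symbols of the round metric on the 2-sphere and checks them against their known closed forms. The recipe Γ^k_ij = ½ g^{kl}(∂_i g_lj + ∂_j g_li − ∂_l g_ij) is dimension-independent; the 2-sphere is used only for brevity in place of a 4-dimensional Lorentzian metric.

```python
import math

# Christoffel symbols from a metric, Gamma^k_ij =
# (1/2) g^{kl} (d_i g_lj + d_j g_li - d_l g_ij), computed numerically
# for the round metric on the 2-sphere, coordinates x = (theta, phi).

H = 1e-5  # finite-difference step

def g(x):
    """Metric components: diag(1, sin(theta)**2)."""
    theta, phi = x
    return [[1.0, 0.0], [0.0, math.sin(theta) ** 2]]

def g_inv(x):
    theta, phi = x
    return [[1.0, 0.0], [0.0, 1.0 / math.sin(theta) ** 2]]

def dg(i, l, j, x):
    """Central difference d_i g_{lj} at x."""
    xp, xm = list(x), list(x)
    xp[i] += H
    xm[i] -= H
    return (g(xp)[l][j] - g(xm)[l][j]) / (2 * H)

def christoffel(k, i, j, x):
    ginv = g_inv(x)
    return 0.5 * sum(ginv[k][l] * (dg(i, l, j, x) + dg(j, l, i, x)
                                   - dg(l, i, j, x))
                     for l in range(2))

x0 = (0.8, 0.3)
th = x0[0]
print(christoffel(0, 1, 1, x0))  # ~ -sin(th)*cos(th)
print(christoffel(1, 0, 1, x0))  # ~ cos(th)/sin(th)
```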

Classical gauge fields


The mathematics of classical gauge fields can be found in Utiyama
(1956). We follow here the preceding examples, and, in particular,
the treatment of general relativity:
• The basic sets are a spacetime ⟨M, g⟩ and a finite-dimensional, semi-simple, compact Lie group G.
• The derived set is a fixed principal bundle P (M, G) over M with
G as the fiber.
• The group of gauge transformations G is the subgroup of all diffeomorphisms of P (M, G) that reduce to a diffeomorphism on M and to the group action on the fiber.
• If L(G) is the Lie algebra of G, we get:
— Connection-form space, or the space of potentials, noted A, is the space of all C k -cross-sections of the bundle of L(G)-valued 1-forms on M .
— Curvature space, or the space of fields F, is the space of all C k -cross-sections of L(G)-valued 2-forms on M , such that F ∈ F is the field with potential A ∈ A.
— Source space I coincides with A, but is acted upon in a different
way by the group G of gauge transformations. (Currents in I are
tensorial 1-forms, while gauge-potentials in A are transformed
via an inhomogeneous transformation.)
• The space of physically different fields is K/G, where K is any of
the above field spaces.
• Dynamics are given by the usual gauge-field equations, which are
a nonlinear version of the electromagnetic field equations. There is
also a Dirac-like equation for gauge fields (see Doria et al., 1986).
Or, again, we can start from a variational principle.

To sum it up with the help of the schema presented at the beginning of the section, we can say that the structure of a physical theory is an ordered pair ⟨F, G⟩, where F is an infinite-dimensional space
of fields, and G is an infinite-dimensional group that acts upon field
space. To get the Suppes predicate, we must add the information
about the dynamical equations D(φ) = 0, φ ∈ F, for the fields φ.
Notice that general relativity can be seen as a kind of degenerate
gauge field theory, more precisely a gauge theory of the O(3, 1) group.

Quantum theory of the electron


The Dirac electron theory (and the general theory of particles
with any spin) can be easily formalized according to the preceding
schemata. One uses as geometrical background the setting for spe-
cial relativity; dynamics is given either by Dirac’s equation or Weyl’s
equation, for the case of zero-mass particles. Higher-spin fields are handled with the help of either the Bargmann–Wigner equations or their
algebraic counterpart (see Doria, 1977). The Schrödinger equation is
obtained from the Dirac set out of a — loosely speaking — “stan-
dard” limiting procedure, which can be formally represented by the
addition of new axioms to the corresponding Suppes predicate.

General field theory


Sometimes one may wish to discuss field theory in a very general,
motion-equation independent, way. We then use as geometrical back-
ground the construction of Minkowski space and take as dynami-
cal axioms the field-theoretic Euler–Lagrange equations, or, as we’ve
said, we can take the variational principle as a formal algorithm to
derive the dynamics of the system.

Summing it up
We will briefly mention a few results of our own (with da Costa) on
the axiomatics of physics.

Proposition 9.1. Classical mechanics, Schrödinger's quantum mechanics, electromagnetism, general relativity, and gauge field theory can all be axiomatized within Zermelo–Fraenkel set theory.

Therefore, all known results in theoretical physics become ZF theorems, as long as they can be given rigorous formulations, let us
say, by the usual mathematical standards and without the help of
“very large” objects (that last condition is given within quotation
marks to mean that we exclude large cardinal extensions of ZF).

Proposition 9.2. There are "physically meaningful" undecidable sentences in any consistent ZF axiomatization of physics.

By "physically meaningful", we mean sentences that describe actual situations in physics, e.g., for a particular Lagrangian L, the
formalized version of the sentence “L describes a harmonic oscillator”
is formally undecidable within ZF, or, as in our first example, “X is
a chaotic vectorfield." We can say that any two ZF axiomatizations of physics as sketched have the same undecidable sentences.
This means that, for a wide variety of constructions, choosing a
particular axiomatics for physics is just a matter of taste. It won’t
affect the results we can derive (or that we can’t derive) from our
axiomatic framework. For more details, see da Costa and Doria
(2007).

10. Beyond physics


We can extend the preceding techniques to several scientific domains.
For example, the bulk of economics, as presented, say, in Samuelson’s
Foundations of Economic Analysis (Samuelson, 1967), or some spe-
cific results, such as the Nash equilibrium theorem (da Costa and
Doria, 2005), easily fit within our construction — we can find in a
straightforward way a Suppes predicate for results in mathematical
economics (da Costa and Doria, 1991b). The same goes with math-
ematical biology (Lotka, 1956).
We have proceeded from start with a specific goal in mind: we
wished to follow Hilbert’s program in his 6th Problem, that is, we
proposed an axiomatization of physics that allows us to explore many
interesting mathematical consequences of those theories.
We now wish to obtain specific examples of Gödel sentences —
undecidable sentences — within the axiomatic versions of those the-
ories, and, in a more general setting, we wish to see the effects and consequences of metamathematical results and techniques when applied
to those theories, or to their axiomatic versions.

11. The incompleteness of analysis


The first explicit constructions of actual unsolvable problems in anal-
ysis were only made in the 1960s by Scarpellini (in 1963) and, a
few years later, by Adler and Richardson (see da Costa and Doria,
2007). Richardson’s results are by far the most interesting, since they
amount to the construction of a functor from the theory of formal
systems into classical elementary analysis. His results were frequently
quoted after they were published; however, their applications were until recently restricted to computer science, to set computational
bounds for techniques in algebraic computation.
From here onward, we suppose that our formal constructions are
made within a first-order axiomatic theory T , which can be taken to include Zermelo–Fraenkel (ZF) set theory plus at least some portion of the Axiom of Choice.
In our version, the Richardson functor starts out of a polynomial
p in q indeterminates over the integers Z. Richardson tells us how to
explicitly and algorithmically construct two expressions out of that p:
• First, an expression for a q-variable function f (p), which includes
polynomial terms, sine functions, and the number π. f (p) satisfies
the following conditions:
1. The Diophantine equation p = 0 has no solutions over the inte-
gers if and only if, for all values of its variables, f (p) > 1.
2. p = 0 has a solution over the integers if and only if f (p) dips below 1 and attains zero values.
• Second, an expression for a 1-variable function, which can be explicitly constructed with elementary functions and which behaves similarly according to the existence of roots of p = 0.
So, the idea in Richardson’s transform is that there is a kind of
strip of finite width, which is never crossed by the values of f (p) if the
Diophantine equation p = 0 has no solutions. If it does have solutions,
that strip will be crossed at points depending on the Diophantine
roots. If we now add an expression for the absolute value function
| . . . | to our language, we can obtain a new function c(f ) with the
following behavior:
• c(f ) = 0 if and only if p = 0 has no solutions as a Diophantine
equation.
• c(f ) > 0 somewhere if and only if p = 0 does have integer solutions.
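The absolute-value trick behind c(f) can be made concrete: for a real y, |y − 1| − (y − 1) vanishes exactly when y ≥ 1 and equals 2(1 − y) > 0 when y < 1. Applied pointwise to f(p), this is what turns "f dips below 1 somewhere" into "c(f) > 0 somewhere". A minimal sketch:

```python
# The absolute-value gadget: |y - 1| - (y - 1) is 0 for y >= 1 and
# strictly positive, namely 2*(1 - y), for y < 1.

def c(y):
    return abs(y - 1) - (y - 1)

print(c(3.0))   # 0.0: the value stays above the strip
print(c(1.0))   # 0.0: boundary case
print(c(0.25))  # 1.5 = 2*(1 - 0.25): the strip was crossed
```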
Richardson's interest seems to have been restricted to the construction of a few unsolvable problems in analysis. However, we realized that his two maps amounted to a full-blooded functor from
axiomatizable systems into classical analysis. In the case of our axiomatic system T , we can represent its proofs by a Turing machine
MT that halts whenever one of its theorems ξ (or a decidable sen-
tence) is input; if we input an undecidable sentence ζ, MT (ζ) will
never halt over it.
If MT halts over ξ, then we can explicitly obtain a Diophantine
equation p(mξ , x1 , . . .) = 0, which has solutions (mξ is a Gödel num-
ber for ξ); if MT doesn’t halt over ζ, then p(mζ , . . .) = 0 has no
integer solutions. Therefore, with the help of Richardson’s maps, we
can code within the language of classical analysis the whole deductive
machinery of an axiomatizable formal system.
The whole thing turns out to be wide-ranging. Out of a sugges-
tion by Suppes we proved a general undecidability and incomplete-
ness theorem — a Rice-like theorem — within classical analysis (see
Proposition 15.1). Let P be an arbitrary nontrivial12 property in the
language of analysis. The blueprint for our undecidable sentences out
of those constructions is, informally:

Proposition 11.1. There is a term-expression ζ in the language of analysis such that neither T ⊢ P (ζ) nor T ⊢ ¬P (ζ).

There is a corresponding undecidability result, and the associated decision problems can be made as high as one wishes in the arith-
metical hierarchy, and even beyond (da Costa and Doria, 1994a). See
below.

12. Generalized incompleteness


This is a technical section. Notation is standard. We follow here da
Costa and Doria (2007) and some previous papers where these ques-
tions have been introduced and discussed such as da Costa and Doria
(1991a, 2005). We deal (among other objects) with algorithmic func-
tions here. These are given by their programs coded in Gödel numbers
e (see Rogers, 1967). We will sometimes use Turing machines (noted

12
A property that isn’t satisfied by either all or none of the objects in its domain.
by sans-serif letters with the Gödel number as index, Me ) or partial recursive functions, noted {e}. Peano Arithmetic is noted PA. We
require Russell’s ι symbol: ιx P (x) is, roughly, the x such that P (x).
The standard interpretation for PA is: the variables x, y, . . . range
over the natural numbers, and 0 and 1 are seen as, respectively, zero
and one. PA is strong enough to formally include Turing machine
theory (see da Costa and Doria, 2005). Rigorously, for PA, we have:

Definition 12.1. A Turing machine of Gödel number e operating on x with output y, {e}(x) = y, is representable in PA if there is a formula Fe (x, y) in the language of our arithmetic theory so that:

(1) PA ⊢ Fe (x, y) ∧ Fe (x, z) → y = z, and
(2) For natural numbers a, b, {e}(a) = b if and only if PA ⊢ Fe (a, b).

Proposition 12.2. Every Turing machine is representable in Peano Arithmetic. Moreover, there is an effective procedure that allows us to obtain Fe from the Gödel number e.

A theory is arithmetically sound if it has a model with standard arithmetic for its arithmetical segment.

A simple example of generalized incompleteness


The argument below is valid for all theories that contain enough
arithmetic, have a model where arithmetic is standard, and have a
recursively enumerable set of theorems.
Suppose that our theory S has Russell's description symbol ι. Let P be a predicate symbol so that, for closed terms ξ, ζ such that S ⊢ ξ ≠ ζ, we have S ⊢ P (ξ) and S ⊢ ¬P (ζ) (we call such P nontrivial predicates). Then, for the term:

η = ιx [(x = ξ ∧ α) ∨ (x = ζ ∧ ¬α)],

where α is an undecidable sentence in S:

Proposition 12.3. S ⊬ P (η) and S ⊬ ¬P (η).


This settles our claim. From now on, we will consider theories S, T ,
like the one characterized above.
Our main tool here will be an explicit expression for the Halting
Function, that is, the function that settles the halting problem (see
Rogers, 1967). We have shown elsewhere that it can be constructed
within the language of classical analysis.

Proposition 12.4. If {e}(a) = b, for natural numbers a, b, then we can algorithmically construct a polynomial pe over the natural numbers so that {e}(a) = b ↔ ∃x1 , x2 , . . . , xk ∈ ω pe (a, b, x1 , x2 , . . . , xk ) = 0.

Proposition 12.5. a ∈ Re , where Re is a recursively enumerable set, if and only if there are e and p so that ∃x1 , x2 , . . . , xk ∈ ω (pe (a, x1 , x2 , . . . , xk ) = 0).
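A toy instance of the proposition: the set of composite numbers is Diophantine, via p(m, x, y) = (x + 2)(y + 2) − m. The bounded search below is only a demonstration; in the general case the search for a root is unbounded, which is exactly why membership in a Diophantine set is merely semi-decidable.

```python
# Composite numbers as a Diophantine set: m is composite iff there are
# x, y >= 0 with p(m, x, y) = 0.  The bound makes this toy search total;
# the genuine decision problem has no such bound.

def p(m, x, y):
    return (x + 2) * (y + 2) - m

def is_composite(m, bound):
    return any(p(m, x, y) == 0
               for x in range(bound) for y in range(bound))

print([m for m in range(2, 20) if is_composite(m, 20)])
# [4, 6, 8, 9, 10, 12, 14, 15, 16, 18]
```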

The Halting Function

Remark 12.6. Let Mm (a) ↓ mean: "Turing machine of Gödel number m stops over input a and gives some output." Similarly, Mm (a) ↑ means: "Turing machine of Gödel number m enters an infinite loop over input a." Then we can define the halting function θ:

• θ(m, a) = 1 if and only if Mm (a) ↓.
• θ(m, a) = 0 if and only if Mm (a) ↑.

θ(m, a) is the halting function for Mm over input a.

θ isn't algorithmic, of course (see Rogers, 1967), that is, there is no Turing machine that computes it.
Then, if σ is the sign function, σ(±x) = ±1 and σ(0) = 0:

Expressions for the Halting Function

Proposition 12.7 (The Halting Function). The halting function θ(n, q) is explicitly given by:

θ(n, q) = σ(Gn,q ),

Gn,q = ∫_{−∞}^{+∞} Cn,q (x) e^{−x²} dx,

Cn,q (x) = |Fn,q (x) − 1| − (Fn,q (x) − 1),

Fn,q (x) = κP (pn,q ).

Here pn,q is the two-parameter universal Diophantine polynomial and κP an adequate Richardson transform.
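The formula of Proposition 12.7 can be exercised numerically if one substitutes hand-made stand-ins for the (genuinely uncomputable) Richardson transform F of a universal Diophantine polynomial — one function that never dips below 1, playing the role of "no integer roots", and one that does:

```python
import math

# theta = sigma(G): evaluate the formula with two illustrative stand-ins
# for F = kappa_P(p).  The integral is approximated by a Riemann sum.

def sigma(y):
    """Sign function: -1, 0, or 1."""
    return (y > 0) - (y < 0)

def C(F, x):
    return abs(F(x) - 1) - (F(x) - 1)

def G(F, lo=-10.0, hi=10.0, n=20000):
    """Riemann-sum approximation of the integral of C(F, x)*exp(-x**2)."""
    h = (hi - lo) / n
    return sum(C(F, lo + (i + 0.5) * h)
               * math.exp(-(lo + (i + 0.5) * h) ** 2)
               for i in range(n)) * h

no_root = lambda x: 2 + math.sin(x)   # stays >= 1 everywhere
root = lambda x: x * x                # dips below 1 near 0

print(sigma(G(no_root)), sigma(G(root)))  # 0 1
```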

Undecidability and incompleteness

Lemma 12.8. There is a Diophantine set D so that

m ∈ D ↔ ∃x1 , . . . , xn ∈ ω p(m, x1 , . . . , xn ) = 0,

p a Diophantine polynomial, and D is recursively enumerable but not recursive.

Corollary 12.9. For an arbitrary m ∈ ω, there is no general decision procedure to check whether p(m, x1 , . . .) = 0 has a solution in the positive integers.

Main undecidability and incompleteness result


Therefore, given such a p, and F = κP (p), where κP is an adequate
Richardson transform (see da Costa and Doria, 1991), we have the
following corollary.

Corollary 12.10. For an arbitrary m ∈ ω, there is no general decision procedure to check whether, for F and G adequate real-defined and real-valued functions:

(1) There are real numbers x1 , . . . , xn such that F (m, x1 , . . . , xn ) = 0.
(2) There is a real number x so that G(m, x) < 1.
(3) Whether we have ∀x ∈ R θ(m, x) = 0 or ∀x ∈ R θ(m, x) = 1
over the reals.
(4) Whether for an arbitrary f (m, x) we have f (m, x) ≡ θ(m, x).
Let B be a sufficiently large algebra of functions and let P (x) be a nontrivial predicate. If ξ is any word in that language, we write |ξ| for its complexity, as measured by the number of letters from ZFC's alphabet in ξ. We define the complexity of a proof CZFC (ξ) of ξ in the language of ZFC to be the minimum length that a deduction of ξ from the ZFC axioms can have, as measured by the total number of letters in the expressions that belong to the proof.

Proposition 12.11. If ZFC is arithmetically sound, then:

(1) There is an h ∈ B so that neither ZFC ⊢ ¬P (h) nor ZFC ⊢ P (h), but N |= P (h), where N makes ZFC arithmetically sound.
(2) There is a denumerable set of functions hm (x) ∈ B, m ∈ ω, such that there is no general decision procedure to ascertain, for an arbitrary m, whether P (hm ) or ¬P (hm ) is provable in ZFC.
(3) Given the set K = {m : ZFC ⊢ P (hm )}, and given an arbitrary total recursive function g : ω → ω, there are infinitely many values of m so that CZFC (P (hm )) > g(|P (hm )|).

Proof. Let θ be as above. Let f0 , g0 satisfy our conditions on P , that is, ZFC ⊢ P (f0 ) and ZFC ⊢ ¬P (g0 ). Then define:

h(m, x) = θ(m, x)f0 + (1 − θ(m, x))g0 .


This settles (2). Now let us specify θ so that the corresponding Dio-
phantine equation p = 0 is never solvable in the standard model for
arithmetic, while that fact cannot be proved in ZFC. We then form,
for such an indicator function,
h = θf0 + (1 − θ)g0 .

This settles (1). Finally, for (3), we notice that as K is recursively enumerable but not recursive, it satisfies the conditions in the Gödel–
Ehrenfeucht–Mycielski theorem about the length of proofs.
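The interpolation trick used in the proof can be isolated in a few lines: with a 0-or-1 valued indicator θ, h = θf0 + (1 − θ)g0 coincides with f0 wherever θ = 1 and with g0 wherever θ = 0. In the proof θ is the (uncomputable) halting function; the f0 , g0 and indicator values below are arbitrary stand-ins chosen only to exhibit the mechanism.

```python
import math

# Blending two functions through a 0/1 indicator: h is literally f0
# where theta = 1 and literally g0 where theta = 0.  f0 and g0 are
# arbitrary stand-ins for functions with P provable and refutable.

f0 = lambda x: math.sin(x)   # stands for a function with ZFC |- P(f0)
g0 = lambda x: math.cos(x)   # stands for a function with ZFC |- not-P(g0)

def h(theta, x):
    return theta * f0(x) + (1 - theta) * g0(x)

x = 0.37
print(h(1, x) == f0(x), h(0, x) == g0(x))  # True True
```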

13. Higher degrees


Here we will quote directly from da Costa and Doria (2007) and give full details. Our main result in this section is:
Proposition 13.1. If T is arithmetically sound, then we can explicitly and algorithmically construct in the language LT of T an expression for the characteristic function of a subset of ω of degree 0′.

Remark 13.2. We can obtain an expression in a recursive way, but such an expression isn't computable — every effort to compute it falls into an infinite loop.
That expression depends on recursive functions defined on ω and
on elementary real-defined and real-valued functions plus the abso-
lute value function, a quotient and an integration, or perhaps an
infinite sum, as in the case of the β and θ functions associated to the
halting problem.

Proof. We could simply use Theorem 9-II in Rogers (1967, p. 132). However, for the sake of clarity, we give a detailed albeit informal
proof. Actually, the degree of the set described by the characteristic
function whose expression we are going to obtain will depend on the
fixed oracle set A; so, our construction is a more general one.
Let us now review a few concepts. Let A ⊂ ω be a fixed infinite
subset of the integers.

Definition 13.3. The jump of A is noted A′; A′ = {x : φ^A_x (x) ↓}, where φ^A_x is the A-partial recursive algorithm of index x.

In order to make things self-contained, we review here some ideas about A-partial recursive functions.

From Turing machines to oracle Turing machines


(1) An oracle Turing machine φ^A_x with oracle A can be visualized as
a two-tape machine where tape 1 is the usual computation tape,
while tape 2 contains a listing of A. When the machine enters
the oracle state s0 , it searches tape 2 for an answer to a question
of the form “does w ∈ A?” Only finitely many such questions
are asked during a converging computation; we can separate the
positive and negative answers into two disjoint finite sets Du (A)
and Dv∗ (A) with (respectively) the positive and negative answers
for those questions; notice that Du ⊂ A, while Dv∗ ⊂ ω − A. We
can view those sets as ordered k- and k∗-tuples; u and v are recursive codings for them (see Rogers, 1967). The Du (A) and Dv∗ (A) sets can be coded as follows: only finitely many elements of A are queried during an actual converging computation with input y; if k is the highest integer queried during one such computation, and if dA ⊂ cA is an initial segment of the characteristic function cA , we take as a stand-in for D and D∗ the initial segment dA of length l(dA ) = k + 1.
We can effectively list all oracle machines with respect to a fixed
A, so that, given a particular machine, we can compute its index
(or Gödel number) x, and given x we can recover the correspond-
ing machine.
(2) Given an A-partial recursive function φ^A_x, we form the oracle Turing machine that computes it. We then do the computation φ^A_x(y) = z that outputs z. The initial segment dy,A is obtained during the computation.
(3) The oracle machine is equivalent to an ordinary two-tape Turing
machine that takes as input y, dy,A ; y is written on tape 1 while
dy,A is written on tape 2. When this new machine enters state s0
it proceeds as the oracle machine. (For an ordinary computation,
no converging computation enters s0 , and dy,A is empty.)
(4) The two-tape Turing machine can be made equivalent to a one-
tape machine, where some adequate coding places on the single
tape all the information about y, dy,A . When this third machine
enters s0 it scans dy,A .
(5) We can finally use the standard map τ that codes n-ples one-to-
one onto ω and add to the preceding machine a Turing machine
that decodes the single natural number τ (y, dy,A ) into its com-
ponents before proceeding to the computation.
Let w be the index for that last machine; we note the machine φw. If x is the index for φ^A_x, we write w = ρ(x), where ρ is the effective one-to-one procedure described above that maps indices for oracle machines into indices for Turing machines. Therefore,

φ^A_x(y) = φρ(x)(y, dy,A).
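The reduction in steps (1)–(5) can be made concrete with a toy computation. The sketch below is our own illustration (not the chapter's construction): an "oracle" computation that only queries membership below some bound is replayed as an ordinary computation receiving a finite initial segment d of A's characteristic function in place of the oracle.

```python
# Sketch: relativization via a finite initial segment of the
# characteristic function, as in steps (1)-(5) above.

def phi_with_oracle(y, query):
    """A toy A-partial recursive function: counts members of A below y,
    asking the oracle 'does w belong to A?' for each w < y."""
    return sum(1 for w in range(y) if query(w))

def phi_ordinary(y, d):
    """The same computation with the oracle relativized away: d is the
    initial segment d_{y,A} of A's characteristic function, long enough
    to answer every query the oracle computation would make."""
    return sum(1 for w in range(y) if d[w] == 1)

A = {0, 2, 3, 7, 11}                        # a stand-in for a fixed A ⊆ ω
y = 10
d = [1 if w in A else 0 for w in range(y)]  # initial segment, length k+1

# Only finitely many questions are asked, so both computations agree:
assert phi_with_oracle(y, lambda w: w in A) == phi_ordinary(y, d) == 4
```

The point of the construction is exactly this: since a converging computation queries A only finitely often, the pair (y, dy,A) carries all the oracle information the machine ever uses.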

Let us now write down a universal polynomial p(n, q, x1, . . . , xn). We can define the jump of A as follows:

A′ = {ρ(z) : ∃x1, . . . , xn ∈ ω p(ρ(z), z, dz,A, x1, . . . , xn) = 0}.

With the help of the Richardson map described above, we can now form a function modeled after the θ function that settles the Halting Problem; it is the desired characteristic function:

c∅′(x) = θ(ρ(x), x, dx,∅).

(Actually we have proved more; we have obtained

cA′(x) = θ(ρ(x), x, dx,A),

with reference to an arbitrary A ⊂ ω.)
Finally, we write θ^(2)(x) = c∅′′(x).
We recall the following definition (see Rogers, 1967).

Definition 13.4. The complete Turing degrees 0, 0′, 0′′, . . . , 0^(p), . . . , p < ω, are the Turing equivalence classes generated by the sets ∅, ∅′, ∅′′, . . . , ∅^(p), . . . .

Now let 0^(n) be the nth complete Turing degree in the arithmetical hierarchy. Let τ(n, q) = m be the pairing function in recursive function theory (see Rogers, 1967). For θ(m) = θ(τ(n, q)), we have
the following corollary.

Corollary 13.5 (Complete Degrees). If T is arithmetically sound, then for all p ∈ ω the expressions θ^(p)(m) explicitly constructed below represent characteristic functions in the complete degrees 0^(p).

Proof. From Proposition 13.1,

θ^(0)(m) = c∅(m) = 0,
θ^(1)(m) = c∅′(m) = θ(m),
θ^(n)(m) = c∅^(n)(m),

for cA as in Proposition 13.1.



Incompleteness theorems
We suppose, as already stated, that PA ⊂ T means that there is an
interpretation of PA in T .
The next results will be needed when we consider our main examples. We recall that "∸" (truncated subtraction) is a primitive recursive operation on ω:

• For a ≥ b, a ∸ b = a − b.
• For a < b, a ∸ b = 0.
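For concreteness, a minimal one-function rendering of this operation (our own illustration):

```python
# Truncated ("dot") subtraction on ω: the result never leaves the
# natural numbers, which is why the operation is primitive recursive.
def monus(a: int, b: int) -> int:
    return a - b if a >= b else 0

assert monus(7, 3) == 4   # a ≥ b: ordinary difference
assert monus(3, 7) == 0   # a < b: truncated to zero
```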
In the next result, Z is the set of integers. Let N be a model, N |= T, that makes T arithmetically sound.

Proposition 13.6. If T is arithmetically sound, then we can algorithmically construct a polynomial expression

q(x1, . . . , xn)

over Z such that

N |= ∀x1, . . . , xn ∈ ω q(x1, . . . , xn) > 0,

but

T ⊬ ∀x1, . . . , xn ∈ ω q(x1, . . . , xn) > 0

and

T ⊬ ∃x1, . . . , xn ∈ ω q(x1, . . . , xn) = 0.

Proof. Let ξ ∈ LT be an undecidable sentence obtained for T with the help of Gödel's diagonalization; let nξ be its Gödel number and let mT be the Gödel coding of proof techniques in T (of the Turing machine that enumerates all the theorems of T). For a universal polynomial p(m, q, x1, . . . , xn) we have:

q(x1, . . . , xn) = (p(mT, nξ, x1, . . . , xn))².

Corollary 13.7. If PA is consistent then we can find within it a polynomial expression q as in Proposition 13.6.

Now a weaker version of Proposition 13.6 is as follows.

Proposition 13.8. If T is arithmetically sound, there is a polynomial expression p(x1, . . . , xn) over Z such that N |= ∀x1, . . . , xn ∈ ω p(x1, . . . , xn) > 0, while

T ⊬ ∀x1, . . . , xn ∈ ω p(x1, . . . , xn) > 0

and

T ⊬ ∃x1, . . . , xn ∈ ω p(x1, . . . , xn) = 0.

Proof. If p(m, x1, . . . , xn), m = τ(q, r), is a universal polynomial, with τ being Cantor's pairing function (see Rogers, 1967), then {m : ∃x1 . . . ∈ ω p(m, x1, . . .) = 0} is recursively enumerable but not recursive. Therefore there must be an m0 such that

∀x1 . . . ∈ ω (p(m0, x1, . . .))² > 0,

while T does not prove that fact: if T proved it for every such m0, the complement of the above set would be recursively enumerable, and the set itself recursive.
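As a concrete aside, the two pieces of standard machinery this proof leans on, Cantor's pairing function τ and the dovetailed root search that makes the root set recursively enumerable, can be written out directly. The sketch below is ours; `toy_p` is a toy polynomial standing in for the universal one.

```python
from itertools import count

# Cantor's pairing function τ : ω × ω → ω, one-to-one and onto, with inverse.
def tau(q: int, r: int) -> int:
    return (q + r) * (q + r + 1) // 2 + q

def tau_inv(m: int):
    s = 0
    while (s + 1) * (s + 2) // 2 <= m:
        s += 1
    q = m - s * (s + 1) // 2
    return q, s - q

assert tau_inv(tau(12, 34)) == (12, 34)

# Semideciding ∃x1, x2 p(m, x1, x2) = 0 by dovetailing: enumerate all
# pairs (x1, x2) via τ⁻¹.  The unbounded search halts iff a root exists,
# which is exactly why the root set is r.e. but need not be recursive.
def has_root(p, m, max_steps=None):
    steps = count() if max_steps is None else range(max_steps)
    for k in steps:
        x1, x2 = tau_inv(k)
        if p(m, x1, x2) == 0:
            return True
    return None  # "don't know": the real search would simply go on

toy_p = lambda m, x1, x2: (x1 + x2 + 1) - m     # hypothetical stand-in
assert has_root(toy_p, 5, max_steps=1000) is True   # x1 + x2 = 4 has roots
assert has_root(toy_p, 0, max_steps=1000) is None   # no root; search cut off
```

For the genuine universal polynomial no bound on `max_steps` can be computed in advance, and that is the content of the proposition.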

Proposition 13.9. If PA is consistent and N |= PA is standard, and if P is nontrivial, then there is a term-expression ζ ∈ LPA such that N |= P(ζ) while PA ⊬ P(ζ) and PA ⊬ ¬P(ζ).

Proof. Put ζ = ξ + r(x1, . . . , xn)ν, for r = 1 ∸ (q + 1), with q as in Proposition 13.6 (or as p in Proposition 13.8).

Remark 13.10. Therefore, every nontrivial arithmetical P in theo-


ries from formalized arithmetic upward turns out to be undecidable.
We can generalize that result to encompass other theories T that
include arithmetic; see below.

14. θ functions and the arithmetical hierarchy


Definition 14.1. The sentences ξ, ζ ∈ LT are demonstrably equiva-
lent if and only if T  ξ ↔ ζ.

Definition 14.2. The sentence ξ ∈ LT is arithmetically expressible


if and only if there is an arithmetic sentence ζ such that T  ξ ↔ ζ.

As before, N |= T is a model that makes T arithmetically sound.



Proposition 14.3. If T is arithmetically sound, then for every m ∈ ω there is a sentence ξ ∈ LT such that N |= ξ, while for no k ≤ m is there a Σk sentence in PA demonstrably equivalent to ξ.

Proof. The usual proof for PA is given in Rogers (1967, p. 321). However, we give here a slightly modified argument that imitates Proposition 13.8. First notice that

∅^(m+1) = {x : φ^{∅^(m)}_x(x) ↓}

is recursively enumerable in ∅^(m) but not recursive in ∅^(m). Therefore the complement of ∅^(m+1) isn't recursively enumerable in ∅^(m), though it contains a proper subset that is ∅^(m)-recursively enumerable. Let us take a closer look at those sets.
We first need a lemma: form the theory T^(m+1) whose axioms are those of T plus a denumerably infinite set of statements of the form "n0 ∈ ∅^(m)," "n1 ∈ ∅^(m)," . . . , that describe ∅^(m). Of course, this theory doesn't have a recursively enumerable set of theorems. Then,
Lemma 14.4. If T^(m+1) is arithmetically sound, then φ^{∅^(m)}_z(z) ↓ if and only if

T^(m+1) ⊢ ∃x1, . . . , xn ∈ ω p(ρ(z), z, d_{z,∅^(m)}, x1, . . . , xn) = 0.

Proof. Similar to the proof in the nonrelativized case; see Machtey


and Young (1979, p. 126 ff).
Therefore, we have that the oracle machine computation φ^{∅^(m)}_z(z) ↓ if and only if

T^(m+1) ⊢ ∃x1, . . . , xn ∈ ω p(ρ(z), z, d_{z,∅^(m)}, x1, . . . , xn) = 0.

However, since the complement of ∅^(m+1) isn't recursively enumerable in ∅^(m), there will be an index m0(∅^(m)) = ⟨ρ(z), z, d_{z,∅^(m)}⟩ such that

N |= ∀x1, . . . , xn [p(m0, x1, . . . , xn)]² > 0,


while it can neither be proved nor disproved within T^(m+1). It is therefore demonstrably equivalent to a Πm+1 assertion.
Now let q(m0(∅^(m)), x1, . . .) = (p(m0(∅^(m)), x1, . . .))² be as in Proposition 14.3. Then, we have the following corollary.

Corollary 14.5. If T is arithmetically sound, then, for

β^(m+1) = σ(G(m0(∅^(m)))),

G(m0(∅^(m))) = ∫_{−∞}^{+∞} [C(m0(∅^(m)), x) e^{−x²} / (1 + C(m0(∅^(m)), x))] dx,

C(m0(∅^(m)), x) = λq(m0(∅^(m)), x1, . . . , xr),

N |= β^(m+1) = 0, but for all n ≤ m + 1, ¬{T^(n) ⊢ β^(m+1) = 0} and ¬{T^(n) ⊢ ¬(β^(m+1) = 0)}.

We have used here a variant of the construction of θ and β which


first appeared in da Costa and Doria (1991a). Then, we have the
following corollary.

Corollary 14.6. If T is arithmetically sound and if LT contains expressions for the θ^(m) functions as given in Proposition B.3, then for any nontrivial arithmetical predicate P there is a ζ ∈ LT such that the assertion P(ζ) is T-demonstrably equivalent to, and T-arithmetically expressible as, a Πm+1 assertion, but neither equivalent to nor expressible as any assertion of lower rank in the arithmetic hierarchy.

Proof. As in the proof of Proposition 13.9, we write:

ζ = ξ + [1 ∸ (p(m0(∅^(m)), x1, . . . , xn) + 1)]ν,

where p(. . .) is as in Proposition 14.3.
An extension of the preceding result is as follows.

Corollary 14.7. If T is arithmetically sound then, for any nontrivial P and any m ∈ ω, there is a ζ ∈ LT such that P(ζ) is arithmetically expressible, N |= P(ζ), but P(ζ) is demonstrably equivalent only to a Πm+1 assertion and not to a lower one in the hierarchy.

Proof. Put

ζ = ξ + β^(m+1)ν,

where one uses Corollary 14.5.

Beyond arithmetic

Definition 14.8.

∅^(ω) = {⟨x, y⟩ : x ∈ ∅^(y)},

for x, y ∈ ω.

Definition 14.9.

θ^(ω)(m) = c∅^(ω)(m),

where c∅^(ω)(m) is obtained as in Proposition 13.1.

Definition 14.10.

∅^(ω+1) = (∅^(ω))′.

Corollary 14.11. 0^(ω+1) is the degree of ∅^(ω+1).

Corollary 14.12. θ^(ω+1)(m) is the characteristic function of a nonarithmetic subset of ω of degree 0^(ω+1).

Corollary 14.13. If T is arithmetically sound, then, for

β^(ω+1) = σ(G(m0(∅^(ω)))),

G(m0(∅^(ω))) = ∫_{−∞}^{+∞} [C(m0(∅^(ω)), x) e^{−x²} / (1 + C(m0(∅^(ω)), x))] dx,

C(m0(∅^(ω)), x) = λq(m0(∅^(ω)), x1, . . . , xr),

N |= β^(ω+1) = 0 but T ⊬ β^(ω+1) = 0 and T ⊬ ¬(β^(ω+1) = 0).

Proposition 14.14. If T is arithmetically sound then, given any nontrivial predicate P:

(1) there is a family of terms ζm ∈ LT such that there is no general algorithm to check, for every m ∈ ω, whether or not P(ζm);
(2) there is a term ζ ∈ LT such that N |= P(ζ) while T ⊬ P(ζ) and T ⊬ ¬P(ζ);
(3) neither the ζm nor ζ are arithmetically expressible.

Proof. We take:

(1) ζm = xθ^(ω+1)(m) + (1 − θ^(ω+1)(m))y.
(2) ζ = x + yβ^(ω+1).
(3) Neither θ^(ω+1)(m) nor β^(ω+1) are arithmetically expressible.

Remark 14.15. We have thus produced, out of every nontrivial predicate in T, intractable problems that cannot be reduced to arithmetic
problems. Actually, there are infinitely many such problems for every
ordinal α, as we ascend the set of infinite ordinals in T . Also, the gen-
eral nonarithmetic undecidable statement P (ζ) has been obtained
without the help of any kind of forcing construction.

For the way one proceeds with those extensions, we refer the
reader to references on the hyperarithmetical hierarchy (see Ash and
Knight, 2000; Rogers, 1967).

Corollary 14.16. There is an explicit expression for a function β such that T ⊢ β = 0 ∨ β = 1, while neither T ⊢ β = 0 nor T ⊢ β = 1.

(The proof follows from the fact that the θ function has a recursively enumerable set of nonzero values, with a nonrecursive complement, and from the existence of a Diophantine equation which has no roots in the standard model M, while that fact can neither be proved nor disproved in T.)
Notice that since equality is undecidable in the language of analysis, there is no general algorithmic procedure to check whether a given expression in that language equals, say, the θ^(n) or the β.
To sum up, I will now restate the chief undecidability and incompleteness theorem.

15. Statement of the main undecidability


and incompleteness results
We suppose that our theories are formalized within a first-order clas-
sical language with equality and the description operator.
We follow the notation of da Costa and Doria (1991a); ω denotes the set of natural numbers, Z is the set of integers, and R is the set of

real numbers. Let T be a first-order consistent axiomatic theory that


contains formalized arithmetic N and such that T is strong enough
to include the concept of set plus the whole of classical elemen-
tary analysis and dynamical systems theory. (We can simply take
T = ZFC, where ZFC is Zermelo–Fraenkel set theory with the Axiom
of Choice.) Moreover, T has a model M where the arithmetic portion
of T is standard.

A general undecidability and incompleteness theorem


If LT is the formal language of T , we suppose that we can form
within T a recursive coding for LT so that it becomes a set LT of
formal expressions in an adequate interpretation of T . Objects in T
will be noted by lower case italic letters. Predicates in T will be noted
P, Q, . . . .
From time to time, we play with the distinction between an object
and the expression in LT that represents it. If x, y are objects in the
theory, ξ, ζ ∈ LT are term-expressions for those objects in the for-
mal language of T . In general, there is no one-to-one correspondence
between objects and expressions; thus, we may have different expres-
sions for the same function: "cos ½π" and "0" are both expressions for the constant function 0. We note by x̂ an expression for x in LT. We say that a predicate P defined for a set X is nontrivial if neither T ⊢ ∀x ∈ X P(x) nor T ⊢ ∀x ∈ X ¬P(x). In what follows
B is a set of (expressions for) functions that includes elementary real
analysis.
Let P be any nontrivial predicate. Our main theorem is as follows.

Proposition 15.1.

(1) There is an expression ξ ∈ B so that T ⊬ ¬P(ξ) and T ⊬ P(ξ), but M |= P(ξ).
(2) There is a denumerable set of expressions for functions ξm (x) ∈
B, m ∈ ω, such that there is no general decision procedure to
ascertain, for an arbitrary m, whether P (ξm ) or ¬P (ξm ) is prov-
able in T .

(3) Given the set K = {m : T ⊢ P(m̂)}, and given an arbitrary total recursive function g : ω → ω, there is an infinite number of values for m so that CT(P(m̂)) > g(|P(m̂)|).
(Here the m recursively code the set ξm of expressions in LT; CT(P(m̂)) is the shortest length of a proof for P(m̂) in T, and |P(m̂)| is the length of P(m̂) in LT.)
(4) For each n ∈ ω there is a ζ ∈ LT such that the assertion P(ζ) is arithmetically expressible, M |= P(ζ), but P(ζ) is demonstrably equivalent only to a Πn+1 assertion and not to a lower one in the arithmetic hierarchy.
(5) There are expressions ζm and ζ in LT which are not arithmeti-
cally expressible, and:
(a) there is a family of those expressions ζm ∈ LT such that there
is no general algorithm to check, for every m ∈ ω, whether
or not P (ζm ) in T ;
(b) there is an expression ζ ∈ LT as above such that M |= P(ζ) while T ⊬ P(ζ) and T ⊬ ¬P(ζ).

(Recall that ζ is arithmetically expressible within T if we can


make it formally equivalent to an arithmetic expression with the
tools available in T .)

16. Questions settled with those techniques


We immediately noticed that our tools led to the negative solution
of several open problems in dynamical systems theory and related
areas. They are discussed below.

The integrability problem in classical mechanics


That’s an old question. We quote a mention of that problem (see
Lichtenberg and Lieberman, 1983):
Are there any general methods to test for the integrability of a given
Hamiltonian? The answer, for the moment, is no. We can turn the
question around, however, and ask if methods can be found to
construct potentials that give rise to integrable Hamiltonians. The

answer here is that a method exists, at least for a restricted class


of problems and so on.

We can divide the integrability question into three items:

• Given any Hamiltonian h, do we have an algorithm to decide


whether the associated flow Xh can be integrated by quadratures?
• Given an arbitrary Hamiltonian h such that Xh can be integrated
by quadratures, can we algorithmically find a canonical transfor-
mation that will do the trick?
• Can we algorithmically check whether an arbitrary set of functions
is a set of first integrals for a Hamiltonian system?

No, in all three cases. There is no general algorithm to decide, for


a given Hamiltonian, whether or not it is integrable. Also, there will
be sentences such as ξ = “h is integrable by quadratures,” where,
however, T ⊬ ξ and T ⊬ ¬ξ (see da Costa and Doria, 1991a).

The Hirsch problem: the decision problem for chaos


That problem was discussed above. Is there an algorithm to check for
chaos given the expressions of a dynamical system? No. There is no such general algorithm, and there will be systems that look chaotic on a computer screen (that is to say, they are chaotic in a standard model M, see Section 15) but such that proving or disproving their
Hirsch, 1985).
That result applies to any nontrivial characterization for chaos
in dynamical systems.

Wolfram’s conjecture and Penrose’s thesis


Wolfram had long conjectured that simple phenomena in classical
physics might lead to undecidable questions (see Wolfram, 1984):
One may speculate that undecidability is common in all but the
most trivial physical theories. Even simply-formulated problems in
theoretical physics may be found to be provably insoluble.

On the other hand, Penrose asserted, as a kind of thesis, that classical physics offers no examples of noncomputable phenomena. We
proved Wolfram’s conjecture and thus gave a counterexample to Pen-
rose’s thesis (see da Costa and Doria, 1991b; Penrose, 1989; Stewart,
1991).

Arnol’d’s problems
Arnol’d formulated in the 1974 AMS Symposium on the Hilbert
Problems (see Arnol’d, 1976) a question dealing with algorithmic
decision procedures for polynomial dynamical systems over Z (see
Arnol’d, 1976):
Is the stability problem for stationary points algorithmically decid-
able? The well-known Lyapounov theorem solves the problem in
the absence of eigenvalues with zero real parts. In more compli-
cated cases, where the stability depends on higher order terms in
the Taylor series, there exists no algebraic criterion.
Let a vector field be given by polynomials of a fixed degree, with
rational coefficients. Does an algorithm exist, allowing to decide,
whether the stationary point is stable?
A similar problem: Does there exist an algorithm to decide, whether
a plane polynomial vector field has a limit cycle?

For those questions, there is no general algorithm available since


T contains the sine function, the absolute value function and π; also
the corresponding theory is incomplete (see da Costa and Doria,
1994a). In the polynomial case again, there is no algorithm to decide
whether a fixed point at the origin is stable or not (see da Costa and
Doria, 1993a,b).

Problems in mathematical economics


Lewis and Inagaki (1991b) pointed out that our results entail the
incompleteness of the theory of Hamiltonian models in economics.
They also entail the incompleteness of the theory of Arrow–Debreu
equilibria and (what is at first sight surprising) the incompleteness
of the theory of finite games with Nash equilibria (see da Costa and

Doria, 1994a; Tsuji et al., 1998). Those two last questions are dis-
cussed below in Section 17.

“Smooth” problems equivalent to hard


number-theoretic problems
Common wisdom among mathematicians has it that number-theoretic
problems are in general much more difficult than “smooth” problems.
We showed that that is definitely not the case. We gave an explicit
example of a dynamical system where the proof that there will be
chaos is equivalent to the proof of Fermat’s last theorem (or the proof
of Riemann’s hypothesis, or the decision of the P vs. N P question).
We also proved that (given some conditions) those “nasty” problems
are dense in the space of all dynamical systems (see da Costa et al.,
1993).

Simple problems worse than any number-theoretic


problem
The language of analysis is much richer than the language of arith-
metic, as we can express the halting function in analysis. Also we
can explicitly construct “natural”-looking and quite simple problems
with our techniques that lie beyond the arithmetical hierarchy (see da
Costa and Doria, 1994a). For example, we can explicitly define a pro-
cedure to obtain an expression θ^(ω)(n) for a characteristic function in 0^(ω). That function equals either 0 or 1, but the actual computation of θ^(ω)'s values reaches beyond the arithmetic hierarchy.

Generic, faceless objects


One of the features of the main set-theoretic forcing constructions
is that we add “generic,” faceless sets to our formal theories. How-
ever, there are no explicit expressions for those objects. With the
help of our techniques, we exhibited an expression for a “faceless”
Hamiltonian (see da Costa and Doria, 1994a): the only thing we can
prove about it is that it definitely is a Hamiltonian, and nothing
more.

Undecidable sentences as bifurcation points


in a formal theory
Our techniques allow a simple coding of undecidable sentences in T
into a bifurcating vectorfield (see da Costa and Doria, 1993a,b). One
such example goes (roughly) as follows: given the extended theories T+ = T + P(ξ), T− = T + ¬P(ξ), we have that T+ ⊢ P if and only if a certain vectorfield X undergoes a Hopf bifurcation, while T− ⊢ ¬P if and only if X doesn't undergo a Hopf bifurcation; moreover, we can algorithmically obtain an expression for that X.

17. Undecidability and incompleteness


in the social sciences
Our main examples in this section have to do with economics (com-
petitive markets) and the theory of social structures as modeled
by population dynamics equations. We are especially interested in
the difficulties of forecasting in economics and in the applied social
sciences.
Whenever we describe social phenomena by dynamical sys-
tems, uncertainties in forecasting are usually supposed to be due
to the nonlinearities in the systems considered, that is to say, they
are related to the sensitivity those systems exhibit when small
changes are made in the initial conditions. Linear systems do not
have that kind of behavior and so are supposed to be strictly
deterministic.
Our results contradict that belief. We show that equilibrium
prices in competitive markets are in general noncomputable, and so
fall outside the scope of the techniques available in the usual formal
modeling tools; competitive market equilibrium is, however, equiva-
lent to determining a minmax solution for a noncooperative game,
which is a linear problem. So, there are also obstacles to forecasting
when one deals with linear systems. The remaining two examples in
this section have to do with the ecology of populations (“do we have
cycles or chaotic behavior?”) and a model for class structures in a
population (“will the middle class survive?”).

Undecidability and incompleteness in the theory


of finite games
We start from the usual mathematical definitions in game theory.
Definition 17.1. A noncooperative game is given by the von Neumann triplet Γ = ⟨N, Si, ui⟩, with i = 1, 2, . . . , N, where N is the number of players, Si is the strategy set of player i, and ui is the real-valued utility function ui : S1 × · · · × SN → R; each si ∈ Si is a strategy for player i.
Definition 17.2. A strategy vector s∗ = ⟨s∗1, . . . , s∗N⟩, s∗k ∈ Sk, is a Nash equilibrium vector for a finite noncooperative game Γ if, for all players k and all strategies sk ∈ Sk,

uk(s∗) = uk(s∗1, . . . , s∗k, . . .) ≥ uk(s∗1, . . . , sk, . . .),

for sk ≠ s∗k.
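For games literally given by explicit payoff tables, Definition 17.2 can be checked mechanically; the brute-force sketch below for two players is our own illustration (function names and payoffs are hypothetical), not part of the chapter's formal construction.

```python
from itertools import product

# Brute-force check of the Nash condition for a two-player game given
# by explicit payoff tables u1 (row player) and u2 (column player):
# (i, j) is an equilibrium iff no player gains by a unilateral deviation.
def nash_equilibria(u1, u2):
    n_rows, n_cols = len(u1), len(u1[0])
    eqs = []
    for i, j in product(range(n_rows), range(n_cols)):
        best_row = all(u1[i][j] >= u1[k][j] for k in range(n_rows))
        best_col = all(u2[i][j] >= u2[i][l] for l in range(n_cols))
        if best_row and best_col:
            eqs.append((i, j))
    return eqs

# A coordination game with two pure equilibria, (0, 0) and (1, 1):
u1 = [[2, 0],
      [0, 1]]
u2 = [[2, 0],
      [0, 1]]
assert nash_equilibria(u1, u2) == [(0, 0), (1, 1)]
```

This is the decidable "table" case; the chapter's point, developed next, is that in realistic markets the tables themselves may only be partial-recursively presented.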
The main result goes as follows: we suppose that game theory
has been formalized through a Suppes predicate within our theory
T ; therefore, when we talk about games in T , we discuss objects that
can be formally proved to equal explicitly defined games in T .
Proposition 17.3.
(1) Given any nontrivial property P of finite noncooperative games, there is an infinite denumerable family of finite games Γm such that, for those m with T ⊢ "P(Γm)" and for an arbitrary total recursive function g : ω → ω, there is an infinite number of values of m for which the shortest length CT(P(Γm)) of a proof of P(Γm) in T satisfies CT(P(Γm)) > g(|P(Γm)|), where |P(Γm)| is the length of P(Γm) in LT.
(2) Given any nontrivial property P of finite noncooperative games, there is one of those games Γ such that T ⊢ "P(Γ)" if and only if T ⊢ "Fermat's Conjecture."
(3) There is a noncooperative game Γ where each strategy set Si is finite but such that we cannot compute its Nash equilibria.
(4) There is a noncooperative game Γ where each strategy set Si is finite and such that the computation of its equilibria is T-arithmetically expressible as a Πm+1 problem, but not as any Σk problem, k ≤ m.

(5) There is a noncooperative game Γ where each strategy set Si is


finite and such that the computation of its equilibria isn’t arith-
metically expressible.

So nasty things may crop up in game theory, even if that theory turns out to be linear; it suffices to embed it into the language of analysis.
To take a closer look: when it comes to the theory of finite games,
the situation looks very neat at first. If we can describe the game by
tables, it should be decidable; if not, it may be undecidable. However,
what do we mean when we say that we are “describing a game by
a table”? A table of payoffs and outcomes is easily handled when
there are just a few participants in the game, but in an actual market
situation when we have thousands of players, we may rather naturally
relax the condition of an explicit presentation of the table of payoffs
in favor of the following arrangement:
• There is a partial recursive function with a finite set of values
that lists the participants in the game. (The game’s players are
recursively presented.)
• The payoff matrix is given by a partial recursive function.
In our case, the payoff matrix u is given by

u = u′θ + u′′(1 − θ),

where θ is the function we have previously introduced (or one of its higher-degree variants θ^(n)), and u′ and u′′ are noncoincident payoff matrices with different Nash solutions.
The first condition in the preceding itemization means simply
that we have a procedure that lists all players in the game (presum-
ably out of a larger universe); the second condition means that we
can summarize within the bounds of an algebraic expression all gains
and losses in the game. Why don’t we require that the players and
the payoff matrix be given by total recursive functions? Because for
most complicated situations, we can’t algorithmically check that a
given partial recursive function is a total function; therefore, if we
add that requirement to a situation where thousands of variables

are to be handled, we would add an idealized condition that nobody


would be able to decide if needed.
Our results here satisfy the above set of conditions. The undecidable payoff functions can easily (even if clumsily) be translated as Turing machines that, once started on their computation, will run forever (or until their physical counterparts break down, or are stopped by an external agent). So, our payoff functions are represented by partial recursive functions, as required.
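The blend u = u′θ + u′′(1 − θ) can be illustrated with a computable stand-in for θ (the chapter's θ is, of course, not computable); the matrices and names below are our own hypothetical illustration.

```python
# Two noncoincident payoff tables with different Nash solutions, blended
# entrywise by a 0/1 bit. If theta_bit were the value of a characteristic
# function of a nonrecursive set, no algorithm could tell which game --
# hence which equilibrium -- the blended expression denotes.
def blend(u_prime, u_dprime, theta_bit):
    return [[up * theta_bit + udp * (1 - theta_bit)
             for up, udp in zip(row_p, row_dp)]
            for row_p, row_dp in zip(u_prime, u_dprime)]

u_prime  = [[3, 0], [0, 1]]   # its equilibria favor strategy 0
u_dprime = [[1, 0], [0, 3]]   # its equilibria favor strategy 1

assert blend(u_prime, u_dprime, 1) == u_prime
assert blend(u_prime, u_dprime, 0) == u_dprime
```

With a computable bit the blend is trivial to resolve, which is exactly what fails for the θ of the construction above.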

Markets in equilibrium may have noncomputable


prices
Those results have an immediate consequence for a question of both
historical and practical importance: the controversy on economic
planning between L. von Mises and O. Lange (see, on that contro-
versy, Seligman (1971, I, p. 115ff)).
The central problem of economic planning is an allocation prob-
lem. Very frequently, allocation is to be done on the basis of maxi-
mizing (or minimizing) simple functions over finite sets. We proved
that trouble is to be expected even when the problem of planning is
reduced to the problem of determining equilibria in finite noncooper-
ative Nash games, which is formally equivalent to the determination
of equilibrium prices in a competitive market.
So, the main argument by Lange in favor of a planned economy
(by the way, an argument also shared by von Mises) clearly breaks
down. Lange thought that given the (possibly many) equations defin-
ing an economy, a huge and immensely powerful computer would
always be able to figure out the equilibrium prices, therefore allow-
ing (at least theoretically) the existence of an efficient global policy
maker. However, our results (as well as the weaker previous results
by Lewis (1991a); Lewis and Inagaki (1991b)) disprove Lange’s con-
jecture.
Therefore, those that argue that “the market knows better” may
find a strong theoretical support in our conclusions (or in Lewis’
already quoted results), since the equilibrium point is reached (at
least according to theory) while we cannot, in general, compute it
beforehand.

The axiomatic background is made explicit for clarity. The


incompleteness phenomenon means that within a (consistent) pre-
scribed axiomatic framework, certain facts cannot be proved.
Assuredly, if we add stronger axioms to our system, a few of those
unprovable facts may be proved. Yet the stronger axioms may also
be debatable on philosophical grounds, so that the proof of a desired
fact from the enriched system eventually turns out to be technically
correct but philosophically (and perhaps empirically) doubtful. For
details see da Costa and Doria (1994a) and Tsuji et al. (1998).

Oscillating populations or a chaotic demography?


The Lotka–Volterra equations (LV) describe two interacting popu-
lations, namely a “preyed” population x and a “predator” popula-
tion y. If left by themselves, the x blow up exponentially, while the
y decay exponentially. When x and y interact, they start to move in
nonlinear cycles. The LV model describes in a reasonable way a num-
ber of ecological cycles (see Goel et al., 1971; Nicolis and Prigogine,
1977).
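Those nonlinear cycles are easy to see numerically. The sketch below is our own, with illustrative parameters a, b, c, d not taken from the text, and a crude Euler step rather than a serious integrator.

```python
# A minimal numerical sketch of the Lotka-Volterra system
#     dx/dt =  a*x - b*x*y   (prey: grows alone, eaten by predators)
#     dy/dt = -c*y + d*x*y   (predator: decays alone, thrives on prey)
# Trajectories wind around the equilibrium (c/d, a/b) in closed cycles.
def lotka_volterra(x, y, a=1.0, b=0.5, c=1.0, d=0.2, dt=0.001, steps=20000):
    xs, ys = [x], [y]
    for _ in range(steps):
        x, y = (x + dt * (a * x - b * x * y),
                y + dt * (-c * y + d * x * y))
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = lotka_volterra(10.0, 5.0)
# Both populations stay positive and oscillate instead of converging:
assert min(xs) > 0 and min(ys) > 0
assert max(ys) > ys[0]   # predators first rise on the abundant prey
```

A symplectic or Runge-Kutta integrator would preserve the cycles better over long times; the Euler step is enough to exhibit the qualitative behavior.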
Now consider the following situation: let ⟨x, y⟩ and ⟨x′, y′⟩ be two uncoupled LV systems. We show in Section 20 that they can be described by a Hamiltonian system Ξ, and that there is a perturbed system

Ξ′ = Ξ + Υ,

where all populations are (in general) coupled and Ξ′ has a Smale horseshoe. So Ξ′ is chaotic in its behavior.
We slightly modify Ξ′ and get

Ξ∗ = Ξ + βεΥ,

where β is given in Corollary 14.16. Then the assertions "Ξ∗ describes
two sets of oscillating, nonchaotic, uncoupled populations" and "Ξ∗
describes four coupled, chaotic populations" are both undecidable in
our formal theory T.

Axiomatics, the Social Sciences, and the Gödel Phenomenon: A Toolkit 59

Will the middle class survive?


Section 20, equation (6), describes a beautiful model (elaborated by
Prigogine) for the interaction of three coupled populations, x, y, z,
where z is a small-sized “upper” or “warrior” class, y is a “lower”
or “working” class, and x is a “middle class.” The model predicts
two stable equilibria: in the first, the “middle class” vanishes; in the
second, the “warrior” class vanishes while “workers” and the “middle
class” thrive. The “middle class” will grow to a steady state value if
κ(N0 − y0 − z0 ) − δ − ρz0 > 0,
where ρ is a measure of the aggressiveness of the “upper class” against
the “middle class,” and N0 measures the total wealth of the ecolog-
ical niche; y0 , z0 are constant values obtained out of the system’s
parameters, like δ (see Section 20).
We now write ρ∗ = βρ′ + (1 − β)ρ′′, where ρ′ ≫ ρ′′; ρ′ doesn't
satisfy the preceding condition, while ρ′′ satisfies it. Then, out of

κ(N0 − y0 − z0) − δ − ρ∗z0 > 0,

the sentences "The middle class will thrive" and "The middle class
will fade away" are both undecidable in T.
More details in Section 20.

18. Forcing and our techniques


Let us now restate a few ideas from one of our papers (see da
Costa and Doria, 1994b). Our techniques certainly look very different
from the well-known forcing constructions that have led to so many
undecidable statements in mathematics. Thus, what is the relation
between our techniques for the construction of undecidable state-
ments and the Cohen–Solovay kind of forcing? In order to answer this
question, we must conceive a theory as a Turing machine that accepts
strings of symbols — well-formed formulas — and stops whenever
those strings of symbols are theorems of the theory; if a string is not
a theorem, the machine never halts and runs in an infinite loop.

Now consider Zermelo–Fraenkel axiomatic set theory, ZF. If MZF


is the corresponding proof machine for ZF, and if CH is the Contin-
uum Hypothesis, we know that MZF (CH) never halts. Accordingly,
there is a Diophantine polynomial pZF (CH, x1 , . . .) that has no roots
over Z, but since CH is independent of the axioms of ZF, there can be
no proof (within ZF) of the statement “ pZF (CH, x1 , . . .) = 0 has no
roots over Z.” (If there were one such proof, we would then be able
to decide CH in ZF.) With the help of our techniques, we can obtain
a two-step function θZF (m) such that, if mCH is a Gödel number
for CH, then both ZF ⊬ θZF(mCH) = 0 and ZF ⊬ θZF(mCH) = 1.
Therefore, every undecidable statement constructed with the help of
forcing within ZF (or even within weaker theories, provided that they
include elementary arithmetic) gives rise to undecidable statements
according to the present tools.
Moreover, the converse isn’t true, that is, there are some (actu-
ally, infinitely many) undecidable statements that can be constructed
according to the present techniques, but such that no forcing statement will be mapped onto them if we follow the preceding procedure.
Finite objects are (set-theoretic) forcing-absolute, but we have seen
that we can construct undecidable statements about finite objects
in ZF, say, again through the θ function. If mFin is the Gödel-coding
for one of those statements, then “ θZF (mFin ) = 0” cannot be proved
in ZF. So, there is a (metamathematical) algorithmic procedure that
goes from every undecidable statement in ZF onto undecidable state-
ments about the θZF function; and yet forcing statements are only
a portion of that map, since there is much more in it (see da Costa
and Doria, 1992a).

19. Evaluation of the results


There are two possible opposing views concerning Gödel-like unde-
cidable statements in mathematics.
Following the first view, Gödel-like undecidable sentences are
usually seen as warning posts that indicate blocked routes in
axiomatic systems; according to that viewpoint, they mean that one
can’t go farther along some direction. They were already known to

appear in lost back alleys; yet, as told here, the authors had long
striven to show that the incompleteness phenomenon is part of the
actual practice in any axiomatized science, and their endeavor proved
a fruitful one when they showed that simple questions in dynamical
systems theory (“Given a dynamical system, can we check whether
it is chaotic? Can we prove that it is chaotic?”) led to undecidability
and incompleteness (see da Costa and Doria, 1991a). We call that
the “negative” viewpoint, since it is usually supposed to imply that
incompleteness means that there is an essential deficit in our knowl-
edge when it is obtained through some formal system.
Assuredly incompleteness means that we can’t compute some
result. But should we take that fact as some kind of absolute obstacle
to our knowledge of the world through formal languages?
The second point of view is the “optimistic” one; it is the position
adopted by the authors. Undecidable sentences are seen as degrees of
freedom, as bifurcation points in our theories. They reveal some kind
of inner freedom in the possibilities we have when trying to describe
the world within a formal system. They show the existence of open
possibilities, choices available in the formalism; they cannot be looked
upon as limitations to our knowledge. That point of view is reinforced
when one considers that there is an actual functor that goes from the
theory of formal systems into the theory of bifurcating dynamical
systems, as described for instance in da Costa and Doria (1993b).
Very much as if the whole of mathematics were to be redrawn onto
a small spot over its own belly.

20. More on population dynamics


We sketch here the main technical details we require from the theory
of population dynamics.

The Lotka–Volterra equations in Hamiltonian form


The Lotka–Volterra (LV) equations describe two competing popula-
tions; their variables denote the number of individuals in each popula-
tion. Here x represents the prey population while y is the predator

population. For x > 0, y > 0, they interact through:

ẋ = +αx − βxy,
(1)
ẏ = −γy + δxy,

where α, β, γ, δ are positive real constants. There is an immediate
first integral,

exp H = y^α x^γ exp[−(βy + δx)] = k, (2)

where k is a positive constant. Solutions for the LV equations are
shown to be nonlinear cycles.
If we put u = log x, v = log y, then there is a time-independent
Hamiltonian

H = αv + γu − (βe^v + δe^u) = log k, (3)

so that equation (1) becomes:

u̇ = +∂H/∂v,
(4)
v̇ = −∂H/∂u.

For the references, see Goel et al. (1971) and Nicolis and Prigogine (1977).
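Equations (1)–(2) can be checked numerically. The following sketch (ours, not from the book; the parameter and initial values are purely illustrative) integrates the LV system with a classical Runge–Kutta step and verifies that the first integral exp H of equation (2) stays constant along the cycle:

```python
import math

# Lotka-Volterra right-hand side, equation (1):
#   dx/dt = +alpha*x - beta*x*y,   dy/dt = -gamma*y + delta*x*y
ALPHA, BETA, GAMMA, DELTA = 1.0, 1.0, 1.0, 1.0   # hypothetical values

def rhs(x, y):
    return ALPHA * x - BETA * x * y, -GAMMA * y + DELTA * x * y

def rk4_step(x, y, dt):
    # one classical fourth-order Runge-Kutta step
    k1x, k1y = rhs(x, y)
    k2x, k2y = rhs(x + dt * k1x / 2, y + dt * k1y / 2)
    k3x, k3y = rhs(x + dt * k2x / 2, y + dt * k2y / 2)
    k4x, k4y = rhs(x + dt * k3x, y + dt * k3y)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6)

def first_integral(x, y):
    # exp H = y^alpha * x^gamma * exp(-(beta*y + delta*x)), equation (2)
    return y ** ALPHA * x ** GAMMA * math.exp(-(BETA * y + DELTA * x))

x, y = 1.5, 0.7
k0 = first_integral(x, y)
for _ in range(20000):            # integrate up to t = 20
    x, y = rk4_step(x, y, 0.001)
drift = abs(first_integral(x, y) - k0) / k0
assert drift < 1e-6               # k is conserved along the nonlinear cycle
```

The conserved value of exp H labels the closed orbit the populations cycle along.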

Competing populations with chaotic dynamics


We use a famous result by Ruelle and Takens.

Proposition 20.1. If ω = (ω1, ω2, ω3, ω4) is a constant vectorfield
on the torus T^4, where ω1 ≤ ω2 ≤ ω3 ≤ ω4, then in every
C^3-neighborhood of ω there is an open set of vectorfields with strange
attractors.

Proof. See Ruelle and Takens (1971).


Now consider two pairs of LV predator–prey populations that
obey two uncoupled LV systems (equation (1)),

Ξ = (x, y, x′, y′).



Due to equations (1)–(4) they can be cast in Hamiltonian form and
(through the Liouville–Arnol'd theorem) canonically mapped onto a
constant vectorfield ω on T^4. Then, as a corollary of the Ruelle–Takens
result, when we go back to the LV systems, we have the following corollary.

Corollary 20.2. Close to Ξ there is an open set of perturbed LV
equations with four population components and with chaotic behavior
due to strange attractors.

The attractors are Smale horseshoes; a similar result within
Hamiltonian mechanics is explicitly dealt with in Holmes and Marsden (1982).

Remark 20.3. If Ξ also denotes the LV-system for those four populations, then we note its chaotic perturbation:

Ξ′ = Ξ + εΥ, (5)

where ε is a small positive constant and Υ is the perturbation.

Class structures
We consider here three interacting populations where the number of
individuals in each is denoted by the variables x, y, z > 0 (see Nicolis
and Prigogine, 1977, p. 460). The dynamics of the model is given by:

ẋ = κx(N0 − x − y − z) − δx − ρxz,
ẏ = κy(N0 − x − y − z) − δy − f (y, z), (6)
ż = f (y, z) − δz.

Here f (y, z) = α1z(y − α2z) is a Verhulst term. Greek letters represent positive real constants; N0 is a maximum level of resources for
the competing populations. y + z is to be understood as a population
that splits into a "working class" y and a (small-sized) "warrior
class" z; x is a "middle class," which is preyed upon by the "warriors"
since it competes within the same ecological niche with the coalescing
"warriors" and "workers."

Stability analysis shows that the equilibrium state x0 = y0 = z0 = 0 is
unstable. One then considers the equilibria x0, y0 ≠ 0; z0 = 0 (no warrior
class) or x0 = 0; y0, z0 ≠ 0 (no middle class). The condition for the
survival of the middle class is

κ(N0 − y0 − z0 ) − δ − ρz0 > 0. (7)

Since z0 is small, the x will vanish if either the workers y0 form a
large-sized population or if the warriors are very aggressive (ρ ≫ 0)
against the middle class x.
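Condition (7) is a simple sign test; the following sketch (the parameter values are ours, chosen only to exhibit the sign flip) shows how the warriors' aggressiveness ρ decides the middle class's fate:

```python
def middle_class_survives(kappa, N0, y0, z0, delta, rho):
    # condition (7): kappa*(N0 - y0 - z0) - delta - rho*z0 > 0
    return kappa * (N0 - y0 - z0) - delta - rho * z0 > 0

# hypothetical parameter values, purely illustrative
base = dict(kappa=1.0, N0=10.0, y0=6.0, z0=0.5, delta=1.0)
assert middle_class_survives(rho=1.0, **base)       # mild warriors: x thrives
assert not middle_class_survives(rho=10.0, **base)  # aggressive warriors: x fades
```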
Recall that the Lotka–Volterra (LV) system,

ẋ = −x + αxy
(8)
ẏ = +y − βxy,

which describes two interacting populations, the predators (x) and
the prey (y), can be transformed, through the simple variable change

u = log x,
(9)
v = log y,

into a system derived from a Hamiltonian function H (not made explicit
here; see Nicolis and Prigogine, 1977):

u̇ = −∂H/∂v,
(10)
v̇ = +∂H/∂u.

Moreover, since H is a first integral in that system, and since the LV
system has only closed orbits, we may transform it into a harmonic
oscillator system through a canonical transformation:

ξ˙ = −η,
(11)
η̇ = +ξ.

Those results will be our starting point.



Two perennially competing populations, or two
populations doomed to disappear?
We consider here a model derived from a Lotka–Volterra model which
undergoes a Hopf bifurcation:
ẋ = −y + αxy − x(µ − x2 − y 2 ), (12)
ẏ = +x − βxy − y(µ − x2 − y 2 ). (13)
It is now a simple exercise to obtain an undecidable dynamical system
out of the preceding equations.
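The exercise can be sketched along the lines already used for ρ∗ above; the following is our reconstruction, not the authors' worked solution:

```latex
% Pick \mu' and \mu'' on opposite sides of the Hopf bifurcation of
% (12)-(13), and let \beta be as in Corollary 14.16. Put
\mu^{*} \;=\; \beta\,\mu' + (1-\beta)\,\mu'' .
% Substituting \mu^{*} for \mu in (12)-(13) then yields a system for
% which the sentences ``the two populations keep competing along a
% cycle'' and ``the two populations spiral away from coexistence''
% are both undecidable in T.
```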

Part II. The monster in the belly of complexity theory

Appendix A. Preliminary remarks


We may use a more transparent (but colorless) title: incomplete-
ness out of fast-growing computable functions. We explore here the
fast-growing nature of several counterexample functions that appear
in complexity theory, as they lead to examples of undecidability
and incompleteness in reasonable axiomatic theories.13 Whenever
required, we use (again!) a formal framework S that satisfies the
following:

• Its underlying language is the first-order classical predicate calculus.
• It has a recursively enumerable set of theorems.
• It includes PA (Peano Arithmetic) in the following sense: it
includes the language and proves all theorems of PA.
• It has a model with standard arithmetic.

(Think of S as PA or ZFC with models that have standard arithmetic.) Suppose that we want to build a function that tops all total
recursive functions. Let's do it formally. We will work within S:

Remark A.1. For each n, F(n) = max_{k≤n}({e}(k)) + 1, that is, it is
the sup of those {e}(k) such that:

(1) k ≤ n.
(2) ⌜PrS(∀x ∃z T(e, x, z))⌝ ≤ n.

PrS(ξ) means: there is a proof of ξ in S; ⌜ξ⌝ denotes the Gödel
number of ξ. So ⌜PrS(ξ)⌝ means: the Gödel number of the sentence
"there is a proof of ξ in S." Condition 2 above translates
as: there is a proof of [{e} is total] in S whose Gödel number is ≤ n.

13
We follow here a suggestion by G. Kreisel in a private message to N. C. A. da
Costa and F. A. Doria.


Proposition A.2. We can explicitly compute a Gödel number eF so
that {eF} = F.

Proposition A.3. If S is consistent then S ⊬ ∀m∃n [{eF}(m) = n].

We do not get here a Busy Beaver-like function; we get a partial
recursive function (the Busy Beaver is noncomputable), which
can neither be proved nor disproved total in S — it is total in the
standard model for arithmetic, provided that S has a model with
standard arithmetic.
Notice that this function is another version for Kleene’s function
F , which appears in the beginning of this chapter.

Sources
Again we will base our exposition on three papers of ours (see da
Costa and Doria, 2015, 2016; Doria, 2016), which will be freely quoted
in what follows.

Appendix B. Technicalities
We deal here with two possible formalizations each for P = NP
and P < NP. We have called the unusual formalizations the "exotic
formalizations." The two are naïvely equivalent, but when we move to a
formal framework like that of S, difficulties arise.
Let tm (x) be the primitive recursive function that gives the oper-
ation time of {m} over an input x of length |x|. If {m} stops over an
input x, then

tm (x) = |x| + [number of cycles of the machine until it stops].

tm is primitive recursive and can in fact be defined out of Kleene's
T predicate.

Definition B.1 (Standard formalization for P = NP). [P =
NP] ↔Def ∃m, a ∈ ω ∀x ∈ ω [(tm(x) ≤ |x|^a + a) ∧ R(x, m)].

R(x, y) is a polynomial predicate; it formalizes a kind of "verifying
machine" that checks whether or not x is satisfied by the output
of {m}.

Definition B.2. [P < N P ] ↔Def ¬[P = N P ].

Now suppose that {ef} = f is total recursive and strictly increasing:

Remark B.3. The naı̈ve version for the exotic formalization is

[P = NP]f ↔ ∃m ∈ ω, a ∀x ∈ ω [(tm(x) ≤ |x|^f(a) + f(a)) ∧ R(x, m)].

However, there is no reason why we should ask that f be total; on


the contrary, there will be interesting situations where such a function
may be partial and yet provide a reasonable exotic formalization for
P < NP.

Let f be in general a (possibly partial) recursive function that is
strictly increasing over its domain, and let ef be the Gödel number
of an algorithm that computes f. Let p(ef, b, c, x1, x2, . . . , xk) be a
universal Diophantine polynomial with parameters ef, b, c; that polynomial has integer roots if and only if {ef}(b) = c. We may suppose
that polynomial to be ≥ 0. We omit the “∈ ω” in the quantifiers,
since they all refer to natural numbers.

Definition B.4. Mf(x, y) ↔Def ∃x1, . . . , xk [p(ef, x, y, x1, . . . , xk) = 0].

Actually Mf (x, y) stands for Mef (x, y), or better, M (ef , x, y), as
dependence is on the Gödel number ef .

Definition B.5. ¬Q(m, a, x) ↔Def [(tm(x) ≤ |x|^a + a) → ¬R(x, m)].

Proposition B.6 (Standard formalization, again).

[P < N P ] ↔ ∀m, a ∃x ¬Q(m, a, x).

Definition B.7. ¬Qf(m, a, x) ↔Def ∃a′ [Mf(a, a′) ∧ ¬Q(m, a′, x)].

Remark B.8. We will sometimes write ¬Q(m, f(a), x) for ¬Qf (m, a,
x), whenever f is provably recursive and total.

Definition B.9 (Exotic formalization).

[P < N P ]f ↔Def ∀m, a ∃x ¬Qf (m, a, x).

Notice that again this is a Π2 arithmetic sentence:

∀m, a ∃x, a′, x1, . . . , xk {[p(ef, a, a′, x1, . . . , xk) = 0] ∧ ¬Q(m, a′, x)}.

(Recall that Q is primitive recursive.)

Definition B.10. [P = N P ]f ↔Def ¬[P < N P ]f .

We will sometimes write ¬Q(m, g(a), x) for ¬Qg (m, a, x), when-
ever g is S-provably total.

Appendix C. Hard stuff


For the definition of sat (and a detailed presentation of the satisfia-
bility problem), see Machtey and Young (1979); for the BGS recursive
set of poly Turing machines, see Baker et al. (1975). In a nutshell,
sat is the set of all Boolean expressions in conjunctive normal form
(cnf) that are satisfiable, and BGS is a recursive set of poly Turing
machines that contains emulations of every conceivable poly Turing
machines.

The full counterexample function, intuitive ideas


The full counterexample function f is defined as follows: let ω code
an enumeration of the Turing machines, and code sat into ω by a
standard coding:
• If n ∈ ω isn't a poly machine, f(n) = 0.
• If n ∈ ω codes a poly machine:
— f(n) = [first instance x of sat such that the machine fails to output
a satisfying line for x] plus 1, that is, f(n) = x + 1.

— Otherwise f(n) is undefined; that is, if "P = NP holds for n"
(the poly machine n settles every instance of sat), f(n) = undefined.
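The definition above can be mimicked in a toy setting. In this sketch (ours; literals are signed integers, and a fixed finite list of instances stands in for the coding of sat into ω) a brute-force truth-table search plays the role of the exponential algorithm, and `counterexample` plays the role of f:

```python
from itertools import product

def satisfiable(cnf, n_vars):
    # exponential truth-table search: the role of the algorithm T in the text
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in cnf):
            return bits
    return None

def counterexample(candidate, instances):
    # index (plus 1) of the first satisfiable instance for which
    # `candidate` fails to output a satisfying line
    for i, (cnf, n_vars) in enumerate(instances):
        if satisfiable(cnf, n_vars) is None:
            continue  # unsatisfiable instances cannot witness a failure
        guess = candidate(cnf, n_vars)
        if guess is None or not all(
                any(guess[abs(lit) - 1] == (lit > 0) for lit in clause)
                for clause in cnf):
            return i + 1
    return None  # "undefined": the candidate settled every listed instance

# a toy instance list standing in for the coding of sat into omega
instances = [([[1]], 1), ([[1], [-1]], 1), ([[1, 2], [-2]], 2)]
assert counterexample(lambda cnf, n: None, instances) == 1
assert counterexample(satisfiable, instances) is None
```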

As defined, f is noncomputable. It will also turn out to be at
least as fast-growing as the Busy Beaver function, since in its peaks
it tops all intuitively total recursive functions.
The idea in the proof of that fact goes as follows:

• Use the s–m–n theorem to obtain Gödel numbers for an infinite
family of "quasi-trivial machines" — soon to be defined. The
table for those Turing machines involves very large numbers, and
the goal is to get a compact code for that value in each quasi-
trivial machine so that their Gödel numbers are in a sequence
c(0), c(1), c(2), . . ., where c is primitive recursive.
• Then add the required clocks as in the BGS sequence of poly
machines, and get the Gödel numbers for the pairs machine +
clock. We can embed the sequence we obtain into the sequence of
all Turing machines.
• Notice that the subsets of poly machines we are dealing with are
(intuitive) recursive subsets of the set of all Turing machines. More
precisely, if we formalize everything in some theory S, then the
formalized version of the sentence “the set of Gödel numbers for
these quasi–trivial Turing machines is a recursive subset of the
set of Gödel numbers for Turing machines” holds for the standard
model for arithmetic in S, and vice versa.
However, S may not be able to prove or disprove that assertion,
that is to say, such assertions will sometimes be formally indepen-
dent of S.
• We define the counterexample functions over the desired set(s) of
poly machines, and compare them to fast-growing total recursive
functions over similar restrictions.

Definition C.1. For f, g : ω → ω,

f dominates g ↔Def ∃y ∀x (x > y → f (x) ≥ g(x)).

We write f ≻ g for "f dominates g."
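Definition C.1 quantifies over all x, so domination is not decidable; still, it can be illustrated on a finite window. A small sketch (ours, with sample functions):

```python
def dominates_on(f, g, y, horizon):
    # finite-window check of: for all x with y < x < horizon, f(x) >= g(x)
    return all(f(x) >= g(x) for x in range(y + 1, horizon))

# 2^x eventually tops x^3: the threshold y = 9 works, y = 1 does not
exp2, cube = (lambda x: 2 ** x), (lambda x: x ** 3)
assert dominates_on(exp2, cube, 9, 500)
assert not dominates_on(exp2, cube, 1, 500)
```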



Quasi-trivial machines: intuitions


The counterexample function is highly oscillating. Since it collects
the counterexamples for a problem in the class N P , out of a natural
listing of the poly machines, there will be cases where the coun-
terexample takes a long time to reveal itself, while the very next
poly machine outputs a counterexample in the first trials. So the
idea here is to construct a family of poly machines whose counterexamples can be seen as the values of a fast-growing function; a
function which grows as fast as one wishes.
So, we construct a family of poly machines that fits our purpose:
• Consider an exponential algorithm for the problem we are deal-
ing with — it will settle all instances of the question, albeit in
exponential time. Call it E.
• Build the following algorithm A:
— A = E for any input up to instance k.
— A = 0 for all inputs > k.
• A is a poly machine, and the value of the counterexample function
at (the Gödel number of) A is k + 1.
• Now make k as large as one wishes.
Notice that k requires about log k bits to be described. Such a
value must be coded in our machine's Gödel number, which will be
very large for large k. However, we need a kind of controlled growth
in the Gödel numbers of the A-machine family. So, instead of explicitly
encoding k as a numeral, we use it as the value of a function; we
put k = g(i), for a fast-growing function g. Here the s–m–n theorem
comes in handy: it ensures that the modified A's Gödel numbers
grow as a primitive recursive function, but not beyond that.

Quasi-trivial machines: formal treatment


Recall that the operation time of a Turing machine is given as follows:
given that x is the Turing machine’s input (in binary form) and |x|
is its length, if M stops over an input x, then we have the following
definition.

Definition C.2. The operation time over x,

tM = |x| + number of cycles of the machine until it stops.

Example C.3.
• First trivial machine. Call it O. O inputs x and stops.

tO = |x| + moves to halting state + stops.

So, the operation time of O has a linear bound.
• Second trivial machine. Call it O′. It inputs x, always outputs
0 (zero) and stops.
Again the operation time of O′ has a linear bound.
• Quasi-trivial machines. A quasi-trivial machine Q operates as
follows: for x ≤ x0, x0 a constant value, Q = R, R an arbitrary
total machine. For x > x0, Q = O or O′.
This machine also has a linear bound.

Remark C.4. Now let H be any fast-growing, superexponential total
machine, and let H′ be a total Turing machine. Form the following family
Q... of quasi-trivial Turing machines with subroutines H and H′:

(1) If x ≤ H(n), QH,H′,n(x) = H′(x);
(2) If x > H(n), QH,H′,n(x) = 0.

Proposition C.5. There is a family Rg(n,|H|,|H′|)(x) = QH,H′,n(x),
where g is primitive recursive, and |H|, |H′| denote the Gödel numbers
of H and of H′.

Proof. By the composition theorem and the s–m–n theorem.


Now let T be the usual, exponential algorithm that computes the
truth values of the elements of sat.

Remark C.6. Very important! Recall that we are interested in
quasi-trivial machines where H′ = T, that is, the subroutine H′ is, say,
the standard truth-table exponential algorithm for sat.

We first give a result for the counterexample function when
defined over all Turing machines (with the extra condition that the
counterexample function = 0 if Mm isn't a poly machine). We have
the following proposition.

Proposition C.7. If N(n) = g(n) is the Gödel number of a quasi-trivial machine as in Remark C.4, then f(N(n)) = f(g(n)) = H(n) + 1.

Proof. Use the machines in Proposition C.5 and Remark C.6.

Appendix D. The counterexample function f


Our goal here is to prove the following result:

Proposition D.1. For no total recursive function h does h ≻ f hold.

Sketch of proof. The idea behind the argument goes as follows:
suppose that there is a total recursive function, say, h, that tops —
dominates — the counterexample function f. Then we identify a
fast-growing segment in the counterexample function f that
overtakes h infinitely many times.
How are we to proceed? We use the quasi-trivial machines. Pick
some function g that dominates h. We will try to "clone" that
function g in a subset of the values of f — or better, we'll show that
g already sits, cloned, inside f.
We construct a denumerable family of quasi-trivial machines Qn
with Gödel numbers c(n), c a primitive recursive function, so that
the values of f at those machines are given by

f(c(n)) = g(n).

If g is moreover chosen so that it dominates the total recursive
function h ∘ c, then f(c(n)) = g(n) > h(c(n)) for all large enough n,
and h does not dominate f.

Appendix E. BGS-like sets


We require here the BGS (see Baker et al., 1975) set of poly machines:

⟨Mm, |x|^a + a⟩,



where we couple a Turing machine Mm to a clock regulated by the
polynomial |x|^a + a, that is, the clock stops Mm after |x|^a + a steps in the
operation over x, where x is the machine’s binary input and |x| its
bit-length.
The BGS set is constructed out of a recursive enumeration of all
Turing machines coupled to a clock as described (that is, another
Turing machine that shuts its partner down once it has operated for
a prescribed number of steps).
The BGS set has the following properties:

• Every machine in the BGS set is a poly Turing machine. (Obviously, by construction.)
• Every poly Turing machine has a BGS machine that emulates it.

In fact, given one particular poly machine there will be infinitely
many BGS machines which emulate it. And:

Proposition E.1. The BGS set is recursive.

(This contrasts with the fact that the set of all poly machines
isn’t even recursively enumerable.)
A more general machine-clock couple will also be used here:

⟨Mm, |x|^f(a) + f(a)⟩ → Mc(m,|f|,a).

Its Gödel number is given by c(m, |f|, a), with c primitive recursive
by the s–m–n theorem, and f at least intuitively recursive.

Remark E.2. Notice that we can have c such that, for parameters
a, b, if a < b, then c(. . . a . . .) < c(. . . b . . .).

It is a generalization of the BGS set. It also satisfies:

• Every machine in the above-described generalization of the BGS
set is a poly Turing machine. (Obviously, by construction.)
• Every poly Turing machine has a generalized BGS machine that
emulates it.

The generalized BGS set is also recursive.



Appendix F. An example
As an example recall that P < N P is given by a Π2 arithmetic
sentence, that is, a sentence of the form "for every x there is a y such
that P(x, y)," where P(x, y) is primitive recursive. Given our theory
S with enough arithmetic in it, S proves a Π2 sentence ξ if and only
if the associated Skolem function fξ is proved to be total recursive by
S. For P < N P , the Skolem function is what we have been calling
the counterexample function.

Remark F.1. However, there are infinitely many counterexample
functions we may consider. Why is it so? For many adequate, reasonable theories S, we can build a recursive (computable) scale of
functions,14 f0, f1, . . . , fk, . . . , an infinite set of S-provably total
recursive functions such that f0 is dominated by f1, which is in turn
dominated by f2, and so on.
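One standard way to build such a scale is by iteration, each level dominating the one below; this sketch is ours, in the Grzegorczyk style, and is not the book's own construction:

```python
def scale(k):
    # f_0(n) = n + 2;  f_{k+1}(n) = f_k applied n times to n
    if k == 0:
        return lambda n: n + 2
    f = scale(k - 1)
    def fk(n):
        v = n
        for _ in range(n):
            v = f(v)
        return v
    return fk

assert scale(0)(5) == 7
assert scale(1)(5) == 15          # n + 2n = 3n
assert scale(2)(3) == 81          # 3 -> 9 -> 27 -> 81
# each level dominates the previous one, checked on a sample window
assert all(scale(2)(n) > scale(1)(n) for n in range(2, 12))
```

Each level here is provably total in very weak theories; a scale whose totality proofs exhaust the strength of S requires the finer construction the authors allude to.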

Given each function fk, we can form a BGS-like set BGSk, where
clocks in the time-polynomial Turing machines are bounded by a
polynomial:

|x|^fk(n) + fk(n),

where |x| denotes the length of the binary input x to the machine.
We can then consider the recursive set

⋃k BGSk

of all such sets. Each BGSk contains representatives of all poly
machines (time-polynomial Turing machines). Now, what happens
if there is no such g, but there are functions gk which dominate
each particular fk, while the sequence g0, g1, . . . is unbounded in S,
that is, grows as the sequence F0, F1, . . . in S?

14
Such a “scale of functions” exists and can be explicitly constructed.

Exotic BGSF machines


Now let F be a fast-growing, intuitively total algorithmic function. We
consider the exotic BGSF machines already described, that is, poly
machines coded by the pairs ⟨m, a⟩, which code Turing machines Mm
with bounds |x|^F(a) + F(a). Since the bounding clock is also a Turing
machine, now coupled to Mm, there is a primitive recursive map c so
that:

⟨Mm, |x|^F(a) + F(a)⟩ → Mc(m,|F|,a),

where Mc(m,|F|,a) is a poly machine within the sequence of all Turing
machines. We similarly obtain a g as above, and the following holds.

Proposition F.2. Given the counterexample function fck defined
over the BGSk-machines, for no ZFC-provable total recursive h does
h ≻ fck hold.

Proof. As in Proposition D.1, use the Gödel-number-coding primitive
recursive function c to give the Gödel numbers of the quasi-trivial
machines used in the proof.

Remark F.3. Notice that we have a — may we call it reasonable? —
formalization for our main question:

[P < NP]k ↔ [P < NP]^fk.

Also, S ⊢ [P < NP]k ↔ [fck is total]. Our analysis will give estimates for the growth rate of each counterexample function fck.

We can state, for total fck , the following proposition.

Proposition F.4. For each j, there is a k, k > j + 1, such that S proves
the sentence "fk doesn't dominate the BGSk counterexample function fck."

A caveat: we cannot conclude that "for all j, we have that . . . ," since
that would imply that S proves "for all j, fj is total" as a scholium,
which cannot be done (as that is equivalent to "FS is total," which
again cannot be proved in S).

What can be concluded: let S′ be the theory S + "FS is total."
Then, we have the following proposition.

Proposition F.5. If S is consistent and if fck is total in a model
with standard arithmetic for each k, then S′ proves: there is no proof
of the totality of fck, for any k, in S.

Proof. See the discussion above.

Remark F.6. Notice that:

• S′ ⊢ ∀k ([P < NP]k ↔ [fck is total]), while S cannot prove it.
• S′ ⊢ ∀k ([P < NP]k ↔ [P < NP]), while again S cannot prove it.
• S′ is S + [S is Σ1-sound].

Remark F.7. It means that we can conclude:

S′ proves that, for every k, S cannot prove [P < NP]k.

Now: do the [P < NP]k adequately translate our main question?

Remark F.8. Notice that theory S + "FS is total" is the same as
theory S + "S is Σ1-sound."

Now forget the technicalities and ponder those results for a while.
Waving hands, they mean that:

• S is totally blind with respect to some property X within itself.
• Meanwhile S′, which includes S, is positive about the following: S′
asserts that there is no X within S.

Yet whether in fact no X exists within S remains an open question.

Appendix G. Discussion and more intuitions


f and the infinitely many f_c are very peculiar objects. They are fractal-like in the following sense: the essential data about NP-complete questions is reproduced mirror-like in each of the f (or over
each BGS_k). The different BGS_k are distributed over the set of all Turing machines by the primitive recursive function c(m, k, a).
Also, we cannot argue within S that, for all k, f^k dominates . . . , as that would imply the totality of the recursive function F_S.
It is interesting to keep in mind a picture of these objects. First notice that the BGS and BGS_k machines are interspersed among the Turing machines. The quasi-trivial Turing machines have their Gödel numbers given by the primitive recursive function c(k, n) — we forget about the other parameters — where:

• k refers to f^k and to BGS_k, as already explained;
• n is the argument in f^k(n).

So the fast-growing function f^k is sort of cloned among the values of the BGS_k counterexample function, while slightly slowed down by c. (Recall that c is primitive recursive, and cannot compete in growth power with the f^k.)
Function f^k compresses what might be a very large number into a small code given by the Gödel number of g_k and by n (recall that the length of f^k(n) is of the order of log f^k(n)). The effect is that all functions f^j, j < k, embedded into the k-counterexample function via our quasi-trivial machines keep their fast-growing properties and allow us to prove that the counterexample function is fast-growing in its peaks for BGS_k.
For j > k, the growth power of f^k doesn’t compensate for the length of the parameters in the bounding polynomial that regulates the coupled clock in the BGS_k machines.
Finally, while j < k, the compressed Gödel numbers of the quasi-trivial machines — they depend on the exponent and constant of the polynomial x^{f^k(n)} + f^k(n), which regulates the clock — grow much more slowly than the growth rate of the counterexample function over these quasi-trivial machines (depending on f^j), and so their fast-growing properties come out clearly.
A final remark: we stress here that something may look correct if we look at it, say, with naïve eyes. Yet it may be the case that the landscape perceived from within a formal framework like our S or S′
is wildly counterintuitive, and contradicts the naïve intuition. This seems to be what happens here.

Appendix H. An application: Maymin’s theorem


A brief scenario

We start here from a recent intriguing result by Maymin (see Doria and Cosenza, 2016; Maymin, 2011), which relates efficient markets to the P vs. NP question.15 Roughly, a Maymin market is a market coded by a Boolean expression. We are going to make a move in the market; our move now is determined by a series of k previous moves.

Definition H.1.
• A k-run policy σ_k, k a positive integer, is a series of plays (b for buy and s for sell) of length k. There are clearly 2^k possible k-run policies.
• A map v from all possible k-run policies into {0, 1} is a valuation; we have a “gain” if and only if v(σ_k) = 1, and a “loss” otherwise.
• A policy is successful if it provides some gain (adequately defined); in that case we put v(σ_k) = 1. Otherwise v(σ_k) = 0.

There is a natural map between these objects and k-variable Boolean expressions (see below), if we take v(σ_k) = 1 to mean that σ_k is satisfiable, and 0 otherwise. We say that a market configuration (k-steps market configuration, or simply k-market configuration) is coded by a Boolean expression in disjunctive normal form (dnf). That map between k-market configurations and k-variable Boolean expressions in dnf can be made one-to-one. The financial game for our simplified market is simple: we wish to discover the fastest way to algorithmically obtain a successful k-market configuration, given a particular market (i.e., a given k-variable Boolean expression). Finally, the k-market configurations are Maymin-efficient (see below) if v can be implemented by a poly algorithm.
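The correspondence between k-run policies and truth tables can be made concrete in a small sketch (our toy illustration, not part of the original argument; the particular valuation v chosen here is an arbitrary assumption):

```python
from itertools import product

k = 3  # length of a run; there are 2**k possible k-run policies

# All k-run policies: length-k series of plays, 'b' (buy) or 's' (sell).
policies = [''.join(p) for p in product('bs', repeat=k)]

# A valuation v maps each k-run policy into {0, 1}; v(sigma) = 1 means "gain".
# As an arbitrary example, call a policy successful when its last play is a buy.
def v(sigma):
    return 1 if sigma.endswith('b') else 0

# Reading 'b' as 1 and 's' as 0, each policy is a line of truth values, so v
# is exactly the truth table of some k-variable Boolean expression.
truth_table = [v(sigma) for sigma in policies]
print(len(policies), truth_table)   # 8 [1, 0, 1, 0, 1, 0, 1, 0]
```

Any such truth table determines, by Post's result below, a dnf Boolean expression coding the market configuration.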

15
Based on “A beautiful theorem,” already cited.

We restrict our analysis to the so-called weakly efficient markets. Since one adds the condition that there is a time-polynomial algorithmic procedure to spread the data about the market, we name those markets Maymin-efficient markets, where (we stress) v(σ_k) is computed by a time-polynomial Turing machine (or poly machine). So the existence of general poly procedures characterizes the market as Maymin-efficient. We then have the following theorem, due to Maymin.

Proposition H.2. Markets are (Maymin) efficient if and only if P = NP.

Now we put: markets are almost Maymin-efficient if and only if there is an O’Donnell algorithm to determine their successful policies (see da Costa and Doria, 2016). Then, we have the following proposition.

Proposition H.3. If P < NP isn’t proved by primitive recursive arithmetic, then there are almost Maymin-efficient markets.

Appendix I. Details
The main idea is very simple: we are going to code Maymin-efficient markets as Boolean expressions. We use a result by E. Post. The 2^k binary sequences naturally code the integers from 0 to 2^k − 1; more precisely, from

000 . . . 00, k digits,

to:

111 . . . 11, k digits.

Fix one such coding; a k-digit binary sequence is seen as a sequence


of truth values for a Boolean expression Ξk .

Proposition I.1. Let ξ_k be a binary sequence of length 2^k. Then there is a Boolean expression Ξ_k on k Boolean variables so that ξ_k is its truth table.

(We take 1 as “true” and 0 as “false.”) The idea of the proof goes as follows:

¬p1 ∧ p2 ∧ p3 ∧ ¬p4 ∧ ¬p5

is satisfied by the binary 5-digit line:

01100.

(When there is a ¬ in the conjunction, put 0 in the line of truth values; if not, put 1.)
Trivially, every k-variable Boolean expression gives rise to a 2^k-length truth table, which we can code as a binary sequence of, again, 2^k bits. The converse result is given by Post’s theorem.
Sketch of proof. Consider the k-variable Boolean expression:

ζ = α1 p1 ∧ α2 p2 ∧ · · · ∧ αk pk,

where the αi are either nothing or ¬. Pick the line of truth values ζ′ = α1 α2 . . . αk, where “nothing” stands for 1 and ¬ for 0. ζ′ satisfies ζ, while no other line of truth values does. Our Boolean expression ζ is satisfied by ζ′ and by no other k-digit line of truth values.
The disjunction ζ ∨ ξ, where ξ is a k-variable Boolean expression like ζ, is satisfied by (correspondingly) two lines of truth values, and no more. And so on.
The rigorous proof of Post’s theorem is by finite induction.
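The inductive construction just sketched is easy to carry out mechanically. The following sketch (our illustration; the function names and the clause encoding are our own choices) builds, from a truth table ξ of length 2^k, a dnf expression with one conjunction per satisfying line, and checks that the expression reproduces ξ:

```python
from itertools import product

def dnf_from_truth_table(xi, k):
    """Post's construction: one conjunctive clause per line where xi is 1.
    A clause is a tuple of k literals: +i stands for p_i, -i for not-p_i."""
    clauses = []
    for line, value in zip(product([1, 0], repeat=k), xi):
        if value == 1:
            clauses.append(tuple((i + 1) if bit else -(i + 1)
                                 for i, bit in enumerate(line)))
    return clauses  # the empty disjunction stands for a totally false expression

def evaluate(clauses, line):
    """Truth value of the dnf under the assignment `line` (tuple of 0/1)."""
    return int(any(all((line[abs(l) - 1] == 1) == (l > 0) for l in clause)
                   for clause in clauses))

# Recover the truth table and compare it with the one we started from.
k = 3
xi = [0, 1, 1, 0, 0, 0, 1, 0]          # an arbitrary 2**k-bit sequence
dnf = dnf_from_truth_table(xi, k)
recovered = [evaluate(dnf, line) for line in product([1, 0], repeat=k)]
print(recovered == xi)   # True
```

Each clause fixes all k variables, so it is satisfied by exactly one line; the disjunction therefore has precisely ξ as its truth table, as in the sketch above.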
Definition I.2. The dnf Boolean expression ζ is identified with a Maymin k-market configuration.
Proposition I.3. There are Maymin-efficient markets if and only if
P = NP.
Proof. Such is the condition for the existence of a poly algorithmic
map v.

Appendix J. The O’Donnell algorithm


We are now going to describe O’Donnell’s algorithm (see da Costa
and Doria, 2016); the O’Donnell algorithm is a quasi-polynomial

algorithm for sat.16 We require the so-called BGS set of poly machines and f_c, which is the (now recursive) counterexample function to [P = NP] (see Baker et al. (1975) and da Costa and Doria (2016) for details).
Recall that a BGS machine is a Turing machine M_n(x) coupled to a clock that stops the machine when it has operated for |x|^p + p steps, where x is the binary input to the machine and |x| is its length in bits; p is an integer ≥ 1. Of course, the coupled system is a Turing machine. All machines in the BGS set are poly machines, and given any poly machine, there will be a corresponding machine in BGS with the same output as the original poly machine.
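The clock construction just described can be imitated in a few lines. This is a schematic stand-in only: real BGS machines are clocked Turing machines, while `step` below is a hypothetical single-step function we supply for illustration; what the sketch shows is the polynomial cutoff |x|^p + p.

```python
def clocked_run(step, state, x, p):
    """Run an abstract machine for at most len(x)**p + p steps (the BGS clock).
    `step` maps a state to (new_state, output_or_None); x is a bit string."""
    bound = len(x) ** p + p
    for _ in range(bound):
        state, output = step(state)
        if output is not None:
            return output        # halted within the polynomial bound
    return None                  # the clock rang: cut off, no output

# A toy machine that counts down and then outputs the input's parity of 1s.
def make_counter(x, steps_needed):
    def step(state):
        if state > 0:
            return state - 1, None
        return state, x.count('1') % 2
    return step, steps_needed

x = '10110'
step, s0 = make_counter(x, 10)
print(clocked_run(step, s0, x, 2))   # 10 < |x|**2 + 2 = 27, so it halts: 1
```

With p too small for the machine's running time, `clocked_run` returns None: the coupled system always stops, which is why every BGS machine is a poly machine.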
Again, f_c is the recursive counterexample function to P = NP. To get it:
• Enumerate all BGS machines in the natural order (one can do it, as the BGS set is recursive).
• For BGS machine P_n, f_c(n) equals the first instance of sat on which P_n fails to output a satisfying line.
O’Donnell’s algorithm is very simple: we list all BGS machines in the natural ordering. Given a particular instance x ∈ sat, we input it to P_1, P_2, . . . up to the moment when the output is a satisfying line of truth values. When we compute the time bound for that procedure, we see that it is nearly polynomial, that is, the whole operation is bounded by a very slowly growing exponential.
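The search loop just described can be sketched as follows. The "machines" here are hypothetical stand-ins (the real algorithm runs over the BGS enumeration), but the control structure, feeding x to P_1, P_2, . . . until the verifier accepts, is the one described above:

```python
def odonnell_search(x, machines, verifier):
    """Feed input x to P1, P2, ... in order; return (index, witness) for the
    first machine whose output passes the verifier."""
    for j, P in enumerate(machines, start=1):
        s = P(x)
        if verifier(x, s):
            return j, s

# Toy instance: "verify" that s is the reverse of x; early machines guess wrongly.
machines = [lambda x: x, lambda x: x.upper(), lambda x: x[::-1]]
verifier = lambda x, s: s == x[::-1]
print(odonnell_search('abc', machines, verifier))   # (3, 'cba')
```

In the real setting the cost of the search is governed by how far down the enumeration one must go, which is where the inverse of the counterexample function enters the time bound below.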
Now some requirements:
• We use the enumeration of finite binary sequences
0, 1, 00, 01, 10, 11, 000, 001, 010, 011, . . . .
If F B denotes the set of all such finite binary sequences, form the
standard coding F B → ω which is monotonic on the length of the
binary sequences.

16
Actually we deal with a slightly larger class of Boolean expressions.

• We use a binary coding for the Turing machines which is also monotonic on the length of their tables, linearly arranged; that is, a 3-line table s1, s2, s3 becomes the line s1s2s3.
We call such monotonic codings standard codings.
• We consider the set of all Boolean expressions in cnf,17 including those that are unsatisfiable, or totally false. We give it the usual coding, which is one-to-one and onto ω.
• Consider the poly Turing machine V(x, s), where V(x, s) = 1 if and only if the binary line of truth values s satisfies the Boolean cnf expression x, and V(x, s) = 0 if and only if s doesn’t satisfy x.
• Consider the enumeration P_0, P_1, P_2, . . . of the BGS machines (see Baker et al., 1975).18
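Of the requirements above, the verifying machine V(x, s) is the easiest to make concrete. Here is a minimal sketch, with a cnf expression represented (our encoding choice, not the coding used in the text) as a list of clauses of signed variable indices:

```python
def V(x, s):
    """Return 1 iff the line of truth values s satisfies the cnf expression x.
    x is a list of clauses; a clause is a list of literals, +i for p_i and
    -i for not-p_i. s is a bit string, s[i-1] being the value of p_i."""
    for clause in x:
        if not any((s[abs(l) - 1] == '1') == (l > 0) for l in clause):
            return 0        # some clause has no true literal
    return 1

# (p1 or not-p2) and (p2 or p3):
x = [[1, -2], [2, 3]]
print(V(x, '101'))   # p1=1, p2=0, p3=1: both clauses true, so 1
print(V(x, '010'))   # p1=0, p2=1, p3=0: the first clause fails, so 0
```

V clearly runs in time polynomial in the lengths of x and s, as the requirement above demands.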

We start from x, a Boolean expression in cnf, binarily coded:

• Consider x, the binary code for a Boolean expression in cnf form.
• Input x to P_0, P_1, P_2, . . . up to the first P_j so that P_j(x) = s_j and s_j satisfies x (that is, for the verifying machine, V(x, s_j) = 1).
• Notice that there is a bound j ≤ f_c^{−1}(x).
This requires some elaboration. Eventually a poly machine (in the BGS sequence) will, given x as input, produce a satisfying line for x as its output. The upper bound for the machine with that ability is given by the first BGS index at which the code for x is smaller than the value of the counterexample function at that index.
That means: we arrive at a machine M_m which outputs a correct satisfying line up to x as an input, and then begins to output wrong solutions.
• Alternatively, check V(x, 0), V(x, 1), . . . up to — if it ever happens — some s so that V(x, s) = 1; or,

17 Conjunctive normal form.
18 Have in mind that the BGS machine set is a set of time-polynomial Turing machines which includes algorithms that mimic all time-polynomial Turing machines. See above, and check Baker et al. (1975).

• Now, if f_c is fast-growing, then as the operation time of P_j is bounded by |x|^k + k, we have that k ≤ j, and therefore the bound grows as O(f_c^{−1}(x)). This will turn out to be a very slowly growing function.
Again this requires some elaboration. The BGS machines are coded by a pair m, k, where m is a Turing machine Gödel index and k is as above. So the index j by which we code the BGS machine among all Turing machines is greater than k, provided we use a monotonic coding.
More precisely, machines will have to be tested up to j; that is, the operation time will be bounded by f_c^{−1}(x) (|x|^{f_c^{−1}(x)} + f_c^{−1}(x)).
Again notice that the BGS index j ≥ k, where k is the degree of the polynomial clock that bounds the poly machine.

Appendix K. Almost Maymin-efficient markets


More on almost Maymin-efficient markets.
For a theory S with enough arithmetic — we leave it vague — and with a recursively enumerable set of theorems, there is a recursive, total function g that dominates every provably total recursive function h.
Suppose now that we conjecture that the formal sentence P < NP isn’t proved by primitive recursive arithmetic. Then the counterexample function f_c will be at least of the order of growth of Ackermann’s function.
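To get a feeling for the order of growth invoked here, the standard two-argument Ackermann function can be sketched as follows (this is the textbook version, used only to illustrate non-primitive-recursive growth; it is not claimed to be the text's F_S or f_c):

```python
import sys
sys.setrecursionlimit(100000)   # the naive recursion gets deep very quickly

def ackermann(m, n):
    """Textbook Ackermann function: total recursive, not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Growth explodes already at m = 3: ackermann(3, n) = 2**(n + 3) - 3.
print([ackermann(3, n) for n in range(5)])   # [5, 13, 29, 61, 125]
```

Already ackermann(4, 2) has tens of thousands of digits; a counterexample function with at least this order of growth makes its inverse, which enters the O'Donnell bound, extremely slowly growing.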
Given that condition, we can state:

Proposition K.1. If P < NP isn’t proved by primitive recursive arithmetic, then there are almost Maymin-efficient markets.

Notice that we require very little in our discussion — the main tool is Post’s theorem.

Appendix L. A wild idea: inaccessible cardinals?


We conjecture: the existence of some of those fast-growing functions depends on inaccessible cardinals. Recall that a strongly

inaccessible cardinal λ satisfies:


(1) λ > ω.
(2) If α is a cardinal and α < λ, then 2^α < λ.
(3) For every family βi, i ∈ ι, with ι < λ and each βi < λ, we have supi(βi) < λ.
Let Consis(T) mean the (usual) sentence that asserts that theory T is consistent. Let Card(λ) mean that λ is a cardinal, and let SInac_T(λ) mean that λ is strongly inaccessible for theory T. Finally, let F be the fast-growing, partial recursive function that appears in the exotic formulation:

Proposition L.1. There is a λ_F so that:

(ZFC + [F is total]) ⊢ Card(λ_F) ∧ SInac_ZFC(λ_F).
Tentative sketch of proof, to be further developed.
(1) As we suppose that Consis(ZFC + [F is total]) holds, it has a model M.
(2) Now, ZFC + [F is total] ⊢ Consis(ZFC).
(3) It is a theorem of ZFC that:

Consis(ZFC) ↔ ∃x [x |= ZFC].

(We can also take this as a definition for Consis(T).)
(4) Given that ZFC + [F is total] ⊢ Consis(ZFC), there is a set x ∈ M that is a model for ZFC.
(5) Write V_λ = M − x.
(6) Since V_λ is nonempty and the axiom of choice holds, there are ordinals in it.
(7) Therefore, there is at least one cardinal in V_λ.
(8) Pick the smallest of such cardinals; denote it λ:
(a) One easily sees that for each cardinal α ∈ V, λ is different from 2^α.
(b) Also, for each sequence βi, etc., λ is different from supi βi.
(Both conditions hold because, if not, λ would be in V.)

(9) Finally, for all cardinals α ∈ V, λ > α. For if not, there would be a β ∈ V with λ < β, and λ would be in V.
This also means that V is in fact a set.

This argument doesn’t show that the existence of this particular inaccessible cardinal λ proves P < NP; it only shows (or purports to show) that our extended theory ZFC + [F is total] implies the existence of an inaccessible cardinal. One must now show that f*(ℵ0) can be interpreted as an inaccessible cardinal, and that it then implies P < NP. We would then have λ ≤ f*(ℵ0).

References
Abraham, R. and Marsden, J. (1978). Foundations of Mechanics, Second Edition.
Addison-Wesley.
Adler, A. (1969). Some recursively unsolvable problems in analysis. Proceedings
of the American Mathematical Society, 22, 523–526.
Arnol’d, V. I. (1976). Problems of present day mathematics. XVII (Dynamical
systems and differential equations). Proceedings of Symposia in Pure Math-
ematics, 28, 59.
Ash, C. J. and Knight, J. (2000). Computable Structures and the Hyperarithmeti-
cal Hierarchy. Elsevier.
Baker, T., Gill, J. and Solovay, R. (1975). Relativizations of the P =?N P ques-
tion. SIAM Journal on Computing, 4, 431–442.
Balzer, W., Moulines, C. U. and Sneed, J. D. (1987). An Architectonic for Science:
the Structuralist Program. Synthèse Library, D. Reidel.
Black, M. (1933). The Nature of Mathematics, London.
Ben-David, S. and Halevy, S. (s/d). On the independence of P vs. N P . Preprint
Technion.
Chaitin, G. J., da Costa, N. C. A. and Doria, F. A. (2011). Gödel’s Way. CRC
Press.
Cho, Y. M. (1975). Higher-dimensional unifications of gravitation and gauge the-
ories. Journal of Mathematical Physics, 16, 2029–2037.
Corson, E. M., (1953). Introduction to Tensors, Spinors and Relativistic Wave-
Equations. Blackie & Sons.
da Costa, N. C. A. (1980). Ensaio sobre os Fundamentos da Lógica. São Paulo:
Hucitec-Edusp.
da Costa, N. C. A. and Chuaqui, R. (1988). On Suppes’ set-theoretic predicates.
Erkenntnis, 29, 95–112.
da Costa, N. C. A. and Doria, F. A. (1991a). Undecidability and incompleteness in
classical mechanics. International Journal of Theoretical Physics, 30, 1041–
1073.

da Costa, N. C. A. and Doria, F. A. (1991b). Classical physics and Penrose’s


thesis. Foundation of Physics Letters, 4, 363–373.
da Costa, N. C. A. and Doria, F. A. (1992a). Gödel incompleteness in analy-
sis, with an application to the forecasting problem in the social sciences.
Philosophica, 50, 1–24.
da Costa, N. C. A. and Doria, F. A. (1992b). Suppes predicates for classical
physics, in J. Echeverrı́a et al. (eds.), The Space of Mathematics. Berlin:
Walter de Gruyter.
da Costa, N. C. A. and Doria, F. A. (1994a). Suppes predicates and the con-
struction of unsolvable problems in the axiomatized sciences. in Patrick Sup-
pes, Mathematician, Philosopher. Humphreys, P. (ed.), Kluwer Academic
Publishers.
da Costa, N. C. A. and Doria, F. A. (2003/2006). Consequences of an exotic
formulation for P = N P . Applied Mathematics and Computation, 145,
655–665. Also Addendum. Applied Mathematics and Computation, 172,
1364–1367.
da Costa, N. C. A. and Doria, F. A. (1993a). On Arnol’d’s Hilbert symposium
problems, in Gottlob, G., Leitsch, A., Mundici, D. (eds.), Proceedings of
the 1993 Kurt Gödel Colloquium: Computational Logic and Proof Theory.
Lecture Notes in Computer Science, vol. 713, p. 152. Springer.
da Costa, N. C. A. and Doria, F. A. (1993b). An undecidable Hopf bifurcation
with an undecidable fixed point. Preprint CETMC-19.
da Costa, N. C. A. and Doria, F. A. (1994b). Gödel incompleteness in analy-
sis, with an application to the forecasting problem in the social sciences.
Philosophia Naturalis, 31, 1–24.
da Costa, N. C. A. and Doria, F. A. (2003a). Metamathematics of physics.
Preprint, IEA-USP.
da Costa, N. C. A. and Doria, F. A. (2005). Computing the future, in K. Vela
Velupillai, (ed.), Computability, Complexity and Constructivity in Economic
Analysis. Blackwell.
da Costa, N. C. A. and Doria, F. A. (2007). Janus-faced physics: on Hilbert’s 6th
problem, in Calude, C. (ed.), Randomness and Complexity: from Leibniz to
Chaitin. World Scientific.
da Costa, N. C. A. and Doria, F. A. (2015). Variations on a complex theme.
Preprint.
da Costa, N. C. A. and Doria, F. A. (2016). On the O’Donnell algorithm for
N P -complete problems. Review of Behavioral Economics, 3, 221–242.
da Costa, N. C. A., Doria, F. A. and Bir, E. (2007). On the metamathematics of
the P vs. N P question. Applied Mathematics and Computation, 189, 1223–
1240.
da Costa, N. C. A., Doria, F. A. and Furtado-do-Amaral, A. F. (1993). Interna-
tional Journal of Theoretical Physics, 32, 2187.
da Costa, N. C. A., Doria, F. A. and de Barros, J. A. (1990). A suppes predicate
for general relativity and set–theoretically generic spacetimes. International
Journal of Theoretical Physics, 29, 935.

Davis, M. (1982). Hilbert’s tenth problem is unsolvable. in Computability and


Unsolvability, Dover.
DeMillo, R. A. and Lipton, R. J. (1979). Some connections between computational
complexity theory and mathematical logic. in Proceedings of the 12th Annual
ACM Symposium on the Theory of Computing, pp. 153–159.
DeMillo, R. A. and Lipton, R. J. (1980). The consistency of P = N P and related
problems with fragments of number theory. in Proceedings of 12th Annual
ACM Symposium on the Theory of Computing, pp. 45–57.
Dirac, P. A. M. (1967). The Principles of Quantum Mechanics. Oxford University
Press.
Doria, F. A. (1975). A Dirac-like equation for the gravitational field. Lettere al Nuovo Cimento, 14, 480–483.
Doria, F. A. (1977). A Lagrangian formulation for noninteracting high–spin fields.
Journal of Mathematical Physics, 18, 564–571.
Doria, F. A. (2000). Is there a simple, pedestrian, arithmetic sentence which is
independent of ZFC? Synthese, 125, # 1/2, 69–79.
Doria, F. A. (2007). Informal vs. formal mathematics. Synthèse, 154, 401–415.
Doria, F. A. (2016). A monster lurks in the belly of complexity theory. Preprint.
Doria, F. A., Furtado do Amaral, A. F. and Abrahão, S. M. (1986). A Dirac–like
equation for gauge fields. Progress of Theoretical Physics, 75, 1440–1446.
Doria, F. A. (2014). Physics: A Primer, Klam/LabFuzzy.
Doria, F. A. and Cosenza, C. A. (2016). A beautiful theorem. in Akama, S. (ed.),
Towards Paraconsistent Engineering. Springer.
Emch, G. (1972). Algebraic Methods in Statistical Mechanics and Quantum Field
Theory. Wiley.
Einstein, A. (1967). The Meaning of Relativity. Methuen.
Einstein, A. et al. (s/d). The Principle of Relativity. Dover.
Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica
und verwandter Systeme, I. Monatshefte für Mathematik und Physik, 38,
173–198.
Goel, N. S., Maitra, S. C. and Montroll, E. W. (1971). On the Volterra and other
nonlinear models of interacting populations. Reviews of Modern Physics, 44,
231–276.
Guckenheimer, J. and Holmes, P. (1983). Nonlinear Oscillations, Dynamical Sys-
tems and Bifurcations of Vector Fields. Springer.
Hertz, H. (1956). The Principles of Mechanics. Translated by Jones, D. E. and
Walley, J. T. Dover Publications.
Hirsch, M. (1985). The chaos of dynamical systems. in Fischer, P. and Smith,
W. R. (eds.), Chaos, Fractals and Dynamics. Marcel Dekker.
Holmes, P. J. and Marsden, J. (1982). Horseshoes in perturbations of Hamilto-
nian systems with two degrees of freedom. Communications in Mathematical
Physics, 82, 523–544.
Jackson, J. D. (1962). Classical Electrodynamics. Wiley.
Jeans, J. H. (1925). Mathematical Theory of Electricity and Magnetism. Cam-
bridge University Press.

Kleene, S. C. (1936). General recursive functions of the natural numbers. Math-


ematische Annalen, 112, 727–742.
Kreisel, G. (1976). A notion of mechanistic theory. in Suppes, P. (ed.), Logic and
Probability in Quantum Mechanics, Reidel, D.
Lanczos, C. (1977). The Variational Principles of Mechanics. University of Toronto Press.
Lewis, A. A. (1991a). On turing degrees of walrasian models and a general impos-
sibility result in the theory of decision making. Preprint, University of Cali-
fornia at Irvine, School of Social Sciences.
Lewis, A. A. and Inagaki, Y. (1991b). On the effective content of theories.
Preprint, University of California at Irvine, School of Social Sciences.
Lichtenberg, A. J. and Lieberman, M. A. (1983). Regular and Stochastic Motion.
Springer.
Lotka, A. J. (1956). Elements of Mathematical Biology. Dover.
Maymin, P. (2011). Markets are efficient if and only if P = N P . Algorithmic
Finance, 1(1), 1–22.
Machtey, M. and Young, P. (1979). An Introduction to the General Theory of
Algorithms. North-Holland.
MacLane, S. (1968). Geometrical Mechanics. Monograph, Department of Mathe-
matics, University of Chicago.
Maxwell, J. C. (1954). A Treatise on Electricity and Magnetism, I and II. Dover.
Nicolis, G. and Prigogine, I. (1977). Self-organization in Nonequilibrium Systems.
Wiley.
O’Rahilly, A. (1965). Electromagnetic Theory, I and II. Dover.
Penrose, R. (1989). The Emperor’s New Mind. Oxford University Press.
Richardson, D. (1968). Some undecidable problems involving elementary func-
tions of a real variable. Journal of Symbolic Logic, 33, 514–520.
Rogers, H., Jr. (1967). Theory of Recursive Functions and Effective Computability. McGraw-Hill.
Ruelle, D. and Takens, F. (1971). On the nature of turbulence. Communications
in Mathematical Physics, 20, 167–192.
Samuelson, P. A. (1967). Foundations of Economic Analysis, Atheneum.
Scarpellini, B. (1963). Zwei unentscheidbare probleme der analysis. Z. Math. Logic
& Grudl. Math., 9, 265–289.
Seligman, B. B. (1971). Main Currents in Modern Economics, I–III. Chicago:
Quadrangle Books.
Stewart, I. (1991). Deciding the undecidable. Nature, 352, 664–665.
Sternberg, S. (1964). Lectures on Differential Geometry. Benjamin.
Stratton, J. A. (1941). Electromagnetic Theory. McGraw-Hill.
Suppes, P. (1967). Set-Theoretical Structures in Science. Mimeo. Stanford
University.
Suppes, P. (1988). Scientific Structures and their Representation. Preliminary
Version, Stanford University.

Tonnelat, M. A. (1965). Les Théories Unitaires de l’Électromagnétisme et de la


Gravitation. Gauthier-Villars.
Tsuji, M., da Costa, N. C. A. and Doria, F. A. (1998). The incompleteness of the
theory of games. Journal of Philosophical Logic, 27, 553–568.
Utiyama, R. (1956). Invariant theoretical interpretation of interaction. Physical Review, 101, 1597–1607.
van der Waerden, B. L. (ed.) (1968). Sources of Quantum Mechanics. Dover.
Weyl, H. (1952). Space Time Matter. Dover.
Wolfram, S. (1984). Computation theory of cellular automata. Communications
in Mathematical Physics, 96, 15.

Chapter 2

The Perfect Language∗

Gregory J. Chaitin
Federal University of Rio de Janeiro, Brazil

What follows is based on a talk originally given by the author at the


Hebrew University in Jerusalem and then, in expanded form, at the
Perimeter Institute in Canada.
I’m going to talk about mathematics, and I’d like to give you a
broad overview, most definitely a nonstandard view of some intellec-
tual history.
There is a wonderful book by Umberto Eco called The Search for
the Perfect Language, and I recommend it highly.1
In The Search for the Perfect Language, you can see that Umberto
Eco likes the Middle Ages — I think he probably wishes we were
still there. The book talks about a dream that Eco believes played a
fundamental role in European intellectual history, which is the search
for the perfect language.

∗ Reprinted from Inference: International Review of Science, Vol. 1, No. 3 (July 2015).
1
Umberto Eco, The Search for the Perfect Language, trans. James Fentress (Lon-
don, UK: HarperCollins, 1997).


What is the search for the perfect language? Nowadays, a physi-


cist would call this the search for a Theory of Everything (TOE), but
in the terms in which it was formulated originally, it was the idea of
finding, shall we say, the language of creation, the language before the
Tower of Babel, the language that God used in creating the universe,
the language whose structure directly expresses the structure of the
world, the language in which concepts are expressed in their direct,
original format.
You can see that this idea is a little bit like the attempt to find
a foundational TOE in physics.
The crucial point is that knowing this language would be like
having a key to universal knowledge. If you’re a theologian, it would
bring you closer, very close, to God’s thoughts, which is dangerous.
If you’re a magician, it would give you magical powers. If you’re a
linguist, it would tell you the original, pure, uncorrupted language
from which all languages descend.
This very fascinating book is about the quest to find that lan-
guage. If you find it, you’re opening a door to absolute knowledge,
to God, to the ultimate nature of reality.
And there are a lot of interesting chapters in this intellectual
history, one of them the Catalan Raymond Lull, who lived in the 13th century.
He was a very interesting man who had the idea of mechanically
combining all possible concepts to get new knowledge. So you would
have a wheel with different concepts on it, and another wheel with
other concepts on it, and you would rotate them to get all possible
combinations. This would be a systematic way to discover new con-
cepts and new truths. If you remember Jonathan Swift’s Gulliver’s
Travels, Swift makes fun of an idea like this.
In The Search for the Perfect Language, there is an entire chapter
about Gottfried Wilhelm Leibniz. Leibniz is wonderful because he
is universal. He knew all about Kabbalah, Christian Kabbalah and
Jewish Kabbalah, and all kinds of hermetic and esoteric doctrines,
and he knew all about alchemy. He actually ghost-authored a book
on alchemy. Leibniz knew about all these things, and he knew about

ancient philosophy, he knew about scholastic philosophy, and he also


knew about what was then called mechanical philosophy, which was
the beginning of modern science. And Leibniz saw good in all of this.
Leibniz formulated a version of the search for the perfect lan-
guage, which was firmly grounded in the magical, theological original
idea, but which is also fit for consumption nowadays, that is, accept-
able to modern ears, to contemporary scientists. This is a universal
language, which he called the characteristica universalis, which was
supposed to come with a crucial calculus ratiocinator .
The idea is to reduce reasoning to calculation, to computation,
because the most certain thing is that 2 + 5 = 7, and what is this
if not a calculation? If two people have an intellectual dispute, Leib-
niz remarked, instead of dueling they could just sit down and say,
“Gentlemen, let us compute!” and get the correct answer, and find
out who was right.
This is Leibniz’s version of the search for the perfect language.
How far did he get with it? Well, Leibniz was a person who got
bored easily and flew like a butterfly from field to field, throwing out
fundamental ideas, rarely taking the trouble to develop them fully.
One case of the characteristica universalis that Leibniz did
develop is called the calculus. This is one case where Leibniz worked
out his ideas for the perfect language in beautiful detail.
Leibniz’s version of the calculus differs from Isaac Newton’s pre-
cisely because it was part of Leibniz’s project for the characteristica
universalis. Christiaan Huygens hated the calculus. He taught Leib-
niz mathematics in Paris at a relatively late age, when Leibniz was
in his twenties. Most mathematicians start very, very young. And
Christiaan Huygens hated Leibniz’s calculus because he said that
it was mechanical, it was brainless: any fool can just calculate the
answer by following the rules, without understanding what he or she
is doing.
Huygens preferred the old, synthetic geometry proofs, where you
had to be creative and come up with a diagram and some particular
reason for something to be true. Leibniz wanted a general method.
He wanted to get the formalism, the notation, right, and have a
mechanical way to get the answer. Huygens didn’t like this, but that
was precisely the point. That was precisely what Leibniz was looking
for. The idea was that if you get absolute truth, if you have found
the truth, it should mechanically enable you to determine what’s
going on, without creativity. This is good, this is not bad. This is also
precisely how Leibniz’s version of the calculus differed from Newton’s.
Leibniz saw clearly the importance of having a formalism that led
you automatically to the answer.
Let’s now take a big jump, to David Hilbert, about a century ago.
No, first I want to tell you about an important attempt to find the
perfect language: Georg Cantor’s theory of infinite sets. This late
19th-century theory is interesting because it’s firmly based in the
Middle Ages and also, in a way, the inspiration for all of 20th-century
mathematics. This theory of infinite sets was actually theology —
mathematical theology. Normally you don’t mention that fact. The
price of admission to the field of mathematics demands that the
mathematician throw out all the philosophy, leaving only something
technical behind. So all the theology has been thrown out.
But Cantor’s goal was to understand God. God is transcendent.
The theory of infinite sets has a hierarchy of bigger and bigger infinities,
the alephs, the ℵ's. You have ℵ0, the infinity of the integers, then ℵ1,
and you keep going; in Cantor's picture each of these is the cardinality of
the set of all subsets of the previous one. (Identifying ℵ1 with the
cardinality of the real numbers in this way is precisely Cantor's continuum
hypothesis.) And very far out you get mindboggling
ℵ0 , ℵ1 , ℵ2 , ℵ3 , ℵ4 . . . .
Then you can continue with
ω + 1, ω + 2, ω + 3 . . . 2ω + 1, 2ω + 2, 2ω + 3 . . . .
These so-called ordinal numbers are subscripts for the ℵ’s, which are
cardinalities. Let’s go farther:
ℵ_{ω²}, ℵ_{ω^ω}, ℵ_{ω^{ω^ω}}, . . . ,
and there's an ordinal called epsilon-nought,

ε₀ = ω^(ω^(ω^···)),
which is the smallest solution of the equation

x = ω^x.

The corresponding cardinal

ℵ_{ε₀}

is pretty big!
God is very far off, since God is infinite and transcendent. We
can try to go in his direction. But we’re never going to get there,
because after every cardinal, there’s a bigger one, the cardinality of
the set of all subsets. And after any infinite sequence of cardinals
that you get, you just take the union of all of that, and you get a
bigger cardinal than is in the sequence. So this thing is inherently
open-ended.
This is absolutely wonderful, breathtaking stuff.
The only problem is that it’s contradictory.
The problem is very simple. If you take the universal set, the set
of everything, and you consider the set of all its subsets, by Cantor’s
diagonal argument this should have a bigger cardinality, but how can
you have anything bigger than the set of everything?
This is the paradox that Bertrand Russell discovered. Russell
looked at this and asked why you get this bad result. And if you look
at the Cantor diagonal argument proof that the set of all subsets of
everything is bigger than everything, it involves the set of all sets
that are not members of themselves,

{x : x ∉ x},

which can neither be a member of itself nor not be a member of itself. This is called the
Russell paradox.
Cantor was aware that this happens, but he wasn’t bothered
by these contradictions, because he was doing theology. We’re finite
but God is infinite, and it’s paradoxical for a finite being to try
to comprehend a transcendent, infinite being, so paradoxes are fine.
But the mathematical community was not very happy with a the-
ory that leads to contradictions. What mathematicians have done is
forget about all this theology and philosophy and try to sweep the
contradictions under the rug. There is an expurgated version of all
this called Zermelo–Fraenkel set theory, with the axiom of choice,
usually designated ZFC.
This is a formal axiomatic theory that you develop using first-
order logic, and it is an expurgated version of Cantor’s theory
believed not to contain any paradoxes.
Bertrand Russell was inspired by all of this to attempt a general
critique of mathematical reasoning, and to find a lot of contradic-
tions, a lot of mathematical arguments that lead to contradictions.
I already told you about his most famous one, the Russell paradox.
Russell was an atheist who was searching for the absolute, who
believed in absolute truth. And he loved mathematics and wanted
mathematics to be perfect. Russell went around telling people about
these contradictions in order to try to get them fixed.
Besides the paradox that there’s no biggest cardinal, and that
the set of subsets of everything is bigger than everything, there’s
also a problem with the ordinal numbers that’s called the Burali–
Forti paradox, namely that the set of all the ordinals is an ordinal
that’s bigger than all the ordinals. This works because each ordinal
can be defined as the set of all the ordinals that are smaller than it
is. (Then an ordinal is less than another ordinal if and only if it is
contained in it.)
Russell was going around telling people that reason leads to con-
tradictions. So David Hilbert, about a century ago, proposed a pro-
gram to put mathematics on a firm foundation. And basically what
Hilbert proposed is the idea of a completely formal axiomatic theory,
which is a modern version of Leibniz’s characteristica universalis and
calculus ratiocinator.
In such a formal axiomatic theory, you would have a finite num-
ber of axioms, axioms that are not written in an ambiguous natural
language. Instead, you use a precise artificial language with a simple,
regular artificial grammar. You use mathematical logic, not informal
reasoning, and you specify the rules of the game precisely. It should
be mechanical to decide whether a proof is correct.
Hilbert was a conservative. He believed that mathematics gives
absolute truth, which is an idea from the Middle Ages. You can see
evidence of the Middle Ages whenever you mention absolute truth.
Nevertheless, modern mathematicians remain enamored with absolute
truth. As Kurt Gödel said, we pure mathematicians are the last holdouts
of the Middle Ages: we still believe in the Platonic world of ideas, at
least of mathematical ideas, even though everyone else, including
philosophers, stopped believing in it a long time ago.
So mathematics gives absolute truth, said Hilbert. Every math-
ematician somewhere deep inside believes this. Then there ought to
exist a finite set of axioms, and a precise set of rules for deduction, for
inference, such that all mathematical truth is a consequence of these
axioms. You see, if mathematical truth is black or white, and purely
objective, then if you fill in all the steps in a proof and carefully use
an artificial language to avoid ambiguity, you should be able to have
a finite set of axioms we can all agree on, which, in principle, enables
you to deduce all mathematical truth. This is just the notion that
mathematics provides absolute certainty.
An important consequence of this idea goes back to the Middle
Ages. This perfect language for mathematics, which is what Hilbert
was looking for, would, in fact, give a key to absolute knowledge,
because, in principle, you could mechanically deduce all the theo-
rems from the axioms, simply by running through the tree of all
possible proofs. You start with the axioms, then you apply the rules
of inference once, and get all the theorems that have one-step proofs;
you apply them two times, and you get all the theorems that have
two-step proofs; and like that, totally mechanically, you would get
all of mathematical truth, by systematically traversing the tree of all
possible proofs.
This would not put all mathematicians out of work. In prac-
tice, this process would take an outrageous amount of time to get to
interesting results, and all the interesting theorems would be over-
whelmed by uninteresting theorems, such as the fact that 1 + 1 = 2.
It would be hard to find the interesting theorems and to separate
the wheat from the chaff. But, in principle, this would give you all
mathematical truths. You wouldn’t actually do it, but it would show
that mathematics gives absolute certainty.
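The mechanical procedure just described (apply the rules once to get all the theorems with one-step proofs, twice for two-step proofs, and so on) can be sketched as a breadth-first search over the tree of proofs. The axiom and rewrite rules below are a made-up toy system, purely for illustration:

```python
from collections import deque

def enumerate_theorems(axioms, rules, max_steps):
    """Mechanically generate every theorem provable in at most max_steps
    rule applications, level by level, exactly as in Hilbert's picture."""
    theorems = set(axioms)
    frontier = deque((a, 0) for a in axioms)
    while frontier:
        t, steps = frontier.popleft()
        if steps == max_steps:
            continue
        for rule in rules:
            for new in rule(t):
                if new not in theorems:
                    theorems.add(new)
                    frontier.append((new, steps + 1))
    return theorems

# A hypothetical formal system: one axiom "I", two rewrite rules.
rules = [lambda t: [t + "U"], lambda t: [t + t]]
theorems = enumerate_theorems(["I"], rules, max_steps=2)
# theorems now holds everything with a proof of at most two steps,
# obtained totally mechanically, interesting and uninteresting alike.
```

In a real formal axiomatic theory the same traversal works in principle; the only difficulty, as noted above, is the outrageous amount of time and the flood of uninteresting theorems.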
So this was the idea of putting mathematics on a firm foundation
and removing all doubts. This was Hilbert’s idea, about a century
ago. Meta-mathematics studies a formal axiomatic theory from the
outside. Notice that this is a door to absolute truth, following the
notion of the perfect language.
What happened? There is some good news and some bad news.
Some of the good news I already mentioned. The thing that comes
the closest to what Hilbert asked for is Zermelo–Fraenkel set theory,
and it is a beautiful axiomatic theory. I want to mention some of the
milestones in the development of this theory. One of them is the von
Neumann integers, so let me tell you about that. Remember that
Baruch Spinoza had a philosophical system in which the world is
built out of only one substance, and that substance is God. That’s
all there is. Zermelo–Fraenkel set theory is similar. Everything is sets,
and every set is built out of the empty set. That’s all there is: the
empty set, and sets built starting with the empty set.
Zero is the empty set {}, that’s the first von Neumann integer,
and in general n + 1 is defined to be the set of all integers less than
or equal to n:

n + 1 = {0, 1, 2, . . . , n}.

If you write this out in full, removing all the abbreviations, all you
have are curly braces, you have set formation starting with no con-
tent, and the full notation for n grows exponentially in n because
everything up to that point is repeated in the next number. In spite
of this exponential growth, this is a beautiful conceptual scheme.
Then you can define rational numbers as pairs of these integers,
you can define real numbers as limit sequences of rational numbers,
and you get all of mathematics, starting just with the empty set.
So it’s a lovely piece of ontology. Here’s all of mathematical creation
just built out of the empty set.
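The construction is easy to carry out directly. A minimal sketch using Python frozensets (the function names are mine, not standard):

```python
def von_neumann(n):
    """The von Neumann integer n: 0 is the empty set, and
    n+1 = {0, 1, ..., n}, i.e. the predecessor together with all
    of its members."""
    current = frozenset()                          # 0 = {}
    for _ in range(n):
        current = current | frozenset([current])   # n+1 = n ∪ {n}
    return current

def notation_size(s):
    """How many pairs of curly braces the fully written-out
    notation for the set s needs."""
    return 1 + sum(notation_size(e) for e in s)

# len(von_neumann(n)) == n, and notation_size(von_neumann(n)) == 2**n:
# the full notation grows exponentially because everything up to that
# point is repeated inside the next number.
```
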
This is a formal theory that most mathematicians believe enables
you to carry out all the arguments that normally appear in mathe-
matics — maybe if you don’t include category theory, which is very
difficult to formalize, and even more paradoxical than set theory,
from what I hear.
So that’s some of the positive work on Hilbert’s program. Now
some of the negative work on Hilbert’s program is, of course, Gödel
in 1931 and Alan Turing in 1936. What they show is that you can’t
have a perfect language for mathematics, you cannot have a formal
axiomatic theory for all of mathematics because of incompleteness,
because no such system will include all of mathematical truth. It will
always leave out truths; it will always be incomplete.
This is Gödel’s incompleteness theorem of 1931, and Gödel’s orig-
inal proof is very strange. It’s basically the paradox of “this statement
is false,” which is a paradox, of course, because it can be neither true
nor false. If it’s false that it’s false, then it’s true, and if it’s true
that it’s false, then it’s false. That’s just a paradox. But what Gödel
does is say “this statement is unprovable.” So if the statement says
of itself it’s unprovable, there are two possibilities: it’s provable, or
it isn’t. If it’s provable, then we’re proving something that’s false,
because it says it’s unprovable. So we hope that’s not the case; by
hypothesis, we’ll eliminate that possibility. If we prove things that
are false, we have a formal axiomatic theory that we’re not interested
in, because it proves false things. The only possibility left is that it’s
unprovable. But if it’s unprovable, then it’s true, because it asserts
it’s unprovable, therefore there’s a hole. We haven’t captured all of
mathematical truth in our theory.
This proof of incompleteness shocked a lot of people.
A better proof of incompleteness, a deeper proof, comes from
Turing in 1936. He derived incompleteness from a more fundamental
phenomenon, which is uncomputability, the discovery that mathe-
matics is full of stuff that can’t be calculated, of things you can define,
but which you cannot calculate, because there’s no algorithm. And, in
particular, the uncomputable thing that he discovered was the halt-
ing problem, a very simple question: does a computer program that’s
self-contained halt, or does it go on forever? There is no algorithm
to answer this in every individual case, therefore there is no formal
axiomatic theory that enables you to always prove in individual cases
what the answer is.
So Turing’s insight in 1936 was that incompleteness, which Gödel
found in 1931, for any formal axiomatic theory, comes from a deeper
phenomenon, which is uncomputability. Incompleteness is an imme-
diate corollary of uncomputability, a concept which does not appear
in Gödel’s 1931 paper.
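Turing's argument that no such algorithm can exist is short enough to sketch. Assume, hypothetically, a function halts(f) that correctly reports whether calling f() ever finishes; then one can build a program that defeats it:

```python
def make_trouble(halts):
    """Given a claimed halting-tester, construct a program it must
    get wrong (a sketch of Turing's diagonal argument)."""
    def trouble():
        if halts(trouble):   # if the tester says trouble halts...
            while True:      # ...loop forever;
                pass
        # ...otherwise return at once. Either way halts() answered
        # incorrectly, so no correct halts() can exist.
    return trouble
```

For instance, a tester that always answers "does not halt" is immediately refuted, because make_trouble(lambda f: False)() returns at once.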
But Turing’s paper has both good and bad aspects. There’s a
negative aspect of his 1936 paper, which I’ve just told you about,
but there’s also a positive aspect. You get another proof, a deeper
proof of incompleteness, but you also get a kind of completeness.
You find a perfect language.
There is no perfect language for mathematical reasoning. Gödel
showed that in 1931, and Turing showed it again in 1936. But what
Turing also showed in 1936 is that there are perfect languages, not
for mathematical reasoning, but for computation, for specifying algo-
rithms. What Turing discovered in 1936 is that there’s a kind of
completeness called universality and that there are universal Turing
machines and universal programming languages.
What universal means, what a universal programming language
or a universal Turing machine is, is a language in which every pos-
sible algorithm can be written. On the one hand, Turing showed
us in a deeper way that any language for mathematical reasoning
has to be incomplete, but, on the other hand, he showed us that
languages for computation can be universal, which is just a synonym
for completeness. There are perfect languages for computation, for
writing algorithms, even though there aren’t any perfect languages
for mathematical reasoning.
This is the positive side, this is the completeness side, of Turing’s
1936 paper.
Now, what I’ve spent most of my professional life on, is a subject
I call algorithmic information theory, which derives incompleteness
from uncomputability by taking advantage of a deeper phenomenon,
by considering an extreme form of uncomputability, which is called
algorithmic randomness or algorithmic irreducibility.
There’s a perfect language again, and there’s also a negative side,
the halting probability Ω, whose bits are algorithmically random,
algorithmically irreducible mathematical truths.

Ω = 0.010010111 . . . .

This is a place in pure mathematics where there’s no structure. If you
want to know the bits of the numerical value of the halting probabil-
ity, this is a well-defined mathematical question, and in the world of
mathematics all truths are necessary truths, but these look like acci-
dental, contingent truths. They look random; they have irreducible
complexity.
There are actually an infinite number of halting probabilities,
depending on your choice of programming language. After you choose
a language, then you ask what the probability is that a program
generated by coin tossing will eventually halt. And that gives you a
different halting probability.
The numerical value will be different; the paradoxical properties
are the same.
There are cases for which you can get a few of the first bits.
For example, if Ω starts with 1s in binary or 9s in decimal, you can
know those bits or digits, if Ω is 0.11111 . . . base two or 0.99999 . . .
base ten. So you can get a finite number of bits, perhaps, of the
numerical value, but if you have an N -bit formal axiomatic theory,
then you can’t get more than N bits of Ω. That’s sort of the general
result. It’s irreducible logically and computationally. It’s irreducible
mathematical information.
That’s the bad news. Algorithmic information theory (AIT) goes
further than Turing, and picks out, from Turing’s universal lan-
guages, maximally expressive programming languages — because
those are the ones that you have to use to develop this theory where
you get to Ω.
AIT has the notion of a maximally expressive programming lan-
guage in which programs are maximally compact, and deals with
a very basic complexity concept, which is the size of the smallest
program to calculate something.
Now we have a better notion of perfection. Universal programming
languages are not all equally good. We concentrate on a subset,
comprising the ones that enable us to write the most concise pro-
grams. These are the most expressive languages, the ones with the
smallest programs. This definition of complexity is a dry, technical
way of expressing an idea in modern terms. But let me put this
into medieval terminology, which is much more colorful. What we’re
asking is, how many yes/no decisions did God have to make to create
something? — which is obviously a rather basic question to ask, if
you consider that God is calculating the universe. I’m giving you a
medieval perspective on these modern developments.
Theology is the fundamental physics, it’s the theoretical physics
of the Middle Ages.
The notion of the universal Turing machine that is used in AIT is
Turing’s very basic idea of a flexible machine. It’s flexible hardware,
which we call software. Now, AIT picks out a particular class of
universal Turing machines U .
What are the universal computers U ? A universal computer U
has the property that, for any other computer C and its program p,
the universal computer U will calculate the same result if you give
it the original program p for C concatenated to a prefix πC , which
depends only on the computer C that you want to simulate. πC tells
U which computer to simulate. In symbols,
U (πC p) = C(p).
In other words, πC p is the concatenation of two pieces of information.
It’s a binary string. You take the original program p, which is also a
binary string, and in front of it you put a prefix that tells you which
computer to simulate. This means that these programs πC p for U
are only a fixed number of bits larger than the programs p for any
individual machine C. These U are the universal Turing machines
that you use in AIT. These are the most expressive languages. These
are the languages in which programs are as concise as possible. This
is how you define program-size complexity. God will naturally use
the most perfect, most powerful programming languages, when he
creates the world, to build everything.
AIT is concerned with particularly efficient ways for U to be
universal. Turing's original notion of universality was not this demand-
ing. The fact that you can just add a fixed number of bits to a
program for C to get one for U is not completely trivial. Let me tell
you why. After you put πC and p together, you have to know where
the prefix ends and the program that is being simulated begins. There
are many ways to do this. A very simple way to make the prefix πC
self-delimiting is to have it be a sequence of 0’s followed by a 1:

πC = 0^k 1.

And the number k of 0's tells us which machine C to simulate. That's
a very wasteful way to indicate this.
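Here is a minimal sketch of such a U, using this wasteful 0^k 1 prefix: k zeros followed by a 1 name which machine to simulate, and the remaining bits are that machine's program p. The two machines in the table are invented for illustration:

```python
# A hypothetical table of machines C_0, C_1, ... to be simulated.
machines = {
    0: lambda p: p[::-1],        # C_0 reverses its program
    1: lambda p: p.count("1"),   # C_1 counts the 1s in its program
}

def U(bits):
    """U(0^k 1 p) = C_k(p): the self-delimiting prefix picks the
    machine, and everything after the 1 is the program p."""
    k = 0
    while bits[k] == "0":        # read the prefix 0^k 1
        k += 1
    p = bits[k + 1:]
    return machines[k](p)
```

So U("1" + "011") simulates C_0 on "011", and a program for U is only the fixed number k + 1 of bits longer than the corresponding program for C_k.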
The prefix πC is actually an interpreter for the programming
language C. AIT’s universal languages U have the property that you
give U an interpreter plus the program p in this other language C,
and U will run the interpreter to see what p does.
If you think of this interpreter πC as an arbitrary string of bits,
one way to make it self-delimiting is to just double all the bits. 0 goes
to 00, 1 goes to 11, and you put a pair of unequal bits 01 as punctu-
ation at the end.

πC : 0 → 00, 1 → 11, 01 at the end.

This is a better way to have a self-delimiting prefix that you can
concatenate with p. It only doubles the size, whereas the 0^k 1 trick
increases the size exponentially. And there are more efficient ways to
make the prefix self-delimiting. For example, you can put the size of
the prefix in front of the prefix. But it’s sort of like Russian dolls,
because if you put the size |πC | of πC in front of πC , |πC | also has to
be self-delimiting:

U(. . . ||πC|| |πC| πC p) = C(p).

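The bit-doubling trick is simple to implement. A sketch (the function names are mine, for illustration):

```python
def encode_prefix(bits):
    """Make a bit string self-delimiting: double every bit, then
    append the unequal pair '01' as punctuation at the end."""
    return "".join(b + b for b in bits) + "01"

def decode_prefix(stream):
    """Read one self-delimiting prefix off the front of a bit stream,
    deciding by itself where the prefix ends.
    Returns (prefix, rest_of_stream)."""
    decoded = []
    i = 0
    while stream[i] == stream[i + 1]:        # '00' -> '0', '11' -> '1'
        decoded.append(stream[i])
        i += 2
    return "".join(decoded), stream[i + 2:]  # skip the '01' punctuation
```

For example, encode_prefix("101") gives "11001101", exactly twice the size plus two bits of punctuation, and decode_prefix can peel that prefix off the front of any longer bit string.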
Anyway, picking U this way is the key idea in the original 1960s
version of AIT that Andrey Kolmogorov, Ray Solomonoff, and I inde-
pendently proposed. But ten years later, I realized that this is not
the right approach. You actually want the whole program πC p for
U to be self-delimiting, not just the prefix πC . You want the whole
thing to be self-delimiting to get the right theory of program-size
complexity.
Let me compare the 1960s version of AIT and the 1970s version
of AIT. Let me compare these two different theories of program-size
complexity.
In the 1960s version, an N -bit string will in general need an N -bit
program, if it’s irreducible, and most strings are algorithmically irre-
ducible. Most N -bit strings need an N -bit program. These are the
irreducible strings, the ones that have no pattern, no structure. Most
N -bit strings need an N -bit program, because there aren’t enough
smaller programs. But in the 1970s version of AIT, you go from
N bits to N + log2 N bits, because you want to make the programs
self-delimiting. An N -bit string will usually need an N + log2 N bit
program.
Actually, in 1970s AIT it’s N plus H(N ), which is the size of
the smallest self-delimiting program to calculate N , which is exactly
what that logarithmic term is. In other words, in the 1970s version of
AIT, the size of the smallest program for calculating an N -bit string
is usually N bits plus the size in bits of the smallest self-delimiting
program to calculate N , which is roughly

log N + log log N + log log log N + · · ·

bits long.
That’s the Russian dolls aspect of this.
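One concrete code with exactly this Russian-dolls structure is the Elias omega code: the binary form of N is preceded by the binary form of its length, which is preceded by the length of that, and so on, for a total of about log N + log log N + · · · bits. A sketch:

```python
def elias_omega(n):
    """A self-delimiting binary code for n >= 1, built Russian-dolls
    style: each group gives the length of the next, and a final 0
    terminates the code."""
    code = "0"                  # terminator
    while n > 1:
        b = bin(n)[2:]          # binary form of n (always starts with 1)
        code = b + code         # prepend it...
        n = len(b) - 1          # ...then encode its length, recursively
    return code
```

For example, elias_omega(4) is "101000": "10" says the next group is 3 bits long, "100" is 4 in binary, and the final "0" ends the code; no blank or other punctuation symbol is needed.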
The 1970s version of AIT, which takes the idea of being self-
delimiting from the prefix and applies it to the whole program, gives
us even better perfect languages. AIT evolved in two stages. First we
concentrate on those U with

U (πC p) = C(p)

with πC self-delimiting, and then we insist that the whole thing πC p
has also got to be self-delimiting. And when you do that, you get
important new results, such as the sub-additivity of program-size
complexity,
H(x, y) ≤ H(x) + H(y),
which is not the case if you don’t make everything self-delimiting.
This just says that you can concatenate the smallest program for
calculating x and the smallest program for calculating y to get a
program for calculating x and y.
And you can’t even define the halting probability Ω in 1960s AIT.
If you allow all N -bit strings to be programs, then you cannot define
the halting probability in a natural way, because the sum for defining
the probability that a program will halt

Ω = Σ_{p halts} 2^(−size in bits of p)

diverges to infinity instead of being between zero and one. This is
the key technical point in AIT.
I want the halting probability to be finite. The normal way of
thinking about programs is that there are 2N N -bit programs, and
the natural way of defining the halting probability is that every N -bit
program that halts contributes 1/2N to the halting probability. The
only problem is that for any fixed size N there are roughly on the
order of 2N programs that halt, so if you sum over all possible sizes,
you get infinity, which is no good.
In order to get the halting probability to be between zero and
one

0 < Ω = Σ_{p halts} 2^(−size in bits of p) < 1

you have to be sure that the total probability summed over all pro-
grams p is less than or equal to one. This happens automatically if
we force p to be self-delimiting. How can we do this? Easy! Pretend
that you are the universal computer U . As you read the program bit
by bit, you have to be able to decide by yourself where the program
ends, without any special punctuation, such as a blank, at the end
of the program. This implies that no extension of a valid program is
itself a valid program, and that the set of valid programs is what’s
called a prefix-free set. Then the fact that the sum that defines Ω
must be between zero and one is just a special case of what’s called
the Kraft inequality in Shannon information theory.
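Both properties are easy to check mechanically. A sketch, using a toy prefix-free set of "halting programs" (the set is invented purely for illustration):

```python
def is_prefix_free(programs):
    """No valid program is a proper prefix of another valid program,
    i.e. no extension of a valid program is itself a valid program."""
    return not any(p != q and q.startswith(p)
                   for p in programs for q in programs)

def kraft_sum(programs):
    """Each program of size N contributes 1/2**N, as in the sum
    defining the halting probability Ω."""
    return sum(2.0 ** -len(p) for p in programs)

halting = ["0", "10", "110"]   # a toy prefix-free set
# is_prefix_free(halting) is True, and kraft_sum(halting) = 0.875,
# at most 1, as the Kraft inequality guarantees for prefix-free sets.
```
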
But this technical machinery isn’t necessary. That 0 < Ω < 1
follows immediately from the fact that as you read the program bit
by bit, you are forced to decide where to stop without seeing any
special punctuation. In other words, in 1960s AIT, we were actually
using a three-symbol alphabet for programs: 0, 1, and blank. The
blank told us where a program ends. But that’s a symbol that you’re
wasting, because you use it very little. As you all know, if you have
a three-symbol alphabet, then the right way to use it is to use each
symbol roughly one-third of the time. So if you really use only 0’s
and 1’s, then you have to force the Turing machine to decide by itself
where the program ends. You don’t put a blank at the end to indicate
that.
So programs go from N bits in size to N + log2 N bits, because
you’ve got to indicate in each program how big it is. On the other
hand, you can just take subroutines and concatenate them to make
a bigger program, so program-size complexity becomes sub-additive.
You run the universal machine U to calculate the first object x, and
then you run it again to calculate the second object y, and then
you’ve got x and y, and so

H(x, y) ≤ H(x) + H(y).

These self-delimiting binary languages are the ones that the study of
program-size complexity has led us to discriminate as the ideal lan-
guages, the most perfect languages. We got to them in two stages,
1960s AIT and 1970s AIT. These are languages for computation, for
expressing algorithms, not for mathematical reasoning. They are uni-
versal programming languages that are maximally expressive, maxi-
mally concise. We already knew how to do that in the 1960s, but in
the 1970s we realized that programs should be self-delimiting, which
made it possible to define the halting probability Ω.
That’s the story, and now maybe I should summarize all of this,
this saga of the quest for the perfect language. As I said, the search
for the perfect language has some negative conclusions and some
positive conclusions.
Hilbert wanted to find a perfect language giving all of mathemat-
ical truth, all mathematical knowledge; he wanted a formal axiomatic
theory for all of mathematics. This was supposed to be a TOE (theory of everything) for
the world of pure mathematics. And this cannot succeed, because
we know that every formal axiomatic theory is incomplete, as shown
by Gödel, by Turing, and by me. Instead of finding a perfect lan-
guage, a perfect formal axiomatic theory, we found incompleteness,
uncomputability, and even algorithmic irreducibility and algorithmic
randomness.
That’s the negative side of this story, which is fascinating from
an epistemological point of view, because we found limits to what we
can know; we found limits of formal reasoning.
Now interestingly enough, the mathematical community couldn’t
care less. They still want absolute truth! They still believe in absolute
truth, and that mathematics gives absolute truth. And if you want
a proof of this, just go to the December 2008 issue of the Notices
of the American Mathematical Society. That’s a special issue of the
Notices devoted to formal proof.
The technology has been developed to the point where they can
run real mathematics, real proofs, through proof-checkers, and get
them checked. A mathematician writes the proof out in a formal
language, and fills in the missing steps and makes corrections until
the proof-checker can understand the whole thing and verify that it
is correct. And these proof-checkers are getting smarter and smarter,
so that more and more of the details can be left out. As the tech-
nology improves, the job of formalizing a proof becomes easier and
easier. The formal-proof extremists are saying that in the future all
mathematics will have to be written out formally and verified by
proof-checkers.2

2
For a discussion of recent developments in this area, see Vladimir Voevodsky,
“Univalent Foundations” at http://www.math.ias.edu/vladimir/files/2014_IAS.pdf.
The position of these extremists is that, in the future, all mathematics
will have to be written out in a formal language, and you
will have to get it checked before submitting a paper to a human
referee, who will then only have to decide if the proof is worth pub-
lishing, not whether the proof is correct. And they want a repository
of all mathematical knowledge, which would be a database of checked
formal proofs of theorems.
I’m not disparaging this extremely interesting work, but I am
saying that there’s a wonderful intellectual tension between it and
the incompleteness results that I’ve discussed in this talk. There’s a
wonderful intellectual tension between incompleteness and the fact
that people still believe in formal proof and absolute truth. People
still want to go ahead and carry out Hilbert’s program and actually
formalize everything, just as if Gödel and Turing had never happened!
I think this is an extremely interesting and, at least for me, a
quite unexpected development.
These were the negative conclusions from this saga. Now I want
to wrap this talk up by summarizing the positive conclusions.
There are perfect languages for computing, not for reasoning.
They’re computer programming languages. And we have univer-
sal Turing machines and universal programming languages, and
although languages for reasoning cannot be complete, these universal
programming languages are complete. Furthermore, AIT has picked
out the most expressive programming languages, the ones that are
particularly good to use for a theory of program-size complexity.
So there is a substantial practical spinoff. Furthermore, since I’ve
worked most of my professional career on AIT, I view AIT as a sub-
stantial contribution to the search for the perfect language, because
it gives us a measure of expressive power, and of conceptual com-
plexity and the complexity of ideas. Remember, I said that from the
perspective of the Middle Ages, that’s how many yes/no decisions
God had to make to create something, which obviously he will do in
an optimal manner.3

3
Note that program-size complexity = size of smallest name for something.

The Perfect Language 109

From the theoretical side, however, this quest was disappointing, owing to Gödel incompleteness and because there is no TOE for pure
mathematics. In fact, if you look at the bits of the halting prob-
ability Ω, they show that pure mathematics contains infinite irre-
ducible complexity, and in this precise sense it is more like biology,
the domain of the complex, than like theoretical physics, where there
is still hope of finding a simple, elegant TOE.4
So this is the negative side of the story, unless you’re a biologist.
The positive side is we get this marvelous programming technology.
So this dream, the search for the perfect language and for absolute
knowledge, ended in the bowels of a computer, it ended in a Golem.
How would all this look to someone from the Middle Ages? This
quest, the search for the perfect language, was an attempt to obtain
magical, God-like powers.
Let’s bring someone from the 1200s here and show them a note-
book computer. You have this dead machine, it’s a machine, it’s a
physical object, and when you put software into it, all of a sudden it
comes to life!
So from the perspective of the Middle Ages, I would say that
the perfect languages that we’ve found have given us some magical,
God-like power, which is that we can breathe life into some inanimate
matter. Observe that hardware is analogous to the body, and software
is analogous to the soul, and when you put software into a computer,
this inanimate object comes to life and creates virtual worlds.
So from the perspective of somebody from the year 1200, the
search for the perfect language has been successful and has given us
some magical, God-like abilities, except that we take them entirely
for granted.
Thanks very much!

4
Incompleteness can be considered good rather than bad: it shows that mathe-
matics is creative, not mechanical.

Chapter 3

Logics and Formal Ethics


in Karl Menger

Giandomenica Becchio
University of Torino, Italy

Karl Menger's aim as a scholar was to find a new kind of logic to be applied to the social sciences (Menger, 1979). As a mathematician, he focused his attention on whether the building of social order can be described from a formalistic point of view. This chapter deals with his contributions toward finding which kind of logic, one able to include uncertainty, is used by individuals and by social groups when cohesive aggregations are formed, so as to obtain a formal way to describe ethics.
Karl Menger, the mathematician son of the Austrian economist Carl, was a supporter of formal ethics as well as a strong opponent of standard logic; in his writings, he tried to explain how n-valued logic could profitably be used to explain a system of formal ethics. Menger studied mathematics and logic in the 1920s in Vienna under Hahn, and thereafter in Amsterdam under Brouwer, one of the leading exponents of intuitionism. Even though between 1921 and 1927 Menger worked prevalently on topology and geometry (above all set theory and curve theory; Golland and Sigmund, 2000), he was also interested in economic theory. In 1921, he wrote a paper on the role of uncertainty in the decisions of economic agents, starting from the St. Petersburg paradox.


During the 1930s, his interest in a possible development of logic (basically the introduction of nonstandard logics) and the urgency of finding a formal way of describing individual behavior became central in his work. In 1939, Menger wrote an article about the difference between standard logic (the ordinary 2-valued logic) and the n-valued logics of Post and Łukasiewicz. According to Menger, a new logic would have allowed the building of a formal ethics able to explain individual and social behavior in realistic terms.

1. The search for a new logic for social sciences starting from the St. Petersburg Paradox1
During the 1920s, the philosophical climate in Vienna was influenced
by a strong rejection of both idealism and neo-Kantianism that culminated in the advent of the new empiricism developed by the Vienna Circle (founded in 1922 by Moritz Schlick). Members of the Vienna Circle reached a certain consensus on: (a) the merits of the logical analysis of language (basically they were followers of Wittgenstein's Tractatus); (b) the urgency to develop a rigorous epistemology for a scientific vision of the world; and (c) the unity of scientific explanation and knowledge in general (Stadler, 2001).2 New inquiries into epistemology and logic were also pursued, in a different way, in the Mathematical Colloquium, which was founded by Menger in 1928 (Menger, [1935] 1998).3
bers of the Mathematical Colloquium about the urgency to apply
logical-deductive method to social sciences and to give ethics and

1
This paragraph is a reduction of Becchio (2008).
2
These matters formed the core of the international movement in the 1930s, when
the philosophical position of the Vienna Circle was most prominently represented
by Carnap’s analysis of language and by Neurath’s physicalism and its program
for a unified rational reconstruction of science (including the human sciences)
(Stadler, 2006).
3
Official members of the Colloquium were Abraham Wald, Kurt Gödel, Franz
Alt, Georg Nöbeling; many other guests, such as John von Neumann and Oskar
Morgenstern, sometimes gave lectures. (Weintraub, 1983; Punzo, 1989; Golland
and Sigmund, 2000; Becchio, 2008).

economics a new foundation, based on strict logical coherence (Menger, 1935).
In 1921, Menger had already dealt with these issues. He published his first paper on "the role of uncertainty in economics" to describe the standard behavior of agents and any possible deviation from it.4 The paper5 was focused on the conflict between rational expectations and common sense, following the St. Petersburg paradox, formulated in the 18th century by Nicholas Bernoulli.6 He illustrated his famous paradox with the following situation: a coin is tossed until a head appears; if the first head appears on the nth toss, then the payoff is 2^n and the game is over. The paradox, of course, is that the mathematical expectation in this game is infinite.7 Yet, although the expected payoff is infinite, one would not suppose, at least intuitively, that a real-world person would be willing to pay an infinite amount of money to play the game. In fact, there is a quite high probability that he would collect only a few units of money.8

4
The “uncertainty” to which Menger was referring is to be intended as the ran-
domness of choices when they are made in a context of knowable probabilities.
5
When Menger wrote it, Hans Mayer, then the editor of Zeitschrift für
Nationalökonomie, refused to publish the paper because it made excessive use
of mathematical formulas. The paper was discussed in 1927 at a meeting of the
Viennese Economic Society and provoked differing reactions. It was only published
in 1934, when Oskar Morgenstern — according to Menger “one of the very few
Austrian economists who were free from prejudices against mathematical methods
in economics” (Menger, 1967, p. 259) — was appointed editor of the Zeitschrift
(Becchio, 2008).
6
Nicholas’s cousin, Daniel Bernoulli, claimed that mathematical expectations
should be replaced with “moral” expectations, and was thus the first to investigate
the meaning of so-called expected utility for a gambler who persists with playing
the same game. He defined the notion of expected utility by decomposing the
valuation of a risky venture as the sum of utilities from outcomes weighted by
the probabilities of outcomes.
7
1st toss: payoff = 2; 2nd toss: payoff = 2^2 = 4, and so on, so that E(w) = Σ (1/2^n) · 2^n = (1/2) · 2 + (1/4) · 2^2 + (1/8) · 2^3 + · · · = 1 + 1 + 1 + · · · = ∞.
8
In the St. Petersburg case, the value of the game to an agent (assuming initial wealth is zero) is E(u) = Σ (1/2^n) · u(2^n) = (1/2) · u(2) + (1/4) · u(2^2) + (1/8) · u(2^3) + · · · < ∞, which Bernoulli conjectured is finite because of the principle of diminishing marginal utility. Daniel Bernoulli's solution involved two ideas: (i) that people's utility from wealth, u(w), is not linearly related to wealth (w) but

Consequently, people would only be willing to pay a finite amount of money to play the game, even though its expected return is infinite. On replacing the first factor with the subjective (or, as he put it, "moral") value of the gain — which, Bernoulli emphasized, depends not only on the amount A of the gain but also on the wealth W of the evaluating person — he assumed the subjective value to be proportional to A and inversely proportional to W.9 By considering the subjective value, we obtain a finite expectation, and hence a solution of the paradox.
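Both halves of the paradox can be checked numerically with a short sketch (an illustration of mine, not from the chapter; the wealth W = 100 and the truncation lengths are arbitrary choices):

```python
import math

def expected_value(terms, utility=lambda x: x):
    # Sum (1/2^n) * u(2^n) over the first `terms` tosses.
    return sum(utility(2.0 ** n) / 2.0 ** n for n in range(1, terms + 1))

# Mathematical expectation: each toss contributes (1/2^n) * 2^n = 1,
# so the partial sums grow without bound.
print(expected_value(50))   # -> 50.0

# Bernoulli's "moral" value log(1 + A/W) for a person of wealth W
# (W = 100 is an arbitrary illustrative choice): the series converges.
W = 100.0
moral = lambda A: math.log(1 + A / W)
v100, v200 = expected_value(100, moral), expected_value(200, moral)
print(v100, v200)   # a small finite value; extra terms change nothing visible
```

The raw series adds 1 per toss and so diverges, while Bernoulli's weighted "moral" series is effectively constant after a few dozen terms.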
Menger's critique was focused on the fact that the subjective expectation can also be infinite. There are many cases where man's behavior fails to conform to mathematical expectations: games in which a player can win only one very large amount with a very small probability, or games offering a single moderate amount with a very high probability.10 Menger therefore proposed that utility must also be bounded above for paradoxes of this type to be resolved. Menger's explanation was that most people systematically underrate very small and very high probabilities in their economic actions, while they overrate medium probabilities. Moreover, Menger pointed out, besides a person's wealth W, one must also consider the amount U that he needs in order to continue his present standard of living, as well as the amount U′ that he absolutely needs to go on living. And the difference W − U plays a crucial role in a rational person's attitude toward risk. Finally, other "personal" parameters may modify the way a person gambles. Hence, Menger stressed that deviations of behavior are very complex: even in a game with a finite number of solutions, individual choices may disregard mathematical expectations because they are conditioned by psychological motivations or by taste (Leonard, 1995).

rather increases at a decreasing rate — the famous idea of diminishing marginal utility; and (ii) that a person's valuation of a risky venture is not based on the expected return from that venture, but rather on the expected utility from that venture.
9
Hence the subjective value became c log(1 + A/W), where c is a number that may differ among persons but is constant for each person.
10
Furthermore, we can always find a sequence of payoffs x_1, x_2, x_3, . . . which yields infinite expected value, and then propose, say, that u(x_n) = 2^n, so that expected utility is also infinite.
Menger never stopped paying attention to decision theory: in 1932 he presented "Das Botenproblem" (the Messenger Problem) at his Mathematical Colloquium (Menger, 1932). Many scholars consider this a precursor of the TSP (travelling salesman problem).11 In Menger's words, "We designate as the Messenger Problem (since this problem is encountered by every postal messenger, as well as by many travelers) the task of finding, for a finite number of points whose pairwise distances are known, the shortest path connecting the points. This problem is naturally always solvable by making a finite number of trials. Rules are not known which would reduce the number of trials below the number of permutations of the given points. The rule, that one should first go from the starting point to the point nearest this, etc., does not in general result in the shortest path" (translated by Bock, 1963). Menger was well aware that the problem is difficult to solve: no known rule does better than trying the permutations, and the greedy nearest-point rule may fail to yield an optimal solution.
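Both observations in the quotation, that the problem is always solvable by finitely many trials and that the nearest-point rule can fail, can be sketched in a few lines (an invented one-dimensional instance, not Menger's):

```python
from itertools import permutations

def path_length(path):
    # Total length of an open path through points on a line.
    return sum(abs(a - b) for a, b in zip(path, path[1:]))

def brute_force(start, points):
    # Menger: "always solvable by making a finite number of trials".
    return min(path_length((start,) + p) for p in permutations(points))

def nearest_neighbor(start, points):
    # Menger's rule: go next to the nearest not-yet-visited point.
    path, todo = [start], list(points)
    while todo:
        nxt = min(todo, key=lambda x: abs(x - path[-1]))
        todo.remove(nxt)
        path.append(nxt)
    return path_length(path)

pts = (1.0, -1.1, 4.0)
print(brute_force(0.0, pts))       # optimal order: 0 -> -1.1 -> 1 -> 4
print(nearest_neighbor(0.0, pts))  # greedy order: 0 -> 1 -> -1.1 -> 4, longer
```

On this instance the greedy route doubles back and is strictly longer than the exhaustive optimum, exactly as the quotation warns.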
In the same year (1932), Menger delivered a lecture in Vienna on "the new logic" (Menger, 1979): it was a historical reconstruction of the crisis of logic in the 19th and 20th centuries, and it represented Menger's first initiative to apply a new logic to the social sciences.
Menger presented both Peano’s calculus of relations and Russell’s
calculus of functions as “revolutionary” and in open opposition to the
old logic (based on Boole's, Peirce's and Schröder's algebras of logic as well as on the logic of propositions from Frege to Wittgenstein).
Menger maintained that there were three possible solutions for the

11
The term “travelling salesman problem” was coined by A. Whitney (Schrijver,
2005).

crisis of the old logic: Russell's logicism,12 Hilbert's formalism,13 and Brouwer's intuitionism.14
His own position should be regarded as a natural consequence of his rejection of Brouwer's intuitionism (Menger, 1979; Leonard, 1998) and his adherence to Hilbertism. A caveat: even though Menger had clearly rejected Brouwer's intuitionism since the late 1920s, Brouwer's influence on him was kept alive, so to speak, by his interest in nonclassical (or nonstandard) logics. Against the supporters of standard logic (ordinary 2-valued logic), Menger severely criticized the idea that logic was unique. He strongly supported the n-valued logics of Łukasiewicz and Post, in which neither the law of excluded middle nor the law of noncontradiction operated.15
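In the standard modern presentation of Łukasiewicz's system (not Menger's own notation), the truth-values are 1 (true), 1/2 (possible) and 0 (false), with negation 1 − v, conjunction min and disjunction max; a few lines of code show both classical laws failing at the middle value:

```python
# Łukasiewicz 3-valued connectives over the values {0, 1/2, 1}.
def neg(p): return 1 - p
def conj(p, q): return min(p, q)
def disj(p, q): return max(p, q)

values = (0, 0.5, 1)

# Law of excluded middle: p or not-p is no longer always true...
print([disj(p, neg(p)) for p in values])   # [1, 0.5, 1]
# ...and law of noncontradiction: p and not-p is no longer always false.
print([conj(p, neg(p)) for p in values])   # [0, 0.5, 0]
```

At the "doubtful" value 1/2 both laws take the middle value rather than the classical one, which is the feature Menger found attractive.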
Many-valued logic was created by the logician Łukasiewicz (1920), and it was developed first in Poland. His first intention was to use a third, additional truth-value for "possible", and to model in this way the modalities "it is necessary that" and "it is possible that". This intended application to modal logic did not materialize.

12
Russell thought that language could be analyzed into a perfect logical structure.
In addition, he claimed that mathematics could become a part of logic. Russell’s
logicism had a great influence on scientists and philosophers of the Wiener Kreis.
Viennese scholars were also deeply influenced by Wittgenstein, for whom logic
was something that both the world and language must have in common. In fact,
according to Wittgenstein, language can be used to picture the world only because
it has something in common with logic; and it is only because of logic that our
sentences are meaningful.
13
Hilbert’s formalism was based on the axiomatization of logical and mathemat-
ical theory: every branch of mathematics starts with a number of axioms or
statements that are assumed to be true and all other statements in that branch
can be proven. Axiomatization makes the system consistent.
14
Unlike logicism and Hilbertism, intuitionism (Kronecker; Brouwer; Heyting) was a nonclassical logic that refused to reduce either mathematics to logic or logic to mathematics. According to intuitionists, mathematical and logical proofs work differently: in particular, Brouwer (who had been Menger's mentor in the mid-1920s) showed that in some cases the law of excluded middle does not hold in mathematics (it cannot be applied to infinite sets).
15
In particular, Menger regarded Łukasiewicz's 3-valued logic as able to include uncertainty: "the third value being the excluded middle of the traditional 2-valued system" (Leonard, 1998, p. 16).

The outcome of these investigations is, however, the family of Łukasiewicz systems, together with a series of theoretical results concerning these systems. Like Łukasiewicz, the Polish-American mathematician Post (1921) introduced the idea of additional truth degrees and applied it to problems of the representability of functions.16
Menger shared with Post and Łukasiewicz the idea that logic could take various forms.17 He named this kind of theoretical attitude "logical tolerance"; he considered the so-called uniqueness of logic a dogma, toward which he was very skeptical. On several occasions, Menger defended what Carnap later called the principle of "logical tolerance" (Carnap, 1934) and turned against the employment of the word "meaningless", and against the view according to which, for example, all mathematical propositions are tautologies. In a certain sense, as Menger himself clearly stated later, his tolerant attitude over the logical foundations of mathematics was very close to Popper's criticism of essentialism, i.e., "the futility of defining a science [and] the arbitrariness of precisely circumscribing its object" (Menger, 1979, p. 16).
As previously stated, Menger considered the so-called uniqueness of logic to be a dogma, and he was highly skeptical of the notion; he shared with Post and Łukasiewicz the idea that logic could take various forms. Menger's acceptance of many-valued logics "drew him gently but inexorably away from Hahn on logic and Neurath on the unity of science" (Menger, 1979, p. 16).18

16
Later on, Gödel (1932) tried to understand intuitionistic logic in terms of many truth degrees. A few years later, Jaśkowski (1936) constructed an infinite-valued characteristic matrix for intuitionistic logic. It seems, however, that the truth degrees of this matrix do not have a nice and simple intuitive interpretation. A philosophical application of 3-valued logic to the discussion of paradoxes was proposed by the Russian logician Bochvar (1938), and a mathematical one to partial functions and relations by the American logician Kleene (1938). Much later, Kleene's connectives also became philosophically interesting as a technical tool to determine fixed points in the revision theory of truth initiated by Kripke (1975).
17
In fact, he deeply criticized Wittgenstein's idea — shared by members of the Vienna Circle in the mid-1930s — that logic was unique.
18
As Menger himself stated later, his tolerant attitude toward the logical foun-
dations of mathematics was very close to Popper’s criticism of essentialism, i.e.,

2. A new logic to found a formal ethics19

Menger's book Moral, Wille und Weltgestaltung. Grundlegung zur Logik der Sitten (Menger, 1974)20 represented a formal approach to moral problems, based on an unambiguous definition of ethical language; it was an attempt to present a positive ethics.
When Menger's book appeared in Vienna, in 1934, it was welcomed by Oskar Morgenstern as "the only examination of a strictly formal nature about social groups" (Leonard, 1995, p. 746) and by Karl Popper as "one of a few books in which the author attempts to depart from the stupid talk in ethics" (Popper, 1944–1945).21 Hayek also paid attention to Menger's work on ethics.22

"the futility of defining a science [and] the arbitrariness of precisely circumscribing its object" (Menger, 1979, p. 16).
19
This paragraph is a reduction of Becchio (2009).
20
It was translated into English as Menger (1974).
21
Both Popper and Menger, in fact, totally disagreed with the neopositivists of
the Vienna Circle, who — influenced by Wittgenstein — considered norms to be
“meaningless” because they cannot be deduced from facts. According to Popper,
Menger did not fall into this fallacy and tried to develop a positive logic of norms.
22
When discussing compatibility among the plans of different individuals, Hayek
explicitly cited Menger’s investigations in social theory. Hayek hoped that Menger
would be able to develop his first insights into an “exact sociological theory”
(Hayek, 1937). In a footnote, Hayek wrote: “It has long been a subject of won-
der to me why there should have been no systematic attempts in sociology to
analyse social relations in terms of correspondence and noncorrespondence, or
compatibility and noncompatibility, of individual aims and desires. It seems that
the mathematical technique of analysis situs (topology) and particularly such
concepts developed by it as that of homeomorphism might prove very useful in
this connection, although it may appear doubtful whether even this technique, at
any rate in the present state of its development, is adequate to the complexity
of the structures with which we have to deal. A first attempt made recently in
this direction by an eminent mathematician (Karl Menger, 1934) has so far not
yet led to very illuminating results, but we may look forward with interest to
the treatise on exact sociological theory Menger has promised” (Hayek, 1937,
p. 38). Hayek was referring to a paper on social relations and groups that Menger
had presented in the same year at the Third Annual Conference of the Cowles
Commission in Colorado ("An Exact Theory of Social Relations and Groups", in Report of Third Annual Research Conference on Economics and Statistics, Cowles Commission for Research in Economics, Colorado Springs, 1937, pp. 71–73), later published (Menger, 1938).

Menger was convinced that ethics as a normative science was impossible because experience reveals only what "is", not what
“ought to be”. His book on ethics should — he declared — be consid-
ered an application of “exact thinking in the field of ethics” free from
any influence “by subjective feelings” (Menger, 1974, p. 1). He would
“steer clear of the search for ultimate meanings or essences such
as ‘the concept of morality’ or ‘the principle of virtue’ ” (Leonard,
1998, p. 20). Menger’s aim was to identify “specific rules of conduct”
in a strict logical sense. He maintained that it is necessary to free
ethics from metaphysical inquiries into “the concept of morality” or
“virtue” or “the essence of good”, and he sought “to confine [these]
cognitive studies to facts”. Menger’s investigations were concerned
“with the application of mathematical modes of thinking and deal
only with the formal side of questions”. His ethics was a formal-
ist model: not a scale of values but rather a set of rules, a sort of
Hilbertian programme extended to the moral field.23
An epistemology of morality could be developed by founding it
on a logical basis expressed by mathematical tools able to construct

23
The main reference in formalist ethics is, of course, to Kant and his “practi-
cal reason” founded on three categorical imperatives (based on the conformity
of any action with universality, autonomy, and the notion of humans as ends in
themselves) in opposition to hypothetical ones (religion, laws, hedonistic plea-
sure, and personal ideals), which do not confer morality on an action. Menger
stated that his morality was very far from Kantianism. He considered ethics in
order to understand individual decisions, and subsequently social organizations,
and he argued that the Kantian categorical imperative was neither a necessary
nor a sufficient condition for constituting cohesive (or peaceful) groups. Hence,
ethical imperatives are always hypothetical, never categorical. The unresolved
question in Kantianism was “what concrete precepts result from the categorical
imperative in specific situations” (Menger, 1974, p. 9). In order to answer this
question (how to apply the categorical imperative to a decision), it is necessary
to consider a decision that implies a cognitive activity. This is the point where,
according to Menger, ethics and logic are strictly connected. The ethical problem
in Menger’s thought was understanding how social coexistence comes about in
concrete situations where “there are several mutually incompatible decisions to
consider” (Menger, 1974, p. 10) and the categorical imperative needs supplemen-
tary stipulations and additional norms in order to generate the well-being of a
group.

an "ethics without morality and immorality". Menger consequently focused his attention on individuals' decisions whether or not to adhere to certain norms: the main point was understanding
why, when, and how a person accepts a definite system of norms,
basing his decisions on that system.
According to Menger, rational foundations for decisions are pos-
sible.24 Cognitive action is the starting point for the explanation of
how individual decisions and social changes arise, although volition
has a central role as well. But the problem is that, if we can study
only cognition (the logical side of decision-making), volition remains
a matter of individual taste. According to Menger, it is possible to
develop a logic of ethics, which considers every single norm or system
of norms within a social group in which people (in connection with
any norm) can approve the norm, disapprove of it, or be indifferent
to it.
Thus, for every norm (n), people are always divided into three groups: those who always follow the rule; those who sometimes do and sometimes do not follow it; and those who never follow it. It may happen that one (or two) of these sets is empty: if everybody always (or never) adheres to the norm, the other two sets are empty. Someone who is indifferent may or may not follow the rule; indifferent people, who sometimes follow the rule and sometimes do not, hence form the intermediate group, so that the groups are three in number.
In 1938, Menger published another paper on the formation of
cohesive groups (Menger, 1938). If we consider a group of men G as
a total group, it can be divided into two fundamental groups with no
overlaps: G1 and G2 (for example, men and women). Each member of
G has four possible attitudes toward the association (G1 or G2 ); with

24
By way of example, he cited the idea of maximization in economic theory and
stated that if economists wished to claim that “the optimal distribution of com-
modities and the greatest welfare of mankind could be achieved under certain
system of organization which they describe”, they “indeed must first take the
trouble to study logic" (Menger, 1974, p. 31), and he added ironically that such
an exercise would “without doubt increase their self-criticism”.

everybody (G^{1,2}) or with nobody (G^0). Groups are thus represented as G_a^b, where a is membership of a group and b is compatibility or mutual acceptance. Thus we have eight main classes:

G_1^1, G_1^2, G_2^1, G_2^2, G_1^{1,2}, G_2^{1,2}, G_1^0, and G_2^0.

Groups G_1^1, G_2^2, G_1^{1,2}, G_2^{1,2} are consistent = any member of G is willing to associate with any member of his own group (the last two also with members of the other group).
Groups G_1^2, G_2^1, G_1^0, G_2^0 are inconsistent = no member of G is willing to associate with a member of his own group (the last two with nobody).
Groups G_1^1 and G_2^2 are antipathetic to each other = any member of G is willing to associate only with a member of his own group.
Groups G_1^2, G_1^{1,2} and G_2^1, G_2^{1,2} are mixed.

Menger's aim is to understand when it is possible to unify all the members of G into a consistent group. The answer lies in the tolerance of the mixed groups G_1^{1,2} and G_2^{1,2}, which makes possible the overlapping between G1 and G2. The Kantian categorical imperative is unable to guarantee general harmony because it operates only in groups G_1^1 and G_2^2, i.e., only in those groups in which there is no need for a tool of cohesiveness or peace, because they are formed by members who share the same set of rules.
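One possible formalization of this classification (a sketch of my own, not Menger's) models each member as a pair of an own group and a set of accepted groups; the unifiability test encodes the chapter's claim that only universal tolerance, i.e., everyone belonging to one of the classes written here as G_1^{1,2} or G_2^{1,2}, lets all of G form a single consistent group:

```python
def class_of(a, accepts):
    # Member of fundamental group `a` willing to associate with `accepts`.
    sup = ",".join(str(g) for g in sorted(accepts)) or "0"
    return f"G_{a}^{sup}"

def consistent(a, accepts):
    # Willing to associate with any member of one's own group.
    return a in accepts

def mixed(a, accepts):
    # Willing to associate with members of the other group.
    return (2 if a == 1 else 1) in accepts

def unifiable(members):
    # All of G forms one consistent group only if every member
    # tolerates both fundamental groups.
    return all(acc == {1, 2} for _, acc in members)

print(class_of(1, {1, 2}))                    # G_1^1,2
print(consistent(2, {1}))                     # False: G_2^1 is inconsistent
print(unifiable([(1, {1, 2}), (2, {1, 2})]))  # True: tolerant classes only
print(unifiable([(1, {1}), (2, {2})]))        # False: antipathetic groups
```

The helper names (`class_of`, `unifiable`, and so on) are inventions for this sketch, not Menger's terminology.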
In the postscript to the English edition of his book (Menger, 1974), Menger summed up its purpose as an attempt to apply logico-mathematical thought to ethical matters. He wrote that it was possible to construct an exact system of thought concerned not with personal dilemmas but with the treatment of social problems, in what today is called group decision theory.

3. Menger's latest developments on a suitable logic for social science in a realistic framework
In 1939, Menger refounded his Mathematical Colloquium at the University of Notre Dame, Indiana. He decided to come back to logic in order to find a mathematical tool for the social sciences that was able to include realism. This time he chose to discuss and propose his own logic, an n-valued logic, called optative logic, or a logic of
wishes and commands. In his paper, he wrote: "the ordinary 2-valued logic divides propositions into two classes" (Menger, 1939, p. 91): true or false. Then, it combines them into more complex propositions by connectives such as & (and), → (implies) and so on. Proposition p&q belongs to the class of true propositions if and only if both p and q are true. On the other hand, the n-valued logics of Post and Łukasiewicz divide propositions into n classes.
According to Menger, we must build a realistic logic, a logic of everyday life, that is not 2-valued but n-valued. The logic of everyday life in fact does not share the feature that the class of the compound is uniquely determined by the classes of the components when it deals with doubtful propositions: p&q can be doubtful or false if both p and q are doubtful.
The logic of the doubtful divides propositions into:

Asserted = µ+
Doubtful = µ0
Negated = µ−

And µ(p) is the class of modality to which p belongs: in contrast with 3-valued logic, "the modality of a compound is not determined by the modalities of the components".

Asserted = µ+ (a deductive system) contains:

1. the tautologies of the ordinary 2-valued calculus of propositions;
2. p&q if it contains p and q;
3. q if it contains p and p → q, or p and (p&q′)′;
4. p, q and q&p if it contains p&q;
5. q′ if it contains (p&q)′ and p;
6. (p&q)′ if it contains p′.

If p belongs to µ+ then p′ belongs to µ−, and conversely; P and Q
are tautologically equivalent if P → Q and Q → P are tautologies in
the 2-valued sense.
From these premises, four theorems follow, as listed below.

Theorem 1. If P and Q are two equivalent compound expressions
made up of the particles ′, &, →, etc., then

µ(P(p, q, . . .)) = µ(Q(p, q, . . .)) for any propositions p, q, . . . .

Theorem 2. In the usual notation, we have the following tables of
modality:

p   p′
+   −
0   0
−   +

p   q   p&q
+   +   +
+   0   0
0   +   0
−   ∗   −
∗   −   −
0   0   0 or −

∗ = any of the symbols +, 0, −.
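Because the logic is not truth-functional, the conjunction table assigns some
pairs of modalities a *set* of possible values rather than a single one. The
two tables can be sketched in code as follows (a minimal illustration; the
function names and the string encoding of +, 0, − are mine, not Menger's):

```python
def neg(m):
    """Modality of p' given the modality of p (first table)."""
    return {"+": "-", "0": "0", "-": "+"}[m]

def conj(mp, mq):
    """Possible modalities of p&q given those of p and q (second table)."""
    if mp == "-" or mq == "-":
        return {"-"}            # a negated conjunct negates the compound
    if mp == "+" and mq == "+":
        return {"+"}
    if mp == "0" and mq == "0":
        return {"0", "-"}       # not determined by the components alone
    return {"0"}                # one conjunct asserted, one doubtful
```

Note that `conj` returns a set: for two doubtful components the modality of
the compound is not determined by the modalities of the components, which is
exactly the feature separating this logic from truth-functional 3-valued logics.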

Theorem 3. There are at most seven classes of pairs of doubtful
propositions. If µ(p) = µ(q) = µ0, then the modalities of p&q, p′&q′,
p′&q, p&q′ have one of the seven following quadruples of values:

p&q   p′&q′   p′&q   p&q′   p and q are      Symbol
 −      0       0      0    Incompatible      piq
 −      −       0      0    Contradictory     pcq
 0      −       0      0    Alternative       paq
 0      0       −      0    p weaker q        pwq
 0      0       0      −    p stronger q      psq
 0      0       −      −    Equivalent        peq
 0      0       0      0    Unrelated         puq

Theorem 4. Set P = (p&q)′&(p′&q′)′ and Q = (p′&q)′&(p&q′)′.
The propositions P and Q are both doubtful if exactly one of the
propositions p and q is doubtful. If p and q are both doubtful, then
we have

     piq  pcq  paq  pwq  psq  peq  puq
P     0    +    0    0    −    0    0
Q     0    −    0    0    +    0    0

where

piq = sets whose intersection is empty but whose union is not the
universal set;
pcq = sets whose intersection is empty but whose union is the uni-
versal set;
paq = sets whose intersection is nonempty and whose union is the
universal set;
pwq = the second set is a proper subset of the first;
psq = the first set is a proper subset of the second;
peq = sets that are identical;
puq = sets with a nonempty intersection, neither being a subset of
the other, whose union is not the universal set.
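Reading each proposition as a subset of a universal set, the seven relations
can be sketched as a classification procedure (a hypothetical illustration;
the function and its names are mine, not Menger's):

```python
def classify(p, q, universe):
    """Classify a pair of sets (p, q) inside `universe` into one of the
    seven relations between doubtful propositions (Theorem 3)."""
    p, q, universe = set(p), set(q), set(universe)
    if not p & q:                 # empty intersection
        return "pcq" if p | q == universe else "piq"
    if p == q:
        return "peq"              # identical sets
    if q < p:
        return "pwq"              # second set a proper subset of the first
    if p < q:
        return "psq"              # first set a proper subset of the second
    return "paq" if p | q == universe else "puq"
```

For genuinely doubtful propositions, i.e. proper nonempty subsets of the
universe, exactly one of the seven symbols applies to each pair.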

According to Menger, an exact theory of wishes and commands
is a useful and proper tool for a mathematical approach to ethical
problems. Although a first attempt in this direction had been made
by Mally, who introduced “!p” = p ought to be (Mally, 1926),
according to Menger, Mally made a mistake: he founded his system
on a 2-valued logic, which is not applicable to wishes and commands,
because they are doubtful. Mally set up five postulates:

(1) [(p → !q)&(q → r)] → (p → !r);
(2) [p → (!q&!r)] → [p → !(q&r)];
(3) (p → !q) ↔ !(p → q);
(4) !u; that is, there exists a proposition u for which !u holds;
(5) (!v)′; that is, there exists a proposition v for which the negation
of !v holds.
After deriving 50 theorems from these assumptions, Mally arrived at
the following conclusion:

p ↔ !p.

Hence, according to Menger, the introduction of the sign ! is
superfluous, in the sense that it may be cancelled or inserted in any
formula at any place we please.
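Menger's diagnosis can be illustrated with a small brute-force check (a
hypothetical sketch, not Mally's own 50-theorem derivation): if ! is read
inside 2-valued logic, it must be one of the four unary truth functions, and
only the identity function satisfies the five postulates — whence p ↔ !p.

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# The four possible unary truth functions for "!" in a 2-valued setting.
candidates = {
    "identity": lambda p: p,
    "negation": lambda p: not p,
    "always true": lambda p: True,
    "always false": lambda p: False,
}

def satisfies_mally(bang):
    # Postulates (1)-(3) must hold under every valuation of p, q, r.
    for p, q, r in product([True, False], repeat=3):
        if not implies(implies(p, bang(q)) and implies(q, r),
                       implies(p, bang(r))):
            return False                              # postulate (1)
        if not implies(implies(p, bang(q) and bang(r)),
                       implies(p, bang(q and r))):
            return False                              # postulate (2)
        if implies(p, bang(q)) != bang(implies(p, q)):
            return False                              # postulate (3)
    # (4): some u with !u; (5): some v with (!v)'.
    return (any(bang(u) for u in (True, False))
            and any(not bang(v) for v in (True, False)))

survivors = [name for name, bang in candidates.items() if satisfies_mally(bang)]
```

Only `"identity"` survives the enumeration, so !p collapses to p — which is
Menger's point about the superfluity of the sign in a 2-valued setting.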
Stressing the importance of substituting n-valued logic for
2-valued logic, Menger’s conclusions were:

(A) The objects of human wishes (Dp) and commands (Cp) are
neither necessities nor impossibilities. Hence, the propositions
concerned are neither asserted nor negated: they are doubtful.
Mally’s mistake can be traced to the fact that he used the
2-valued calculus of propositions.
(B) The words “command” and “wish” differ from each other.
(C) The expression “I wish” in everyday language is incomplete and
ambiguous.
(D) A theory of Dp should take into account some results of economic
theory, in particular the following two:
• The distinction between certain classes of pairs of goods, for
example, complementary goods wanted only in combination; or
goods that can be substituted for each other and each of which
is wanted without, possibly even only without, the other.
• For each individual I, the combinations of goods constitute a
set that is partially ordered by a preference relation such that
absence of preference is transitive; that is, if M and N are two
combinations of goods, then either N is preferred to M, or M
is preferred to N, or neither is preferred to the other. In the
last case (absence of preference), I is indifferent between M and
N and we can apply the properties of the strong or weak axiom
of revealed preference (SARP or WARP).
How can these remarks be applied to optative logic in Menger’s
system? Even the most elementary wishes presuppose at least normal
conditions in some respect; while the number of conditions is
practically unlimited, a theory must start with simplifying assumptions:
for instance, with the restriction to definite finite sets of doubtful
propositions and of conditions. For example, take p and its negation p′:

1. If Dp, then p belongs to µ0: only doubtful propositions are objects
of I’s wishes.
2. Dp does not belong to µ0: there is no doubt as to whether or not
I wishes p.
3. (Dp&Dp′) belongs to µ−: I does not both wish p and wish p′
(otherwise Ind p or Ind p′).
4. Dp&Dp′ is ruled out by the law of optative consistency.
The attitudes Dx(y), Ex(y), Ix(y) may be interpreted in terms
of preferences between the pairs (x, y) and (x′, y), namely, respectively,
by

(x′, y) < (x, y),    (x, y) < (x′, y),    (x, y) ∼ (x′, y).

Hence, Menger introduced substitutive goods: consider two possible
objects of desire, p and q (with their negations, p′ and q′), such
that either one affects the optative attitude toward the other. As
before, Ax(y) denotes the attitude toward x under the condition y
(where A is D or E or I, while A∗ will denote E, D, or I, respectively,
and A∗∗ = A). Then we have the following:

(1) If Ax(y) then Ay(x);
(2) If Ax(y) then A∗x′(y).

There are eight attitudes that can be cyclically arranged:

A1 p(q)      A2 q(p)
A8 p′(q)     A3 q′(p)
A7 q(p′)     A4 p(q′)
A6 q′(p′)    A5 p′(q′)

From assumption (1) it follows that Ai+1 = Ai for i = 1, 3, 5, 7.
From assumption (2) it follows that Ai+1 = A∗i for i = 2, 4, 6, 8.
Jointly, the assumptions imply that

A1 = A2 = A5 = A6 = A∗3 = A∗4 = A∗7 = A∗8.

Complementary objects illustrate the case A1 = D; incompatible
objects correspond to the case where A1 = E. Attitudes can again be
interpreted in terms of preferences between (unordered) pairs: for
Dp(q), by

(p′, q) < (p, q),  (p, q′) < (p, q),  (p′, q) < (p′, q′),  (p, q′) < (p′, q′).
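The cyclic derivation can be sketched as a small computation (a hypothetical
illustration; the string encoding of D, E, I and the function names are mine):

```python
def star(a):
    """The starred attitude: D* = E, E* = D, I* = I."""
    return {"D": "E", "E": "D", "I": "I"}[a]

def attitudes(a1):
    """Derive A1..A8 from A1 using the two assumptions:
    A_{i+1} = A_i for odd i, and A_{i+1} = A_i* for even i."""
    A = {1: a1}
    for i in range(1, 8):
        A[i + 1] = A[i] if i % 2 == 1 else star(A[i])
    return A
```

For A1 = D this yields A1 = A2 = A5 = A6 = D and A3 = A4 = A7 = A8 = E, in
agreement with the identity stated above; for A1 = I all eight attitudes
coincide.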
As we can see, Menger’s imperative logic is founded on the concept
of deterrence (the idea that norms are threats is far from being
universally accepted in legal philosophy), and his optative logic is
based on a theory of wishes that takes into account some results of
economic theory, which can be summed up in the following terms:
pairs, triples and n-tuples of goods may be complementary or
substitutive, and, for every individual, the combinations of goods
constitute a set that is partially ordered by preference relations.
Menger thus built a normative logic (including wishes) based on a
logic of preference.
On 27 December 1961, Oskar Morgenstern wrote to Menger:
“I do not know whether you have followed the literature on the
problem of “fair division”. You will find an interesting discussion in
the book by Luce and Raiffa, Games and Decisions. This whole area
has a close connection with your work on the logic of ethics, which
is unfortunately quite unknown to these authors because of the
time of its publication and its being available only in German. (. . .)
The relation of the explorations in your book to game theory and
fair division certainly bears further study, and I would be happy
if I could stimulate you to resume where they were begun” (Karl
Menger papers25).

25 Karl Menger’s archive, at Perkins Library, Duke University, Durham, NC, USA.

Morgenstern was referring to the well-known book by Luce and
Raiffa, published in 1957, and dedicated to the memory of John von
Neumann. That book was about game theory and placed particular
emphasis on the social science point of view. Chapter 13 dealt
with individual decision-making under uncertainty, and Chapter 14
with group decision-making, devoting a section to “games of fair divi-
sion”.26 Of course, Morgenstern’s reference to “a couple of papers on
the logic of the optative” was to those written by Menger in 1933 and
in 1939.
In 1983, Menger returned to his interest in cohesive groups: he
proposed a general criterion for explaining how cohesive social groups
come into being. The model was the same as in 1938, but the paper
comprised some additions on the ethics of the Vienna Circle and on
game theory that are worth recalling. Menger criticized both Kant’s
formalistic morality and the ethics proposed by the Vienna Circle and
founded on Wittgenstein’s Tractatus. According to Menger, Kant’s
categorical imperative was unable to form a cohesive group because
“in most specific situations it is impossible to deduce specific pre-
cepts for behavior unless the imperative is supplemented by the
value judgments”, and it is very difficult to find a “maxim that can
become a general law”. Menger was also dissatisfied with the Vienna
Circle’s notion that, after the complete elimination of value judg-
ments from ethics, only historical and ethnographical descriptions of
moral beliefs and conditions were possible. The multiplicity of beliefs
and evaluations seemed to recommend the formal study of inner
judgments and attitudes among human groups with incompatible
wishes, and of conflicting decisions between individuals and
individuals, individuals and groups, and groups and groups (Menger,
1983). In this paper, Menger claimed that experiments should “test
the presence and demonstrate the evolution of ethical norms”
(Perlmann and McCann, 1998, p. 441).

26 Luce and Raiffa maintained that a fair rule is a mode of conduct considered
socially desirable: a fair procedure is a Pareto optimal outcome. A group decision’s
welfare function is built by passing from individual values to social preferences.
The main difficulty is devising a system that is sufficiently egalitarian and
flexible to cope with the dynamics of individual tastes. Arrow’s social welfare
function concerns the aggregation of an n-tuple of individual rankings into a
single ranking for the society (Arrow, 1951).

4. Conclusion

Menger’s interest in the relationship between logic and the social
sciences (ethics and economics) was one of his major focus areas
during the interwar period. He never stopped thinking about the
possibility of finding a way to apply a nonstandard logic to describe
ethics formally, based on the assumption that moral attitudes are
grounded in and reinforced by decisions. His work made him a
forerunner of decision theory and experimentalism in the social
sciences.

References
Arrow, K. (1951). Social Choice and Individual Values. New York: Wiley.
Becchio, G. (2008). The complex role of Karl Menger in the Viennese economic
theory. Review of Austrian Economics, 21, 61–79.
Becchio, G. (2009). Ethics and economics in Karl Menger. in Unexplored
dimensions: Karl Menger on Economics and Philosophy, Becchio, G. (ed.).
Advances in Austrian Economics, Vol. 12, pp. 21–39.
Bock, F. (1963). Mathematical programming solution of traveling salesman exam-
ples. in Recent Advances in Mathematical Programming, Graves, R. and
Wolfe, P. (eds.). pp. 339–341. New York: McGraw-Hill.
Bochvar, D. (1938). On a three-valued logical calculus and its application to the
analysis of the paradoxes of the classical extended functional calculus. History
and Philosophy of Logic, 2, 87–112.
Carnap, R. (1937 [1934]). The Logical Syntax of Language. London, Kegan Paul.
Gödel, K. (1932). Zum intuitionistischen Aussagenkalkül. Anzeiger Akademie
der Wissenschaften Wien (Math.-naturwiss. Klasse), 69, 65–66; reprinted in
Menger, K. (ed.) (1933), Ergebnisse eines Mathematischen Kolloquiums, 4,
p. 40.
Golland, L. A. and Sigmund, K. (2000). Exact thought in a demented time: Karl
Menger and his mathematical colloquium. The Mathematical Intelligencer,
22, 34–45.
Hayek, F. (1937). Economics and knowledge. Economica, 4, 33–54.
Jaskowski, S. (1975 [1936]). Recherches sur le système de la logique intuitioniste.


in Actes du Congrès Internationale de Philosophie Scientifique 1936, 6, Paris,
pp. 58–61. [English translation: Studia Logica, 34, pp. 117–120.]
Kleene, S. (1938). On notation for ordinal numbers. Journal of Symbolic Logic,
3, 150–155.
Kripke, S. (1975). Outline of a theory of truth. Journal of Philosophy, 72, 690–
716.
Leonard, R. (1995). From Parlor game to social sciences: Von Neumann, Morgen-
stern, and the creation of game theory, Journal of Economic Literature, 33,
730–761.
Leonard, R. (1998). Ethics and the excluded middle: Karl Menger and social
science in interwar Vienna. Isis: A Journal of the History of Science Society,
89, 1–26.
Luce, D. R. and Raiffa, H. (1957). Games and Decisions. New York: Wiley.
Łukasiewicz, J. ([1920] 1970). Selected Works, Borkowski, L. (ed.). Amsterdam:
North-Holland; Warsaw: PWN.
Mally, E. (1926). Grundgesetze des Sollens: Elemente der Logik des Willens. Graz:
Leuschner und Lubensky, Universitäts-Buchhandlung.
Menger, K. (1932). Das Botenproblem, in Ergebnisse eines Mathematischen Kol-
loquiums, Vol. 2, Menger, K. (ed.). pp. 11–12, New York: Springer.
Menger, K. ([1935] 1998). Ergebnisse eines Mathematischen Kolloquiums,
Dierker, E. and Sigmund, K. (eds.). New York: Springer.
Menger, K. (1938). An exact theory of social relations and groups. in Report
of Third Annual Research Conference on Economics and Statistics. Cowles
Commission for Research in Economics, Colorado Springs, 1937, pp. 71–73.
Menger, K. (1939). A logic of the doubtful: On optative and imperative logic. in
Reports of a Mathematical Colloquium, pp. 53–64. Notre Dame, IN: Indiana
University Press.
Menger, K. (1967). The role of uncertainty in economics. in Essays in Mathemat-
ical Economics in Honor of O. Morgenstern, Shubik, M. (ed.). pp. 211–223.
Princeton: Princeton University Press.
Menger, K. (1974). Morality, Decision and Social Organization toward a Logic of
Ethics. Dordrecht: Reidel Publishing Co.
Menger, K. (1979). Selected Papers in Logic and Foundations, Didactics, Eco-
nomics. Dordrecht: Reidel Publishing Co.
Menger, K. (1983). On social groups and relations. Mathematical Social Sciences,
6, 13–26.
Perlmann, M. and McCann, R. (1998). The Pillars of Economic Understanding.
Ideas and Traditions. Ann Arbor, MI: Michigan University Press.
Popper, K. (1944–1945). The Open Society and Its Enemies. London: Routledge.
Post, E. (1921). Introduction to a general theory of elementary propositions.
American Journal of Mathematics, 43, 163–185.
Punzo, L. (1989). Karl Menger’s mathematical kolloquium. in John Von Neumann
and Modern Economics, Dore, M., Chakravarty, S. and Goodwin, C. (eds.).
pp. 129–165. Oxford: Clarendon Press.
Schrijver, A. (2005). On the history of combinatorial optimization (till 1960).


in Handbook of Discrete Optimization, Aardal K., Nemhauser, G. and
Weismantel, R. (eds.). pp. 1–68. Amsterdam: Elsevier.
Stadler, F. (2001). The Vienna Circle. Studies in the Origins, Development, and
Influence of Logical Empiricism. New York: Springer.
Stadler, F. (2006). The Vienna circle. In The Philosophy of Science: An
Encyclopedia, Sarkar, S. and Pfeifer, J. (eds.). pp. 858–863. London: Routledge.
Weintraub, R. (1983). On the existence of a competitive equilibrium: 1930–1954.
Journal of Economic Literature, XXI, 1–39.

Chapter 4

Shocked Disbelief

Roger Koppl
Whitman School of Management, Syracuse University
Syracuse, NY, USA

Chaitin, da Costa and Doria (2012, p. 16) say, “The Gödel–Turing


phenomenon is an all-pervasive fact that has to be understood in
order to be conquered and usefully mastered.” But the Gödel–Turing
phenomenon is subversive of power and authority. It has, therefore,
been effectively excluded from mainstream economics, which serves
existing powers, at least in macroeconomics. Mainstream economists
are experts, and monopoly experts have an incentive to act like the
seers of Ancient Greece. But if we take the self-referential logic of
Gödel–Turing seriously, the expert is put back on a plane of equal-
ity with the nonexpert. The implications of Gödel–Turing in social
science, in other words, are democratic and antihierarchical. The
deeply democratic message of this paper, however, is hostile to the
interests of experts and thus unlikely to find a place in mainstream
economics in the foreseeable future. If we do not conquer the Gödel–
Turing phenomenon, we may fall into magical thinking. By “magical
thinking,” I mean an argument with one or more steps that require
something impossible. Unfortunately, experts often have an incentive
to engage in magical thinking. Experts, including economic experts,
are therefore unlikely to engage or even recognize the Gödel–Turing
phenomenon. I begin with an example.


In 2005, the Federal Reserve Bank of Kansas City sponsored a


symposium to “honor Alan Greenspan’s service” as chairman of the
Federal Reserve System. It would be hard to exaggerate the partic-
ipants’ groveling praise. The lead essay says, “While there are some
negatives in the record, when the score is toted up, we think he
has a legitimate claim to being the greatest central banker who ever
lived. His performance as chairman of the Fed has been impressive,
encompassing, and overwhelmingly beneficial — to the nation, to
the institution, and to the practice of monetary policy” (Blinder
and Reis, 2005, p. 13). This fawning opinion was shared by leading
economists allied with both major political parties in the US.
The Great Recession, however, tore the curtain back on the great
wizard, leading to his famous mea culpa. Following the Great Reces-
sion, Alan Greenspan confessed to an error in his economic logic. In
testimony before the Congress in October 2008, Greenspan (2008, p.
2) said he had “found a flaw” in his model of capitalism:

[T]hose of us who have looked to the self-interest of lending institutions
to protect shareholders’ equity (myself especially) are in
a state of shocked disbelief. Such counterparty surveillance is a
central pillar of our financial markets’ state of balance. If it fails,
as occurred this year, market stability is undermined.

In Greenspan’s logic, “self-interest” was sufficient to ensure foresight


and stability. Call it the “Greenspan gaffe.”
How could “the greatest central banker who ever lived” go so
wrong? With the Greenspan gaffe, we move directly from interest to
outcome. The counterparties had an interest in knowing how risky
different assets were. This interest alone was sufficient to ensure
that such knowledge would somehow exist in all the right heads and
generate prudent actions reflecting such knowledge. Greenspan’s logic
did not require a mechanism to carry us from the desire for knowledge
to the possession of knowledge. His logic was, then, an example of
magical thinking. Jeffrey Friedman puts it well: “Wanting to be
knowledgeable — willing it (as when one has high incentives to know) —
does not produce it. That is the nature of knowledge. It has to be
acquired by means of hardwired or human or cognitive mediators. Any other


theory of knowledge acquisition entails magical thinking” (Friedman,
2012, p. 81).
Mathematics was supposed to save us from such errors.
Economists tell their graduate students about the greater logical
precision of mathematics. They tout the supposed superiority of
mathematical methods for their “exactitude, rigor, precision, clar-
ity, elegance of expression, ease of exposition, avoidance of causality
and teleology, omission of unnecessary moralization, departure from
vague and ambiguous language into dignified neutrality and purity”
(Niksa, 1959, p. 153). Mathematics is a useful tool in economics, but
it does not prevent model builders from making implicit assumptions.
Any explanation simplifies. Something must be left out. In both
mathematical and verbal models, simplification may easily create
the implicit assumption that something costly is either costless or
impossible. There may be no middle ground for excluded variables.
The implicit assumption of the Greenspan gaffe was that knowl-
edge flowed freely and instantaneously through the system, which
is impossible. If there is no mechanism of knowledge acquisition,
there can be no impediments to knowledge acquisition. The implicit
assumption of one’s model will be either that information has a zero
cost, as in Greenspan’s gaffe, or that information has an infinite cost
as we see in some “Keynesian” models such as Farmer (2011).
I asked how “the greatest central banker who ever lived” could
go so wrong. But the error was not Greenspan’s alone. We have
seen the groveling praise heaped upon him by mainstream macroe-
conomists. And the most prestigious models of theoretical economics
often embody essentially the same assumption of costless knowl-
edge. The Efficient Market Hypothesis may be characterized as the
assumption that knowledge acquisition is costless and instantaneous.
Light travels at a finite speed through a void, but information travels
at infinite speed through a standard economic model.
Grossman and Stiglitz (1980) demonstrated the “impossibility of
informationally efficient markets”. If markets were always in perfect
informational equilibrium, then arbitrage would yield a return of
zero. But if arbitrage is costly, no arbitrage would be undertaken


and there would be no mechanism to bring about the assumed per-
fect equilibrium. Brock and Hommes (1997) have shown how a simi-
lar logic can lead to complex dynamics. Unfortunately, these sorts
of models and arguments have not induced mainstream macroe-
conomists to develop more complexity-based models.
The standard models in macroeconomics, DSGE models, assume
general economic equilibrium. But Saari and Simon (1978) have
shown that “a staggering amount of information” (p. 1099) would
be required to reach equilibrium. Kirman (2010) considers it to be
an “infinite” amount of information. The assumption of convergence
to equilibrium in a DSGE model, then, is an example of magical
thinking.
Perhaps we should not be surprised if the entire edifice of modern
mainstream macroeconomics is built on magical thinking. What we
call “macroeconomists” today are, within the mainstream at least,
experts who are typically committed to a kind of financial divina-
tion. In 2008, Queen Elizabeth asked economists why nobody saw
the financial crisis coming. “Why did nobody notice it?” she asked
(Pierce, 2008). The British Academy gave her something of an offi-
cial answer. “Everyone seemed to be doing their own job properly”,
they told the Queen, “and often doing it well. The failure was to see
how collectively this added up to a series of interconnected imbal-
ances over which no single authority had jurisdiction” (Besley and
Hennessy, 2009). Koppl et al. (2015) commented, “Rather than ques-
tioning the dynamics of the econosphere, this answer questions the
organization of economic authorities. If we had had a better organiza-
tion amongst ourselves, the whole thing could have been prevented”
(p. 6). The project of financial divination is not challenged, only the
organization of the high priests.
Macroeconomists who adhere to the “efficient market hypoth-
esis” deny that a certain form of financial divination is possible.
Nobel laureate Robert Lucas (2009), for example, has said, “One
thing we are not going to have, now or ever, is a set of models
that forecasts sudden falls in the value of financial assets, like the
declines that followed the failure of Lehman Brothers in September.”


Economists like Lucas are right to say that their models predict the
unpredictability of such crises. But even they tout a kind of financial
divination that makes them oracles for central bankers to consult.
In the article just quoted, Lucas touts models used after Lehman
Brothers collapsed. They “gave what turned out to be very accu-
rate estimates of the private-spending reductions that ensued over
the next two quarters. When Ben Bernanke, the chairman of the
Fed, warned Hank Paulson, the then treasury secretary, of the eco-
nomic danger facing America immediately after Lehman’s failure,
he knew what he was talking about.” Thus, even this group prac-
tices a form of financial divination that is meant to support policy
formation by central banks and other state entities. Note, by the
way, the praise heaped on another chairman of the Federal Reserve
System.
It may seem extravagant to compare mainstream macroeco-
nomics to divination in service of central bankers. But macroeco-
nomics researchers in the US are highly dependent on the Federal
Reserve System, whose responsibilities entail precisely the sort of
foresight mainstream macroeconomists pretend to provide. I take up
each point in turn.
White (2005) has shown that macroeconomic researchers in the
US are dependent on the Federal Reserve System.

Judging by the abstracts compiled by the December 2002 issue


of the e-JEL, some 74 percent of the articles on monetary policy
published by US-based economists in US-edited journals appear in
Fed-published journals or are co-authored by Fed staff economists.
Over the past five years, slightly more than 30 percent of the articles
by US-based economists published in the Journal of Monetary Eco-
nomics had at least one Fed-based co-author. Slightly more than
80 percent had at least one co-author with a Fed affiliation (cur-
rent or prior Fed employment including visiting scholar appoint-
ments) listed in an online vita. The corresponding percentages for
the Journal of Money Credit and Banking were 39 percent and
75 percent. The editorial boards (editors and associate editors) of
these journals are even more heavily weighted with Fed-affiliated


economists (9 of 11, and 40 of 46, respectively).
(White, 2005, pp. 325–326).

White concludes dryly, “Fed-sponsored research generally adheres to


a high level of scholarship, but it does not follow that institutional
bias is absent or that the appropriate level of scrutiny is zero” (White,
2005, p. 344). The Federal Reserve System’s dominance of Ameri-
can macroeconomics may help to explain the fawning praise of Alan
Greenspan I noted earlier. It also helps to explain why mainstream
macroeconomists practice financial divination.
The core responsibility of the Federal Reserve System, as
expressed in the Federal Reserve Act, has been “to promote effec-
tively the goals of maximum employment, stable prices, and moder-
ate long-term interest rates” (FRS, 2005, p. 15). But there is a long
list of other responsibilities of the Fed or its officers. The Fed has an
important role in bank supervision, for example. The chairman “is
the alternate US member of the board of governors of the Interna-
tional Monetary Fund, a member of the board of the Bank for Inter-
national Settlements (BIS), and a member, along with the heads of
other relevant U.S. agencies and departments, of the National Advi-
sory Council on International Monetary and Financial Policies.” And
so on. The Dodd–Frank Act added further responsibilities, including
a central role in identifying the “systemic risk” associated with large
financial institutions. Forecasting models are essential to the Fed’s
attempts to execute these varied duties, as Greenspan (2004) has
emphasized. “The success of monetary policy depends importantly
on the quality of forecasting” (Greenspan, 2004, p. 39). There is,
then, both a supply and a demand for “forecasting” in macroeconomics.
Although I have been focusing relatively narrowly on macroeconomics
and monetary policy, it seems fair to say that there is a brisk demand
and willing supply for prognostication and magical thinking in
economics in general.
In economics today, it is rather as if germs could sponsor medical
research. It may seem mere jocularity to ask with Peart and Levy
(2012), What if germs could sponsor research? We should, instead,
take the question seriously. Peart and Levy note at least two conse-
quences of germ patronage. On the one hand, “The researcher may
be entirely self-interested and willing to trade something for results
that favor germs.” On the other hand, they may be “sympathetic
to germs.” In either event, “what germs want is results that favor
germs.” Thus, “Self-interested germs want researchers who fall into
one or the other category.” Clearly, the germs will sponsor research
that promotes their (perceived) interests (p. 20).
As Peart and Levy (2012, pp. 21–22) note, Thomas Hobbes saw
the root problem. In Leviathan, he says, “I doubt not but if it had
been a thing contrary to any mans right of dominion, or to the inter-
est of men that have dominion, That the three Angles of a Triangle
should be equal to two Angles of a Square; that doctrine should have
been, if not disputed, yet by the burning of all books of Geometry,
suppressed, as farre as he whom it concerned was able” (Hobbes,
1909 [1651], pp. 79–80).
We can translate Hobbes’s argument into the lingo of modern
economics. The doctrine of “consumers’ sovereignty” holds that con-
sumers’ decisions to buy or not to buy determine the production
of goods and services. “Competitive institutions are the servants of
human wants” (Hutt, 1936, p. 175). The point of the thought exper-
iments of Hobbes and of Peart and Levy is, in part at least, to show
that this doctrine of consumer sovereignty applies no less forcefully
in the market for ideas than in the market for men’s shoes. Thus,
the demand for magical thinking meets a willing supply.
The pattern and problem go back at least as far as ancient Greece.
Nilsson (1940, pp. 121–139) discusses the conflict between Greek
“seers and oracle mongers” and the philosophers and sophists of the
6th and 5th centuries BCE. The seers were the experts challenged
by the philosophers and sophists. Nilsson explains:

The real clash took place between that part of religion which inter-
fered most in practical life and with which everyone came into
contact every day, namely, the art of foretelling the future, and
the attempts of natural philosophy to give physical explanations
of celestial and atmospheric phenomena, or portents, and other
events. Such explanations undermined the belief in the art of the
seers and made it superfluous. For if these phenomena were to be
explained in a natural way, the art of the seers came to naught.

Given the tight connection between religion and politics that Nilsson
notes, to question divination was a political act that threatened state
power. The death of Socrates is to be seen in the light of political
struggles in which the philosopher challenged the authority of reli-
gious experts who then used political means to defend their preroga-
tives. The politically powerful used seers and divination to maintain
power while being in some measure subject to the influence, even
control, of these same experts. The parallels with modern economic
experts seem quite close. And as we have seen, modern economists,
no less than ancient oracle mongers, use magical thinking to support
their auguries.
If the “oracle mongers” were politically important, we should
find evidence of corruption in at least some cases. And we do for
at least one important case, the oracle at Delphi. Herodotus records
at least two instances of bribes given to the oracle (V 63, VI 66,
Fairbanks, 1916, pp. 40–41). Reeve (1990) gives some details of the
operation of the oracle and notes that there were “two methods of
consulting the oracle” (p. 29). One was expensive, “involving the
sacrifice of sheep and goats,” and the other cheap. The existence of
an expensive method strongly suggests that Delphic pronouncements
were up for sale. Other evidence supports the same conjecture. Broad
(2006) notes the “monumental wealth” of Delphi and says, “It was
the custom for thankful supplicants to send back riches. These and
other gifts and tithes accumulated over the centuries to the point
that Delphi became one of the wealthiest places on Earth” (p. 16). It
seems hard to distinguish such “gifts” from bribes. Arnush notes that
both public and private “consultants” had to pay “taxes in the form
of a sacrifice and a special type of cake (the pelanos) in order to gain
access to the oracle” (p. 100). Lloyd-Jones (1976, p. 68) grudgingly
admits, “Anti-clerical critics can easily accuse the Delphians of cyn-
ical pursuit of their own private interest.” Broad says, “The odor of
corruption wafted about the Oracle when at times she seemed ready
to please whoever held power.”
If the oracle at Delphi was up for sale, we might wonder whether
Socrates (or some of his supporters) may not have paid for the
oracle to make its famous pronouncement that there was no man
wiser than Socrates. Recall that this statement came in response
to Chaerephon’s question whether there was anyone wiser than
Socrates. Chaerephon was “notoriously poor” and probably used the
cheap method of consultation (Reeve, 1990, p. 29). But if someone
sent him to Delphi to get the desired answer, he might well have
brought money, gifts, or livestock he could not have provided out of
his own apparently meager resources. Indeed, what better agent to
deliver the bribe than one “notoriously poor”? The conjecture that
Socrates cheated is consistent with a reading of the Apology in which
Socrates was an expert insisting that Athens place its trust in him. In
this reading, the conflict between Greek philosophers and Greece’s “seers
and oracle mongers” (Nilsson, 1940, pp. 121–139) becomes, in this
instance at least, a struggle for power between two different groups
of experts. Plato’s Republic, on this reading, is closer to his master’s
vision than we might otherwise have thought.
Xenophon’s version of the Socrates story contains a passage that
may bolster the view of Socrates as expert. Meletus exclaims to him,
“I know those whom you persuaded to obey yourself rather than the
fathers who begat them.”

“I admit it,” Socrates replied, “in the case of education, for they
know that I have made the matter a study; and with regard to
health a man prefers to obey his doctor rather than his parents; in
the public assembly the citizens of Athens, I presume, obey those
whose arguments exhibit the soundest wisdom rather than their
own relations. And is it not the case that, in your choice of generals,
you set your fathers and brothers, and, bless me! your own selves
aside, by comparison with those whom you believe to be the wisest
authorities on military matters?”

(Xenophon, 2007, pp. 6–7)


In this passage, Socrates seems to call for the rule of experts.
In any given area such as education or medicine, we should obey
those who “have made the matter a study” and become “the wisest
authorities” in that area. Socrates’ expertise is in education, he tells
us. Trust me to educate your youth, Athens, for I am the wisest
educator among you. One is reminded of a quote attributed to the
economic expert Paul Samuelson. “I don’t care who writes a nation’s
laws — or crafts its advanced treatises — if I can write its economics
textbooks” (Weinstein, 2009).
Central bankers, Greek seers, and professors of economics are all
experts. An expert is anyone paid for his or her opinion. The problem
with experts is that they may “fail,” which is to say that they may get
it wrong. “We have expert failure if an expert produces an analysis
that deviates from some normative expectation associated with the
decision to rely on that expert” (Koppl, 2015, p. 353). It is hardly
damnation to say that some class of persons may err. But it may be
worth asking when experts are more likely to fail and when less.
According to information choice theory (Koppl, 2012, 2015),
expert failure is more likely when experts decide for nonexperts
rather than merely advising them. And expert failure is more likely
when experts have monopoly power than when they must compete
with other experts (Koppl, 2015, p. 354). State-sponsored eugenics
experts may be monopoly experts with the power to decide for others
whether they should be sterilized. In such cases, the risk of expert fail-
ure is high. Such cases are not, unfortunately, “ancient history” upon
which we may look back with a shudder and a sense of superiority. In
the United States, formally recognized and legally sanctioned coer-
cive sterilizations were performed well into the 1970s (Stern, 2005;
Shreffler et al., 2015). More recently, the Center for Investigative
Reporting has found, “Doctors under contract with the California
Department of Corrections and Rehabilitation sterilized nearly 150
female inmates from 2006 to 2010 without required state approvals”
(Johnson, 2013). The popular magazine Consumer Reports, on the
other hand, merely advises its readers. And it competes with other
providers of consumer reviews, advice, and information. In this mar-
ket, the risk of expert failure is lower.
Monopoly experts have an incentive to engage in magical thinking. Obfuscation and mystification tend to elevate their prestige and
reduce the probability of being challenged. As Berger and Luck-
mann (1966) have pointed out, a body of experts may defend
its monopoly with “intimidation, rational and irrational propa-
ganda . . . mystification,” and “manipulation of prestige symbols”
(p. 87). The alchemists, for example, wrote treatises that sometimes
contained real information. Often, however, their treatises were will-
fully obscure. Some of them claimed to have performed marvelous
feats while keeping the supposed technique a secret. (Chapter 4 of
Mackay (1852) is a history of alchemy.)
Sadly, we cannot claim to have left expert mystification behind
us. Berger and Luckmann note that physicians manipulate prestige
symbols, mystify, and propagandize on the power and mystery of
modern medicine (p. 88). The “general population is intimidated by
images of the physical doom that follows” from rejecting a doctor’s
advice. “To underline its authority the medical profession shrouds
itself in . . . symbols of power and mystery, from outlandish costume
to incomprehensible language.” Medical insiders are kept in, that
is kept from “quackery,” “not only by the powerful external con-
trols available to the profession, but by a whole body of professional
knowledge that offers them ‘scientific proof’ of the folly and even
wickedness of deviance” (p. 88).
Forensic science provides another example of experts engaged in
mystification and magical thinking. These disciplines rely on sub-
jective judgments that are not generally scrutinized by others. One
forensic examiner makes a subjective judgment and that’s that. This
allows choices little different from those of the ancient Greek oracle
mongers. Even the most conscientious and scrupulous forensic exam-
iners will be unable to judge whether they have followed any rules of
science that they may even urgently wish to uphold. The logic here
seems parallel to Wittgenstein’s argument that a private language is
impossible (Wittgenstein, 1958, pp. 89e–95e). We might say that a
person has a “private language,” but precisely because it is private no
one can check to be sure the rules of this supposed language have been
followed, not even the private language user herself. Similarly, the
subjective judgments of forensic scientists typically go unchallenged
by others, including other forensic scientists. Instead, a “private”
process of evaluation yields an incorrigible subjective judgment. It is
impossible to check whether an examiner followed scientific norms or,
more generally, arrived at a reasonable (let alone correct!) judgment.
The forensic examiner herself is unable to judge whether she followed
the rules or made a sensible decision.
As I have noted in Koppl (2010, p. 225), the reliance on subjective
techniques implies that the personal judgment of a forensic scientist
is the principal criterion for judging what the evidence means and
whether a known and unknown sample have a common origin. Many
forensic disciplines rely on subjective judgment, including “impres-
sion evidence” such as shoeprints and tire tracks, toolmarks and
firearms identification (the latter commonly called “ballistics”), tra-
ditional hair microscopy, the handwriting comparisons of questioned
document examiners, bloodstain pattern analysis, and fingerprint
examinations (NAS, 2009). Thompson (2009) shows that subjective
judgment is often used in DNA profiling as well.
Nichols (2007) attempts a spirited defense of subjective judgment
in firearms and toolmark identification, which is commonly known
as ballistics. He insists that standard techniques are “rooted in firm
scientific foundations” and “critically studied according to the pre-
cepts of the scientific method.” And yet Nichols says, “Currently,
the interpretation of individualization/identification is subjective in
nature, founded on scientific principles and based on the examiner’s
training and experience” (p. 587). In other words, they match when
we say so and that’s what we call science! It is understandable that
such defenses of a discipline would be accompanied by resistance
to redundancy and oversight and strong demarcations between out-
siders and insiders (Gestring, 2009).
In one Illinois case, the forensic scientist testified, “I couldn’t
distinguish if I was looking almost at two hairs. They looked just
like one.” He elaborated, “What I saw was a hair like this where I
could see all the pieces like a puzzle. Like in the previous hair. A line
would just fit in. In other words it wasn’t a type of situation of sitting
down and looking for it. Just like if you dropped two dollar bills and
you see two dollar bills on the floor. You see two one dollar bills. It’s
obvious. And that’s how it looked there” (Garrett and Neufeld, 2009,
p. 56). As I point out in Koppl (2010, p. 225), two distinct human
hairs from the same head may have similar characteristics such as
color and diameter, but they will not have the number of precise and
detailed points of strict correspondence that are present by design in
two one-dollar notes.
We no longer consult augurs and oracular priests. Today’s magic
is performed through seemingly scientific procedures or, as in the case
of most forensic science examinations, subjective judgments dressed
up in scientific garb. Impossible implicit assumptions, however, may
turn science into magic. Economic experts, unfortunately, have been
only too eager to promote schemes of control and domination under
the banner of science and rational planning. I noted above the demon-
stration of Saari and Simon (1978) that “a staggering amount of
information” (p. 1099) would be required to reach equilibrium. And
I noted White’s (2005) evidence of monopoly power in the production
of macroeconomic expertise. In this situation, we have the conditions
of expert failure in strong measure.
If we are to purge magical thinking from economic science, the
Gödel–Turing phenomenon must be understood, conquered, and use-
fully mastered. Tsuji et al. (1998) provide a good example to follow.
They have a seemingly arcane result in computability theory. It has
very practical implications, however, that imply limits to the power
of the powerful. They reveal just how pervasive noncomputability is.
It is surprising to learn just how little we can figure out about the
mathematical world even of classical analysis. They cite favorably
“Wolfram’s conjecture” that “undecidability exists everywhere, even
in trivial physical theories.” They show that even finite games can
be undecidable. This almost bizarre result merits attention. In one
sense, any finite game is trivially decidable. If we have a finite num-
ber of players, each of whom has a finite number of strategies, then
we can list every strategy combination and its corresponding payoff
vector. We can simply run down this finite list and see which entries,
if any, are Nash equilibria. Citing Prasad (1991), da Costa and Doria
(2005) say “by brute force comparison we end up with the desired
equilibria” (p. 555). So far so good. But this result, they point out,
assumes we have a complete list of all strategies and payoffs “without
the mediation of any formal language” (p. 555).
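The brute-force procedure da Costa and Doria describe can be sketched directly. The 2×2 coordination game below, and all names in the code, are a hypothetical illustration of the enumeration, not an example from their paper:

```python
from itertools import product

def nash_equilibria(payoffs, n_strategies):
    """Enumerate every pure-strategy profile of a finite game and
    return those in which no player can gain by deviating alone."""
    equilibria = []
    for profile in product(*(range(n) for n in n_strategies)):
        def gains(player):
            # does `player` have a profitable unilateral deviation?
            for alt in range(n_strategies[player]):
                deviant = list(profile)
                deviant[player] = alt
                if payoffs[tuple(deviant)][player] > payoffs[profile][player]:
                    return True
            return False
        if not any(gains(p) for p in range(len(n_strategies))):
            equilibria.append(profile)
    return equilibria

# A 2-player, 2-strategy coordination game (illustrative payoffs only):
# both players prefer to match each other's choice.
payoffs = {
    (0, 0): (2, 2), (0, 1): (0, 0),
    (1, 0): (0, 0), (1, 1): (1, 1),
}
print(nash_equilibria(payoffs, (2, 2)))  # -> [(0, 0), (1, 1)]
```

With the full payoff table in hand, the search really is this trivial; the point of the passage is that the table itself is usually given only obliquely, through a formal description.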
Often, however, we have no such list available to us. Instead, we
describe strategies and payoffs obliquely through formal language.
Thus, as da Costa and Doria explain, finite games that seem so
trivially decidable can be described with “complicated expressions,
which may be the case when we are dealing with a model . . . of
some market.” And for some games so described, it is not possible
to compute the Nash equilibria.
Tsuji et al. (1994) show, then, that it may be impossible to com-
pute equilibria in finite games. As da Costa and Doria (2005) explain,
Tsuji et al. show that the “determination of equilibrium prices in a
competitive market” is “formally equivalent” to “determining equi-
librium in finite noncooperative Nash games,” and will sometimes
be, therefore, formally impossible. “So, the main argument in favor
of a planned economy clearly breaks down.” Yet they report, “the
equilibrium point of the market is eventually reached while we cannot
in general compute it beforehand” (pp. 38–39). The dream of a thor-
oughgoing socialist planning that replaces all market mechanisms
has been shown to be impossible and, therefore, a form of magical
thinking.
Building on Rice’s theorem, da Costa and Doria (2014) show
that “For any set of policies P there will be a state p of the economy
for which the action of those policies will be algorithmically unpre-
dictable” (p. 242). The general thrust of this and their related results
is that general regulatory schemes are impossible. Velupillai (2007)
shows that “an effective theory of economic policy is impossible”
for an economy that can be modeled as “a dynamical system capable of computation universality” (2007, pp. 273, 280). He links this
important result to F. A. Hayek’s “lifelong skepticism on the scope
for policy in economies that emerge and form spontaneous orders”
(Velupillai, 2007, p. 288).
These results seem to suggest that the Gödel–Turing phenomenon challenges the power of the powerful. Consider da Costa
and Doria’s proof that a general theory of regulation is not decidable.
da Costa and Doria show that certain states of the economy are for-
mally undecidable under any regulatory scheme. For any regulatory
scheme and any two decidable states of the economy, we can identify
a third state of the economy that is undecidable. This result cuts the
supports out from under the policy expert’s pretence of knowledge.
(See Hayek (1989) for a discussion of the “pretence of knowledge” in
modern economics.) If we are not generally in a position to predict the
results of regulation, it becomes harder to justify any regulatory —
or deregulatory — proposal. The economic expert who would design,
for example, financial market regulations must be able to say what
will happen with the regulation and what will happen without it. But
if, as da Costa and Doria show, it is not generally possible to make such
predictions, economic experts’ advice would seem to have little value
to policymakers. And the policymakers would seem to have less value
as well! If this is the sort of subversive result that comes from the
Gödel–Turing phenomenon, perhaps we should not be surprised that
it has not entered fully into the discussion in mainstream economics.
da Costa and Doria use an extension of Rice’s theorem to produce
their subversive result. Rice’s theorem is a fitting tool for them to
use because would-be regulators and economic experts are essentially
trying to program the economy as if it were a computer. “Roughly,”
they explain, Rice’s theorem “asserts that you cannot test for prop-
erties of programs in the general case” (p. 238). Rice’s theorem
implies, in essence, that I have no general procedure to ensure the
computer program I wrote does or does not have any given property.
As Tychonievich (2012) eloquently points out, “This includes nice
little properties like ‘doesn’t send all my data to my competitors and
then misuse my hardware until it over-heats and fries’ or ‘saves my
documents when I ask it to.’”
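The reasoning behind Rice’s theorem can be sketched via the standard reduction from the halting problem. The property “returns 42” and every name below are illustrative assumptions, not da Costa and Doria’s construction:

```python
def halting_to_property(machine, tape):
    """Reduce halting to the semantic property 'returns 42'.
    The thunk below returns 42 exactly when machine(tape) halts,
    so a total decider for 'returns 42' would also decide the
    halting problem -- which Turing showed is impossible. Hence
    no such decider for this (or any other nontrivial semantic
    property) can exist."""
    def wrapper():
        machine(tape)  # runs forever iff `machine` never halts on `tape`
        return 42
    return wrapper

# With a machine that plainly halts, the constructed program returns 42:
print(halting_to_property(lambda tape: None, "any input")())  # -> 42
```

The same wrapper trick works for any nontrivial property of a program’s behavior, which is why the theorem rules out general testing of semantic properties rather than just one.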
Experts and regulators imagine themselves able to program and
reprogram the economy without the epistemic dangers revealed by
Rice’s theorem. They put themselves in the position of the “man
of system” excoriated by Smith (1759, pp. 380–381). The “man of system,” Smith explained,

seems to imagine that he can arrange the different members of a
great society with as much ease as the hand arranges the different
pieces upon a chess-board. He does not consider that the pieces
upon the chess-board have no other principle of motion besides that
which the hand impresses upon them; but that, in the great chess-
board of human society, every single piece has a principle of motion
of its own, altogether different from that which the legislature might
choose to impress upon it.

For economics and other social sciences, the power in Rice’s theorem
is that it shows that Smith’s “man of system” overreaches even when
no piece has a principle of motion of its own. How much greater is
this overreach when each piece does have a principle of motion of its
own?
Whether for reasons given by Adam Smith or by Rice’s theorem,
economists and other experts should not pretend to stand above the
system like the man of system. The observer is in the system, not
above it. Wolpert (2001) considers all computers in the system simul-
taneously. He shows that for any pair of computers, it is impossible
for each to reliably predict the output of the other. It follows that if
at least one private actor uses a computer as powerful as that of the
regulator, then the regulator will make mistakes.
Wolpert (2001, p. 016128-1) shows that “the universe cannot con-
tain a computer to which one can pose any arbitrary computational
task”. Any computer that exists in the world will sometimes be mis-
taken about the world. Wolpert builds on this basic result to show
that no computer in the world can predict everything about the world
ahead of time. Even if the computer could answer every question
about moment t + x given input available at time t, for at least some
questions the computer would not be able to spool out its “predic-
tion” until after t + x. In this sense, no computer can “process infor-
mation faster than the universe.” Wolpert is thinking about physics.
Citing Wolpert (1996), Koppl and Rosser (2002, pp. 354–358) show
that his results about the limits of prediction apply to social science
as well. Wolpert (2001, pp. 016128-3 and 016128-14) notes that his
results are robust to the interpretation of the universe in his analy-
sis; they require only that the computers considered exist within the
system in question.
The model of the universe that Wolpert uses to get his unpre-
dictability result is so broad that it “does not rely on temporal order-
ings of events.” Thus, his unpredictability result also shows that
“no computer can infallibly predict the past” either (Wolpert, 2001,
p. 016128-1, emphasis in original). This limit, in turn, implies that
memory is fallible. Nor can there be “an infallible general purpose
observation device” or “an infallible general purpose control device”
(p. 016128-2).
Wolpert’s analysis would seem to have an intriguing theological
implication. If God intervenes in the world actively, then God is not
omniscient. If God is omniscient and unerring, it is only because he
does not interact with the world he created. This theological insight
raises a political question. If we must imagine a personal god to be
fallible, how much faith can we place in human regulators, lawgivers,
and overlords?
Wolpert’s concept of “god computer” helps to reveal limits to
error correction. He defines a god computer as “any physical com-
puter in a set of computers such that all other physical comput-
ers in that set are predictable to the god computer.” Any set of
(“pairwise-distinguishable”) computers can have at most one god
computer. A god computer is omniscient if it can predict not merely
any other individual computer in the set, but all of them simultane-
ously (Wolpert, 2001, p. 016128-10). Notice that an omniscient god
computer is “omniscient” only with respect to the outputs of other
computers in the set. One might expect that a cooperative combina-
tion of fallible computers might be able to achieve perfection through
error correction. This does not seem to be the case, however. Consider
the case of three (“pairwise-distinguishable”) computers, C1, C2, and C3, where C2 corrects the prediction C3 makes about C1. “Then
it turns out that . . . if C1 is omniscient, it is not possible that C2 always correctly inputs a bit saying whether C3’s answer is the correct response to C3’s question.” This result is “a restriction on the efficacy of any error correction scheme in the presence of a distinguishable omniscient computer” (p. 016128-10).
Wolpert’s (2001, p. 016128-2) impossibility results “are all
derived through what is essentially a physical version of a Cretan liar
paradox . . . involving two instances of the putative computer rather
than self-referential computers”. In other words, Wolpert gets his
results by putting all computers in the model and thus avoiding
the errors created by trying to adopt the godlike perspective of a
detached observer, the man of system. When you put all computers
into the model, self-reference creates computability limits.
Like the boy in “The Emperor’s New Clothes,” we should naively
ask whether the agents in our economic models can have the knowl-
edge imputed to them and whether they can do the mathematical
calculations imputed to them. Given the conditions favoring expert
failure in economics, however, it would itself be a form of magical
thinking to believe that such failure can be avoided or corrected
by convincing logical argument. The computability lessons of Tsuji
et al. (1994), da Costa and Doria (2014), Wolpert (2001), and oth-
ers may be doomed because economic experts have an incentive to
persist in magical thinking. An interesting episode in the intersec-
tion between computability theory and economics may bolster my
pessimistic assessment.
Canning (1992) produced a result similar to that of Tsuji et al.
(1994), though not as strong. He showed that a Nash game may not
be algorithmically solvable if there is no “Nash equilibrium in which
the equilibrium strategy is a best reply to all best replies to itself”
(p. 877). He calls this condition “strict Nash.” This is a fundamental restriction, yet he curiously argued that his result implied only a “slight” adjustment to then-current practice in social science and applied game
theory.
Canning notes, “Strictness is required because we can construct
an ‘imp’ algorithm, {r}, which always acts rationally, but, which,
given this constraint, tries to make the solution algorithm’s choice
wrong. An ‘imp’ is a rational algorithm which acts as a ‘demon’
whenever it can do so at no cost to itself” (p. 884). In his conclusion,
Canning (1992, p. 888) says, “The usual domain used in game theory,
games with Nash equilibrium and opponents who are rational, needs
to be qualified slightly to be admissible. The equilibrium strategy
must be a best reply to all best replies to itself but we require only
rationality relative to the domain being faced.” The requirement of
“strictness” excludes many games including two given considerable
attention by von Neumann and Morgenstern (1953), namely, “Match-
ing Pennies” and “Stone, Paper, Scissors.” It seems doubtful whether
Canning’s restriction should be considered “slight.” If we recognize
that Canning’s result is more monumental than slight, then we chal-
lenge the power, prestige, and incomes of economic experts.
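The exclusion is easy to verify for Matching Pennies, which has no pure-strategy Nash equilibrium at all, let alone a strict one. The enumeration below uses the standard textbook payoffs (my illustration, not Canning’s notation):

```python
from itertools import product

# Standard Matching Pennies payoffs: player 0 wins on a match,
# player 1 wins on a mismatch (heads = 0, tails = 1).
payoffs = {
    (0, 0): (1, -1), (0, 1): (-1, 1),
    (1, 0): (-1, 1), (1, 1): (1, -1),
}

def is_pure_nash(profile):
    """True if neither player gains from a unilateral deviation."""
    for player in (0, 1):
        for alt in (0, 1):
            deviant = list(profile)
            deviant[player] = alt
            if payoffs[tuple(deviant)][player] > payoffs[profile][player]:
                return False
    return True

print([p for p in product((0, 1), repeat=2) if is_pure_nash(p)])  # -> []
```

Since every pure profile leaves someone wanting to switch, the only equilibrium is in mixed strategies, and a fortiori nothing satisfies Canning’s “best reply to all best replies to itself” condition.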
One cannot help but wonder if Canning’s result was acceptable in
a leading economics journal only because his deeply subversive result
that many canonical Nash games are uncomputable was cloaked in
the blasé language of rationality. In his conclusion, Canning says
reassuringly, “Our main result is that, providing we limit the domain
of applicability of game theory in a natural way, computability does
not undermine rationality” (p. 887). Having just killed Nash equi-
librium, Canning stands over the corpse saying, “Move along, move
along. There is nothing to see here.”
If economists were ever to allow the Gödel–Turing phenomenon
to fully inform their policy advice, the demand for such advice would
fall precipitously. Unfortunately, as I argued earlier, the doctrine of
consumer sovereignty applies no less forcefully in the market for ideas
than in the market for men’s shoes. The demand for magical thinking
continues to be met with an abundant supply, with the result that the
Gödel–Turing phenomenon has been marginalized within academic
economics.
The Gödel–Turing phenomenon is a strictly modern develop-
ment. There were some prior hints, such as Poincaré’s analysis of
the three-body problem. But the problem was essentially unrec-
ognized before the publication of Gödel’s famous paper in 1931.
Schwindler’s recently discovered “Fragment auf Rechtsgeschichte,”
however, shows us an ancient society that seems to have had a fairly
deep understanding of an essentially similar problem arising in law
(Schwindler, 2020). Schwindler has refused to translate the Frag-
ment into a modern language or even any other ancient language on
the grounds that “Translation from one language to another is com-
pletely impossible” (Übersetzung von einer Sprache in eine andere ist
manchmal schwierig). Citing this unfortunate decision by Schwindler
(2020), LeCrétin (1899) attempted to provide the German transla-
tion Schwindler might have produced. Unfortunately, LeCrétin knew
no German at the beginning of the process. He was therefore forced
to rely on a French–German dictionary to complete the task. The
poor quality of LeCrétin’s translation has been an obstacle to more
extensive and informative research into the Fragment and its rela-
tionship to law and custom in the rest of the ancient world. For this
reason, I have solicited the help of Hail Goodfellow of the Syracuse
University Classics Department to produce a new translation, this
time into English. It is published here for the first time. See Good-
fellow (2016) for a description of the translation, its relation to the
original, how it corrects LeCrétin’s earlier translation, and evidence
that the famous “Lost History” described by Miranda (1658) might
be a copy of Schwindler’s Fragment.

1. Fragment on legal history


The law of this country, travelers reported, was ancient and
unbending. A minor transgression could cost the unfortunate citi-
zen his estate or, if he was poor, a hand or a foot. The severity of
the law was made worse by its mass. The original law was simple
and compact. But each law in the system generated new ones,
and the new ones others after that until the central problem of
the legislature was crafting laws to enable the storage of all the
law books that had accumulated over the centuries. Eventually all
public and private buildings were commandeered by the state for
the storage of law books. The law required the people to live with
law books lining their walls and stacked up under their tables and
on top of their cabinets.
Legal scholars in this country were once despised as charlatans.
Over the centuries, however, they grew in stature and came to
be venerated as holy men. These divines of the law were divided
into contesting schools. One group held that the law was arbitrary,
absurd, and impossible. Others held that the law itself required the
citizen to believe that all of the laws in all of the books covering the
interior spaces of the country were implicit in a few deep unifying
principles. Some went further by insisting that only one principle
unified the whole. They sought the Original Law from which it
would be possible to infer all other laws without the necessity of
studying ancient texts.
This last group was vindicated when the venerated Hilbertus
finally discovered the Original Law. The First and Original
Prohibition, he discovered, forbade knowledge of the law. The discovery
of this law was the highest crime recognized in the law. From
this first great taboo all other prohibitions and duties followed.
Whether from foresight or considerations of his personal safety,
Hilbertus attempted to hide the content of the Original Law from
other jurists. They quickly understood, however, the meaning of
his silence and fell into accusations and recriminations that grew
increasingly violent over time. Each disputant claimed the right to
punish the others. Ordinary citizens were drawn into the conflict,
and it became the sacred duty of every citizen to try every other
citizen for the crime of knowing the law. In the end, no one escaped
the judgment of the law.

It is unknown, of course, where the “country” described in the
Fragment may have been located or whether it was not, perhaps,
pure fiction. In any event, it seems evident that law did not arise
from custom in the country of the Fragment. It seems to have been
imposed by a lawgiver. This lawgiver must have held himself above
the law if he was to avoid the severest penalty of the law. The
top-down imposition of the Original Law ensured that consistency
would be a higher legal principle than reasonableness. But consis-
tency is a harsh mistress. Like ice-nine in Vonnegut’s Cat’s Cradle,
it can transform the propositional ecosystem into a frozen wasteland.
For classical logic has no mechanism of self-correction. It is not
adaptive. In the land of Schwindler’s Fragment, the law became
an iron cage, and jurists became jailers. It might seem tempting
to imagine what might have happened if the jurists of this coun-
try had been familiar with the adaptive, paraconsistent logics of da
Costa and others (da Costa et al., 2007). But consistency seems to
have been the very point of the Original Law. Indeed, we cannot
exclude that the law was meant as vengeance against a defeated
enemy.
The legal history recounted in Schwindler’s Fragment should
serve as a kind of warning to economists. In both economics and
the country of the Fragment, the remorseless application of classical
logic to an impossible premise produces dire consequences. We may
count ourselves lucky in economics that there is an exit from bed-
lam. We can, if we choose, study the Gödel–Turing phenomenon to
understand it, conquer it, and usefully master it. Unfortunately, when
germs sponsor research, economists have every incentive to mortify,
ignore, and deny it. For economists willing to think and work inde-
pendently, however, for those economists willing to live without the
patronage of germs, the Gödel–Turing phenomenon can and will be
understood, conquered, and usefully mastered. Newton da Costa’s
work is a good place to start.

References
Arnush, M. (2005). Pilgrimage to the Oracle of Apollo at Delphi: patterns of
public and private consultation. In Elsner, J. and Rutherford, I. (eds.), Pil-
grimage in Graeco-Roman and Early Christian Antiquity: Seeing the Gods,
pp. 97–110. Oxford: Oxford University Press.
Berger, P. and Luckmann, T. (1966). The Social Construction of Reality. New
York: Anchor Books.
Besley, T. and Hennessy, P. (2009). Letter to Queen Elizabeth, 22 July 2009,
http://www.feed-charity.org/user/image/besley-hennessy2009a.pdf (acces-
sed 11 January 2016).
Blinder, A. S. and Reis, R. (2005). Understanding the Greenspan standard. In The
Greenspan Era: Lessons for the Future. Proceedings of Symposium Sponsored
by The Federal Reserve Bank of Kansas City, August 25–27, 2005, Jackson
Hole, Wyoming, pp. 11–96. Available at: http://www.kc.frb.org/publications/
research/escp/escp-2005.cfm.
Broad, W. J. (2006). The Oracle: Ancient Delphi and the Science Behind Its Lost
Secrets. New York: The Penguin Press.
Brock, W. A. and Hommes, C. H. (1997). A rational route to randomness. Econo-
metrica, 65, 1059–1095.
Canning, D. (1992). Rationality, computability and Nash equilibrium. Economet-
rica, 60, 877–888.
Chaitin, G., da Costa, N. and Doria, F. A. (2012). Gödel’s Way: Exploits into an
Undecidable World. CRC Press.
da Costa, N. C. A. and Doria, F. A. (2014). On an extension of Rice’s theo-
rem and its applications in mathematical economics. Advances in Austrian
Economics, 18, 237–257.
da Costa, N. C. A. and Doria, F. A. (2005). Computing the future. In Velupil-
lai, K. V. (ed.), Computability, Complexity and Constructivity in Economic
Analysis, pp. 15–50. Oxford: Blackwell Publishing.
da Costa, N. C. A., Krause, D. and Bueno, O. (2007). Paraconsistent logics and
paraconsistency. In Jacquette, D. (ed.), Philosophy of Logic, pp. 791–908.
Elsevier.
Fairbanks, A. (1916). Herodotus and the Oracle at Delphi. The Classical Journal,
1(2), 37–48.
Farmer, R. E. A. (2011). Confidence, crashes and animal spirits. Economic Jour-
nal, 122(559), 155–172.
Friedman, J. (2012). An Epistemological critique of technocracy. Unpublished
manuscript.
FRS (Federal Reserve System). The Federal Reserve System: Purposes and Func-
tions, 9th Edition. Washington DC: Board of Governors of the Federal
Reserve System.
Garrett, B. L. and Neufeld, P. J. (2009). Invalid forensic science testimony and
wrongful convictions. Virginia Law Review, 95(1), 1–97.
Gestring, B. (2009). The dawn of the ‘Forensic Science Provocateur.’ CAC News,
1st quarter 2009, 25–28.
Goodfellow, H. (2016). On Exactitude in History. Unpublished manuscript.
Greenspan, A. (2004). Risk and uncertainty in monetary policy. American Eco-
nomic Review, 94(2), 33–40.
Greenspan, A. (2008). Testimony of Dr. Alan Greenspan. Prepared for Com-
mittee of Government Oversight and Reform. October 23. Available online:
https://www.gpo.gov/fdsys/pkg/CHRG-110hhrg55764/html/CHRG-
110hhrg55764.htm.
Grossman, S. J. and Stiglitz, J. (1980). On the impossibility of informationally
efficient markets. American Economic Review, 70(3), 393–408.
Hayek, F. A. (1989). The pretence of knowledge. The American Economic Review,
79(6), 3–7.
Hobbes, T. (1909 [1651]). Leviathan. Clarendon Press.
Hutt, W. H. ([1936] 1990). Economists and the Public: A Study of Competition
and Opinion. New Brunswick, NJ: Transaction Publishers.
Johnson, C. G. (2013). Female inmates sterilized in California prisons without
approval. The Center for Investigative Reporting, 07 July 2013. Downloaded
3 February 2016 from http://cironline.org/reports/female-inmates-sterilized-
california-prisons-without-approval-4917.
Kirman, A. (2010). The economic crisis is a crisis for economic theory. CESifo
Economic Studies, 56(4), 498–535.
Koppl, R. (2015). The rule of experts. In Boettke, P. and Coyne, C. (eds.), Oxford
Handbook of Austrian Economics. Oxford: Oxford University Press.
Koppl, R. (2012). Information choice theory. Advances in Austrian Economics,
17, 171–202.
Koppl, R. (2010). Organization economics explains many forensic science errors.
Journal of Institutional Economics, 6(1), 71–81.
Koppl, R., Kauffman, S., Felin, T. and Longo, G. (2015). Economics for a creative
world. Journal of Institutional Economics, 11(1), 1–31.
Koppl, R. and Rosser, B. (2002). All that I have to say has already crossed your
mind. Metroeconomica, 53(4), 339–360.
LeCrétin, V. (1899). Allemand est difficile. Annales de l’histoire inventée, 31(4),
159–265.
Lloyd-Jones, H. (1976). The Delphic Oracle. Greece & Rome, 23(1), 60–73.
Lucas, R. (2009). In defence of the dismal science. The Economist, 6 August
2009. Downloaded 11 January 2016 from http://www.economist.com/node/
14165405/print.
Mackay, C. (1852). Memoirs of Extraordinary Popular Delusions and the Madness
of Crowds. London: Office of the National Illustrated Library, 1852. [Online]
available from http://www.econlib.org/library/Mackay/macEx15.html;
accessed 5 February 2004.
Miranda, S. (1658). Viajes de varones prudentes, Libro 3, Cap. 14159, Lérida.
NAS Committee on Identifying the Needs of the Forensic Sciences Community.
(2009). Strengthening Forensic Science in the United States: A Path Forward.
National Academies Press, http://www.nap.edu/catalog.php?record_id=
12589.
Neumann, J. V. and Morgenstern, O. (1953). Theory of Games and Economic
Behavior. Princeton, NJ: Princeton University Press.
Nichols, R. G. (2007). Defending the scientific foundations of the firearms and
toolmark identification discipline: responding to recent challenges. Journal
of Forensic Science, 52(3), 586–594.
Niksa, V. (1959). The role of quantitative thinking in modern economic theory.
Review of Social Economy, 17(2), 151–173.
Nilsson, M. P. ([1940] 1972). Greek Folk Religion. Philadelphia: University of
Pennsylvania Press.
Peart, S. and Levy, D. M. (2012). If germs could sponsor research: Reflections
on sympathetic connections amongst subjects and researchers. Advances in
Austrian Economics, 17, 19–35.
Pierce, A. (2008). The queen asks why no one saw the credit crunch coming.
Telegraph, 5 November 2008.
Prasad, K. (1991). Computability and randomness of Nash equilibrium in infinite
games. Journal of Mathematical Economics, 20, 429–442.
Reeve, C. D. C. (1990). Socrates in the Apology. Indianapolis, IN: Hackett Pub-
lishing Company.
Saari, D. G. and Simon, C. P. (1978). Effective price mechanisms. Econometrica,
46(5), 1097–1125.
Schwindler, R. (2020). Fragment auf Rechtsgeschichte. Studies in Historical
Invention, 3, 14159.
Shreffler, K. M., McQuillan, J., Greil, A. L. and Johnson, D. R. (2015). Sur-
gical sterilization, regret, and race: Contemporary patterns. Social Science
Research, 50, 31–45.
Smith, A. ([1759] 1976). The Theory of Moral Sentiments. Indianapolis, IN:
Liberty Classics.
Stern, A. M. (2005). Sterilized in the name of public health: race, immigration,
and reproductive control in modern California. American Journal of Public
Health, 95, 1128–1138.
Thompson, W. C. (2009). Painting the target around the matching profile: the
Texas sharpshooter fallacy in forensic DNA interpretation. Law, Probability
and Risk, 8(3), 257–276.
Tsuji, M., da Costa, N. C. A. and Doria, F. A. (1998). The incompleteness of
theories of games. Journal of Philosophical Logic, 27, 553–564.
Tychonievich, L. (2012). Rice’s Theorem. Downloaded 8 February 2016 from
http://www.cs.virginia.edu/∼lat7h/blog/posts/270.html.
Velupillai, V. (2007). The impossibility of an effective theory of policy in a com-
plex economy. In Salzano, M. and Colander, D. (eds.), Complexity Hints for
Economic Policy. Milan: Springer.
Weinstein, M. M. (2009). Paul A. Samuelson, Economist, Dies at 94. New York
Times. 13 December 2009.
White, L. H. (2005). The Federal Reserve System’s influence on research in mon-
etary economics. Economic Journal Watch, 2(2), 325–354.
Wittgenstein, L. (1958). Philosophical Investigations, Third Edition. New York:
Macmillan.
Wolpert, D. H. (1996). An incompleteness theorem for calculating the future.
Santa Fe Institute Working Paper 96-03-008.
Wolpert, D. H. (2001). Computational capabilities of physical systems. Physical
Review E, 65, 016128 1–27.
Xenophon, (2007). The Apology of Socrates. Translated by Dakyns, H. G.
eBooks@Adelaide.

Chapter 5

Complex Systems, Decision-Making
and Computability:
A Social Science Perspective

Denis J. Dean
School of Economic, Political and Policy Sciences
Program in Geospatial Sciences
The University of Texas at Dallas, TX, USA

Euel Elliott
School of Economic, Political and Policy Sciences
Program in Public Policy and Political Economy
The University of Texas at Dallas, TX, USA

1. Introduction
The study of what are commonly referred to as complex adaptive
systems (CASs) is relevant for understanding a sweeping array of
phenomena in the behavioral, life, and social sciences. The presence
of complex systems poses unique challenges for those scholars seeking
to study CASs and better understand their dynamics at different
levels of aggregation. CASs also pose important challenges for those
who seek to better understand how agents acting within these sys-
tems behave.
We begin this chapter with a discussion of CASs and their
essential features. We explore different methodologies for modeling
CASs. In particular, we focus on the relevance of agent-based
models (ABMs) to understanding such systems. We then discuss the
relationship between ABMs and important issues of computational
complexity relating to decision making and learning, including the
suggestion that the reality of CASs requires rethinking of traditional
assumptions about the role of standard or classical logic. We conclude
with some final thoughts involving the relationship between CAS and
the social sciences.

2. Overview of complex adaptive systems


A brief synopsis of the characteristics of CASs is in order. CASs are
typically characterized by multiple entities (sometimes termed actors
or agents) interacting in dynamic ways, with their behavior adapt-
ing and evolving over time in response to their environment and the
behavior of other actors or agents. Moreover, CASs are character-
ized by complex, macrolevel phenomena emerging from interactions
occurring at the level of the individual agents. This implies that CASs
can exhibit self-organizing and emergent behavior. This behavior is
self-organizing in the sense that the behavior of the system is not
predetermined or otherwise guided through some top-down, hierar-
chical control mechanism; and it is emergent in the sense that the
behaviors arise from the functioning of the system itself and do not
rely on any external agents or forces in order to evolve.
There are numerous examples of CASs, including the biosphere
(Hinkelman et al., 2011), many kinds of group-based behaviors (Axel-
rod, 1981, 1987, 1997a, b), cultures (Cecconi et al., 2010), various
aspects of global economic dynamics (Dawid and Neugart, 2011),
international politics (Cederman, 1997), and segregation dynamics
(Schelling, 1971), among others (see Bonabeau, 2002).
The interactions taking place within a CAS may be simple or
complex (or more frequently both), but are always abundant and typ-
ically characterized by complex, nonlinear feedback processes. CASs
typically operate in nonequilibrium conditions, and thus are con-
stantly evolving into new states rather than simply maintaining the
status quo. Many CASs also exhibit self-similar or fractal properties.
In other words, CAS phenomena are frequently scale-invariant, i.e.,
patterns observed at the microscale are replicated at the macroscale.
Importantly, CASs are characterized by positive feedback
processes that run counter to the more typical negative feedback effects
with which we are familiar. Common negative feedback systems
involve the muting of signals over time, that is, the consequences
of some action or event become less as that action or event recedes
into the past. In contrast, at least some of the feedback processes tak-
ing place within CASs exhibit positive feedback, where consequences
become more significant over time. This is important because schol-
ars in the social and other sciences have tended to think in terms of
linear, negative feedback regimes. This approach can be limiting; for
example, consider rush hour traffic in a major city. In terms of traffic
flow, the initial consequences of an accident along a major highway
might be relatively minor, with only a lane or two of traffic being
impacted. However, as traffic in those lanes backs up and impacts
other lanes, emergency vehicles arrive on the scene and impact addi-
tional lanes of traffic, backups along the freeway lead to ancillary
backups along feeder roads, and so forth; the consequences of the
initial event become more significant over time, not less.1
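The contrast between damping and compounding can be made concrete with a toy difference equation. This is a sketch only: the damping factor 0.5 and amplification factor 1.5 are arbitrary illustrative choices, not values drawn from the text.

```python
# Toy illustration of negative vs. positive feedback: the same unit
# shock dies out under a damping factor below 1, but compounds under
# an amplification factor above 1. Both factors are arbitrary choices.
def propagate(shock, factor, steps):
    """Return the shock's magnitude at each step under repeated feedback."""
    path = [shock]
    for _ in range(steps):
        path.append(path[-1] * factor)
    return path

negative = propagate(1.0, 0.5, 10)   # consequences fade over time
positive = propagate(1.0, 1.5, 10)   # consequences grow over time

print(round(negative[-1], 4))  # ~0.001: the shock has nearly vanished
print(round(positive[-1], 1))  # ~57.7: the shock has compounded
```

The traffic-accident example above corresponds to the second trajectory: each period's disruption feeds a larger disruption in the next.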
As a consequence of their nonlinear, positive feedback systems,
the distributions of event occurrences within CASs are frequently
extremely nonnormal. CASs with large numbers of heterogeneous

1 Researchers have discovered a vast array of phenomena in the biological and
physical, as well as in the social and economic, world, which behave in a manner
that requires a quite different set of assumptions than those upon which a linear
world view was based. Much of this work was described under the unfortunately
named moniker of “chaos theory”, misnamed in the sense that we find phenomena
that while exhibiting seeming randomness and disorder, exhibit complex struc-
tures that require new methodologies to uncover. This research agenda has been
subsumed in the past two decades or so by the sciences of complexity and com-
plex systems, whereby chaotic dynamics have to be understood as being part
of a more encompassing set of processes governed by principles of emergence,
self-organization and self-organized criticality. Work by Kauffman (1993, 2000),
Holland (Holland and Miller, 1991), and others is thoroughly consistent with
this line of research. It is also worth noting that paralleling these new perspectives
in the physical, natural and social sciences, the fundamental paradigm shift in
mathematics was critical. Gödel’s startling revelations as to the incompleteness of
certain formal systems and the parallel research by Church and Turing regarding
the limits of computability added to the new perspective.
agents interacting with each other generate probability distributions
that tend to be fat-tailed, similar to Cauchy kinds of processes.
For example, the models used by analysts to assess risk prior to
the financial crisis of 2008 and the resulting “Great Recession” of
2008–09 assumed that financial risks could be understood in terms
of a normal, bell-shaped curve (Taleb, 2010). Thus, when evaluat-
ing the risks of the extreme events located in the tails of the dis-
tribution, the likelihood of a catastrophic decline in housing prices
was deemed to be extremely low. The fact that a low-probability
event occurred and triggered the great recession is not by itself proof
that this probability was incorrectly calculated; however, a consensus
seems to be emerging that holds that risks in financial markets may
not be distributed normally, but rather, that certain events, such as
market-related phenomena, may more closely resemble a power-law
distribution with fat tails, with the implication that unlikely events
are more likely to happen than would be the case if a normal, thin-tail
distribution is assumed (Johansen and Sornette, 2001; Viswanathan
et al., 2003; So et al., 2008). However, it is not clear precisely why
complex adaptive phenomena exhibit fat tails, though the long-range
correlation that time series observations exhibit (see Viswanathan
et al., 2003) suggests that it has to do with the nonlinear feedback processes
and nonindependence of observations.
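The thin-tail versus fat-tail contrast can be illustrated numerically. The sketch below compares tail frequencies for a standard normal and a Pareto (power-law) distribution; the tail index of 1.5 and the "extreme event" threshold of 6 are arbitrary assumptions for illustration, not estimates from financial data.

```python
import random

random.seed(42)
N = 100_000

# Draw from a standard normal and from a Pareto distribution with
# tail index alpha = 1.5 (an illustrative choice, not from the text).
normal_draws = [random.gauss(0, 1) for _ in range(N)]
pareto_draws = [random.paretovariate(1.5) for _ in range(N)]

# Frequency of "extreme" events, defined here as values exceeding 6.
# A 6-sigma event is vanishingly rare under the normal; under the
# power law, such events occur at an appreciable rate.
normal_tail = sum(x > 6 for x in normal_draws) / N
pareto_tail = sum(x > 6 for x in pareto_draws) / N

print(normal_tail)  # effectively 0 at this sample size
print(pareto_tail)  # roughly 0.07 with alpha = 1.5
```

The practical point is the one made above: under the fat-tailed distribution, "unlikely" events are far more frequent than a normal-curve risk model would predict.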

3. CASs, modeling, and computation


The modeling of CASs has important implications in terms
of computation, computational complexity, and the
computability/noncomputability issue. The question arises: under what
circumstances are models representing CAS processes computable and,
if computable, do these models allow for polynomial-time (P) or
nondeterministic polynomial-time (NP) solutions? Given the extraordinarily
large number of computational approaches that can be applied
to the modeling of CASs, these questions do not have simple answers.
However, if the discussion is limited to a single computational method
that lends itself exceptionally well to the CAS framework, some
progress can be made toward answering these questions.
The computational methodology that we focus on in this chapter
is agent-based modeling (ABM). What makes ABM such an attrac-
tive approach for representing CASs is that at its core, ABM pos-
sesses many of the fundamental characteristics that define CASs.
Both CASs and ABM focus on multiple agents interacting with their
environment and with one another, both allow for arbitrarily com-
plex feedback loops that impact future agent behaviors, and both
allow the behavior of the system to evolve from the conglomeration
of the actions of the individual agents. These similarities make ABM
a natural choice for modeling CASs.
Before we turn the focus of this chapter exclusively on ABM
approaches toward modeling CASs, it will be useful to look at some
of the alternatives, and their computational implications. One alter-
native approach involves the use of artificial neural networks (ANNs).
ANNs can be described as a set of simple statistical learning devices
(nodes) linked together in a network that is designed to mimic the
structure of neurons in the human brain. Given this architecture,
ANNs are capable of learning and adapting: when trained with
matched sets of input patterns (fed into the first layer of nodes
in the network) and output patterns (produced by the last layer of
nodes), the network will learn the proper analytic techniques to apply
at each node throughout the network to produce a predicted output
pattern that as closely as possible recreates the observed output pat-
tern, given the input pattern that produced the observed output
(Karayiannis and Venetsanopoulos, 2013).
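A full multi-layer ANN is beyond the scope of a short example, but the train-on-matched-input-output-patterns idea can be sketched with a single node, i.e., a perceptron. The learning rate, epoch count, and the choice of the AND function as training data are all arbitrary illustrative assumptions.

```python
# A minimal single-node illustration of supervised learning in the
# spirit of an ANN: a perceptron trained on the (linearly separable)
# AND function. Weights adjust whenever prediction and target differ.
def train_perceptron(samples, lr=0.1, epochs=50):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1    # adjust each weight toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
outputs = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in AND]
print(outputs)  # [0, 0, 0, 1] once training has converged
```

Real ANNs stack many such nodes in layers and use more sophisticated training rules, but the learn-from-matched-patterns mechanism is the same.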
ANNs are often considered superior to conventional statistical
predictive techniques because they make no assumptions about the
distributional qualities of the data, the form of the relationships (e.g.,
linear or not) between the variables, or the degree of correlation
between input variables. These same qualities make them attractive
to those modeling CASs. However, while the advantages of ANNs are
real, ANNs themselves do not in any meaningful way replicate the
function of CASs. While it is sometimes argued that the interactions
between the nodes of an ANN are synonymous with the interactions
of agents within a CAS, such similarities are spurious. Agents in a
CAS are independent actors working within an ecosystem of other
actors and environmental objects; nodes within an ANN are linked
computing structures that work collectively to produce a predicted
output given a predicted input. Given this, while it may be possible
for ANNs to recreate the results of certain CASs, it is certainly not
possible for them to recreate the function of a CAS.
A second computational architecture that can be used for
exploring CASs are genetic algorithms (GAs). GAs are evolutionary
processes that act upon algorithms intended to recreate observed
outputs from observed inputs. These algorithms are iteratively mod-
ified using techniques that are similar to and inspired by those
found in natural evolutionary processes, including cloning, muta-
tion, and crossover. GA-based analyses have many of the same
advantages as do ANN-based analyses (e.g., no a priori assumptions
about distributional qualities of input data, no assumptions about
the form of the relationships between variables, no independence
assumption, etc.), and they have the added benefit that analyses
of the final algorithm produced through the evolutionary process
can yield insights into the function of the system being modeled
(Michalewicz, 2013). However, it is equally clear that the functioning
of GAs is inherently very different from the agent-based functioning
of CASs.
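The selection–crossover–mutation loop described above can be sketched with the standard "OneMax" toy problem (evolving bit strings toward all ones). All parameters below — population size, mutation rate, generation count — are illustrative assumptions.

```python
import random

random.seed(0)

# A minimal genetic-algorithm sketch: evolve 20-bit strings toward the
# all-ones string, with fitness simply the count of ones ("OneMax").
LENGTH, POP, GENERATIONS, MUT_RATE = 20, 30, 60, 0.05

def fitness(bits):
    return sum(bits)

def crossover(a, b):
    cut = random.randrange(1, LENGTH)          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - bit if random.random() < MUT_RATE else bit for bit in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]            # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # at or near the maximum of 20 after evolution
```

Note how unlike an ABM, nothing here is an agent interacting with an environment: the "population" is just a pool of candidate solutions being iteratively improved.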
Cellular automata (CA) represent another approach for those
seeking to model CASs (Wolfram, 2008). A cellular automaton
consists of a grid of cells, each of which can be defined as being in one of
several states. CAs can be characterized by two or more dimensions,
although the higher the dimensionality the more complex the poten-
tial behaviors. For each cell, a set of cells called its neighborhood is
defined. These neighborhoods can vary in size, defined as the distance
between the cells in question. The value of any cell affects the cells
in its neighborhood (Preston and Duff, 2013). It is possible to think
of a CA model as a rather rigid form of an ABM: the cells repre-
sent agents, but they are immobile and can only interact with other
agents within their neighborhood. While these limitations are not
detrimental to some applications, they can obviously be problematic
in other situations. Thus, it is perhaps best to view CA models as a
special case of the more general ABM approach.
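The grid-of-cells idea can be sketched in a few lines with a one-dimensional elementary cellular automaton. Rule 30 (in Wolfram's rule numbering) and the ring of 31 cells are arbitrary choices for illustration.

```python
# A one-dimensional elementary cellular automaton: each cell is 0 or 1,
# and its next state depends only on its neighborhood (left, self, right).
# Rule 30 is used here purely as an example.
RULE = 30
rule_table = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells):
    """Apply the rule to every cell simultaneously (wrapping at the edges)."""
    n = len(cells)
    return [rule_table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single live cell in the middle of a ring of 31 cells.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Despite the trivial local rule, the printed rows display the complex, seemingly disordered structure that makes CAs a popular laboratory for emergence.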
ABMs consist of agents interacting in simulated time and space.
In the object-oriented computer programming motif, agents are sim-
ply instances of an object of the type “agent.” An ABM can consist
of multiple types of agents, and there can be one or more instances
of each type of agent. Each agent has certain properties that can
change over time, or as a result of some action taken by the agent,
or as a result of interactions with other agents and/or their
environment. As just implied, agents also have certain abilities that they
can employ to move through space, manipulate their environment,
or interact with other agents. Returning to our earlier example, an
ABM designed to simulate traffic would have a “car” agent, and there
would likely be many thousands of instances of this type of agent. Each
car agent would have certain properties (such as how much fuel is
in its tank), and these properties would change over time (e.g., as
the car consumes fuel) and through actions taken by the agent (e.g.,
the rate of fuel consumption would increase while the car was accel-
erating). Each car agent would also have certain abilities, such as
the abilities to accelerate, decelerate, turn left or right, and so forth.
Epstein and Axtell (1996) provide an excellent introduction to ABM
and the construction of simple artificial societies. A more expansive
discussion is seen in Epstein (1999). One point to keep in mind is that
researchers do not necessarily use these methodologies by themselves,
but will seek to exploit multiple approaches in conducting research.
One might seek to develop hybrid CA and GA models of a particu-
lar phenomenon of interest, for example. A major challenge for the
future is to better delineate which methodologies or combinations of
methodologies are most appropriate under particular circumstances
and conditions.
Each agent in an ABM exhibits a certain behavior (i.e., deter-
mines which one of its abilities to employ at any given time) based
on its current properties and the ways it is currently interacting
with other agents and its surrounding environment. Since the agent’s
properties can change throughout an ABM simulation, the agent’s
behavior can change as well. If an agent is properly designed, these
changes can reflect learning on the part of the agent.2 Note that most
ABMs employ at least some amount of stochasticity. While many
agent properties and behaviors are largely deterministic, others are
allowed to include random elements. These random variations are
employed to capture the vagaries of the real world.
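The "car" agent described above can be sketched in object-oriented form: properties, abilities, a behavior rule that responds to the environment, and a stochastic element. All numeric values and the congestion rule are invented for illustration.

```python
import random

random.seed(1)

# A bare-bones ABM sketch: many instances of one agent type ("car"),
# each with properties (speed, fuel), abilities (accelerate, decelerate),
# and a behavior rule with a random element. All numbers are arbitrary.
class Car:
    def __init__(self):
        self.speed = 0.0   # property: current speed
        self.fuel = 100.0  # property: fuel remaining

    def accelerate(self):
        self.speed += 5.0
        self.fuel -= 1.0   # accelerating burns extra fuel

    def decelerate(self):
        self.speed = max(0.0, self.speed - 5.0)

    def act(self, congestion):
        # Slow down with a probability that rises with congestion;
        # the 0.5 weight stands in for driver idiosyncrasy.
        if self.fuel <= 0 or random.random() < 0.5 * congestion:
            self.decelerate()
        else:
            self.accelerate()

cars = [Car() for _ in range(1000)]           # many instances of one agent type
for tick in range(50):                        # simulated time
    # The "environment" each agent sees: the fraction of near-stopped cars.
    congestion = sum(c.speed < 5 for c in cars) / len(cars)
    for car in cars:
        car.act(congestion)

mean_speed = sum(c.speed for c in cars) / len(cars)
print(round(mean_speed, 1))  # mean speed after 50 ticks
```

The system-level quantity (congestion) emerges from individual behaviors and feeds back into them, which is the essential ABM loop.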

4. Applications of ABMs
ABMs have been employed in a large array of research endeavors.
Schelling (1971; also see Hatna and Benenson, 2012) used a very
basic ABM with great effect in developing a model of residential
segregation. The beauty of Schelling’s segregation model was that
it demonstrated how complex and unexpected patterns of behav-
ior could result from artificial agents employing simple rules. In the
case of this particular model, it demonstrated how agents who have
no preference for living in segregated neighborhoods are driven by
the logic of the rules governing their behavior to nonetheless locate
in segregated neighborhoods. A compelling contribution by Axelrod
(1981, 1984; Axelrod and Hamilton, 1981) demonstrated the evo-
lution of cooperation; this model showed that over time, a “tit for
tat” strategy involving multiple agents allowed each individual agent
to better its situation compared to any strategy where each agent
pursued its goals independently. Axelrod’s work also provided a new
insight into traditional analytical game theoretic models such as the
prisoner’s dilemma, and similar and subsequent research is currently
producing fascinating new insights into the nature of conflict and
cooperation (see Axelrod, 1981, 1984, 1987, 1997a, b).
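Axelrod's "tit for tat" result can be sketched with a minimal iterated prisoner's dilemma using the standard payoffs (T=5, R=3, P=1, S=0); the particular opponent strategy and round count below are illustrative choices.

```python
# A minimal iterated prisoner's dilemma in the spirit of Axelrod's
# tournaments: "tit for tat" (cooperate first, then copy the opponent's
# last move) against an unconditional defector.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    return "C" if not history else history[-1]   # copy opponent's last move

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b = [], []        # each player's record of the opponent
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_b)      # A remembers what B did
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (300, 300): sustained cooperation
print(play(tit_for_tat, always_defect))   # (99, 104): exploited only once
```

Two tit-for-tat players lock into mutual cooperation and far outscore the mutual-defection payoff, while a tit-for-tat player facing a defector is exploited only in the first round.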
Another very well-known application of ABMs has come to be
known as the Anasazi project (Axtell et al., 2002). This model was
developed to generate insights into the disappearance of the Anasazi
Indians who lived in the US Southwest for hundreds (or possibly

2 Given the computational demands of such methodologies, ABM only became
feasible beginning in the 1970s, and especially in the last decade or two as com-
puting power has steadily increased.
thousands) of years until their disappearance prior to the arrival
of Europeans. Using ABMs, researchers have developed extremely
useful and insightful explanations of how the complex interplay of
culture and environment explain the disappearance of the Anasazi.
These explanations might not have emerged from more traditional
social science methodologies.
Other research has focused on the emergence of actors in the
international system, as well as other political processes and inter-
actions (Cederman, 1987). ABMs have yielded new insights into
dynamics that have led many economists to rethink the methodolog-
ical underpinnings of economics (Velupillai, 2010). Some have even
suggested that the utility of ABM may rival or even surpass that
of dynamic stochastic general equilibrium (DSGE) models, which
have been a cornerstone effort to bridge the micro–macro gap in
economics. Some scholars, such as Velupillai (2010), have called for a
fundamental revolution in economics that would move it away from
the traditional deductive and mathematically formalized framework
toward a computable and algorithmic foundation. ABMs could play
a key role in any such computable foundation, given the generative
and emergent properties of such simulations (see Epstein, 2006).
To be fair, ABMs are not without their critics. A fundamental
criticism of ABMs and simulation in general is that simulation model
formulations and outcomes are not typically given in mathematical
terms, thus making it difficult to bring the tools of mathematical
analysis to bear. But as Epstein (2006) notes in an important con-
tribution, the distinction between agent- and equation-based mod-
els is illusory (see also Hinkelmann et al., 2011). Epstein observes
that every ABM is a computer program, and as such, is Turing
computable. Therefore, “for every Turing machine there is a unique
corresponding and equivalent Partial Recursive Function” (Epstein,
2006, p. 51; Rogers, 1967). These functions are usually highly com-
plex and so difficult to interpret that their analysis is seldom pro-
ductive; nevertheless, they do exist. Consequently, ABMs (and as
Epstein argues, generative models in general) are indeed deductive,
but deductive models are not necessarily generative in character.
Moreover, as Epstein also notes, for certain classes of problems such
as PSPACE complete problems, it can be shown that simulation is
the best that can be achieved (Epstein, 2006, p. 51).3
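Epstein's observation that an ABM is just a program, and hence a computable function, can be made concrete with a toy sketch (ours, purely illustrative; the majority-rule update is invented, not Epstein's):

```python
# A minimal deterministic ABM: agents on a ring adopt the majority opinion
# of their neighborhood. The entire model is an ordinary computable function
# from initial configurations to outcomes.

def step(state):
    """One synchronous update: each agent takes the local majority."""
    n = len(state)
    return tuple(
        1 if state[(i - 1) % n] + state[i] + state[(i + 1) % n] >= 2 else 0
        for i in range(n)
    )

def run(state, steps):
    """The ABM as a function: initial state -> state after `steps` updates."""
    for _ in range(steps):
        state = step(state)
    return state

print(run((1, 0, 1, 1, 0, 0, 0, 1), steps=4))  # -> (1, 1, 1, 1, 0, 0, 0, 1)
```

Because the map from initial configuration to outcome is deterministic and total, it is exactly the kind of recursive function Epstein refers to, however unilluminating its closed form would be.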
Other important points are addressed by Epstein. In the math-
ematical social sciences in general, and particularly in economics,
there has traditionally been a strong focus upon demonstrating equi-
librium. Epstein points out that there is a distinction between equi-
libria that merely exist and those that are attainable. Epstein and
Hammond (2002) demonstrate this with a simple agent-based game.
Given that some equilibria may not be attainable, Epstein questions
the wisdom of studying equilibria as a way of understanding economic
realities.
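The flavor of this point can be sketched with a toy of our own (not Epstein and Hammond's actual game): in matching pennies a mixed-strategy equilibrium exists, yet myopic best-response dynamics never attain it; play cycles among the four pure profiles forever:

```python
# Matching pennies: the row player wants to match, the column player to
# mismatch. The only equilibrium is mixed (each plays 50/50), so simple
# best-response dynamics over pure strategies can never settle on it.

def best_response_row(col):
    return col          # match the opponent

def best_response_col(row):
    return 1 - row      # mismatch the opponent

def trajectory(row, col, steps):
    """Simultaneous myopic best responses from a pure starting profile."""
    seen = []
    for _ in range(steps):
        seen.append((row, col))
        row, col = best_response_row(col), best_response_col(row)
    return seen

traj = trajectory(0, 0, steps=8)
print(traj[:4])         # -> [(0, 0), (0, 1), (1, 1), (1, 0)]
print(len(set(traj)))   # -> 4: a cycle among pure profiles, never a fixed point
```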

5. Classical logic and its (possible) discontents


The social sciences, and more generally the human and behav-
ioral sciences, have been profoundly shaped by the assumptions of
standard, or classical logic. Classical logic is the foundation of our
methodological and epistemological assumptions and the intellectual
frameworks within which we operate. We do not suggest in this chap-
ter that classical logic needs to be abandoned. We do contend, how-
ever, that many of the phenomena we encounter, not just in the
physical world but within the social sciences, do not always lend
themselves to classical logic. We consider the possibility that non-
standard logics may actually be necessary in certain instances for
us to understand certain kinds of complex phenomena. These non-
standard logics may, in turn, provide an important epistemological
foundation for helping us understand the world.
Generative social science, or small world simulation, makes no
assumptions regarding a singular, correct worldview. There is no
reality outside of the agents and the world that has been created,
and within which agents and the world co-evolve. ABM assumes a

3
PSPACE is the set of all decision problems solvable by a Turing machine using
a polynomial amount of memory (space), with no restriction to polynomial time.
local observer viewpoint (LOV), in which agents respond to other
agents and the immediate environment. Moreover, agents are capable
of using decision rules that relax traditional assumptions of classical
logic. In particular, it is not necessary in this world for the law of
the excluded middle to strictly apply, under which either a statement
X or its negation must hold. Under the assumption of intuitionistic
logic, where the law of the excluded middle is relaxed, agents can
believe, disbelieve, or remain uncertain (Borrill and Tesfatsion, 2011,
p. 240). We suggest that agents could, in theory, possess contra-
dictory beliefs about a topic, just as individuals in the real world
can hold contradictory beliefs, and hold such beliefs simultaneously.
What is needed is a logic that accounts for these possibilities.
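What relaxing the excluded middle looks like computationally can be sketched with a three-valued, Kleene-style belief logic (our illustrative stand-in, not intuitionistic logic proper, but it captures the believe/disbelieve/uncertain trichotomy), in which None stands for uncertainty:

```python
# Truth values: True (believe), False (disbelieve), None (uncertain).

def k_not(x):
    return None if x is None else (not x)

def k_or(x, y):
    if x is True or y is True:
        return True
    if x is False and y is False:
        return False
    return None  # uncertainty propagates

# The classical tautology "X or not-X" is no longer guaranteed:
for x in (True, False, None):
    print(x, k_or(x, k_not(x)))  # for x = None the disjunction is None
```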
The cognitive scientist Donald Hoffman suggests, controversially,
that the traditional assumption that our perceptions represent a rea-
sonably accurate, although not necessarily perfect, representation of
the real world has to be set aside. The traditional assumption holds
that if human perceptions were not reasonably accurate, evolution
would have prevented the human race from achieving its current sta-
tus because of its inability to adapt to the environment. Evolution, so
it is argued, must produce organisms whose perception of the envi-
ronment matches well with reality. Hoffman rejects this argument,
stressing that it assumes a strong correspondence between truth, or
objective reality, and fitness, when, in fact, objective reality has little
to do with whether or not the organism is fit from an evolutionary
perspective. For Hoffman, what describes the world are conscious
agents, all with a first-person point of view, who interact with each
other. There is no single, objective third-person perspective. This
“conscious realism” rejects the idea that we as individuals are capa-
ble of measuring or perceiving the same object at the same time in
the same place and getting the same results; such an idea, on this
view, runs counter to the perspectives developed in quantum physics
over the last several decades (Hoffman, 2008, 2014; Hoffman and
Singh, 2012). If Hoffman and his colleagues are
correct, then there is no singular, unique perspective that qualifies
as truth.
Hoffman’s assumptions draw heavily upon quantum theory, and
seem consistent at a general level with Fuchs’ findings. Fuchs’ (2010,
2014) research into quantum Bayesian approaches to reality provides
new insights that complement Hoffman’s “conscious realism”. The
traditional assumption of quantum theory is that a wave function
describes the complete state of some aspects of the world, and the
shape of the wave function essentially represents the probabilities of
the outcomes of measurements that some observer chooses to make.
At this point, the quantum view of reality is still very much grounded
in the traditional notion of a third person observer who observes some
single view of reality. Fuchs and his colleagues interpret the wave
function’s probabilities as Bayesian or subjective degrees of belief
about the system. According to Quantum Bayesianism, (QBism), the
wave function’s collapse is the observer updating his or her beliefs
after taking a measurement (or making an observation). Whereas the
Copenhagen interpretation of quantum behavior requires there be an
observer representing a God’s Eye View (GEV) of the world, Fuchs
treats the wave function as one observer’s subjective knowledge about
the world (Fuchs, 2010, 2014). Again, the notion of a local observer
viewpoint (LOV) is reinforced.
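The QBist reading of collapse as belief revision can be sketched with ordinary Bayes' rule (a purely classical toy of ours, with no quantum formalism): the observer's probabilities are subjective credences, and a measurement outcome simply updates them:

```python
def bayes_update(prior, likelihood):
    """prior: {hypothesis: P(h)}; likelihood: {hypothesis: P(outcome | h)}.
    Returns the observer's revised credences after seeing the outcome."""
    post = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

prior = {"up": 0.5, "down": 0.5}       # credences before measuring
likelihood = {"up": 0.9, "down": 0.2}  # P(detector clicks | state)
post = bayes_update(prior, likelihood)
print(post)  # the "collapse": credences conditioned on the click
```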

6. CASs and real-world decision making


We live in a world characterized by multiple, often overlapping com-
plex systems that exist at different levels and have different spans
of influence. We have already discussed the basic characteristics of
those systems. They are interconnected, with each part responding
to the other in often nonlinear ways. Positive feedback loops are a
common feature, and such feedback loops assume that equilibrium
outcomes cannot be obtained. The behavior of financial markets may
be one such example (Taleb, 2010).
More generally, Velupillai (2010, see especially pp. 228–250)
demonstrates that many of the formalizations of modern economic
theory are lacking in empirical content. For example, he makes
the case that the following have no general closed-form algorithmic
solutions, although they may be computationally determined in cer-
tain instances:
(1) Nash equilibria of finite games are constructively indeterminate.
(2) The notion of computable general equilibrium (CGE) has no
grounding in empirical content.
(3) Fundamental elements of welfare economics are noncomputable.4
(4) Recursive competitive equilibria underpinning the stochastic
dynamic general equilibrium are uncomputable.
(5) There is no effective means of generating preference orderings.
Regarding (5), an overarching principle of any system requiring ratio-
nality as traditionally understood, i.e., utility maximization, is the
ability to select a maximal set of alternatives. Velupillai shows that
selection of alternatives can be represented by a Turing machine,
which can in turn be represented by a simple dynamical system. Since any Turing
machine can be simulated by a universal Turing machine, “no path
of a universal dynamical system can in any useful sense be related
to a maximum problem” (Velupillai, 2010, p. 341). So, maximization
is noncomputable. Were it indeed computable, one could use it to
solve the Halting problem, which is impossible. This point also
recognizes that rational behavior has to be viewed as adaptive or
procedural, or bounded in nature. Utility maximization, as noted by
Velupillai, cannot be associated with universal dynamical systems.
The problem of utility maximization and our traditional understand-
ing of rationality is explored further below.
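The practical upshot can be sketched as follows (our illustration, not Velupillai's formal construction): when alternatives are generated by an open-ended process, an exact maximizer is a search with no termination guarantee, so any implementable chooser must impose a resource bound, which is procedural, bounded rationality in miniature:

```python
# Alternatives arrive from an open-ended generative process; an exact
# maximizer over the whole stream is a search with no termination guarantee.

def alternatives():
    """A potentially infinite stream of alternatives."""
    n = 0
    while True:
        yield n
        n += 1

def bounded_argmax(utility, budget):
    """A procedurally rational chooser: inspect at most `budget`
    alternatives and keep the best seen so far."""
    best, best_u = None, float("-inf")
    for i, a in enumerate(alternatives()):
        if i >= budget:
            break
        u = utility(a)
        if u > best_u:
            best, best_u = a, u
    return best

# With a budget the search halts; without one it need never terminate.
print(bounded_argmax(lambda a: -(a - 7) ** 2, budget=100))  # -> 7
```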
Doria (n.d., this volume) also notes that equilibrium prices in
competitive markets are, in general, noncomputable and fall out-
side the techniques generally available among formal modeling tools.
Velupillai’s and Doria’s conclusions are similar in important respects
to Axtell’s (2005) important paper, which demonstrates that an
exchange economy, even under radically simplified conditions, is
very hard to compute in practical cases (NP-hard). The concept of

4
Specifically, the Second Welfare Theorem says that “any Pareto optimum can,
for suitably chosen prices, be supported as a competitive equilibrium” (Velupillai,
2010, p. 238).
Walrasian equilibrium is a critical tool in modern microeconomic
theory. Velupillai notes that Walrasian equilibrium with its Brouwer
fixed point theorem is effectively uncomputable. Achieving
equilibrium requires some hypothetical auctioneer to compute the
equilibrium prices, and assumes that agents truthfully reveal their
preferences, that no trading takes place before the market-clearing
price vector is announced, and that all agents trade at the same
prices. As Kauffman (1993, 2000) has
noted, such a framework assumes that one can finitely prestate all
possible goods and services, which in any practical sense is impossi-
ble. Moreover, a real economy is not static but constantly evolving,
as new goods and services are created from various combinations of
preexisting goods, and others die. So, the secular increase in goods
and services, and the emergence of fundamentally new innovations,
are not explained by Walrasian processes. None of this is to deny
the contribution of Arrow and Debreu in proving the existence of
equilibrium under specific conditions, but their framework cannot
explain certain real-world phenomena.
Axtell demonstrates using ABMs that practical implementation
of Walrasian general equilibrium models is intractable, that is, NP-
hard. This means, in effect, that real markets do not operate like
the Walrasian general equilibrium model. Axtell proposes and tests
an alternative in which prices are determined endogenously using a
decentralized decision process whereby agents are involved in a series
of bilateral, or more generally k-lateral exchanges among transient
coalitions of agents, who are procedurally rational, that is, exhibiting
bounded rationality. As an aside, one can assume that the decisions
these agents make regarding membership in these transient coalitions
are determined by various heuristics or rules of thumb. In the end,
and importantly, these strategies produce tractable, polynomial-type
computational requirements whereby the computational complexity
in the number of agents and commodities has a strict upper bound.5
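The spirit of this decentralized mechanism can be sketched as follows (a drastic simplification of ours, not Axtell's actual model): agents with Cobb-Douglas preferences meet in random pairs and trade small amounts at a locally negotiated price, and the dispersion of their marginal rates of substitution, i.e., their implicit prices, shrinks without any auctioneer:

```python
import random
import statistics

def mrs(agent):
    """Marginal rate of substitution of x for y (Cobb-Douglas, alpha = 0.5)."""
    return agent["y"] / agent["x"]

def bilateral_trade(a, b, dx=0.01):
    """One small trade at the midpoint of the two MRSs (a local, pairwise
    price); the low-MRS agent sells x to the high-MRS agent."""
    if abs(mrs(a) - mrs(b)) < 1e-6:
        return
    seller, buyer = (a, b) if mrs(a) < mrs(b) else (b, a)
    p = (mrs(a) + mrs(b)) / 2
    if seller["x"] - dx <= 0.1 or buyer["y"] - p * dx <= 0.1:
        return  # keep holdings strictly positive
    seller["x"] -= dx; seller["y"] += p * dx
    buyer["x"] += dx;  buyer["y"] -= p * dx

random.seed(1)
agents = [{"x": random.uniform(1, 10), "y": random.uniform(1, 10)}
          for _ in range(50)]
before = statistics.pstdev(mrs(a) for a in agents)
for _ in range(20000):
    a, b = random.sample(agents, 2)
    bilateral_trade(a, b)
after = statistics.pstdev(mrs(a) for a in agents)
print(after < before)  # implicit prices converge toward a common value
```

Each pairwise trade is computationally trivial, so total work grows only polynomially in the number of agents and trades, which is the tractability point at issue.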

5
Besides P (polynomial time computations) there is a larger class of bounded
probabilistic polynomial (BPP) problems for which computations can be achieved
in bounded polynomial time by a randomized Turing machine (i.e., a Turing
machine that at any computational step may choose from among a set of possible
transitions by making a random binary decision) (see Valiant, 2013, p. 35). A still
outstanding question is “whether there exist algorithms that if they can solve any
given #P problem could be used to solve all #P problems and hence all problems
in BPP, BQP and NP” (Valiant, 2013, p. 41).

7. What is rationality?
Humans make decisions in extraordinarily complex environmental
contexts and are confronted with truly daunting challenges in
decision-making. The standard argument made by mainstream
economists has been the utility maximization model. A mainstay of
economic analysis for much of the 20th century, it assumes that indi-
viduals have the computational skills to optimize in whatever envi-
ronment they find themselves. Such optimizing requires adherence
on the part of the decision maker to certain canons of logic, includ-
ing, critically, the ability to select a maximal set of alternatives.
But do most of us, at least under many circumstances, behave in
this fashion? How do we actually decide? The traditional approach
assumes, at least for the sake of mathematical tractability, that indi-
viduals exhibit a comprehensive, or unbounded, rationality in which
utility maximization lies at the heart of the mainstream economists’
toolkit of assumptions.
The preceding discussion of Velupillai’s and Axtell’s contributions
highlights the fact that an alternative approach to decision-making
is necessary, and that it be grounded in computational complexity
theory. Over the last few decades, the standard model of economists
has been challenged. Beginning with the work of Simon (1955, 1957),
who pioneered research into human decision-making, and continuing
through the work of Kahneman and Tversky (1979, 1996) (and later
Gigerenzer, 2008; Gigerenzer and Brighton, 2009; Gigerenzer and
Gaissmaier, 2011), an alternative model, or more accurately “mod-
els”, has been proposed that can be described as cognitive behav-
ioral economics. The finer details of this research are beyond the
scope of this chapter. However, the fundamental assumption of all
these models is that individuals possess cognitive constraints: indi-
viduals act with agency, but the complexity of the
task environment creates challenges that our cognitive architecture
cannot handle. Thus, we develop alternative strategies for decision-
making. For example, we limit the number of alternatives from which
to choose and develop satisficing rather than maximizing strategies.
Relatedly, we develop cues or heuristic devices (cognitive roadmaps)
to guide our decision-making.
Velupillai and Kao (2014) argue that Simon’s overall approach
to cognitive behavioral economics is “underpinned by a model of
computation, highlighting the complexity of chemical decision pro-
cesses on the basis of computational complexity theory” (Velupillai
and Kao, 2014, p. 40). They show how theorems developed by Simon
provide a foundation for understanding the cognitive constraints indi-
viduals face with regard to bounded rationality (the complexity of
the problems faced is greater than the individual’s ability to solve
those problems). The use of satisficing rather than optimizing strate-
gies, the use of procedural as opposed to substantive rationality, the
reliance upon heuristics, or “cognitive roadmaps” for decision-making
are critical features of alternative approaches to the traditional utility
maximization model.
Importantly for our purposes, we can hypothesize that the time
constraints imposed on individuals reflect the constraints imposed
by NP problems: problems for which no efficient solution algorithm
is known, so that exact solution may take an intractable amount of
time. But also, as Velupillai and Kao suggest, complexity
issues arise not just from the standpoint of time complexity but space
complexity as well. That is, “the amount of information minds can
process at a given moment is severely limited” (Velupillai and Kao,
2014, p. 48).

8. The role of heuristics


The concept of heuristics is central to understanding how bounded
rationality operates; the Axtell model discussed above could not
function without them. Heuristics are simple, efficient rules govern-
ing how individuals will act under certain circumstances.
Gigerenzer, a leading theorist in the study of decision-making, refers
to “fast and frugal heuristics” as driving decision-making (Gigerenzer
and Gaissmaier, 2011). Gigerenzer’s heuristic-based decision-making
does not necessarily yield inferior results when compared to more
orthodox models of decision-making. Indeed, heuristics can offer
superior outcomes. While the details do not concern us here, we
suggest that heuristics require at some level a relaxation of stan-
dard logic. The more complex the decision-making environment, the
greater the likelihood that traditional logic is relaxed in favor of
a dynamic and variegated series of cognitive maps that guide our
passage through complex and uncertain environments.
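A minimal sketch of a fast-and-frugal heuristic in the "take the best" style (the cue table and city names below are invented for illustration): compare two options cue by cue, in order of cue validity, and decide on the first cue that discriminates, ignoring all remaining information:

```python
def take_the_best(a, b, cues):
    """cues: cue functions ordered from most to least valid; each maps an
    option to 0/1. Decide on the first cue that discriminates."""
    for cue in cues:
        va, vb = cue(a), cue(b)
        if va != vb:
            return a if va > vb else b
    return a  # no cue discriminates: fall back to a guess (here: a)

# Which of two (invented) cities is larger?
cities = {
    "A": {"capital": 1, "has_team": 1},
    "B": {"capital": 0, "has_team": 1},
}
cues = [lambda c: cities[c]["capital"],   # most valid cue first
        lambda c: cities[c]["has_team"]]
winner = take_the_best("A", "B", cues)
print(winner)  # -> A  (decided by the first cue alone)
```

The heuristic inspects at most one discriminating cue, which is precisely why it is "fast and frugal" relative to weighting and summing all available evidence.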
Different agents have different experiences and evaluate those
experiences differently, which includes the way those experiences
affect different observers’ observations of the same phenomenon.
Indeed, agents could experience the identical phenomenon, but unless
their prior experiences were identical, evaluate that phenomenon dif-
ferently. It is possible there may be a formal, quantitative basis to
what are thought of as subjective assessments of introspection, aware-
ness, or confidence (or lack of confidence) in one’s decisions. And, we
suggest that such confidence is, plausibly, dictated by the heuristics
one uses to evaluate a situation.
As Sanders et al. (2016) point out, it has long been speculated
(e.g., Griffin and Tversky, 1992) that a subjective sense of confidence
in one’s decisions reflects the decision maker’s awareness of a statis-
tical likelihood. Yet efforts to model human confidence in algorith-
mic terms had heretofore not met with much success, which is a
problem, since the ability to perform Bayesian confidence computa-
tions is the working assumption behind most economic analyses of
decision making. Sanders et al. (2016), however, used the statistical
formulation to generate empirically testable data relating confidence
to choice correctness, demonstrating that mental computations func-
tionally equivalent to statistical decision confidence are manifested
as subjective feelings.
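The statistical notion at stake can be sketched in a toy two-alternative task (our setup, not Sanders et al.'s experiment): evidence x is drawn from a unit-variance Gaussian centered at +mu or -mu, the decision takes the sign of x, and confidence is the posterior probability that the choice is correct, which for equal priors reduces to a logistic function of the evidence strength:

```python
import math

def confidence(x, mu=1.0):
    """P(the sign-of-x choice is correct | evidence x), with equal priors
    and unit-variance Gaussian evidence centered at +mu or -mu."""
    return 1.0 / (1.0 + math.exp(-2.0 * mu * abs(x)))

print(confidence(0.0))  # -> 0.5 (ambiguous evidence)
print(confidence(2.0))  # close to 1 for strong evidence
```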
The aforementioned findings are also consistent with Pouget et al.
(2016), who demonstrate that there are computational reasons to
distinguish between certainty and confidence. Like Sanders and his
colleagues, they define confidence as the probability that a decision
or proposition, whether overt or covert, is correct given the evidence,
with the decision maker engaged in potentially complex sequences of
decisions using Bayesian probability updating; this definition does
not distinguish between confidence in a decision that is objectively
correct and one’s subjective confidence that a decision is correct.
While we would assume that such computations and their algorith-
mic basis have polynomial-time (P) solutions, we can only speculate
as to whether the human brain can solve certain kinds of NP prob-
lems, or whether such capabilities are even possible.
Another important related area that has potentially important
implications for our understanding of computational complexity and
heuristics is the study of probably approximately correct learnability.
PAC learning, based upon the pathbreaking work of Valiant (1984,
2013), integrates machine learning, evolutionary theory in biology,
and computational complexity. PAC learnability says that evolution
can be explained in terms of algorithms that benefit from and inter-
act with their environment. For example, the human genome evolves
in ways that allow us to interact successfully with our environment,
although the process is accelerated through algorithms that offer
computational solutions that improve the organism’s ability to sur-
vive. These algorithms then accelerate evolution and do so in ways
that can be solved in polynomial time. Those mutations that are com-
putationally tractable are more likely to survive, whereas those that
are uncomputable or that are NP-hard are more likely to be rejected.
If learning in an evolutionary sense were NP, then the process of
change and adaptation by an organism would take vastly longer than
what we observe, again suggesting that learning in an evolutionary
sense can be done in P or polynomial time (Valiant, 2013, pp. 38–43).
Importantly for Valiant, the process of mental search for a solution
is characterized by NP, where mental search is defined as “search
objects one generates internally in one’s head or computer” (Valiant,
2013, p. 39).
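The "probably approximately correct" notion can be sketched with the textbook example of learning a one-dimensional threshold (our illustrative setup: uniform examples on [0, 1], a hidden cutoff theta): a consistent learner running in time polynomial in the sample size is, with high probability, close to the target:

```python
import random

def label(x, theta):
    return 1 if x >= theta else 0

def learn_threshold(examples):
    """Consistent learner: smallest positive example seen (or 1.0)."""
    positives = [x for x, y in examples if y == 1]
    return min(positives) if positives else 1.0

random.seed(0)
theta = 0.37   # hidden target concept
m = 200        # sample size: polynomial in 1/epsilon and log(1/delta)
sample = [(x, label(x, theta)) for x in (random.random() for _ in range(m))]
h = learn_threshold(sample)
print(theta <= h)        # the hypothesis never undershoots the target
print(h - theta < 0.2)   # and is approximately correct with high probability
```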
Two other related concepts introduced by Valiant (2013,
pp. 111–112) are evolvability and deducibility.
Evolvability concerns itself with a more restricted domain of learn-
ability (learnability does not equal evolvability) since some things
that are learnable do not require evolutionary time scales. Deducibil-
ity explores the fascinating problem of how we as individuals and as a
species learn when confronted with extraordinarily complex environ-
ments where we do not have a theory as to how to negotiate obstacles
within that environment.6

9. Summary and conclusion


This chapter has examined issues relating to complex environments,
or more precisely CASs, and how these relate to issues of compu-
tational complexity. We initiated our discussion by exploring method-
ological innovations in the study of CASs, by discussing the role of
ABM and the relationship between ABM and important theoretical
issues surrounding how to better understand computationally com-
plex environments. Moreover, we want to know what this complexity
means from the standpoint of the social and mathematical sciences,
but also for those of use interested in better understanding how to
better function in a world that is, in certain respects, permanently
opaque in terms of our ability to fully understand it.
We began with a discussion of ABM because ABM provides
both a methodological framework radically different from traditional
methodological approaches in the social sciences, and also because
it offers new theoretical and epistemological insights that are impor-
tant in themselves. Specifically, we hope the reader will appreciate
how ABM can be employed to uncover processes in computationally
complex environments, which would not be possible with traditional
methodological approaches that many, if not most, social scientists

6
An extremely interesting question is how, in the process of learning, novelty and
innovation arise. Learning, after all, can be merely an extrapolation of information
one has acquired. But how do fundamentally new insights come about, and can
they be explained in a computational framework? Markose (2004) shows how,
using the computational approach introduced by Binmore using game theory
(Binmore, 1987), players are modeled as universal Turing machines that can make
self-referential calculations of hostile behavior.
are familiar with. While the final word in ABM is far from being writ-
ten, it seems clear that this approach has already achieved important
successes. The truly remarkable aspect of ABM is that the specifi-
cation of a relatively small number of simple rules produces highly
complex and variegated macrolevel behavior, the results of which can
be used in generating theories and hypotheses about the real world.
Thus, ABM offers ways of creating a generative social science that
can offer new insights into the micro–macro divide and provide a
potential treasure trove of theoretical insights.
The behavior of CASs more generally captures the concept of the
LOV of the world, whereby agents can only have partial knowledge of
their surroundings. Different agents, situated at different locations,
reach different conclusions regarding the truth or falsity of certain
statements. These behaviors suggest the need for some rethinking of
the standard, traditional assumptions governing logic and the role of
classical logic in the social sciences.
While consistent with the principles of bounded rationality, the
actions of agents in a CAS also relate in a deep way to the con-
cept of heuristic decision-making as an alternative to the classical
utility maximization model, and more speculatively suggest impor-
tant connections between bounded rationality, heuristics, and PAC
learnability. A careful analysis of CASs provides important insights
into the foundations of computing and computability, and provides
a means of not only bridging the social sciences but also establishing a
much closer link between the social and natural sciences on impor-
tant questions of computation. This is seen clearly in the work of
Hoffman, Fuchs, Valiant, and others. The theory of CASs offers great
hope for developing a synthesis of our understanding of computation,
computational complexity, its possibilities, and limits.

References
Axelrod, R. (1981). The emergence of cooperation among egoists. American Polit-
ical Science Review, 75(2), 306–318.
Axelrod, R. (1984). The Evolution of Cooperation. New York: Basic Books.
Axelrod, R. (1997a). The Complexity of Cooperation: Agent-Based Models of
Competition and Collaboration. Princeton: Princeton University Press.
Axelrod, R. (1997b). The evolution of strategies in the iterated prisoner’s
dilemma. In Bicchieri, C., Skyrms, B., and Jeffrey, R. (eds.), The Dynamics
of Norms. pp. 1–16. Cambridge: Cambridge University Press.
Axelrod, R. and Hamilton, W. D. (1981). The evolution of cooperation. Science,
211(4489), 1390–1396.
Axtell, R. (2005). The complexity of exchange. The Economic Journal, 115(504),
F193–F210.
Axtell, R. L., et al. (2002). Population growth and collapse in a multi-agent model
of the Kayenta Anasazi in the long house valley. Proceedings of the National
Academy of Sciences, 99, 7275–7279.
Binmore, K. (1987). Modeling rational players: part I. Economics and Philosophy,
3(2), 179–214.
Bonabeau, E. (2002). Agent-based modeling: methods and techniques for simu-
lating human systems. Proceedings of the National Academy of Sciences, 99,
7280–7287.
Borrill, P. L. and Tesfatsion, L. (2011). Agent-based modeling: the right math-
ematics for the social sciences. In Davis, J. B. and Hands, D. W. (eds.),
The Elgar Companion to Recent Economic Methodology, pp. 228–258. Chel-
tenham: Edward Elgar.
Cecconi, F., Campenni, M., Andrighetto, G., and Conte, R. (2010). What do
agent-based and equation-based modelling tell us about social conventions:
the clash between ABM and EBM in a congestion game framework. Journal
of Artificial Societies and Social Simulation, 13(1), 6–8.
Cederman, L.-E. (1997). Emergent Actors in World Politics: How States and
Nations Develop and Dissolve. Princeton: Princeton University Press.
Dawid, H. and Neugart, M. (2011). Agent-based models for economic policy
design. Eastern Economic Journal, 37(1), 44–50.
Doria, F. A. (n.d.). Axiomatics, the social sciences, and the Gödel phenomenon:
a toolkit. This volume.
Epstein, J. (1999). Agent-based computational models and generative social sci-
ence. Complexity, 4(5), 41–57.
Epstein, J. M. (2006). Generative Social Science: Studies in Agent-Based Com-
putational Modeling. Princeton: Princeton University Press.
Epstein, J. and Axtell, R. (1996). Growing Artificial Societies: Social Science
From the Ground Up. Cambridge: MIT Press.
Epstein, J. M. and Hammond, R. A. (2002). Non-explanatory equilibria: an
extremely simple game with (mostly) unattainable fixed points. Complexity,
7(4), 18–22.
Fuchs, C. A. (2010). Coming of Age with Quantum Information. Cambridge: Cam-
bridge University Press.
Fuchs, C. A. (2014). Introducing QBism. In New Directions in the Philosophy of
Science, pp. 385–402. Zürich: Springer International Publishing.
Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Sci-
ence, 3(1), 20–29.
Gigerenzer, G. and Brighton, H. (2009). Homo heuristicus: why biased minds
make better inferences. Topics in Cognitive Science, 1(1), 107–143.
Gigerenzer, G. and Gaissmaier, W. (2011). Heuristic decision making. Annual
Review of Psychology, 62, 451–482.
Griffin, D. and Tversky, A. (1992). The weighing of evidence and the determinants
of confidence. Cognitive Psychology, 24(3), 411–435.
Hatna, E. and Benenson, I. (2012). The Schelling model of ethnic residential
dynamics: beyond the integrated-segregated dichotomy of patterns. Journal
of Artificial Societies and Social Simulation, 15(1), 6.
Hinkelmann, F., Murrugarra, D., Jarrah, A. S. and Laubenbacher, R. (2011).
A mathematical framework for agent based models of complex biological
networks. Bulletin of Mathematical Biology, 73(7), 1583–1602.
Hoffman, D. D. (2008). Conscious realism and the mind-body problem. Mind and
Matter, 6(1), 87–121.
Hoffman, D. D. (2014). The origin of time in conscious agents. Cosmology, 18,
494–520.
Hoffman, D. D. and Singh, M. (2012). Computational evolutionary perception.
Perception, 41(9), 1073–1091.
Holland, J. H. and Miller, J. H. (1991). Artificial adaptive agents in economic
theory. The American Economic Review, 81(2), 365–370.
Johansen, A. and Sornette, D. (2001). Large stock market price drawdowns are
outliers. Journal of Risk, 4(2), 69–110.
Kahneman, D. and Tversky, A. (1979). Prospect theory: an analysis of decision
under risk. Econometrica, 47, 263–291.
Kahneman, D. and Tversky, A. (1996). On the reality of cognitive illusions. Psychological Review, 103(3), 582–591.
Karayiannis, N. and Venetsanopoulos, A. N. (1993). Artificial Neural Networks:
Learning Algorithms, Performance Evaluation, and Applications. New York:
Springer.
Kauffman, S. (1993). The Origins of Order. Oxford: Oxford University Press.
Kauffman, S. (2000). Investigations. Cambridge: Cambridge University Press.
Markose, S. M. (2004). Novelty in complex adaptive systems (CAS) dynamics: a
computational theory of actor innovation. Physica A: Statistical Mechanics
and Its Applications, 344(1), 41–49.
Michalewicz, Z. (1996). Genetic Algorithms + Data Structures = Evolution Programs. Berlin: Springer-Verlag.
Pouget, A., Drugowitsch, J. and Kepecs, A. (2016). Confidence and certainty: dis-
tinct probabilistic quantities for different goals. Nature Neuroscience, 19(3),
366–374.
Preston, K. and Duff, M. J. (2013). Modern Cellular Automata: Theory and Appli-
cations. New York: Springer.
Rogers, H. (1967). Theory of Recursive Functions and Effective Computability.
New York: McGraw-Hill.

Complex Systems, Decision-Making and Computability 181

Sanders, J. I., Hangya, B. and Kepecs, A. (2016). Signatures of a statistical computation in the human sense of confidence. Neuron, 90(3), 499–506.
Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical
Sociology, 1(2), 143–186.
Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal
of Economics, 69(1), 99–118.
Simon, H. A. (1957). Models of Man: Social and Rational. New York: John
Wiley & Sons.
So, M. K., Chen, C. W., Lee, J.-Y. and Chang, Y.-P. (2008). An empirical evalu-
ation of fat-tailed distributions in modeling financial time series. Mathematics
and Computers in Simulation, 77(1), 96–108.
Taleb, N. N. (2010). The Black Swan. New York: Random House.
Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM,
27(11), 1134–1142.
Valiant, L. G. (2013). Probably Approximately Correct. New York: Basic Books.
Velupillai, K. V. (2010). Computable Economics. Oxford: Routledge.
Velupillai, K. V. and Kao, Y. F. (2014). Computable and computational com-
plexity theoretic bases for Herbert Simon’s cognitive behavioral economics.
Cognitive Systems Research, 29(1), 40–52.
Viswanathan, G. M., Fulco, U. L., Lyra, M. L. and Serva, M. (2003). The ori-
gin of fat-tailed distributions in financial time series. Physica A: Statistical
Mechanics and its Applications, 329(1), 273–280.
Wolfram, S. (2012). A New Kind of Science. Champaign: Wolfram Media.

Chapter 6

Is Classical Logic Enough?


Applications of Nonstandard Logic to
the Social Sciences

Denis J. Dean
School of Economic, Political and Policy Sciences
Program in Geospatial Sciences
The University of Texas at Dallas, TX, USA

Euel Elliott
School of Economic, Political and Policy Sciences
Program in Public Policy and Political Economy
The University of Texas at Dallas, TX, USA

1. Introduction
The social sciences, and more generally the human and behavioral sci-
ences, have been profoundly shaped by the assumptions of standard,
or classical logic. Classical logic is the foundation of our methodologi-
cal and epistemological assumptions and the intellectual frameworks
within which we operate. Most of us have spent our lives working
in an environment in which the assumptions of classical logic are
taken as a given. We do not suggest in this chapter that classical
logic needs to be abandoned. We do contend, however, that many
of the phenomena we encounter, not just in the physical world but
within the social sciences, do not always lend themselves to analysis
via classical logic.


This chapter illuminates the ways in which nonstandard logics may be applicable to understanding important elements of the world around us. To date, with a few notable exceptions (including Borrill and Tesfatsion, 2011; Chaitin et al., 2012; da Costa and French, 2003; Koppl et al., 2014; Velupillai, 2005, 2007, 2010), there have been few suggestions that the behavioral and social sciences could benefit from incorporating new perspectives drawn from nonclassical logic and mathematics. We make the case that allowing for the incorporation of
nonstandard perspectives can provide new insights into social science
phenomena. We begin with a discussion of the relevance of epistemics,
and epistemological issues developed in cosmology, and then build
upon that discussion in later sections.

2. What do cosmological epistemics have to do with the social sciences?
In this section, we address the theoretical aspects of the use of non-
classical logic by exploring the implications of the observer prob-
lem from a cosmological perspective. This discussion will touch
upon the role of constructivist mathematics and the ways in which
important methodological and theoretical innovations in the social
sciences are consistent with important elements of cosmological
epistemology.
We begin by confronting some of the important issues raised by
Smolin (2001) and others in their efforts to better understand the
problem of observation and multiple observers in a universe where no
single observer within the universe can have all information available
to him or her. It is the problem of the “God’s eye view” (GEV)
perspective vs. a “local observer view” (LOV), perspective in which
any observer can only have knowledge about phenomena in his or her
immediate space (Smolin, 2001; Borrill and Tesfatsion, 2011). While
this discussion might appear to provide a primarily metaphorical
description of what we confront in the social sciences, we want to
suggest that there is a rich vein of theory available to be tapped, and
that has potentially important implications for the behavioral and
social sciences.

As described by Smolin (2001), there are two fundamental issues in attempting to understand the universe: first, nothing can travel
faster than light, and second, the universe appears to have been cre-
ated a finite amount of time in the past, that is, nearly 14 billion
years ago. These two facts together mean that it is impossible to
observe the entire universe. We can observe objects in the universe
14 billion light years away from earth, but only as they appeared
14 billion years in the past. But for reasons that are unnecessary to
explain in detail here, the universe itself is probably far larger than
14 billion light years in any direction.1
A fundamental problem emerges: no observer existing inside the
universe can observe the entire universe. We can only hope to have
knowledge, albeit imperfect, of that part of the universe that we can
observe. For example, we might, in principle, be able to determine
exactly how many galaxies exist in the observable universe, but we
would have no way of knowing how many galaxies exist in the unob-
served part of the universe. Even extrapolating from the known to
the unknown portions of the universe is impossible, because we have
absolutely no knowledge of (including the size of) the unobservable
portion of the universe, and conventional extrapolation techniques
based on classical logic require such knowledge. Thus, if nonstandard logic is to extend our knowledge into situations such as this, it will require us to reevaluate some of the logical premises that we often take for granted.
Recognition of the above facts leads to a quantum theory of cos-
mology. A quantum theory of cosmology provides a description of

1
The theory of cosmological inflation states that in the extremely early stages of
the expansion of the universe from its initial singularity, the universe expanded at
a rate of many multiples of the speed of light. For those objects that were part of
that initial expansion, the light from those objects has yet to reach the earth. The
latest estimates are that the universe is at a minimum 80 billion light years in
diameter. It is also worth noting that the expansion of the universe under inflation
does not violate the limits imposed by the speed of light. The latter refers to the
limits of light speed within the universe, whereas with regard to cosmological
inflation we are referring to the expansion of the actual topology of the universe
itself.

the universe fully within the universe itself, using what Markopoulou
(2000a, b) refers to as internal observables. Her discussion of this topic revolves around Einsteinian space–time (a topic that need not concern us here), but she concludes that incomplete knowledge requires
us to apply a Heyting algebra to the problem of studying the cosmos
rather than the more conventional Boolean approach. A conventional
Boolean approach would only be applicable to an observer with a
GEV of the universe, and as originally stated, such a view is impos-
sible to anyone within the universe.
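The Heyting algebra just mentioned can be made concrete with a small example. The following sketch (our illustration, not drawn from Markopoulou's construction) implements the three-element Heyting algebra on the chain 0 < 0.5 < 1, where the intermediate value stands for a proposition whose truth is locally undetermined:

```python
# A minimal sketch (our illustration): the three-element Heyting algebra
# on the chain 0 < 0.5 < 1, with 0.5 modeling a locally undetermined claim.

ELEMENTS = [0.0, 0.5, 1.0]

def meet(a, b):
    """Conjunction: greatest lower bound on the chain."""
    return min(a, b)

def join(a, b):
    """Disjunction: least upper bound on the chain."""
    return max(a, b)

def implies(a, b):
    """Relative pseudo-complement: the largest x with meet(a, x) <= b."""
    return 1.0 if a <= b else b

def neg(a):
    """Intuitionistic negation: a -> 0."""
    return implies(a, 0.0)

# In a Boolean algebra, a OR (NOT a) is always 1 (the excluded middle).
# Here it fails for the undetermined element, and double negation is not
# the identity:
for a in ELEMENTS:
    print(a, join(a, neg(a)), neg(neg(a)))
```

For the intermediate element, a ∨ ¬a evaluates to 0.5 rather than 1: the algebraic expression of a question the embedded observer cannot yet settle.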
In the social sciences, and particularly in economics, it is tradi-
tional to assume that there is a single observer with full knowledge
(the idea of a social welfare function is consistent with this notion),
or that there are multiple observers all of whom see the world in the
same way and have the same information, that is, the notion of homogeneous agents. Borrill and Tesfatsion state, regarding the theory of
relational quantum mechanics (RQM), “there are no observer depen-
dent states . . . The distinction between observer and observed, cause
and effect, are replaced by a symmetric notion of mutual observers
interacting with each other . . . Moreover, physics shares with the
social sciences the need to account for multiple observers with mul-
tiple perspectives on reality where measurements necessarily entail
perturbative interaction with persistent (information flow) traces”
(Borrill and Tesfatsion, 2011, pp. 244–245). This basic concept is
quite similar to what Soros (Smolin, 2001, pp. 31–32) describes in
his theory of reflexivity, which essentially assumes a kind of positive
feedback process in which one observer makes a decision on the basis
of what they observe another observer doing, in turn, influencing the
original observer, and so on. This kind of positive feedback system
can explain bubbles, or antibubbles, in financial markets and sug-
gests that market participants are just that, participants who are
unavoidably part of an evolving and co-evolving self-referential kind
of process. There is no objective reality outside of the market, because
everyone participating is inside the market’s frame of reference.
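The reflexive feedback just described can be illustrated with a toy loop (ours, with invented parameters and update rules, not Soros's formal model): observers extrapolate the price moves that other observers' actions produce, and their reaction feeds back into the next price.

```python
# A toy positive-feedback loop in the spirit of reflexivity (a hedged
# sketch; the numbers are invented). Observers extrapolate recent price
# moves, and the price in turn tracks their average belief.

def simulate(steps=30, trend_weight=1.5):
    price, beliefs = 100.0, [100.5, 101.0]   # two mildly optimistic observers
    path = [price]
    for _ in range(steps):
        mean_belief = sum(beliefs) / len(beliefs)
        prev, price = price, price + 0.5 * (mean_belief - price)
        # Each observer reads the move the others produced and extrapolates
        # it -- the feedback that makes the bubble self-reinforcing.
        beliefs = [b + trend_weight * (price - prev) for b in beliefs]
        path.append(price)
    return path

path = simulate()
print(path[0], round(path[-1], 1))   # the price runs away from its start
```

Because each observer amplifies moves produced by the others, the gap between beliefs and price grows geometrically, which is one mechanism for the bubbles mentioned above.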
The observer in the universe, as we have noted, is limited by the
speed of light, so different observers will draw different conclusions

from different facts. The observer in the world is limited by the behav-
ior of others around them and the continually evolving interactions
with their human and physical environment. We think of those in the hard sciences as standing outside the systems they observe and measure; they are “outside the frame.” However, as physicists are becoming increasingly aware (Rovelli, 1996), physics cannot be complete until it confronts the problems posed by having multiple observers embedded in the universe (or world), where measurements are local and relative to each other and where observers’ interactions can potentially alter the system’s course (Borrill and Tesfatsion, 2011). Indeed, quantum physics
long ago reconciled itself to this situation, with the critical insight that
one cannot know simultaneously both the location and momentum
of a particle. One can know one or the other, but the more one
tries to know one variable, the greater the uncertainty regarding the
other. We argue, as do Borrill and Tesfatsion (2011), that some of
the lessons drawn from physics should be applicable to our efforts to
better understand the observer–observed relationship in the behav-
ioral and social sciences. The notion that there may not necessarily
be a single GEV perspective regarding all phenomena has support in
areas of research ranging from quantum physics to cognitive science.
Fuchs’ (2014, 2010) research into quantum Bayesian approaches to reality provides new insights. The traditional assumption of quantum
theory is that a wave function describes the complete state of some
aspects of the world, and the shape of the wave function essentially
encodes the probabilities of the outcomes of measurements that any
observer chooses to make. At this point, the quantum view of real-
ity is still very much grounded in the traditional notion of a third
person observer who observes some single view of reality. Fuchs and
his colleagues interpret the wave function’s probabilities as Bayesian
probabilities or subjective degrees of belief about the system. Accord-
ing to Quantum Bayesianism (QBism), the wave function’s collapse
is the observer updating his or her beliefs after taking a measurement
or making an observation. Whereas the Copenhagen interpretation
of quantum behavior requires there be an observer representing a
GEV of the world, Fuchs treats the wave function as one observer’s

subjective knowledge about the world (Fuchs 2014, 2010). One might
consider certain analogs to Fuchs’ quantum Bayesianism as being
applicable to social interactions with multiple observers having mul-
tiple subjective perspectives.
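The QBist reading sketched above, collapse as one observer's belief update, has a simple classical analog in Bayes' rule. In this hedged illustration (the hypothesis, likelihoods, and priors are ours), two observers process the same measurement yet retain different posteriors:

```python
# Bayes' rule as a classical analog of the QBist "collapse as belief
# update" idea. All numbers here are invented for illustration.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after one observation."""
    weight = prior * likelihood_if_true
    return weight / (weight + (1 - prior) * likelihood_if_false)

# The same measurement outcome (seen with probability 0.9 if the hypothesis
# holds, 0.2 otherwise) processed by observers with different priors:
for prior in (0.5, 0.1):
    print(prior, round(update(prior, 0.9, 0.2), 3))
```

The observers end with posteriors of roughly 0.818 and 0.333: identical data, different subjective degrees of belief, which is the multiple-perspectives point made in the text.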
The cognitive scientist Donald Hoffman suggests, controversially,
that the traditional assumption that our perceptions represent a rea-
sonably accurate, although not perfect, representation of the real
world has to be discarded. The usual argument runs as follows: had our perceptions not tracked reality reasonably well, our inability to adapt to the environment would have prevented the human race from achieving its current status; evolution, that is, must produce organisms whose perception of the environment matches reasonably well with reality. Hoffman rejects this
argument, stressing that it assumes a strong correspondence between
truth and fitness, when in fact truth, or objective fact, has little to
do with whether or not the organism is fit from an evolutionary
standpoint. For Hoffman, what describes the world are conscious
agents, all with a first-person point of view, who interact with each
other. There is no single, objective third-person perspective. This
“conscious realism” holds that the idea that we as individuals can measure, or perceive, the same object at the same time in the same place and get the same results is simply wrong, and runs counter to everything learned in
quantum physics over the last several decades (Hoffman, 2008, 2014;
Hoffman and Singh, 2012).

3. A review of intuitionistic logic and constructivism
Certain nonstandard logics such as intuitionistic logic are appropri-
ate for describing theories in which properties are context dependent
and where truth statements cannot be determined in strictly binary
“yes” or “no” terms. Intuitionistic logic can be viewed as a rival to the
structure of classical, Aristotelian logic. It encompasses the princi-
ples of logical reasoning developed by Brouwer (1913) and expanded
upon by Bishop (1967) and it can be considered the logical basis of

constructivist mathematics. The critical difference is that many tautologies of classical logic can no longer be proven within intuitionistic logic. Notably, these include the law of the excluded middle, which asserts that for any statement A, either A or its negation must be true. Importantly, constructivists require a direct proof that a statement is true, in the form of a computational procedure; absent such a proof, the statement remains undecided (Borrill and Tesfatsion, 2011,
p. 228). Indeed, the Brouwer program anticipates Gödel’s famous
Incompleteness Theorem by many years, and Brouwer predicted that
any attempt to prove the consistency of complete induction on the
natural numbers would fail.2
Methodologies exist today in the social sciences that provide
important elements of constructivist mathematics and nonclassical
logic. Cellular automata are one such methodology (Wolfram, 2012). Automata, which can be considered a kind of Turing machine, provide for simulations of complex processes whereby each cell’s value is updated as a function of the values of adjacent cells. Two-dimensional or multidimensional automata provide for LOV dynamics that are a part of constructivist epistemics.
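A minimal sketch of that locality: in an elementary (one-dimensional, two-state) cellular automaton, each cell's next value depends only on its immediate neighborhood. Rule 110 is chosen here purely as an example (it is known to be Turing-universal, which illustrates the link to Turing machines); the grid size and step count are ours.

```python
# Elementary cellular automaton: each cell's next value is a function only
# of its immediate neighborhood -- a strictly local view. Rule 110 is used
# purely as an illustrative choice.

RULE = 110  # the 8-bit lookup table for all neighborhoods, as an integer

def step(cells):
    """One synchronous update; the row wraps around at the edges."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1] + [0] * 31          # start from a single live cell
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

No cell ever consults anything beyond its two neighbors, yet globally complex structure emerges: a compact instance of LOV dynamics.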
epistemics. One of the most important and adaptable methodologies
is agent-based modeling (ABM). ABM allows for the development
of what Epstein has called “generative social science,” and can be
highly relevant in those circumstances where no a priori model of
some phenomenon under study exists, or where, even if it did exist,
the likely existence of complex feedback loops makes more traditional
modeling either impossible, or at least of limited value (Epstein and
Axtell, 1996; Epstein, 1997; Borrill and Tesfatsion, 2011).
Generative social science, or small world simulation, makes no
assumptions regarding a singular, correct worldview. There is no
reality outside of the agents and the world that has been created,
and within which agents and their world co-evolve. ABM allows for
the modeling of individual agents whereby each agent is essentially
computer code specifying the possible behavior of the agent in the

2
See Bridges (1999) for an excellent discussion of intuitionistic logic and construc-
tivist mathematics.

presence of different environments and different agents (see Dean and Elliott, n.d., this volume). ABM assumes that each agent has
only an LOV, in which agents respond to adjacent agents and the
immediate environment. Moreover, agents are capable of using deci-
sion rules that relax traditional assumptions of classical logic (Borrill
and Tesfatsion, 2011, p. 230). In particular, it is not necessary in this
world for the law of the excluded middle to strictly apply. Agents can believe A, believe not-A, or remain uncertain between the two. We suggest agents could, in theory, even hold contradictory beliefs about a topic, just as individuals in the real world can hold contradictory beliefs. So
there needs to be a means of accounting for such phenomena. As Bor-
rill and Tesfatsion note, “ABM agents can have uncomputable beliefs
about their world that influence their interactions. These uncom-
putable beliefs can arise from inborn (initially configured) attributes,
from communications received from other agents, and-or from the use
of non-constructive methods . . . . These uncomputable beliefs enable
agents to make creative leaps, to come up with new ideas about their
world. . . ” (Borrill and Tesfatsion, 2011, pp. 230–231).
The role of ABM is almost certain to expand in the future,
as computational capabilities expand, and as our understanding of complex systems increasingly brings us to the conclusion that such methodologies are appropriate for modeling heterogeneous agent behavior, in which agents can be imbued with simulated
cognitive qualities that allow for heuristic decision making. This is
particularly the case in complex informational environments and in
environments where heterogeneous agents form beliefs based upon
their unique LOV processes.3
Intuitionistic logic and, by extension, constructivist mathematics
may have a useful role to play in the social sciences. More specifically,
we may want to consider these approaches as offering more realistic

3
One of the great advantages of agent based modeling is its ability to model
processes in which one does not assume homogeneity of agents, unlike the assump-
tions of traditional utility-maximizing standard economic theory. Velupillai (see
later discussion) makes a point of applauding the utility of ABM for modeling
complex environments.

standards of proof than would otherwise be the case. As Smolin (2001) says in advocating the application of topos theory, “Here in the real world we almost always reason with incomplete information. Each day we encounter statements whose truth or falsity cannot be derived on the basis of what we know
. . . we recognize, almost explicitly, that different observers have access
to different information . . . ” (Smolin, 2001, p. 31). We also allow
ourselves to change our minds or become convinced of a new propo-
sition without adding information to our set of observations. Gilboa,
Postlewaite, and Schmeidler suggest, unlike standard assumptions
that people have probabilistic beliefs over any source of uncertainty
(consistent with Bayesianism), “There are many instances in which
people adjust their beliefs in the face of arguments that do not present
new information but suggest that different conclusions should be
drawn from the same body of knowledge” (Gilboa et al., 2008, p. 186;
Aragones et al., 2005). Moreover, due to computational complex-
ity problems, such learning may be unavoidable even with rational
agents. Furthermore, since we are not only observers within the world we
seek to describe, but also participants capable of affecting the truth
or falsity of statements we make about the world, a new approach to
standards of proof may be worthy of consideration. These innovative
approaches may offer new insights into the development of innovative
epistemological approaches to theory building and observation in the
social sciences.
One area in which nonstandard logics addressing this need for new standards of proof are beginning to make important inroads is economics. It is increasingly recognized that the highly mathematized, deductive approach that has dominated economics is in need of
revision. Velupillai’s career has been dedicated to the belief that aca-
demic economics as currently practiced is at something of a dead end.
For Velupillai and his colleagues, the efforts by mathematicians to
create a kind of Hilbert-dominated field in which all of economics can
be formally axiomatized are doomed to failure. More than 80 years
after Gödel demonstrated conclusively that the Hilbert program in
mathematics could not succeed due to the indeterminate nature of
certain mathematical claims, the same problems confront economics.

Velupillai’s project is to create, in place of the mathematical formalism of the last century, an “algorithmic” revolution in economic
theory. His idea is to construct a path that exposes the noncom-
putational content of classical mathematics, which is founded upon
set theory and the tertium non datur. He would, on his terms, free
economics of its noncomputational content and create a naturally
algorithmic applied science that would be forced to address the unde-
cidable nature of many decision problems that economics confronts
(Velupillai, 2005, 2007, 2010).
Velupillai (2010) demonstrates in his research that many of the
formalizations of modern economic theory are lacking in empirical
content (also see Doria, n.d., this volume). This has important impli-
cations for set theoretic approaches to economic theory, which are
founded upon classical logic. For example, he makes the case that the
following have no closed-form algorithmic solutions, although they
may be computationally determined in certain instances:
(1) Nash equilibria of finite games are constructively indeterminate.
(2) Computable general equilibrium (CGE) has no grounding in
empirical content.
(3) Fundamental elements of welfare economics are noncomputable.4
(4) Recursive competitive equilibria underpinning stochastic dynamic general equilibrium are uncomputable.
(5) There is no effective means of generating preference orderings.
Point (5) is especially intriguing for our purposes. An overarching
principle of any system requiring rationality as traditionally under-
stood, that is, utility maximization, is the ability to select a maximal
set of alternatives. Velupillai shows that selection of alternatives can
be represented by a Turing machine, which can in turn be represented by a simple dynamical system. Since any Turing machine can be simulated by a universal Turing machine, “no path of a universal dynamical system
can in any useful sense be related to a maximum problem” (Velupillai,

4
Specifically, the Second Welfare Theorem says that “any Pareto optimum can,
for suitably chosen prices, be supported as a competitive equilibrium” (Velupillai,
2005, p. 238).

2010, p. 341).5 So, maximization is noncomputable. Were it indeed computable, the Turing machine would be capable of solving the
Halting problem. This point also recognizes that rational behavior
has to be viewed as adaptive or procedural, or bounded in nature.
Utility maximization, the foundation of modern rational choice the-
ory, cannot be unambiguously determined with universal dynamical
systems.
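The reduction to the halting problem can be illustrated with the standard diagonal argument (a textbook sketch, not Velupillai's own construction): suppose a total decision procedure `halts` existed; the program below would defeat it.

```python
# The classic diagonalization behind the halting problem's uncomputability.
# `halts` is a hypothetical oracle that cannot actually be implemented;
# the stub below only marks where it would go.

def halts(f, x):
    """Hypothetical oracle: True iff f(x) would terminate."""
    raise NotImplementedError("no total halting decider is possible")

def diagonal(f):
    if halts(f, f):      # if the oracle claims f(f) halts...
        while True:      # ...run forever;
            pass
    return None          # otherwise, halt at once.

# diagonal(diagonal) would halt exactly when the oracle says it does not:
# a contradiction. Hence no total `halts` can exist, and neither can any
# maximization routine whose correctness would require one.
```

This is the computational face of the point in the text: a selection procedure general enough to range over all program behaviors would decide the halting problem, so unrestricted utility maximization is noncomputable.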
The fact that the social sciences are limited to an LOV of the
world makes them good candidates for the applica-
tion of the tools of constructive mathematics. At a minimum, con-
structive mathematics can be viewed as an important supplement to
classical logic and mathematics. At its heart, constructive mathemat-
ics and the nonstandard logics that flow from it allow for different
observers to reach contradictory conclusions, or for different observa-
tions to result in uncertain statements about the world. Under those logics, statements that are not necessarily algorithmic, and hence uncomputable, are not uncommon (Bridges, 1999). Those approaches
allow for belief revision and for the recognition that agents acting in
complex environments are subject to beliefs that are logically con-
tradictory without those beliefs being necessarily trivial. In other
words, those inconsistencies and contradictions can say something
meaningful about the agent holding those beliefs, and about the
world.
A potential constructivist agenda for the social sciences is hampered, in part, by its origins in higher-level, abstract, and non-
standard logical foundations. Indeed, many social scientists would
be surprised to learn that there is even an area of mathematics
known as nonstandard logic. Thus, constructivist approaches based

5
The Gödelian dilemma is no different than Turing’s undecidability problem. Turing approaches the problem of knowing from a computational perspective rather than from a formal logical one. For Turing, there are problems
for which solutions exist using a Turing machine (an abstract computing device),
but for which we cannot know when the correct answer has been obtained. Thus,
certain problems are noncomputable. Both Gödel and Turing illustrate that there
are important cases where truth cannot be known in any final sense.

on nonstandard logics have been inaccessible to many who might otherwise be able to make use of them. For example, category theory
and graph theory are likely to make future inroads in areas of the
social sciences, and may well be a successor to axiomatic set the-
ory (see Borrill and Tesfatsion, 2011, pp. 229–231). Category theory
is essentially a metamathematical construct, which is an abstrac-
tion of other mathematical concepts. One example of a category is
a category of sets. Category theory allows us to develop an under-
standing of the equivalence between one category of sets and another
and is already finding applications in fields ranging from biology
to music and philosophy. Clearly, there is an important topologi-
cal and topos-theoretic component to category theory. Importantly,
category theory is not as subject to the rigid rules of axiomatic set
theory and the foundations of classical logic (Rehmeyer, 2013; Spivak,
2013; Leinster, 2014). Graph theory is the formal study of graphs, mathematical structures that model relationships between objects. It shares certain commonalities with category theory, and is used to
model different kinds of relationships and processes in the physical,
biological, and social systems. It may be especially useful in areas
such as linguistics and the study of different kinds of social net-
works (Barabási, 2002). Indeed, graph theory is critically important
as a mathematical tool in the treatment of network theory, which,
in turn, can be integrated into generative social science methodolo-
gies such as ABM (Borrill and Tesfatsion, 2011). Importantly, many
problems that can be addressed by graph theory have been deter-
mined to be NP-complete or NP-hard (see Newman (2010) and
Bondy and Murty (2008) for an excellent introduction to graph
theory).
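A minimal sketch of this kind of graph-based modeling (the network and names are invented): a small social network as an edge list, with a brute-force search for a minimum vertex cover. Vertex cover is NP-hard in general; exhaustive search is feasible here only because the graph is tiny.

```python
# A toy social network as an edge list, plus exhaustive minimum vertex
# cover -- an NP-hard problem, tractable here only at this tiny scale.

from itertools import combinations

edges = [("ann", "bo"), ("bo", "cy"), ("cy", "dee"), ("bo", "dee")]
people = sorted({p for edge in edges for p in edge})

def is_cover(subset):
    """True if every tie in the network touches someone in `subset`."""
    return all(u in subset or v in subset for u, v in edges)

def min_vertex_cover():
    """Smallest set of people covering every tie, by exhaustive search."""
    for size in range(len(people) + 1):
        for subset in combinations(people, size):
            if is_cover(set(subset)):
                return set(subset)

print(min_vertex_cover())
```

The search space doubles with each added person, which is the practical face of the NP-hardness noted above: exact answers are available only for small networks, and heuristics take over beyond that.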
Topos theory, noted above, extends and deepens the earlier advances made in intuitionistic logic (Smolin, 2001).
Generally speaking, a topos (plural: topoi) is a type of category that behaves like a category of sheaves of sets on a topological space, where a sheaf provides a means of formalizing the description of a problem in a topological space even where a “true” topological foundation is lacking.

Topoi in logic (elementary topoi) allow us to formalize a problem in terms of an advanced version of set theory. In set theory,
all mathematical objects are represented, or can be represented, by
sets. The foundation is generalized using topoi, whereby a category
of sets forms its own topos. Each topos defines its own mathematical
framework. A topos could be defined in a way that is characterized
by traditional classical logic. But a topos can also be defined in which
the traditional axiom of choice and the law of the excluded middle
do not apply; hence our interest in topos theory.
Topos theory may also have important implications for under-
standing a range of human interactions. Under classical probability
assumptions, an object, regardless of the number of potential states
in which it can exist, can be in only a single state at a particular
point in time. A coin can only be heads or tails; a die can only
take on one of six states. In quantum physics, by contrast, qubits (quantum bits) can encode a state in which an object exists in a superposition of multiple states. Thus, topos theory may offer important
insights when we address problems in which intuitionistic logic and
constructivist mathematics may apply.
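The contrast can be written compactly. The following is the standard textbook description of a qubit, included only as an illustration (it is not drawn from the works cited above):

```latex
% A classical bit b satisfies b \in \{0,1\}: exactly one state at a time.
% A qubit, by contrast, occupies a superposition of both basis states:
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad \alpha, \beta \in \mathbb{C},
  \qquad |\alpha|^2 + |\beta|^2 = 1,
\]
% where |\alpha|^2 and |\beta|^2 are the probabilities of observing
% 0 or 1, respectively, upon measurement.
```

Until measured, the qubit is not in any single classical state, which is precisely the kind of situation that two-valued classical logic is ill-equipped to describe.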

4. Logical contradictions
Logical contradictions and inconsistencies appear in the physical
world and the social world alike and, perhaps most relevantly, among
those studying artificial intelligence. Consider a contemporary prob-
lem in particle physics that has yet to be resolved. Referred to as
the “proton radius problem,” recent efforts to make a more precise
measurement of the radius of the proton, the most common sub-
atomic particle found in nature, have created more heat than light.
Efforts to more accurately measure the proton using two highly com-
plementary experiments have yielded substantially differing results.
As Pohl et al. (2010) and Bernauer et al. (2010) suggest, physicists
do not fully understand the proton, or we do not understand the
physics that goes into our understanding of the proton. In either
case, we have a situation where careful research has yielded inconsis-
tent findings. We do not know which is correct, or whether, perhaps
in some way unbeknown to us, and perhaps caused by the very act
of measurement itself, both values could be correct in different con-
texts. Given this degree of uncertainty, one could argue that classical
logic is not epistemologically sufficient to allow for a more complete
understanding of the phenomenon.
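The size of the disagreement can be quantified with a back-of-the-envelope calculation. The sketch below uses approximate values of the radii reported by the two experimental programs (quoted here from memory, for illustration only):

```python
import math

def discrepancy_sigma(r1, err1, r2, err2):
    """Number of combined standard errors separating two measurements."""
    return abs(r1 - r2) / math.sqrt(err1 ** 2 + err2 ** 2)

# Approximate proton charge radii (femtometers):
# muonic-hydrogen spectroscopy (cf. Pohl et al., 2010) vs. the
# electron-based CODATA value (cf. Bernauer et al., 2010).
r_muonic, err_muonic = 0.84184, 0.00067
r_codata, err_codata = 0.8768, 0.0069

sigma = discrepancy_sigma(r_muonic, err_muonic, r_codata, err_codata)
# sigma comes out to roughly five combined standard errors.
```

A separation of roughly five standard errors is far too large to attribute to ordinary statistical fluctuation, which is why the inconsistency is treated as a genuine puzzle rather than noise.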
Other examples of inconsistencies abound. The classic example
is the liar’s paradox, as exemplified by “this sentence is not true,”
or “everything I say is a lie.” Either such a statement is true, or it is
not. If it is true, then what it asserts holds, so it is not true. If it is
not true, then that is exactly what it asserts, so it is true. The state-
ments are, or can be, both true and not true.
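The instability can be mimicked mechanically. In the sketch below (our illustration), the liar sentence is modeled as the map v ↦ ¬v; a classical truth assignment would have to be a fixed point of that map, and none exists:

```python
def liar(value: bool) -> bool:
    """Truth value the liar sentence *asserts*, given an assumed value.

    "This sentence is not true": if we assume it true, it asserts
    falsehood, and vice versa -- the map v -> not v.
    """
    return not value

# A classical two-valued assignment must be a fixed point: liar(v) == v.
fixed_points = [v for v in (True, False) if liar(v) == v]
# fixed_points is empty: no stable classical assignment exists, and
# iterating the map simply oscillates True, False, True, ...
```

Nonstandard logics escape the dilemma by refusing the premise that every sentence takes exactly one of two values; paraconsistent logics, for instance, tolerate sentences that are both true and not true.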
McAllister (2007, 2010–2012) provides a radical approach to our
understanding of inconsistent observations expressed in the form of
empirical data sets. He shows how different data patterns do not
have to be interpreted in a way that suggests only a single pattern
represents a true or correct interpretation about the structure of the
world.
Are there certain phenomena that exist in the world for which
it is not possible to determine, given the information available at a
particular time, the truth or falsity of a particular claim? For exam-
ple, there may be competing theories about true phenomena; and we
have believed that through the application of certain epistemological
principles, the truth or falsity of certain claims can be discovered.
But what if that is not the case?
Logical positivists hold that if any claim is true or false, then
there are observations that can be made which would prove the
truth or falsity of the claim. Quine showed that no claim is ever
tested against particular observations, but that the evidence for any
particular hypothesis is mediated by other assumptions that depend
upon our background theory. Thus we might agree about a particular
observation, but disagree as to how to interpret that finding. McAl-
lister goes beyond Quine and asserts that any empirical data set
exhibits all possible patterns, with each pattern exhibiting a certain
error term. Moreover, each pattern exhibited in the dataset consti-
tutes evidence of a corresponding structure in the world. Thus,
the world contains all possible structures.
If this statement is indeed true, then it undermines the tradi-
tional reliance of empirical science upon the principle of evidential
correspondence, under which the existence of a particular feature or
pattern in a dataset is taken as evidence that the world has a par-
ticular, specific structure. On that principle, the existence of
one structure precludes the existence of other valid structures. Some
structures represent the truth about the world; other statements
about the world are false. As McAllister goes on to note, the principle
of evidential correspondence is central to the epistemology, method-
ology, and ontology of the social sciences. Once this principle is
rejected, it may be necessary to reexamine our traditional assump-
tions about the nature of truth and truth determination (McAllister,
2007, 2010, 2011).
McAllister assumes that a particular phenomenon generates an
empirical data set. Within that data set, particular patterns emerge,
all of which can be valid given the existence of statistical error. While
there might be a true pattern that exists, it cannot be discerned
simply by reliance on the empirical dataset and the patterns we
observe. Moreover, as a particular phenomenon becomes more com-
plex, the number of particular patterns that exist invariably becomes
greater, and the likelihood increases that multiple patterns of data
are observed as being equally valid. Also, improvements in technol-
ogy and our ability to observe the phenomenon in question mean that
more information is available; this information, in turn, can be sub-
jected to various methodological tests. The very existence of increased
computational power means that data can be analyzed in ways not heretofore
possible. Again, this has the consequence of expanding the number of
empirical patterns in the data that can plausibly be considered valid.
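A toy version of McAllister's point can be constructed in a few lines (the construction is ours, not his): one small dataset supports both a linear pattern and a step pattern, each with a finite error term, and nothing in the data alone privileges one of them.

```python
# Toy data: five observations of some phenomenon.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.5, 0.6, 2.0, 3.4, 3.5]

def sse(fitted):
    """Sum of squared residuals between the data and a fitted pattern."""
    return sum((y - f) ** 2 for y, f in zip(ys, fitted))

# Pattern 1: least-squares straight line through the data.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
line = [intercept + slope * x for x in xs]

# Pattern 2: a two-level step function (mean of the first two points,
# mean of the last three).
lo, hi = sum(ys[:2]) / 2, sum(ys[2:]) / 3
step = [lo, lo, hi, hi, hi]

# Both patterns fit the same data with finite error terms; the data
# alone do not dictate which "structure of the world" is the true one.
err_line, err_step = sse(line), sse(step)
```

The choice between the two patterns rides on background assumptions, not on the dataset, which is the indeterminacy McAllister describes.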
It has been argued that the prevalence of highly polarized ide-
ological debate in the social sciences is a result of the fact that
there are often multiple valid explanations of the world around us
(McAllister, 2011). Individuals interpreting some phenomena relying
upon different patterns derived from empirical datasets are sure to
come to different conclusions, and those differences are translated
into ideological conflict, or at least interpreted as ideological conflict.
One could argue that as the complexity of phenomena increases, the
more likely we are to observe such differences. This fact is exacer-
bated by the knowledge that any given individual is uncertain about
what others know. Indeed, we can only know with certainty what
we ourselves know. To the extent that uncertainty about what oth-
ers think increases, so does the potential for ideological polarization
(McAllister, 2011).
A related issue is that the ability to make sense of the world
is limited by the fact, as we discussed earlier, that we exist in a
world that is observer dependent, where different observers may
arrive at different conclusions about the world, yet those conclusions
are more or less equally valid. While it is possible that the truth exists
in the sense that the law of the excluded middle applies (if A is
correct, B cannot be correct), we are nonetheless unable to determine
what the truth is, because we cannot project ourselves outside of the
universe in which we exist. This is the analogue of Gödel's result, in
which he demonstrated that some “well-behaved” axiomatic systems
contain statements whose truth nonetheless cannot be proven within
those systems, because such a proof would require us to find means
outside of the mathematical system used to construct the proof. In
other words, we cannot avoid self-referential constructs in our proof
that lead to a collapse of our efforts to establish a proof.
Examples from the social sciences seem to satisfy the “multiple
observer” description used by Smolin. Recent debates over climate
change raise relevant points about the simultaneous validity of mul-
tiple truths. Policymakers and advocates debate the pros and cons of
the dynamics of climate change. Is climate change human induced or
does it result from exogenous, physical processes? The lesson might
be that both are correct, or at least that both are deserving of further
study.
This problem is also highlighted by the fact that if we are agents
in the world, it is not possible to merely observe some objective real-
ity and then evaluate the truth or falsity of some statement. It is not
as though the agent is able to stand outside the world, observe it, and
draw the appropriate conclusions. Rather, one is unavoidably “in the
world” and this very fact influences the behavior of others, and their
behavior, in turn, influences one’s own behavior. It is something like
the social analog of quantum theory, whereby we know that the mere
act of seeking to observe a subatomic particle alters the behavior of
the particle. This concept is very similar to what Soros has described
as the theory of reflexivity (Smolin, 2001) discussed earlier.6 By its
very nature, reflexivity generates a higher-order nonlinear feedback
loop, the result of which is that cause and effect are interdependent.
An observer in the social world observes (and acts) only within the
system and has knowledge of only that part of the system to which
they have access.
The above discussion has suggested that highly complex systems
may generate empirical data that can provide support for multiple
interpretations of phenomena of interest. Moreover, as our computa-
tional capabilities increase, so too (under certain circumstances) may
the uncertainty with which any particular model can be considered
a correct interpretation of the world. Thus under
such conditions, traditional concepts of logic may require substantial
revision.
Highly complex phenomena also play another role in forcing a
reevaluation of the applicability of standard logic: under certain con-
ditions, they affect our subjective evaluations of the world. We live
in a world characterized by multiple, often overlapping complex sys-
tems that exist at different levels and different spans of influence.
We have already discussed the basic characteristics of those systems.
They are interconnected, with each part responding to the other
in often nonlinear ways. Positive feedback loops are a common fea-
ture, and such feedback loops imply that equilibrium outcomes can-
not be obtained. Under these extraordinarily complex environmental
contexts, humans are confronted with truly daunting challenges in
decision-making.
The standard approach used by mainstream economists has been
the utility maximization model. A mainstay of economic analysis

6. Smolin acknowledges that reflexivity does not precisely describe a topos theoretic
reality, but does capture the essence of such processes (Smolin, 2001, p. 32).
for much of the 20th century, it assumes that individuals have the
computational skills to be able to optimize in whatever environment
they find themselves. Such optimizing requires adherence on the
part of the decision maker to certain canons of logic. But, do most
of us, at least under many circumstances, behave in this fashion?
How do we actually decide? The traditional approach assumes that
individuals exhibit a comprehensive, or unbounded, rationality.
Over the last few decades, the standard model of economists has
been challenged. Beginning with the pioneering work of Simon (1955,
1957), and continuing through the work of Kahneman and Tversky
(1979, 1996), Gigerenzer (2008), and Gigerenzer and Brighton (2009),
an alternative model has been proposed.
The fundamental assumption is that individuals are boundedly ratio-
nal. A term coined by Simon, the concept of bounded rationality
has been vastly misunderstood. It does not simply mean optimiza-
tion under constraints, nor does it refer to irrationality. It says that
individuals act with agency, but the complexity of the task environ-
ment creates challenges that our cognitive architecture cannot han-
dle. Thus, we develop alternative strategies for decision-making. For
example, we limit the number of alternatives from which to choose,
and we develop cues and heuristic devices (cognitive roadmaps) to
guide our decision-making.
The concept of heuristics is very important; heuristics are simple
efficient rules governing how individuals will act under certain cir-
cumstances. Gigerenzer, a leading theorist in the study of decision-
making, refers to “fast and frugal heuristics”. Gigerenzer has also
published numerous studies that demonstrate that heuristic-based
decision-making does not necessarily yield inferior results compared
to more orthodox models of decision-making. Indeed, heuristics can
offer superior outcomes. While the details cannot concern us here,
we suggest that heuristics require at some level a relaxation of stan-
dard logic. The more complex the decision-making environment, the
greater the likelihood that traditional logic is relaxed in favor of
a dynamic and variegated series of cognitive maps that guide our
passage through complex and uncertain environments.
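Gigerenzer's "take-the-best" heuristic illustrates the point concretely. The sketch below is a simplified rendering of that heuristic; the cue names and data are invented for illustration.

```python
def take_the_best(obj_a, obj_b, cues):
    """Pick the object with the higher criterion value using the first
    discriminating cue; cues are ordered from most to least valid.

    obj_a, obj_b: dicts mapping cue name -> 0/1 cue value.
    Returns 'a', 'b', or 'guess' if no cue discriminates.
    """
    for cue in cues:              # most valid cue first
        va, vb = obj_a.get(cue, 0), obj_b.get(cue, 0)
        if va != vb:              # first cue that discriminates decides
            return 'a' if va > vb else 'b'
    return 'guess'                # no cue discriminates

# Which of two cities is larger? Binary cues, ordered by validity.
cues = ['has_intl_airport', 'has_university', 'has_pro_team']
city_a = {'has_intl_airport': 1, 'has_university': 1, 'has_pro_team': 0}
city_b = {'has_intl_airport': 1, 'has_university': 0, 'has_pro_team': 1}

# The airport cue ties; the university cue discriminates, so city A is
# chosen and the remaining cue is never consulted ("frugal").
choice = take_the_best(city_a, city_b, cues)
```

The heuristic stops at the first discriminating cue and ignores the rest, yet in Gigerenzer's studies such rules often match or beat full-information models.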
The intellectual framework provided by the theory of complex
adaptive systems (CAS) captures at least part of the logic that our
arguments seek to convey. CAS are webs or networks of agents that
interact with each other — where changes in the behavior or prop-
erties of one agent can produce effects that ripple throughout the
system. Agents are bound in loops of mutual influence; agents are
interconnected whereby all action is the result of actions by others.
Arthur, Holland, and others assert that asset markets possess these
important features of radical social interdependency, in that each
agent’s expectations of all other agents’ behavior include anticipa-
tion of others’ expectations, resulting in a massively interdependent
system.
Different agents have different experiences and evaluate those
experiences differently, which includes the way those experiences
affect different observers’ observations of the same phenomenon.
Indeed, agents could experience the identical phenomenon, but,
unless their prior experiences were identical, they evaluate that phe-
nomenon differently.
As observers, we do not merely observe some fixed reality. Rather,
what we observe may at least, in part, be a function of the expecta-
tions of what we observe (i.e., “A’s” observation of “B” alters “B’s”
behavior — a kind of uncertainty principle for the social sciences).
This, in turn, changes what “A” observes. In the social world, we
are often not observing some fixed, exogenous set of “facts.” Rather,
everything is endogenous. What we observe is a function of the expec-
tations of others, which, in turn, affects our own expectations.
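This endogeneity can be caricatured in a two-agent loop (a toy model of our own devising, not taken from the cited literature): B's behavior shifts in response to A's expectation, which shifts what A observes, which revises A's expectation, and so on.

```python
def reflexive_loop(expectation_a, behavior_b, rounds, k=0.5):
    """Iterate mutual adjustment: A expects, B reacts to being
    observed, A updates its expectation from what it now observes."""
    history = []
    for _ in range(rounds):
        # B partially conforms to A's expectation of B:
        behavior_b = behavior_b + k * (expectation_a - behavior_b)
        # A's observation of B's new behavior revises A's expectation:
        expectation_a = expectation_a + k * (behavior_b - expectation_a)
        history.append((expectation_a, behavior_b))
    return history

# Expectation and behavior start far apart, then co-evolve.
history = reflexive_loop(expectation_a=1.0, behavior_b=0.0, rounds=20)
final_expectation, final_behavior = history[-1]
```

Neither series is an exogenous "fact" that the other merely records; each is partly constituted by the act of observation, which is the heart of reflexivity.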
It must certainly be true that a researcher evaluating a partic-
ular discovery in the physical universe, whether at the cosmological
or at the quantum-mechanical level, bases his or her evaluation on
the prior beliefs and organized frameworks in which he or she oper-
ates. The same is certainly the case in the social sciences; but
more importantly, any observer, or agent, existing in the world acts
on the basis of a set of beliefs or expectations that have developed
about the world around them through the development of heuristic
learning (Gigerenzer and Brighton, 2009; Gigerenzer, 2008).
The information available to multiple observers may be largely
the same or completely disjoint. Regardless, no observer can know
what the other observer knows. In the case of financial traders, for
example, each trader is acting on the basis of those empirical facts
they observe in the world. Different traders (i.e., observers) experi-
ence different slices of larger global reality.
There are obvious implications for noncosmological social scien-
tific phenomena. One might speculate that the economic assumption
of a utility-maximizing agent with incomplete information comes
close to the analogy of multiple cosmological observers in different
locations, all arriving at different conclusions. But there remains an
overarching omniscience built into any model with an unambiguously
defined set of “states of the world” that can be “coarsened” in differ-
ent ways to model different incomplete views of the same unchanging
and singular truth of the system. In contrast, an economic model
inspired more deeply by topos theory might require a completely
different set of states of nature at each position in the environment,
with no assumption that there is a single master set of states that
unambiguously describes the entire system. In social science theory,
we imagine that there can be no singular description of the entire sys-
tem. Equivalently, descriptions of the system always depend on the
position from which the system is observed. Different systems vary
in the extent to which the family of all descriptions corresponding
to different positions overlap or contradict each other. Future work
might aim to characterize the structure of environments in which the
multiplicity of descriptions generates largely overlapping vs. largely
divergent sets of descriptions of the system, and these descriptions
of the system are reflected in heterogeneous agents acting through
different heuristic devices to arrive at conclusions about the world.
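The orthodox construct criticized here, a single master state space that each observer sees only in "coarsened" form, is the standard information-partition apparatus of economic theory. The sketch below (our illustration, with invented state names) makes the contrast concrete.

```python
# One master set of states of the world, shared by every observer.
states = {'boom', 'mild_growth', 'stagnation', 'recession'}

# Each observer sees a coarsening: a partition of the SAME state set.
# Trader A only distinguishes "good" from "bad" times:
partition_a = [{'boom', 'mild_growth'}, {'stagnation', 'recession'}]
# Trader B only distinguishes the extremes from the middle:
partition_b = [{'boom'}, {'mild_growth', 'stagnation'}, {'recession'}]

def is_partition(blocks, universe):
    """Blocks must be disjoint, nonempty, and cover the universe."""
    seen = set()
    for block in blocks:
        if not block or block & seen:
            return False
        seen |= block
    return seen == universe

def observed_cell(partition, true_state):
    """What an observer learns: only the cell containing the true state."""
    for block in partition:
        if true_state in block:
            return block
    raise ValueError('state outside the master state space')

# Under the orthodox view, both observers' reports, however coarse,
# are consistent restrictions of one singular truth. The topos-inspired
# alternative would drop the master set `states` entirely.
cell_a = observed_cell(partition_a, 'stagnation')
cell_b = observed_cell(partition_b, 'stagnation')
```

Both cells contain the true state, so the two coarse views never contradict each other; the topos-inspired alternative sketched in the text would deny that any such master set exists to reconcile them.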

5. Conclusion
We suggest here that the long-standing belief of the social sciences
in classical logic, and the attendant reliance upon it, needs revision. While
not suggesting that there is a need to abandon classical logic, we
suggest that the social sciences will benefit from certain nonstandard
approaches to understanding phenomena in the social world. As has
been the case in the past, some of the inspiration for this recom-
mendation comes from physics and related disciplines. In particu-
lar, the epistemological problems that confront cosmologists suggest
a rethinking of the traditional paradigm. The assumptions of intu-
itionistic logic and constructivist mathematics, the paraconsistent
logic of da Costa, and the extension of aspects of these nonstandard
approaches to topos theory and related mathematical innovations
point to new possibilities in the social sciences. This is particularly
the case given the structure of a world in which we find ourselves
embedded within complex, interacting systems where cause and effect
are not always clear and where we are forced to rely upon heuristics
and cognitive roadmaps in making decisions. In such a world, it seems
reasonable to relax traditional assumptions of logic, and to ground
it more in contextual frameworks. McAllister’s work suggesting the
indeterminacy of empirical data sets partially illustrates the problem,
as does Velupillai’s path breaking work in algorithmic economics.
While not advocating necessarily a revolution in the social sci-
ences, we suggest a rethinking of the traditional standards of logic
may be justified and worthwhile.

References
Aragones, E., Gilboa, I., Postelwaite, A., & Schmeidler, D. (2005). Fact-free learn-
ing. The American Economic Review, 95(5), 1355–1368.
Axtell, R. L., et al. (2002). Population growth and collapse in a multi-agent model
of the Kayenta anasazi in the long house valley. Proceedings of the National
Academy of Sciences, 99, 7275–7279.
Barabási, A.-L. (2002). Linked: The New Science of Networks. New York: Perseus
Books Group.
Bernauer, J. C., et al. (2010). High-precision determination of the electric and
magnetic form factors of the proton. Physical Review Letters, 105(24).
Bishop, E. (1967). Foundations of Constructive Analysis. New York: McGraw-
Hill.
Bondy, J. A. and Murty, U. S. (2008). Graph Theory. New York: Springer.
Borrill, P. L. and Tesfatsion, L. (2011). Agent-based Modeling: The Right Mathe-
matics for the Social Sciences? Ames: Iowa State University-Department of
Economics.
Bridges, D. S. (1999). Constructive mathematics: a foundation for computable
analysis. Theoretical Computer Science, 219(1), 95–109.
Brouwer, L. E. J. (1913). Intuitionism and formalism. (A. Dresden, Trans.). Bul-
letin of American Mathematical Society, 20(2), 81–96.
Chaitin, G. J., Doria, F. A. and da Costa, N. C. (2012). Gödel’s Way: Exploits
into an Undecidable World. Boca Raton: CRC Press.
da Costa, N. C. and French, S. (2003). Science and Partial Truth: A Unitary
Approach to Models and Scientific Reasoning. Oxford: Oxford University
Press.
da Costa, N. and Doria, F. A. (2005). Computing the future. in K. V. Velupillai
(ed.), Computability, Complexity and Constructivity in Economic Analysis,
pp. 15–50. Hoboken: Wiley-Blackwell.
Dean, D. J. and Elliott, E. (n.d.). Complex systems, decision-making and com-
putability: a social science perspective. This volume.
Doria, F. A. (n.d.). Axiomatics, the social sciences, and the Gödel phenomenon:
A Toolkit. This volume.
Epstein, J. (1997). Agent-based computational models and generative social sci-
ence. Complexity, 4(5), 41–57.
Epstein, J. and Axtell, R. (1996). Growing Artificial Societies: Social Science
From the Ground Up. Cambridge: MIT Press.
Fuchs, C. A. (2010). Coming of Age with Quantum Information. Cambridge: Cam-
bridge University Press.
Fuchs, C. A. (2014). Introducing QBism. in New Directions in the Philosophy of
Science, pp. 385–402. Zürich: Springer International Publishing.
Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Sci-
ence, 3(1), 20–29.
Gigerenzer, G. and Brighton, H. (2009). Homo heuristicus: why biased minds
make better inferences. Topics in Cognitive Science, 1(1), 107–143.
Gilboa, I., Postlewaite, A. W. and Schmeidler, D. (2008). Probability and uncer-
tainty in economic modeling. The Journal of Economic Perspectives, 22(3),
173–188.
Hoffman, D. D. (2008). Conscious realism and the mind-body problem. Mind and
Matter, 6(1), 87–121.
Hoffman, D. D. (2014). The origin of time in conscious agents. Cosmology, 18,
494–520.
Hoffman, D. D. and Singh, M. (2012). Computational evolutionary perception.
Perception, 41(9), 1073–1091.
Kahneman, D. and Tversky, A. (1979). Prospect theory: an analysis of decision
under risk. Econometrica, 47, 263–291.
Kahneman, D. and Tversky, A. (1996). On the reality of cognitive illusions. Psy-
chological Review, 103, 582–591.
Koppl, R., Kauffman, S., Felin, T. and Longo, G. (2015). Economics for a creative
world. Journal of Institutional Economics, 11(1), 1–31.
Leinster, T. (2014). Basic Category Theory, Vol. 143. Cambridge: Cambridge
University Press.
Markopoulou, F. (2000a). An insider’s guide to quantum causal histories. Nuclear
Physics B — Proceedings Supplements, 88(1), 308–313.
Markopoulou, F. (2000b). The internal description of a causal set: what the uni-
verse looks like from the inside. Communications in Mathematical Physics,
211(3), 559–583.
McAllister, J. W. (2007). Model selection and the multiplicity of patterns in
empirical data. Philosophy of Science, 74(5), 884–894.
McAllister, J. W. (2010). The ontology of patterns in empirical data. Philosophy
of Science, 77(5), 804–814.
McAllister, J. W. (2011). What do patterns in empirical data tell us about the
structure of the world? Synthese, 182(1), 73–87.
McAllister, J. W. (2012). Climate science controversies and the demand for access
to empirical data. Philosophy of Science, 79(5), 871–880.
Newman, M. (2010). Networks: An Introduction. Oxford: Oxford University Press.
Pohl, R., et al. (2010). The size of a proton. Nature, 466(7303), 213–216.
Rehmeyer, J. (2013). One of the Most Abstract Fields in Math Finds Application
in the ‘Real’ World. Retrieved from Science News: https://www.sciencenews.
org/article/one-most-abstract-fields-math-finds-application-real-world.
Rovelli, C. (1996). Relational quantum mechanics. International Journal of The-
oretical Physics, 35(8), 1637–1678.
Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal
of Economics, 69(1), 99–118.
Simon, H. A. (1957). Models of Man: Social and Rational. New York: John
Wiley & Sons.
Smolin, L. (2001). Three Roads to Quantum Gravity. New York: Basic Books.
Spivak, D. I. (2013). Category theory for scientists. Preprint, arXiv:1302.6946.
Velupillai, K. V. (ed.). (2005). Computability, Complexity and Constructivity in
Economic Analysis. Hoboken: Wiley-Blackwell.
Velupillai, K. V. (2007). The impossibility of an effective theory of policy in a
complex economy. in Complexity Hints for Economic Policy, Salzano, M.
and Colander, D. (eds.), pp. 273–290. Milan: Springer.
Velupillai, K. V. (2010). Computable Economics. Oxford: Routledge.
Wolfram, S. (2002). A New Kind of Science. Champaign: Wolfram Media.
Chapter 7

Is Economics a Science?
A Gödelian Perspective

Sami Al-Suwailem
Islamic Development Bank Group,
Jeddah, Saudi Arabia

Science is only possible because some things are impossible.


— John Barrow (1998, p. vii)

1. Introduction
Emanuel Derman, director of the Financial Engineering Program at
Columbia University, and probably the first “quant” on Wall Street
with a Ph.D. in particle physics, makes some interesting comparisons
between financial engineering and (real) engineering.
According to Derman (2007), science — mechanics, electrody-
namics, molecular biology, and so on — seeks to discover the funda-
mental principles that describe the world. Engineering is about using
those principles constructively for a useful purpose. Thus, mechan-
ical engineering is based on Newton’s laws. Electrical engineering
is based on Maxwell’s equations and solid-state physics. Similarly,
bioengineering is based on the principles of biochemistry, physiology,
and molecular biology. Derman then asks the obvious question: what
is the science behind financial engineering? Having spent about 30

207
years as a financial engineer, Derman is unequivocal in his assertion:

Financial engineering rests on a shaky basis . . . There is as yet no
truly reliable financial science beneath financial engineering. . . . So,
while we in financial engineering are rich in techniques (stochas-
tic calculus, optimization, the Hamilton–Jacobi–Bellman equation,
and so on), we don’t (yet? ever?) have the right laws of science to
exploit. (Derman, 2007, 2011, 2014).

Since modern finance is the largest application (in terms of
monetary value) of the neoclassical economic theory, it becomes
apparent that mainstream economics, as it stands now, is still way
behind. Some would argue that an economist is an “engineer” and
not a scientist (e.g., Mankiw, 2006; Shiller, 2013). But the ques-
tions raised by Derman are still unanswered: There can be no engi-
neering without science. If an economist is considered an engineer,
then it raises the question: what is the science behind this kind of
engineering?
This is not a harmless matter. In 2008, the world witnessed the
collapse of the largest credit bubble in history, and the worst eco-
nomic recession since the Great Depression, from which the world
is still suffering and struggling to find its way out. Millions of citi-
zens lost their homes, trillions in wealth evaporated, and trillions in
sovereign debt are waiting to be paid by the same innocent citizens
who were the victims of the Global Financial Crisis. The future might
see an even worse crisis, that of climate change, as we will see. These
events could have been avoided, or at least mitigated, if economics1
were to perform its scientific functions.
The question: “Is Economics a Science?” has been raised prob-
ably hundreds of times since economics became a distinct field of
intellectual investigation. As we shall argue, there are good reasons
to believe that economics of the pre-World War II era performed the

1. From now on, by “economics” we mean mainstream economics, particularly
neoclassical theory, unless otherwise stated.
functions of science. The works of economists like Smith, Ricardo,
Mill, Marshall, Wicksell, Veblen, Keynes, Hayek, Schumpeter, among
others, can be viewed as scientific explanations behind economic phe-
nomena. However, post-World War II economics, the one dominated
by the works of Arrow and Debreu, takes a different direction. This
latter approach to economics, which became the mainstream, raises
deep questions about the scientific nature of economics.
In addressing the question: “Is Economics a Science?” this chap-
ter takes a practical approach: What can we, economists, learn from
modern science and mathematics? This might sound strange given
the heavy use of mathematics in economic theory. As we shall see,
mainstream economics learned little from Gödel and other major
contributors to modern mathematical logic. Our objective is not to
simply add one more technique to economists' already extensive
arsenal. Rather, it is to examine the consistency of
economic theory, and to integrate the criteria of natural science,
namely conservation laws, with the results of Kurt Gödel on the con-
sistency of formal systems. This approach brings interesting insights
into this matter and, hopefully, points the way forward toward
reformulating economics into a proper science that contributes
positively to the welfare and prosperity of human civilization.
The chapter is organized as follows. Section 2 presents the main
functions of science and examines to what extent economics has
served these functions. Section 3 presents the criteria of natural
sciences, namely conservation laws and invariance principles. Since
economics should be no different from natural sciences in principle,
as Milton Friedman argues, economics is evaluated based on these
criteria. Section 4 discusses the logical foundations of conservation
laws. It is argued that conservation in physical systems is equivalent
to consistency in formal systems. In both, there cannot be something
for nothing. Section 5 applies this criterion to economics. In particu-
lar, it is shown that economic theory fails to prevent many forms of
“something for nothing,” for example, Ponzi schemes and bubbles.
Logical formulation of these activities is presented, which displays
their contradictory and paradoxical nature. These contradictions
translate into violations of conservation laws, and therefore, render the
economic system unsustainable. Section 6 presents yet another exam-
ple of how economic theory fails to satisfy conservation laws, that
of perpetual growth. Section 7 is a synopsis of the previous sec-
tions highlighting the fallacies arising from the misuse of mathe-
matical models in economics. Section 8 discusses important lessons
that we need to learn from science. The conclusion is presented in
Section 9.

2. Economics and the functions of science


In physics, three laws explain 99% of all phenomena. In economics,
99 laws explain 3% of all phenomena.
— Andrew Lo (Mangelsdorf, 2010)

In general, science aims to perform the following two functions
regarding the phenomenon under investigation (Blaug, 1992, Chap. 1;
Elster, 2015, Chap. 1):

(1) Explanation;
(2) Prediction.

Explanation is the framework provided to understand the phenomenon
at hand. More accurately, it highlights (possible) causal
relations that culminate in the observed phenomenon. For exam-
ple, to explain the Great Depression, a theory provides a framework
that shows how different factors contributed to the event. Explaining
the Depression, or the Global Financial Crisis, or any economic phe-
nomenon, is more or less like storytelling, as D. McCloskey (1998)
argues. This requires detecting influential links between different
parts in a manner that brings together the full story or the big picture
in line with the observed reality. The objective, using the words of
Adam Smith, is to “introduce order into [the] chaos of jarring and
discordant appearances” (in Heilbroner and Milberg, 1995, p. 16).
Is Economics a Science? A Gödelian Perspective 211

Probably most economic studies prior to World War II were concerned
mainly with explaining economic phenomena. J. M. Keynes’s
General Theory, for example, was an explanation of the Great
Depression, which Classical Economics failed to explain, let alone
predict. From this perspective, economics was an explanatory sci-
ence, aiming at understanding the nature and causes of the wealth
of nations.
After World War II, however, formal mathematics became the
language of economic theory (Weintraub, 2002). Mathematics, obvi-
ously, is of immense value in clarifying ideas and communicating
arguments. It is particularly necessary for providing precise predic-
tions of the future, which is the second function of science.
Both prediction and explanation seek to rationalize a kind
of “expectation.” Explanation rationalizes retrospective, ex post,
expectations. Given the relevant factors, we would expect a par-
ticular outcome to be the result; hence we “understand” the phe-
nomenon. Prediction, on the other hand, rationalizes prospective, ex
ante, expectations. Given a specific set of relations and initial con-
ditions, we would expect a particular outcome to obtain. Both per-
spectives, therefore, seek a rational and consistent view of the world.
From this point of view, both explanation and prediction constitute
a scientific endeavor to learn and discover the truth. Explanation,
by nature, provides a kind of prediction, qualitative or “pattern pre-
diction,” as Friedrich Hayek describes it (Caldwell, 2004, p. 310).
Precise or quantitative prediction, on the other hand, doesn’t natu-
rally follow from explanation. There is a sort of tradeoff between the
two, as we argue below.

Price of prediction
The ability to quantitatively predict the future using mathemati-
cal models is unquestionably a valuable objective of science. But
it comes at a price. The price is the abstraction and the simplify-
ing assumptions that must be adopted for the mathematical model
to be tractable and, therefore, useful. For economic models, such
assumptions are obviously not always realistic, as Friedman (1953)
famously hypothesizes. Businesses, Friedman argues, need not be
“profit maximizers” in reality, but they behave as if they are. What
matters in the end is not the specific assumptions the model adopts,
but the predictions of the model. Friedman (p. 149) argues that,

. . . theory is to be judged by its predictive power for the class of
phenomena which it is intended to explain . . . the only relevant test
of the validity of a hypothesis is comparison of its predictions with
experience.

So precise prediction comes at the expense of realistic explanation
of the phenomenon, or, following Mayer (1993), there is a trade-off
between truth and precision. This seems to be a reasonable price to
pay — we would be happy to give up on a deep understanding of
economic reality to gain precise predictions of its future behavior.
But was economics able to make this trade-off?
It is clear that economic models failed miserably at predicting the
major events since World War II. First was the impact of the oil shock
in 1973, after which structural models and Keynesian economics came
under severe attack. Then came the 1987 crash of the stock market.
Then the Long Term Capital Management Fund collapsed in 1998,
which threatened not only the US economy but global markets at
large. And then the world was hit by the mother of all financial
crises: the Global Financial Crisis of 2007–2009.
Shortly before the crisis, in 2007, Olivier Blanchard reviewed
the state of macroeconomics and concluded: “The state of macro
is good” (Blanchard, 2009, p. 210). Similarly, Gregory Mankiw, in
2006, reviewed the development of macroeconomics, and wrote: “a
new consensus has emerged about the best way to understand eco-
nomic fluctuations” (Mankiw, 2007, p. 38). Daniel Hausman, in 2007,
wrote in the introduction to his anthology Philosophy of Economics:
“Although it is overly optimistic to claim that the central economic
problems have been solved . . . such a claim today, unlike a generation
ago, would not strike most people as absurd” (Hausman, 2008, p. 2).
Similar comments were made prior to the crisis by Robert Lucas and
Ben Bernanke (see Keen, 2011, pp. 203–267), among others.
For mainstream economics, the Global Financial Crisis is an out-
lier that could have never been predicted. Lucas (2009) writes in The
Economist:

One thing we are not going to have, now or ever, is a set of models
that forecasts sudden falls in the value of financial assets,2 like the
declines that followed the failure of Lehman Brothers.

Similarly, Eugene Fama, father of the “Efficient Market Hypothesis,”
argues that the Global Financial Crisis was driven by an economic
decline that was “unpredictable at the time” (Cassidy, 2010). If eco-
nomics can never predict or explain such major economic events, how
could it be a science? And what value does it offer to society?
The fact is, however, that many reputable economists predicted and
warned against the crisis (Bezemer, 2011). Still, the majority of eco-
nomic forecasters missed the crisis by a wide margin (Wieland and
Wolters, 2013). Mainstream economics today is apparently in the
same position as classical economics after the Great Depres-
sion. Both failed to predict the disaster, and both were unable to
explain it after the event. Eugene Fama was clear about this (Cas-
sidy, 2010):

There’ll be a lot of work trying to figure out what happened [in
the Global Financial Crisis] and why it happened, but we’ve been
doing that with the Great Depression since it happened, and we
haven’t really got to the bottom of that.

From this perspective, mainstream economics seems to have progressed
little, if at all, over the past 80 years.
In summary, modern economic theory gave up explanation to
gain prediction. But it seems it lost both. Neoclassical theory today
provides little understanding of the major events around us and fails
miserably in predicting any. To achieve the main objectives of science,

2 A “sudden” event is not foreseeable by definition.
economics needs to take a closer look at science to see what is missing
in its domain.

3. Characteristics of modern science


If there is a single property characterizing modern science, it is the
recognition of its internal limits: limits of what we can know and of
what we can do. The work of Kurt Gödel, the focus of this volume,
is a landmark in this journey, of which more will be presented later.
According to Karl Popper (1959, pp. 48, 247), a scientific law
is a set of “prohibitions.” It prohibits or restricts a particular set
of otherwise possible outcomes. From this perspective, scientific dis-
covery brings in more restrictions or limits. These limits, however,
are not harmful as such. The law of gravity prevents us from flying.
However, it gives us the incentive to invent the tools and means
to fly without violating the law.
In the 20th century, however, scientific progress went even
further. Werner Heisenberg’s Uncertainty Principle, Kurt Gödel’s
Incompleteness Theorem, Alan Turing’s Halting Theorem, among
other discoveries, launched a new era whereby science and reason
could discover their own limits. While the law of gravity puts limits
on our abilities to act, these new discoveries put limits on our abilities
to know. As Barrow (1998, p. 252) points out:

Our knowledge about the Universe has an edge. Ultimately, we may
even find that the fractal edge of our knowledge of the Universe
defines its character more precisely than its contents; that what
cannot be known is more revealing than what can.

This is a distinguishing feature of modern 20th-century science.
Economics seems somehow isolated from all these developments.
There are no limits in economic theory on how much the economist,
or the agents in an economy, can know, nor on how much they can
compute or calculate. All such limits are merely “practical,” that is, can be
bypassed with proper advances in technology. From this perspective,
economics fails to share the distinguishing features that characterize
modern science.
Economics and natural science


In his Nobel Prize lecture, Friedman (1976) asks the question: Do
not the social sciences require fundamentally different methods of
investigation than the physical and biological sciences? Should they
not be judged by different criteria? He answers (p. 267):

I have never myself accepted this view. I believe that it reflects
a misunderstanding not so much of the character and possibilities
of social science as of the character and possibilities of natural
science. In both, there is no “certain” substantive knowledge; only
tentative hypotheses that can never be “proved”, but can only fail
to be rejected . . . In both social and natural sciences, the body of
positive knowledge grows by the failure of a tentative hypothesis
to predict phenomena the hypothesis professes to explain . . . In
both, experiment is sometimes possible, sometimes not (witness
meteorology). In both, no experiment is ever completely controlled,
and experience often offers evidence that is the equivalent of con-
trolled experiment. In both, there is no way to have a self-contained
closed system or to avoid interaction between the observer and
the observed. The Gödel theorem in mathematics, the Heisenberg
uncertainty principle in physics, the self-fulfilling or self-defeating
prophecy in the social sciences all exemplify these limitations.

This seems a sensible view that is shared by other economists (e.g.,
McCloskey, 1994; Smith, 2008). There is no reason a priori to treat
social sciences fundamentally differently from natural sciences. In
the end, social activities take place within nature, and by agents who
abide by the laws of nature. While social phenomena might have their
own regularities, they cannot violate the laws of nature. Hence, social
and natural sciences have much in common. Accordingly, we discuss
below the defining characteristics of science and see how economic
theory can learn from them.

Conservation laws
Arguably, the principle with the greatest impact on the
development of science is that of conservation laws. Scientists since
Galileo Galilei realized that a physical system cannot generate more
energy than it starts with, without an external source of energy
(Coopersmith, 2010, Chap. 2). Newton’s third law reflects this prin-
ciple: For each action, there is a reaction equal in magnitude and
opposite in direction. An action cannot, as such, generate a reaction
of a different magnitude.
During the mid-19th century, the law of conservation of energy
was established (Lightman, 2000). Other conservation laws were dis-
covered concerning momentum and electric charge, among others.
No scientific theory is to be acceptable if it violates these laws. Con-
servation of energy is the first law of thermodynamics, which states
that, in an isolated system, total energy is constant.
The second law of thermodynamics, another major law of nature,
states that, while total energy in an isolated system is constant, avail-
able energy for useful work cannot increase over time. In other words,
useless energy, or entropy, tends to rise over time. The third law of
thermodynamics states that entropy can never become zero. Alterna-
tively, no system can reach a temperature of absolute zero (−273.15°C).
As theoretical physicist Kaku (2008) points out, the first law
says “You can’t get something for nothing.” The second law says,
“You can’t break even.” The third law says, “You cannot even get
out of the game.” Despite the obvious importance of these laws to
economic activities, mainstream economics pays little attention, if
any, to them. We will discuss this connection in detail in the next
section.

Perpetual motion machines


The conservation law is a statement that holds as a general princi-
ple across all scientific theories and applications. The most obvious
example of deviations from the law is perpetual motion machines
(PMM). These are machines that are claimed to systematically gen-
erate more energy over time than what they start with, without an
external source of energy. PMMs have an interesting history (see Ord-
Hume, 2005). Proposals for PMMs suffer from no shortage of imag-
ination and creativity. They only lack proper accounting: Somehow,
their inventors think they can get more energy than they have. None of these
machines has stood the test.
To be precise, there are two kinds of perpetual motion (Dewdney,
2004):
The first type is inherently present in the universe. A body mov-
ing in outer space, that is, without friction, will keep moving as long as
nothing interrupts it. This is Newton’s first law of motion. Electrons
are in continuous motion in atoms. Atoms and molecules are con-
tinuously moving in what is known as “molecular chaos,” producing
the Brownian motion. This motion does not decay or degrade but
might be transferred from one molecule to another, in which case,
one will become slower while the other will become faster. However,
total energy is conserved: no atom or molecule will generate more
energy than it possesses.
The second type of perpetual motion is the one in which the
system is able to generate more energy than what it starts with,
without any external source of energy. In other words, a PMM is
not only able to use its energy with 100% efficiency but also able
to generate additional energy to produce additional work. PMMs,
therefore, violate the conservation law. The conservation law is a
universal law; it applies to both classical and quantum physics. No
PMM can, therefore, exist, neither at the classical level nor the quan-
tum level.

Nature of conservation laws


Conservation laws, by nature, are observable at the system’s level,
that is, when there are two or more interacting parts. The laws might
not be observable at the level of each individual part. The reason is
that energy can take many forms, and it gets transformed as it is
transferred from one part to another. When two balls or molecules
collide, for example, total energy is conserved, that is, remains constant,
but it gets transferred from one body to the other.
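This system-level bookkeeping can be made concrete with a minimal numerical sketch (the masses and velocities are illustrative values of our choosing; the formulas are the standard one-dimensional elastic-collision solutions):

```python
# Head-on elastic collision of two balls in one dimension: momentum and
# kinetic energy are conserved by the system, though each ball's share changes.

def elastic_collision(m1, v1, m2, v2):
    """Post-collision velocities from conservation of momentum and energy."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

def momentum(m1, v1, m2, v2):
    return m1 * v1 + m2 * v2

def kinetic_energy(m1, v1, m2, v2):
    return 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0   # hypothetical initial state
u1, u2 = elastic_collision(m1, v1, m2, v2)

# Conservation holds at the system level, not for each ball separately.
assert abs(momentum(m1, v1, m2, v2) - momentum(m1, u1, m2, u2)) < 1e-9
assert abs(kinetic_energy(m1, v1, m2, v2) - kinetic_energy(m1, u1, m2, u2)) < 1e-9
```

Each ball's individual energy changes, yet both conservation checks pass exactly; the laws become visible only when the interacting parts are audited together.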
This feature of conservation laws, particularly conservation of
energy, was instrumental in many discoveries. When scientists realize
that there seems to be “something for nothing,” this indicates the
existence of a particle or a force that was not accounted for (the
neutrino, which Pauli postulated to balance the energy books of beta
decay, is a classic case). Time and again, the results confirm conservation
of energy (Lightman, 2000, pp. 55–57; Schumacher, 2010, Lecture 4).
One important feature of conservation laws is that they are
deeply ingrained in the universe. There are many local or detailed
laws governing various forms of matter and energy. But, “across the
variety of these detailed laws there sweep great general principles
which all the laws seem to follow,” such as the principles of conser-
vation, notes Feynman (1965, p. 53).
This means that the impossibility of PMMs is deeply rooted in the
laws of the universe. Mathematician and computer scientist Dewdney
(2004) points out that, for each proposed PMM, we might simply
appeal to the general principle of the conservation law and
conclude that the proposal is not workable. Alternatively, we might
analyze each step in the proposed scheme, only to find out that it
is actually impossible, even without invoking the conservation law.
That is, if we analyze each step employing the simplest physical and
mathematical concepts, as Dewdney points out, we always arrive at
the same conclusion: The machine is impossible. Dewdney elaborates
(p. 27):

This is a very strange phenomenon when you think about it. It
implies the consistency between our applied mathematical analyses
and a general fiat about conservation of energy, a consistency that
cannot be explained within any framework of knowledge currently
available to us. If we had no notion whatever that all physical
systems were contained by the law of conservation of energy, we
would still be reaching conclusions of impossibility for these [per-
petual motion] machines based purely on applied mathematical
reasoning.

This might indicate that conservation laws are a kind of “emergent”
law. They are not imposed externally; rather, they are the outcome
of local rules. Even if we ignore conservation laws at the system
level, following through the local rules leads us to the same
result. However, the two perspectives, the conservation law at the
system level and the local rules at the parts’ level, complement each
other. Nature is internally consistent, from the bottom up.
Economic theory lacks the system-level perspective when study-
ing economic activities. It focusses on individual agents, and at the
margin, but pays little attention to the system as a whole (as we
shall see later). This gap leads to divergence of the microbehavior
from the macroperformance. It frequently leads to the “fallacy of
composition,” whereby the payoffs of the individual are at odds with
the overall outcome.
Furthermore, economic theory adopts the “expected utility”
approach, whereby what matters are ex ante (expected) values,
not actual ex post outcomes. Conservation laws, in contrast, apply
before and after the transformation. Total energy at the start must
equal total energy at the end. A PMM is always impossible, not “on
average.” We shall see later how these aspects impact the approach
of neoclassical theory toward analyzing economic phenomena.

4. Logical foundations of conservation laws


Mach (1911, Chap. IV) argues that the “principle of excluded perpet-
ual motion” logically precedes conservation laws. “It is not possible to
create work out of nothing,” he elaborates (p. 69). There is, therefore,
something logically incorrect about PMMs.
The remarkable nature of conservation laws that Dewdney (2004)
points to might be traced back to pure logic. Mathematically, to get
something from nothing is equivalent to having 0 = 1. This can be
easily seen if we formulate the conservation law as a constant sum.
For a given system, total input energy must equal total output
energy: E_input = E_output. More accurately, total energy at any point
in time t > 0 must equal total energy at t = 0 (see Klainerman,
2008, p. 479). A PMM would violate this law by somehow having
the right-hand side increase without a corresponding increase on
the left-hand side. Mathematically, this violates what mathematician
Benjamin (2015, p. 27) calls the “Golden Rule of Algebra: Do unto
one side of the equation as you do unto the other.” Violating this
rule, therefore, leads to 0 = 1.
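The Golden Rule can be sketched mechanically. In the toy sketch below (an illustrative class of our own devising, not from Benjamin's text), operations applied to both sides preserve the balance, while a PMM-style credit to one side alone immediately falsifies it:

```python
class Equation:
    """A balance lhs = rhs, e.g. total input vs. total output energy."""

    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs

    def add_to_both(self, k):
        # The Golden Rule: do unto one side as you do unto the other.
        return Equation(self.lhs + k, self.rhs + k)

    def add_to_rhs_only(self, k):
        # A PMM-style move: output grows with no corresponding input.
        return Equation(self.lhs, self.rhs + k)

    def holds(self):
        return self.lhs == self.rhs

e = Equation(10.0, 10.0)                   # E_input = E_output, arbitrary units
assert e.add_to_both(5.0).holds()          # still balanced
assert not e.add_to_rhs_only(5.0).holds()  # "something for nothing" fails
```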
If a mathematical theory produces the statement 0 = 1, then
it can prove any statement. In other words, the theory provides no
valid reasoning. For a formal system to be consistent, it must not
allow this result. In this manner, the system avoids contradiction.
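The principle behind this (ex falso quodlibet) can be checked directly for propositional logic. The sketch below uses an encoding of our own, with premises and queries as Boolean functions of a model, and tests semantic entailment by enumerating models; an inconsistent premise set has no models, so it vacuously entails every query:

```python
from itertools import product

def entails(premises, query, symbols):
    """True iff the query holds in every model satisfying all premises."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(p(model) for p in premises) and not query(model):
            return False
    return True

# An inconsistent theory: it asserts both p and not-p (its own "0 = 1").
inconsistent = [lambda m: m["p"], lambda m: not m["p"]]

# Ex falso quodlibet: the theory entails q, and not-q, and anything else.
assert entails(inconsistent, lambda m: m["q"], ["p", "q"])
assert entails(inconsistent, lambda m: not m["q"], ["p", "q"])
```

A theory containing its own "0 = 1" thus proves q and not-q alike, which is why consistency, like conservation, is the precondition for a system saying anything at all.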
The “law of non-contradiction” (LNC) is a fundamental principle of
logic. According to Aristotle, it is so fundamental a principle of
scientific inquiry, reasoning, and communication that we cannot
know anything without it (Gottlieb, 2015).3
The role of LNC as the basic, indemonstrable principle is also
affirmed by Gottfried Wilhelm Leibniz. For Leibniz, LNC is equiv-
alent to the law of identity (LI), which states that everything is
identical to itself: “Nothing should be taken as first principles but
experiences and the axiom of identity, or (what is the same thing)
contradiction, which is primitive, since otherwise there would be no
difference between truth and falsehood” (cited in Horn, 2014). The
law of identity implies that, if x = x, then it is impossible to have
x = x + 1, and thus we cannot have 1 = 0. The LI and LNC, therefore,
are equivalent, as Leibniz points out.
We can see these principles reflected in natural laws. The law
of identity translates into Newton’s first law of motion, the law
of inertia: an object remains in its state, of rest or motion, unless
acted upon by an external force (see Meyerson, 1930). The law of
non-contradiction, that is, the impossibility of 0 = 1, translates into
Newton’s third law: For each action, there is a reaction equal in mag-
nitude and opposite in direction. If the reaction were of a different
magnitude, we would end up with 0 = 1.
From this perspective, we can see how the conservation law is
fundamental in science. We can also see how the conservation law
emerges as a general principle even if it were ignored at the local
level. As discussed above, the same conclusion would be reached
based purely on applied mathematical reasoning, as Dewdney (2004)

3 It should be pointed out that paraconsistent logic does not violate the LNC.
Paraconsistent logic accommodates inconsistency in a sensible manner that treats
inconsistent information as informative. However, it does not entail the view that
there are true contradictions (Priest et al., 2013).
points out. The reason is that, mathematically, we simply cannot
have 1 = 0, and thus, using purely logical arguments, total energy
must be conserved.
This shows how mathematics and science are in a surprising har-
mony. Wigner (1960) famously wondered about the “unreasonable
effectiveness of mathematics in natural sciences.” Freeman Dyson
notes that “one of the central themes of science [is] the mysterious
power of mathematical concepts to prepare the ground for physical
discoveries, which could not have been foreseen or even imagined by
the mathematicians who gave the concepts birth” (cited in Hersh,
2006, p. 320). Our brains and neural systems are parts of the uni-
verse, and thus, they are governed, at least in principle, by the same
laws. Further, as Reuben Hersh (2006, p. 321) points out, since the
origin of mathematics is physical reality, “mathematics can never
escape from its inner identity with physical reality.”
Not only is there a deep harmony between the two disciplines;
physics, moreover, can be used to prove mathematical results. While
mathematics is used as a tool for physics, physics can also be a tool
for mathematics. Mathematician Mark Levi shows how the law of
conservation of energy can be used to derive the famous Pythagorean
theorem (Levi, 2009, Chap. 2). Levi provides many other examples
in which physical concepts are used to demonstrate or solve math-
ematical problems. As Levi (p. 2) points out, “many of fundamen-
tal mathematical discoveries (Hamilton, Riemann, Lagrange, Jacobi,
Möbius, Grassmann, Poincaré) were guided by physical considera-
tions.” According to mathematician Kline (1959), the development of
mathematics was driven primarily by the need to understand and
master nature. Hence (p. 40),
. . . the greatest mathematicians, Eudoxus, Apollonius, Archimedes,
Fermat, Descartes, Newton, Leibniz, Euler, Lagrange, Laplace,
Hamilton, Cauchy, Gauss, Riemann, and Poincaré, to mention just
a few — were also giants of science. All of these men would have
earned enduring fame for their physical researches alone.

We shall see later how this fact relates to the (mis)use of mathematics
in economics and in science in general.
Paradox of zero-sum games


From an economic point of view, the statement 0 = 1 can be written
as 1 = −1, which represents the outcome of a zero-sum game, that
is, a transaction in which one party gains while the other loses.4
Obviously, trade, in principle, is supposed to achieve mutual gain
where both parties win. A zero-sum game does not create value; it
only transfers wealth from one party to another. A zero-sum game
might be mutually acceptable only on an ex ante basis, where the final
outcome is uncertain; however, ex post, that is, the actual outcome,
will never be mutually acceptable.
Recall that conservation laws must be satisfied both before and
after the process takes place, that is, ex ante and ex post.
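Both claims can be checked on a minimal concrete case, matching pennies (the ±1 payoffs are the standard convention, used here purely for illustration):

```python
from itertools import product

# Matching pennies: player 1 wins +1 if the coins match, else loses 1.
# Player 2's payoff is always the exact negative: a pure wealth transfer.
def payoffs(a, b):
    p1 = 1 if a == b else -1
    return p1, -p1

outcomes = list(product(["H", "T"], repeat=2))

# Ex post: every realized outcome sums to zero (one winner, one loser).
assert all(sum(payoffs(a, b)) == 0 for a, b in outcomes)

# Ex ante: under 50/50 play each player expects 0, which may look "fair",
# but no outcome creates value; the game only redistributes it.
ev1 = sum(payoffs(a, b)[0] for a, b in outcomes) / len(outcomes)
assert ev1 == 0
```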
Logically, a zero-sum game involves a contradiction. A logical
contradiction arises when a statement p in a formal system is both
true and false. One famous example is the “Liar’s Paradox,” whereby
we have the following statement:
p: This statement is false.
Is statement p above true or false? If it is true, it is false; if it is false,
it is true. This means that it is true and false at the same time, that
is, 1 = 0. This is a contradiction.
A zero-sum game is a transaction that also involves a paradox:
Each party seeks to win, when in fact, one can win only if the other
loses. We might model such a game using a variant of the cyclical
Liar (see Clark, 2002, p. 268):
p(1): This statement is true if and only if p(2) is false.
p(2): This statement is false if and only if p(1) is true.
Statement p(1) reflects the payoff of player 1. Player 1 wins only if
player 2 loses. Player 2 loses only if player 1 wins. Suppose p(1) is
true, then p(2) says that it is false. If p(2) is false, then it is true,
since that is what it says — a contradiction. Conversely, suppose

4 In an exchange, the ex ante payoffs of the two parties are supposed to be equal.
In a zero-sum game, this implies, ex post, that 1 = −1.
p(2) is true, then p(1) says that it is false, which is true, and we end
up with a similar contradiction. Put differently, if p(1) is true, p(2)
becomes a Liar; if p(2) is true, p(1) becomes a Liar.
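The instability can be verified by brute force. The sketch below encodes a simplified standard variant of the cyclical Liar (our encoding, not the chapter's: p(1) asserts that p(2) is true, and p(2) asserts that p(1) is false; each statement's truth value must match what it asserts) and searches every truth assignment:

```python
from itertools import product

# A two-statement Liar cycle: p1 asserts "p2 is true"; p2 asserts "p1 is false".
constraints = [
    lambda p1, p2: p1 == p2,        # p1 is true iff what it asserts holds
    lambda p1, p2: p2 == (not p1),  # p2 is true iff what it asserts holds
]

# Search all four assignments of truth values for a stable one.
stable = [(p1, p2)
          for p1, p2 in product([False, True], repeat=2)
          if all(c(p1, p2) for c in constraints)]

# No assignment satisfies both statements at once: the pair is
# "logically unstable" in exactly the sense discussed above.
assert stable == []
```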
Another way to phrase a zero-sum game in a Liar-style structure
is the one proposed by Daynes et al. (2015, p. 40):

p(1): Statements p(1) and p(2) cannot be both true.
p(2): Statements p(1) and p(2) cannot be both true.

Each statement can be viewed as representing the payoffs of the two
parties. The payoff cannot be positive for both. Hence the contra-
diction. The contradiction is reflected in the repeated attempts of
players to recover their losses from the previous rounds.
As game theorist Roy Gardner (1995, p. 37) elaborates:
No value can be created in a zero-sum game; the players are forever
at each other’s throats. In such a society, economic change is nearly
impossible, and when it is possible, it comes only at a very high cost.

Paradoxes such as the Liar have no stable truth value: from one
perspective, they are true, from the other, they are false. They are,
therefore, “logically unstable” assertions (see Kremer, 2014; Bolan-
der, 2014). When such paradoxes are translated into programming
code, they create an infinite loop, which, if implemented on a physical
machine, will exhaust its resources and result in a crash (see Stuart,
machine, will exhaust its resources and result in a crash (see Stuart,
2013, p. 263).
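A direct translation into code (our minimal Python rendering of the observation attributed to Stuart, 2013) shows the mechanism: a function whose value is the negation of itself never settles, and evaluating it exhausts the machine's resources. In Python the crash surfaces as a RecursionError once the bounded call stack is spent, rather than as a literal endless loop:

```python
def liar():
    # "This statement is false": its value is the negation of itself.
    return not liar()

try:
    liar()  # never returns a stable truth value
except RecursionError:
    print("resources exhausted: the Liar has no fixed point")
```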
Accordingly, if market activities involve such contradictions, the
market will likely be unstable, since players are “forever at
each other’s throats,” as Gardner points out. Instability can also be
inferred from the violation of conservation laws, as we shall see in
the next section.

Enter Gödel
Kurt Gödel (1906–1978) was the greatest logician of the 20th cen-
tury and was considered by the best minds among his contemporaries
to be possibly the greatest logician since Aristotle (Wang, 1996;
Dawson, 1997). Following Wang (1996, p. 3), a long-time pupil of
Gödel, the famous Incompleteness Theorem can be stated
in any of the following forms:
• Mathematics is inexhaustible.
• Mathematics is mechanically (or algorithmically) inexhaustible (or
incompletable).
• Any consistent formal theory of mathematics must contain unde-
cidable propositions.
• No theorem-proving computer (or program) can prove all and only
the true propositions of mathematics.
• No formal system of mathematics can be both consistent and
complete.5
Gödel’s Incompleteness Theorem represents a revolution in logic and
philosophy of mathematics and has substantial impacts on the phi-
losophy of science and the social sciences. Yet, the theorem receives
less attention than it deserves in these areas.
The theorem shows that to avoid contradiction, a formal system
must be incomplete, that is, the system cannot prove all true state-
ments formulated in the system. Truth extends beyond deductive
proof. Consistency of a formal system cannot be proved from within
the system itself. Since a system that involves the contradiction 1 = 0
can prove anything, such a system must be complete. This means
that to avoid contradiction, the system must not prove every statement,
and thus, it must be incomplete (see Berto, 2009, pp. 105, 179). If
a system can prove its own consistency, then it must be inconsistent
(Smullyan, 1987, Chap. 13). To be consistent, therefore, it cannot
prove its own consistency, and thus it must be incomplete.
Consistency in formal systems, it can be argued, plays a similar
role as conservation laws in physical systems — both forbid “some-
thing for nothing.” If this is indeed the case, then we might ask, is
economic theory, represented by Arrow–Debreu general equilibrium

5
There are many introductory expositions of Gödel’s work for the general reader.
For example, Nagel and Newman (1958), Hofstadter (1979), Kline (1980), Berto
(2009), and Chaitin et al. (2012), in addition to the first chapter of this volume
by F. A. Doria.
Is Economics a Science? A Gödelian Perspective 225

system, consistent? Obviously, the theory cannot prove its own
consistency. Since the theory admits the paradoxes pointed out above
(and more to follow), moreover, there is a good reason to doubt its
consistency.

Novelty and invariance


Formal systems must start from given axioms. From these axioms,
using a set of rules, theorems are derived. For the theorems to be
valid, the derivation must not lead to a contradiction. A contradiction means we can prove anything, so there would be no need for the axioms we started with in the first place.
Physical systems start with initial conditions and rules (physi-
cal laws). From these, we can construct many interesting and useful
products. But these products must comply with conservation laws
since humans cannot create something out of nothing. Hence, con-
sistency in formal systems and conservation in physical systems play
more or less a similar role.
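The role of conservation laws can be made concrete with a one-dimensional elastic collision (a minimal numerical sketch, not from the source): the individual velocities change, yet total momentum and total kinetic energy are invariant.

```python
def elastic_collision(m1, v1, m2, v2):
    """1-D elastic collision: returns the post-collision velocities."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

u1, u2 = elastic_collision(2.0, 3.0, 1.0, -1.0)
# individual states change, but the conserved totals do not
assert abs((2*3.0 + 1*(-1.0)) - (2*u1 + 1*u2)) < 1e-9                  # momentum
assert abs((0.5*2*3.0**2 + 0.5*1*1.0**2)
           - (0.5*2*u1**2 + 0.5*1*u2**2)) < 1e-9                       # energy
```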
According to Gödel, consistency implies incompleteness. This
means there will be (infinitely many) statements formulated in
the system that cannot be proved within the system. In other
words, we will never be able to predict systematically the state-
ments that can be generated from the system. Consistency implies
novelty.
Similarly, in physical systems, we will never be able to systemat-
ically predict the outcomes of a change in the system, a phenomenon
usually referred to as “emergence.” Emergent properties, Nobel lau-
reate Robert Laughlin argues, are generally impossible to deduce
from first principles. Collective organizational phenomena cannot
generally be anticipated through pure deduction, and they must be
discovered experimentally (Laughlin, 2005, pp. 87, 108–109, 212).
Although certain quantities are constant due to the conservation law,
other quantities or qualities might vary unexpectedly. Perfect knowl-
edge of one cannot guarantee the knowledge of the other. In fact,
the constancy of conserved properties might be the reason behind
the nonconstancy and unpredictability of emergent properties. If so,
then we can see how formal and physical systems can be analogous to
each other: consistency implies incompleteness; conservation implies
emergence.
In economics, scarcity is the first principle that governs market
activities. Scarcity in economics plays a comparable role to conserva-
tion in natural systems (Mirowski, 1989, p. 218). In both, we cannot
get something from nothing, which is the same logic for mathemat-
ical consistency. Emergence in economics is reflected in growth and
surplus value, as will be discussed shortly.
There is something problematic here. How is it that invariance
implies novelty, predictability implies unpredictability, decidability
implies undecidability, and scarcity implies abundance? Philosopher
Émile Meyerson points to this tension and suggests a reconciliation.
We might have a set of given elements, but the arrangements of
these elements can create different manifolds, “just as with the aid of
the same letters one can compose a tragedy or a comedy” (Meyerson,
1930, pp. 92–93). Examples from science include magnetism and
other phase-transition phenomena, which arise due to reorganization
without changing the underlying molecules (Ball, 2004). More on the
role of organization in emergence follows shortly.
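Meyerson's letters metaphor is easy to make literal (an illustrative sketch): the same conserved inventory of letters, merely rearranged, yields distinct words with distinct meanings.

```python
from itertools import permutations

letters = "opst"
arrangements = {"".join(p) for p in permutations(letters)}
# the inventory of elements is conserved in every arrangement...
assert all(sorted(w) == sorted(letters) for w in arrangements)
# ...yet rearrangement alone produces many distinct "meanings"
assert {"stop", "spot", "tops", "pots", "post", "opts"} <= arrangements
```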
This duality might tell us something about the universe we live
in: It is continuously balancing change and persistence. If there were
only change, the world would be completely unpredictable, and life would not
flourish in such a chaotic environment. On the other hand, if it were
completely stagnant, life would not have been possible either. We,
therefore, live in a complex world “at the edge of chaos,” a character-
istic feature of complex systems (Kauffman, 1995, p. 26). As Barrow
(1998, p. 191) elaborates, the universe has constraints without which
there would be no patterns in nature. Further,

If there were no patterns of behavior in the Universe then the total
chaotic anarchy that would exist could not give rise to conscious
intelligence like ourselves. No form of organized complexity could
exist or develop unless there were limits to what can occur in the
Universe.
The following table summarizes the three systems: formal, physical,
and economic.

              Formal system     Physical system    Economic system
Invariance    Consistency       Conservation       Scarcity
Novelty       Incompleteness    Emergence          Surplus value

The puzzle of value


Early economists were puzzled by a simple question: Scarcity requires
that commodities are traded in competitive markets at their fair
value; no one can get something for nothing. If this is the case, how
could there be profits or “surplus value”? In an “exchange of equiv-
alents,” the two parties of the trade, from an accounting point of
view, are exactly at the same position before and after the exchange.
So how could they both be better off and make gains from trade?
Even more, how could the economy grow at all out of “exchange of
equivalents”? In other words, if the exchange of equivalents conserves
value, how could value expand? (See Mirowski, 1989, Chap. 4; Keen,
2011, Chaps. 10 and 17; Varoufakis et al., 2011, Chap. 3.)
The above framework might contribute to resolving this old puz-
zle. Although the two sides of a fair exchange are equivalent, trade, in
principle, is not a zero-sum game. In normal circumstances, fair trade
is a win-win, mutual-gain transaction. Each party will be better off
because of the resulting rearrangement of resources. This rearrange-
ment improves the productivity of each party, allowing the total out-
put to grow. Trade improves the efficiency of resource utilization even
without production. Reallocation of goods and services can release
untapped resources and thus create surplus value. Emergence, notes
Laughlin (2005), is an organizational phenomenon. Many, if not most,
of the natural laws arise from collective self-organization. He writes
(p. 218):
In passing into the Age of Emergence, we learn to accept . . . that
organization is important in and of itself — in some cases even the
most important thing.

Trade is a primary means for reorganizing resources among market
participants. From this perspective, trade, in principle, plays a crucial
role in the emergence of surplus value and, consequently, growth.
Historical discussions of surplus value were predominantly reduc-
tionist: the value of a commodity is reduced to the sum of the values
of its inputs (e.g., Keen, 2011, p. 153). But this approach ignores
emergent properties arising from the rearrangement of resources, as
pointed out above. Further, the ways a particular commodity can be
combined with various other commodities to create innovative goods
and services cannot be systematically predicted. Paradoxically, neoclassical theory, while it endorses free trade, gives no room for novelty
and innovation (Al-Suwailem, 2010). In the world of “complete mar-
kets” of Arrow (1964), innovation is inconceivable.
It can be argued, therefore, that novel uses of commodities can-
not be systematically deduced from input commodities. Producers
or sellers, in general, cannot foresee all the uses of their products
in order to incorporate them into their prices. Hence, the price of a
commodity in a free market has an irreducible component due to the
emergent properties. The value of the commodity is, therefore, not
always the sum of the values of the inputs — the whole is greater
than the sum. In fact, this might be a major function of a free market:
to allow for novel and unforeseeable uses of commodities, which is a
driving force of growth. If a commodity were completely reducible to
its inputs, then there would be no point in producing the commodity
in the first place; in a free market, it would be completely redundant.
A commodity produced in a free market, therefore, must have value
beyond its inputs. This added value, however, is not formally quan-
tifiable, that is, it cannot be systematically quantified in advance.
This is probably one area where economics can incorporate valuable
insights from Gödel’s theorem.
So, if the “exchange of equivalents” is viewed as a “conserva-
tion principle,” as Mirowski (1989, p. 121) suggests, this does not
contravene surplus value, in the same way that emergence does
not contravene conservation. Economic growth, just like biological
growth, does not contravene conservation laws. The question: “How
could the economy grow in the presence of scarcity?” is no different
than the question: “How could the biosphere grow, in mass and com-
plexity, in the presence of conservation of energy?” The general trend
of increasing complexity in the universe does not contradict the first
nor the second laws of thermodynamics (Kauffman, 2000; Chaisson,
2001; Schneider and Sagan, 2005).
Although value is conserved in a fair trade, productivity of each
party to the trade is not conserved. Trade reorganizes resources
among participants, and this reorganization allows for novel prop-
erties to emerge. Because of conservation, emergence and economic
growth are possible as conservation would guide the market toward
productive and fruitful allocation of resources, rather than destruc-
tive activities like Ponzi schemes and bubbles, as will be discussed
later.

5. Does economic theory respect conservation laws?


Economists were well aware of the critical role of conservation laws
in science, and how economics was clearly lacking in this regard. For
example:

• Schultz (1938), one of the founders of mathematical and statistical
economics, writes: “. . . what laws of conservation of comparable
scope [to those of physics] do we have in economics? To ask the
question is to answer it. There are none . . . ” (Mirowski, 1989, pp.
359–360; emphasis added).
• Mathematical economist Szegö (1982, p. 3) writes: “Contrary to
the situation in mechanics, no invariant law of conservation or
variational principle seems to hold for economic systems.”
• Nobel laureate Samuelson (1960, p. 673) even questions the need
for conservation laws: “Why should there be laws like the first or
second laws of thermodynamics holding in the economic realm?”
Yet, Samuelson himself invokes what he calls “economic law of grav-
ity,” and other scientific metaphors (see Samuelson, 1958, p. 184;
Mirowski, 1989, pp. 379–386). As Philip Mirowski (1989, p. 9) points
out, while fathers of neoclassical economics borrowed the formalism
of physics into economics, they “neglected to appropriate the most
important part of the formalism, not to mention the very heart of the
metaphor, namely, the conservation of energy.” The implications of
dismissing conservation laws for actual economic behavior are dev-
astating.

Ponzi schemes and bubbles


One major consequence of the absence of conservation laws in eco-
nomics is the possibility of Ponzi schemes and bubbles in a purely
neoclassical, rational expectations economy.
To be clear, consider the following scheme. Player A takes money
from B, promising him to invest it and generate acceptable returns.
A then takes money from C, giving him a similar promise and pays
B the promised returns; A then repeats the process and pays C from
D, ad infinitum. Player A, therefore, is able to get something for
nothing. Or, A could borrow from B, then borrow from C to pay
B, and from D to pay C, ad infinitum. This is a Ponzi debt game.
In a bubble, agents trade a security in order to make returns from
reselling it to other agents. The security might have no intrinsic value;
yet, each member in the chain is making money from the next. It is a
continuous musical chairs game. As long as the music plays, and there
is a continuous inflow of funds or liquidity, the game will be played.
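The scheme just described can be tallied in a toy ledger (an illustrative sketch; the function ponzi and its parameters are ours, not from the source): each earlier entrant is repaid, with the promised return, out of the next deposit, the operator keeps the remainder, and the game is zero-sum overall, with the last entrant bearing the loss.

```python
def ponzi(deposits, r=0.10):
    """Toy Ponzi ledger: earlier investors are repaid with return r out of
    later deposits; the operator keeps whatever remains."""
    investors = [-d for d in deposits]       # everyone pays in first
    operator = sum(deposits)
    for k in range(len(deposits) - 1):
        payout = deposits[k] * (1 + r)       # player k repaid from k+1's money
        investors[k] += payout
        operator -= payout
    return investors, operator

inv, op = ponzi([100.0] * 4)
assert inv[-1] == -100.0                     # the last entrant loses everything
assert all(x > 0 for x in inv[:-1])          # earlier entrants appear to "win"
assert abs(sum(inv) + op) < 1e-9             # zero-sum: nothing is created
```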
Does economic theory allow for this kind of behavior? Could
Ponzi schemes be played by “rational” agents in a neoclassical econ-
omy? The answer, surprisingly, is yes! Despite the central role of
scarcity in economics, Ponzi games and “something for nothing” are
strangely acceptable in economic theory. Samuelson (1957, pp. 215–
216) realized this problem very early on:

. . . any speculative bidding up of prices at a rate equal to carrying
costs can last forever. This is precisely what happens in a tulip
mania or new-era bull stock market. The market literally lives on
its own dreams, and each individual at every moment of time is
perfectly rational to be doing what he is doing.
Of course, history tells us that all tulip manias have ended in
finite time. Every “Ponzi scheme” and every chain letter scheme
finally runs out. . . . But I have long been struck by the fact, and
puzzled by it too, that in all the arsenal of economic theory we
have absolutely no way of predicting how long such a “Stage (i)”
will last.

“Thus,” comments Rosser (2000, p. 124), “from the beginning,
the New Classical school was shadowed by the specter of multiple
rational expectations equilibria with most of them being bubbles or
sunspots.”
Nobel laureate Jean Tirole (1985, p. 1521) shows how bubbles
might arise within the overlapping-generations model, then con-
cludes: “I hope to have convinced the reader that in our current state
of knowledge we would be best advised to believe that bubbles are
not inconsistent with optimizing behavior and general equilibrium.”
Blanchard and Weil (2001) review various models involving Ponzi
debt games and conclude: “Ponzi games may be feasible. And if they
are, they may — but need not be — Pareto improving.” According to
some estimates, the bubble component of the S&P 500 index ranges
from 50% in 1988 to 67% in 2010 (Scherbina, 2013).

Economic constraints
There are two conditions that typically constrain the behavior of
rational agents in economic models: (1) transversality condition
(TVC), together with its mirror condition: no Ponzi game (NPG),
and (2) Walras’ law.
In simple terms, the TVC prevents overaccumulation of wealth
or savings (Kamihigashi, 2008). TVC requires that the present dis-
counted value of the agent’s consumption matches the present dis-
counted value of his savings. In other words, asymptotically, the agent
shall consume all his savings in the course of his (infinite) lifetime.
This is consistent with a rational, self-interested agent maximizing
his consumption. In this economy, a Ponzi debt scheme will not be
possible. For a Ponzi scheme to work, the borrower must roll
over his debt forever, and this requires one or more lenders to
accept holding assets unconsumed forever. With TVC, however, this
is not possible and, therefore, a Ponzi scheme cannot arise in such
an economy.
In an overlapping generations economy, in contrast, there are two
or more generations, each living a finite number of periods with an
infinite stream of generations. Young generations save in their early
life, while old ones dissave in their late periods. Although each agent
satisfies his own TVC, the economy as a whole, at any period of
time, will have aggregate excess savings due to the heterogeneity of
agents’ life cycles (Romer, 2011, pp. 588–589). In this economy, a
rational Ponzi scheme becomes possible, since an agent can borrow
from the young generation and pay the old, and keep doing so forever.
The Ponzi game is “rational” in the sense that it is constrained by
aggregate excess savings, and thus will not explode over time. Thus,
an agent running a Ponzi scheme is able to get something for noth-
ing as long as other agents or lenders satisfy their respective TVCs.
As O’Connell and Zeldes (1988, p. 437) point out: “Each individual
continues to satisfy his transversality condition, but the economy
of lenders as a whole does not satisfy any aggregate transversality
condition.” The absence of an aggregate TVC allows for excess aggre-
gate savings, which makes it rational for an agent to run a Ponzi
game.
Now consider the following scenario. Each agent would reason
that, as long as other agents satisfy their respective TVCs, there is an
opportunity to make money from nothing by running a Ponzi scheme
against the rest of the economy. However, if everyone behaves in this
manner, there will be no aggregate excess savings, and the economy
will be prone to exploding Ponzi schemes, making everyone worse off.
This is a classic example of the tragedy of the commons or n-person
prisoners’ dilemma game problem. This shows the paradoxical nature
of individualistic rationality in an interactive, complex environment.
A mirror image of the transversality condition is the no Ponzi
game (NPG) condition. NPG requires that the present discounted
value of the debt of an agent shall be asymptotically zero (Blan-
chard and Fischer, 1989, pp. 49–50). The agent running the Ponzi
scheme in an overlapping generations economy would satisfy the
NPG condition with respect to each individual lender; however, the
NPG does not hold for the aggregate for the same reasons that the
TVC doesn’t. The present value of the debt, in general, for the bor-
rower is strictly positive (O’Connell and Zeldes, 1992). Thus, an
agent running a Ponzi scheme is perfectly rational at the individ-
ual level, yet is able to get something for nothing at the aggregate
level.
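The NPG condition can be checked numerically (a minimal sketch under assumed constant rates; the numbers are illustrative): discount the outstanding debt b_T = b_0(1+g)^T at interest rate r, and the present value vanishes asymptotically only when the debt grows more slowly than the interest rate.

```python
def pv_of_debt(b0, g, r, T):
    """Present value of debt rolled over for T periods at growth rate g."""
    return b0 * (1 + g) ** T / (1 + r) ** T

# NPG holds when debt grows more slowly than the interest rate...
assert pv_of_debt(100, g=0.02, r=0.05, T=500) < 1e-3
# ...and fails (a Ponzi game) when the debt keeps pace with it
assert pv_of_debt(100, g=0.05, r=0.05, T=500) == 100
```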
Needless to say, in reality, the TVC conditions are frequently
violated. Yet models continue to assume that the conditions still
hold (Himino, 2010). Market players use models that assume ideal
conditions to hold, and this allows each player to deviate and play
a Ponzi scheme against the others. The way each player is using
the model, therefore, collectively invalidates the model itself. As the
former chief economist of the International Monetary Fund Raghuram Rajan (2010, p. 146) comments in the context of the behavior
of banks and traders before the crisis, “their own collective actions
precipitated the events they should have feared.” This invalidation
of models is another application of the well-known “Lucas Critique”
(Al-Suwailem, 2014).

Walras’ law
The second economic constraint is Walras’ law. The basic idea behind
this law is simple: in a market economy, for each purchase, there is
a sale and vice versa. Thus, for each demand there is supply, and
vice versa. It follows that whenever there is an excess demand in one
market, there is an excess supply in another, such that the value of
total aggregate excess demand has to be zero. Walras’ law captures
the interdependence between markets implied by the budgetary con-
straint that all individual agents must take into account when they
formulate purchase and sales plans. As Dixon (2008) points out, the
law assumes that “no individual transactor in a market economy is
so misguided as to suppose that he or she can acquire something
for nothing.” If this is the case, then total demand must equal total
supply, so that there can be no excess demand or excess supply in
the aggregate, regardless of whether individual markets clear or not.
Walras’ law is a major principle in general equilibrium models (see
Blanchard, 2000; Jehle and Reny, 2011).
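Walras' law is easy to verify mechanically (an illustrative sketch with made-up numbers): when every agent's spending equals the value of their endowment, the market value of aggregate excess demand sums to zero, even though the individual markets in this example do not clear.

```python
def aggregate_excess_value(prices, demands, endowments):
    """Market value of aggregate excess demand, p · z."""
    total = 0.0
    for x, e in zip(demands, endowments):
        total += sum(p * (xi - ei) for p, xi, ei in zip(prices, x, e))
    return total

p = [2.0, 1.0]
endow = [(3.0, 4.0), (1.0, 6.0)]
# each agent's budget binds: p·x = p·e (10 = 10 and 8 = 8)
dem = [(5.0, 0.0), (0.0, 8.0)]
# good 1 is in excess demand (+1), good 2 in excess supply (-2),
# yet the value of aggregate excess demand is exactly zero
assert abs(aggregate_excess_value(p, dem, endow)) < 1e-9
```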
Since the law requires that no one can get something for noth-
ing, it is obvious that Ponzi schemes are inconsistent with the law.
It is not difficult to see why: In a Ponzi debt game, for example,
the budget constraint fails to bind (Romer, 2011, p. 589). Walras’
law, therefore, need not hold. Although “rational” Ponzi schemes
are constrained by aggregate excess savings, such schemes are frag-
ile and can easily transform into irrational, exploding schemes due
to the fallacy of composition, as discussed earlier. With a growing
Ponzi scheme, supply (of bonds) is endogenously created ex nihilo.
Since this supply is decoupled from the rest of the economy, there
need not be a corresponding demand in other markets. Not only
does this create a persistent excess supply in the Ponzi debt mar-
ket, but it might also lead to an excess aggregate supply since
real resources are diverted from other markets to the Ponzi debt
market.
Again, models typically assume that Walras’ law holds, even
those that attempt to integrate the financial sector (e.g., Brunner-
meier and Sannikov, 2012). It is puzzling why economic models would
assume that Walras’ law holds, and thus no one could get something
for nothing, when the user of the model is using it to place zero-
sum bets that are inconsistent with the law. One interpretation is
that Walras’ law was laid down by the founding fathers of economics, who
were concerned primarily with the real wealth of nations. However,
modern economics moved from the real economy to the imaginary
world of financial speculation, which, by nature, invalidates the law
and other principles of economics. It is a self-defeating strategy as
we have seen earlier.
Since these economic models fail, by design, to detect bubbles
and Ponzi schemes, they encourage speculative behavior that leads
to these same ills that the model fails to identify. The user of the
model will feel safe to place larger bets and finance larger specu-
lative positions since there seems to be no bubble and all markets
are in equilibrium — until the crash hits. Such a model is not far
from a thermometer that cannot read temperatures beyond 37.5°C.
The patient, despite the pain, seems in good condition, and thus
there is no need for special attention or treatment. The patient then
“suddenly” perishes or becomes paralyzed. Doctors are perplexed
with this “black swan” event that they were unable to foresee. With
such a flawed instrument, it is obviously impossible to foresee the sad
outcome. However, the fact remains that the signals were there, but
the instrument failed to detect them. This failure is not accidental or
beyond control; it is a direct result of flawed design and poor engi-
neering. Bad models, like bad instruments, contribute to the disaster.
Bouchaud (2008, p. 1181) has a point when he remarks that “it was
the very use of a crash-free model that helped to trigger a crash.”

Alchemy of finance
Failure of mainstream economics to impose conservation laws opened
the door widely to all kinds of “financial alchemies”: financial con-
tracts derived from other financial contracts derived from — without
limit. Financial markets became, using the words of Keynes (1936,
p. 159) and Nobel laureate Maurice Allais (1989), gigantic casinos
of betting on betting on betting on economic activities. Financial
activities are no longer anchored in the real economy. Former chairman of the Federal Reserve System, Paul Volcker, in an interview,
reports that he was attending a business conference, where he found
himself sitting next to one of the inventors of financial engineering
who won a Nobel Prize. Volcker recalls (Murry, 2009):

I nudged him and asked what all the financial engineering does
for the economy and what it does for productivity? Much to my
surprise, he leaned over and whispered in my ear that it does noth-
ing — and this was from a leader in the world of financial engi-
neering. I asked him what it did do, and he said that it moves
around the rents in the financial system — and besides, it’s a lot
of intellectual fun.

When investor George Soros (2003), Nobel laureate Joseph
Stiglitz (2008), and former governor of the Bank of England Mervyn
King (2016), among many others, talk about “financial alchemy,”
they refer to a serious deficiency in mainstream economics, not simply
a metaphor or an exaggeration. Economic theory appears incapable
of avoiding this zero-sum, rent-seeking alchemy. Samuelson (1967; in
Burmeister, 1980, pp. 266–267) points to this problem:

One feels that the real world of tools, plants, and inventory con-
trasts with the purely financial dream world of indefinite group
self-fulfillment. But can this feeling be given documentation and
plausible explication?

It is disappointing that mainstream economics fails to distinguish
between real productive activities and pure side-betting and out-
guessing games. It is hoped that, if we are to learn from modern
science and mathematics, we can identify logical paradoxes that lead
to the alchemy of perpetual motion.

Paradox of bubbles and Ponzi schemes


Earlier, we argued that the conservation law is in line with a logical
imperative: the LNC. Contradiction, as we have seen, translates into
a zero-sum game. More accurately, a statement 1 = 0 represents a
2-person zero-sum game, where each side of the equation represents
one party to the transaction. A Ponzi scheme is an n-person zero-sum
game (see Allen and Gorton, 2012, pp. 14, 23; Al-Suwailem, 2014).
It is not difficult to see how a 2-person game can transform into an
n-person game: In a competitive market, the loser in a 2-person zero-
sum game, if unable to retaliate, has to shift his losses to someone
else. Shifting this loss to a third player transforms the game into an
n-person game.
There seems to be a counterpart of logical contradiction to
n-person zero-sum games: Yablo’s paradox (Yanofsky, 2013, pp. 24–
26). In this paradox, we have an infinite sequence of statements:
p(1): Statement p(i) is false for all i > 1
p(2): Statement p(i) is false for all i > 2
...
p(n): Statement p(i) is false for all i > n.
Suppose statement p(1) is true. Then statements p(2) onward are
false. But p(2) says that statements p(3) onward are false. If p(3)
onward are false, which is what p(2) claims, then p(2) must be true.
But this contradicts the assertion of p(1) that p(2) is false. So if p(1)
is true, we get a contradiction with respect to p(2). The same applies
to any set of statements p(n). There is a debate in the literature on
whether Yablo’s paradox is circular or not (see Priest, 1997; Beall,
2001; Cook, 2014). But it is obvious that the system as a whole is
paradoxical, since each sentence is both true and false, which is a
contradiction.
Ponzi schemes are naturally finite but have the potential, in the-
ory, to be infinite. Nonetheless, at any moment in time, there is a
finite number of players, and the last one is a loser. So, we might
modify the Yablo’s paradox to fit these properties as follows:
p(1): This statement is false unless the following statement is false
p(2): This statement is false unless the following statement is false
p(3): This statement is false unless the following statement is false
...
Each statement represents the payoff of each player in the Ponzi
scheme according to their sequence in the game. Each player is a
loser unless there is a subsequent player (a “greater fool”), in which
case the previous player is a winner, and thus the statement is true.
At any moment in time, there is a finite number of players. If player 3
is the last player at a particular period, then he is a loser. Statement
p(3), therefore, is false because there is no false statement that follows
it, and thus, no “greater fool” to bear the loss. But if p(3) is false,
then it is true, because that’s what it says. So p(3) is paradoxical,
and therefore, all previous statements will be paradoxical as well.
A Ponzi scheme, therefore, is paradoxical: each player aims to win
while the scheme cannot allow all to win. Such paradoxes show
that neoclassical theory is internally inconsistent. This inconsistency leads to unsustainable perpetual motion and recurrent bubbles
and crashes.
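The paradox in the modified, finite scheme can be checked by brute force (an illustrative sketch): encode p(k) as "true iff the next statement is false," with the last statement, lacking a successor (no "greater fool"), forced to assert its own falsity. No assignment of truth values is consistent, for any finite length.

```python
from itertools import product

def consistent(assignment):
    """p(k) is true iff p(k+1) is false; the last statement, with no
    successor to bear the loss, must assert its own falsity (the liar)."""
    for k in range(len(assignment) - 1):
        if assignment[k] != (not assignment[k + 1]):
            return False
    return assignment[-1] == (not assignment[-1])   # never satisfiable

# no consistent way to assign winners and losers, at any finite length
assert not any(consistent(a) for n in range(1, 8)
               for a in product([True, False], repeat=n))
```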

Truth value vs. economic value


Another way to look at bubbles and Ponzi schemes is that the bubble
confers value on an asset that otherwise has no “intrinsic” or “fundamental” economic value, the latter being the discounted stream of income
generated by the asset. Income, in turn, is generated from real business activities (e.g., production and trade) performed outside the
financial market. In this manner, the value of the asset is deter-
mined exogenously, that is, outside the domain of financial trading.
Although business activities involve a series of interdependent trans-
actions (e.g., producers sell to wholesalers, who then sell to distrib-
utors, who then sell to consumers) the chain must be finite, and it
depends ultimately on the use-value of the commodity for the end-
user. The process, therefore, is self-limiting (Daly and Farley, 2011,
p. 287).
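Fundamental value, as described here, is a finite, exogenously anchored computation (a minimal sketch with illustrative numbers): discount the finite income stream and sum; the result agrees with the closed-form annuity formula.

```python
def fundamental_value(dividends, r):
    """Discounted stream of income generated outside the financial market."""
    return sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))

v = fundamental_value([10.0] * 3, r=0.05)
annuity = 10.0 * (1 - 1.05 ** -3) / 0.05   # closed form for a level stream
assert abs(v - annuity) < 1e-9
```

A bubble, by contrast, has no such terminating computation: each valuation defers to the next buyer's valuation, so the recursion never bottoms out in use-value.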
In a bubble, in contrast, the value of the asset is derived from
valuations of other players, who base their valuations on other play-
ers’ valuations ad infinitum. So we end up with a circular reference
of value. With undefined economic value, markets become prone to
sudden shifts in expectations or “animal spirits” without an objec-
tive change in economic fundamentals. “Sunspot equilibrium” refers
to cases in which the market is driven by future expectations rather
than the fundamentals, and for which “sunspots”, that is, irrelevant
factors, might shift the market and affect its outcomes (Shell, 2008;
Guesnerie, 2001). Bank runs, for example, are particularly prone
to sunspots. As is the case with bubbles, sunspot equilibria arise
in neoclassical, rational expectations environments. David Bowman
and Jon Faust (1997) argue that introducing derivatives (options)
May 11, 2017 17:46 The Limits of Mathematical Modeling in the Social Sciences 9in x 6in b2809-ch07 page 239

Is Economics a Science? A Gödelian Perspective 239

to an otherwise complete market leads to sunspot equilibria and
therefore market incompleteness. State-contingent securities (Arrow-
securities) provide insurance against exogenous events; however,
options provide insurance against prices. Prices are individually
exogenous, but collectively endogenous; options are therefore price-
contingent. This creates additional uncertainty that was absent from
the economy before the introduction of the derivatives (see also Kahn
and Krasa, 1993).
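The circular reference of value described above can be made concrete with a toy iteration (an illustrative sketch of our own; the number of traders, the numeric values, and the averaging rule are assumptions, not taken from the chapter): when each trader values the asset purely as a multiple of the other traders' average valuation, the settled price level is indeterminate, whereas anchoring valuations to an exogenous dividend stream pins it down.

```python
# Toy sketch (ours, not the chapter's model): each trader revalues the
# asset as dividend + beta * (average of the other traders' valuations).

def iterate_valuations(v, beta, dividend, steps=200):
    """Repeatedly apply the valuation rule and return the final valuations."""
    n = len(v)
    for _ in range(steps):
        total = sum(v)
        v = [dividend + beta * (total - v[i]) / (n - 1) for i in range(n)]
    return v

# Pure circular reference (no income anchor): the level is indeterminate.
# Scaling everyone's initial beliefs by 10 scales the settled price by 10.
print(iterate_valuations([1.0, 2.0, 3.0], beta=1.0, dividend=0.0))    # ~[2, 2, 2]
print(iterate_valuations([10.0, 20.0, 30.0], beta=1.0, dividend=0.0)) # ~[20, 20, 20]

# Anchored valuation: with a dividend and discounting (beta < 1), every
# starting point converges to the discounted income stream d / (1 - beta).
print(iterate_valuations([1.0, 2.0, 3.0], beta=0.9, dividend=1.0))    # ~[10, 10, 10]
print(iterate_valuations([10.0, 20.0, 30.0], beta=0.9, dividend=1.0)) # ~[10, 10, 10]
```

Without the exogenous dividend, the valuation map has a whole line of fixed points (any common value satisfies it), which is exactly the "undefined economic value" described in the text; with the dividend, the fixed point is unique.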
Experimental evidence seems to support the above line of reason-
ing. The work of Nobel laureate Vernon Smith and his collaborators
shows a stark difference between markets for consumption goods and
markets for assets. Markets for consumption goods, where goods are
traded to be consumed, not to be re-traded, are found to be in line
with predictions of economic theory. In asset markets, in contrast,
the asset can be resold, and thus its value depends on expectations of
others’ expectations, etc. In these experiments, substantial bubbles
were generated during trading sessions, even for experienced traders
(see Gjerstad and Smith, 2014, Chap. 2). Hommes (2013, Chap. 8)
provides experimental evidence of substantial deviations of expecta-
tions from the fundamental value of speculative assets, in contrast
to nonstorable commodities.
Logically, the truthfulness of statements in a formal system (rich
in arithmetic) cannot be defined within the system itself, as has been
argued by Tarski (1933).6 The reason is intuitively simple: The sen-
tences defining the truth value of the system have to have their own
truth value defined, ad infinitum. Truth, therefore, has to be defined outside the
system (see Smullyan, 1992, pp. 9, 27; Berto, 2009, pp. 153–157;
Gómez-Torrente, 2015).
When an investor invests in an asset, the investment decision is
determined by a property p. If p is dependent on the expectations
of other agents, who in turn base their decisions on expectations of
other agents and so on, the value of the asset becomes undefined,

6
In fact, arithmetic truth is not formalizable in any elementary formal system.
See Smullyan (2013, Chap. XIX).
240 The Limits of Mathematical Modeling in the Social Sciences

leading to a contradiction. This contradiction, as we have seen ear-
lier, leads to violation of the conservation law. Thus, we can see how
economic value, like truth value, has to be properly defined outside
the system, that is, outside the market. As Hommes (2013, p. 5)
rightly points out, one important difference between science and eco-
nomics is that the behavior of humans is largely influenced by their
expectations, something that is absent in the case of natural materials. Unless
expectations are suitably anchored in the real world, they might eas-
ily transform into self-perpetuating dreams, to borrow Samuelson’s
words. These dreams, however, must ultimately come down to real-
ity, and the costs for the evaporation of unrealistic dreams might be
substantial.

Conservation laws and stability


Theoretical physicist Michio Kaku (2008, p. 267) recalls how, one
day while in graduate school, he was left speechless when he finally
learned the true origin of the conservation of energy. In 1918, math-
ematician Emmy Noether discovered that whenever a system pos-
sesses a symmetry, this implies the existence of a conservation law,
and vice versa (Lederman and Hill, 2004). Symmetry over time
means that the laws of physics are the same now as they were in
the past, and they will be the same in the future; they are time-
invariant. Time-symmetry implies conservation of energy. Space sym-
metry means that the laws are the same on Earth as on the moon
as on Jupiter. Space symmetry implies conservation of momentum.
The relationship between symmetry and conservation is “the single
most profound result in all of physics,” says Nobel laureate Frank
Wilczek (2015, p. 280).
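For the time-translation case, the content of Noether's theorem can be stated compactly (a standard textbook formulation, added here for reference; it is not the chapter's notation):

```latex
% Time-translation symmetry: if the Lagrangian L(q_i, \dot q_i, t) has no
% explicit dependence on t, the energy function E is conserved.
E = \sum_i \dot q_i \,\frac{\partial L}{\partial \dot q_i} - L,
\qquad
\frac{dE}{dt} = -\,\frac{\partial L}{\partial t} = 0
\quad \text{along solutions of the equations of motion.}
```

The conserved quantity E exists precisely because shifting the clock leaves the dynamics unchanged; this is the sense in which time symmetry and conservation of energy are two sides of one statement.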
What that means from an economic point of view is that stability
implies conservation and vice versa. Hence, if we allow Ponzi schemes,
we must end up with instability. We don’t need Noether’s theorem
to know that bubbles lead to crashes, but we do need it to know
that without conservation principles, markets will be systematically
unstable. Neoclassical theory, as we have seen, fails to adopt such
principles. As such, it is clear that the theory is unable to prove the
stability of a neoclassical economy. And what we observe in reality
is predominantly consistent with this result. Econophysicists Didier
Sornette and Peter Cauwels (2014) argue that the repeated bubbles
and crashes over the last 30 years reflect a shift from a regime of real-
productivity growth toward a growth regime based on the illusion of
“perpetual money machine”. The authors highlight the difference
between the two domains, economics and science, with respect to
perpetual motion (p. 104):

While it is true that, like energy, wealth cannot be created out
of thin air, there is a fundamental difference: whereas the belief
of some marginal scientists in a perpetual motion machine had
essentially no impact, its financial equivalent has been the hidden
cause behind the current economic impasse.

Let us move a step further and ask the following question: Is there
any “benefit” from having an unstable system? Mathematician Ian
Stewart (2013, p. 313) provides an answer:

Engineers know that the way to get a rapid response is to use an
unstable system: stability by definition indicates an innate resis-
tance to change, whereas a quick response requires the opposite. So
the quest for ever greater profits has caused an ever more unstable
financial system to evolve.

And, in fact, the literature on endogenous business cycles shows that
global stability of an economic system requires a small discount rate,
that is, that agents should be patient (Benhabib, 1992, p. 206). The
urge for quick profits, therefore, might be responsible, at least in part,
for the increasing instability of the system, and for the willingness
to accept this state of affairs despite great losses to society’s welfare.

6. Perpetual growth
Long-term growth is necessary for improving living standards and
the quality of life. It is the engine for development and material
prosperity. These are facts that we can observe and verify over the
past 200 years in many parts of the world.

According to standard neoclassical theory (e.g., Barro and Sala-
i-Martin, 2003, Chap. 1), diminishing marginal returns imply that
the economy should converge to steady-state growth, whereby all
economic variables (consumption, investment, etc.) grow at constant
rates, which need not be zero. Even at the steady state, the econ-
omy could have positive growth rates. But for how long? The theory
puts no limits on how long economic growth can last. In princi-
ple, it could go on forever. This is where science and economics
diverge.
From a scientific point of view, if perpetual motion is impossible,
so is perpetual growth and even more clearly so. Steady-state growth
means that the economy is growing exponentially forever. A perpet-
ual motion machine is impossible even if it were of a fixed size and
consumed a constant amount of energy per unit time. If the machine
is growing, it will consume an increasing amount of energy per unit of
time and, thus, its impossibility becomes even more obvious. It is not
surprising that scientists were unhappy with this outcome, and con-
cluded that “economics is too important to leave to the economists”
(Hall et al., 2001). In 1972, a group of scientists at MIT published a
report on the “Limits to Growth.” The report argued that continued
economic growth based on the then prevailing patterns would collide
with the Earth’s finite resources, eventually leading to
overshoot and collapse in the 21st century.
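The arithmetic behind the report's conclusion can be sketched with illustrative numbers (ours, not the report's): against exponentially growing consumption, no finite resource stock buys much time, however large it is.

```python
import math

# Sketch with assumed numbers (ours, not the report's): consumption that
# starts at c0 per year and grows at rate g exhausts any finite stock S
# in finite time, because cumulative demand c0*((1+g)**t - 1)/g grows
# exponentially in t.

def years_to_exhaust(stock, c0, g):
    """Smallest whole number of years t with c0*((1+g)**t - 1)/g >= stock."""
    return math.ceil(math.log(1 + g * stock / c0) / math.log(1 + g))

# A stock worth 1,000 years of current consumption lasts about 154 years
# at 2% growth; a tenfold larger stock adds only about 114 more years.
print(years_to_exhaust(stock=1000.0, c0=1.0, g=0.02))    # 154
print(years_to_exhaust(stock=10000.0, c0=1.0, g=0.02))   # 268
```

The striking feature is the logarithm: multiplying the stock by ten does not multiply the time by ten, which is why steady exponential growth collides with any finite resource base.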
Economists were not very enthusiastic about the report (Nord-
haus, 1992). Nonetheless, over time, it became clear that the limits
to growth are not only due to limited resources; more importantly, they
stem from the limited capacity of the environment to absorb the increasing
industrial waste and carbon dioxide emissions that threaten to change
the climate of the entire globe.
In 2012, during the United Nations’ Earth Summit in Rio de
Janeiro, world leaders took stock of all existing research and ini-
tiatives taken over the interim four-decade period since the first
warning in 1972. According to the director of the Earth Institute at
Columbia University, Jeffrey Sachs, the evidence showed that the
diagnosis first made in 1972 was fundamentally correct (Sachs, 2015,
p. 481). The world economy is threatening Earth itself. We are
on a collision course with the environment. According to Sachs
(pp. 34, 40):

. . . humanity has become a serious threat to its own future wellbe-
ing, and perhaps, even its own survival, as the result of unprece-
dented human-caused harm to natural environment.
. . . all of our civilization — the location of our cities, the crops we
grow, and the technologies that run our industry — is based on
a climate pattern that will soon disappear from the planet. The
Earth will become much warmer than it has been during the entire
period of civilization.

Nicholas Stern, chair of the Grantham Research Institute and
economist at the London School of Economics, writes: “We are the
first generation that through its neglect could destroy the relation-
ship between humans and the planet, and perhaps the last generation
that can prevent dangerous climate change” (Stern, 2015, p. xxvii).
Because of the environmental threats, the United Nations
adopted Sustainable Development Goals (SDGs), which include the
goal: “Achieve economic development within planetary boundaries”
(Sachs, 2015, p. 486). Mainstream economics pays little attention
to “planetary boundaries.” Many economists are still debating
whether the limits are “real” enough to take into account. The gap between
economics and science is still substantial.
One major reason why neoclassical theory fails to take into
account the limits to growth is that these limits are “external” to
the market. Externalities, in general, are not reflected in the market,
and that’s why they are called “externalities.” Climate change is “the
mother of all externalities” (Tol, 2009, p. 29). Externalities show the
inherent limits of markets. The market performs well in allocating
goods and services traded within the market. But, in general, it is not
able to properly allocate resources from the environment to itself. As
Daly (1996, p. 31) explains,

The market cannot by itself register the cost of its own increasing
scale relative to the ecosystem. Market prices measure the scarcity
of individual resources relative to each other. Prices do not measure
absolute scarcity of resources in general.

In Nordhaus’s (2013, p. 18) words: “Markets do not automatically
solve the problems they generate.” The market, therefore, might be able
to decide the relative prices of goods and services offered in it, but
cannot decide its own price relative to the environment. That price has to
be decided outside the system. This sounds familiar by now. As we
have seen, Gödel’s theorem, and other theorems in mathematical
logic, show that the truthfulness and consistency of a system cannot
be decided within the system. This shows why we need institutions
to foster and nurture healthy markets. The assumption that markets
are, in principle, able to solve (almost) all of their own problems is
inconsistent with modern mathematical logic. As we have seen, the
contradiction involved ultimately leads to violation of the conservation
law. The resulting system, therefore, is unsustainable.

Perils of “risk-free” rate


Perpetual growth leads naturally to the concept of “risk-free” rate
of return. If the economy can grow in steady state at a constant rate
forever, this rate can be used as a benchmark for risk-free return.
This is clearly a theoretical concept. In practice, though, the rate on
government’s bonds is used to approximate the risk-free interest or
“pure rate of interest” (see Damodaran, 2008). The implied assump-
tion is that, since the government has control over taxes, it can meet
its obligations from the economy’s growth. Accordingly, “risk-free”
return usually reflects the growth rate of the economy (see Lilico and
Ficco, 2012, and references therein).
There are many practical issues involved in the above reasoning,
but we shall not bother ourselves with them here. Economic the-
ory is concerned with an ideal, friction-free world; frictions can be
addressed at the implementation level. So we ask the following basic
question: In a friction-free world, is the concept of “risk-free” return
possible in principle? We argue that it is not, for the simple reason
that perpetual motion is impossible.

As a simple example, suppose that the economy is growing at a
constant rate r. The government needs to borrow k and has to repay
by the end of the period (1 + r)k. In a neoclassical world, there is no
money; so k is real capital. k is transformed into ordinary goods and
services g, which is then transformed back upon maturity into k plus
interest. Naturally, without additional resources, k cannot be fully
recycled back into k; the second law of thermodynamics prevents
this possibility. But let us put the second law on hold for now, and
assume instead that we are in a completely frictionless, zero entropy
world.
Even in this ideal, hypothetical world, by the conservation law,
there is no technology f that can generate from k more than k
without additional energy or resources; that is, (1 + r)k = f(k) is
impossible. In order to generate the additional interest payment rk,
the government has to have additional resources, s > 0, such that
(1 + r)k = f(k, s). Unless s is infinite, s will be consumed in a finite
time, and by then the government will not be able to pay rk. Thus r
cannot be risk-free.
But there is something more problematic here. We assumed that
the economy grows at rate r. Since resource s is consumed to produce
rk, the economy cannot be growing at rate r: The economy must give
up something in order to generate the additional rk, and by doing so,
the economy would not be able to grow at r. Put differently, if the
economy consists only of k and s, then if the economy is growing at
r, it must at the end of the period grow by r(k + s). However, by the
end of the period, the economy grows only by rk since s is consumed
to generate the interest payment. The economy, therefore, must be
growing at a lower rate r′ = rk/(k + s) < r. But this means that r
cannot be risk-free since it is not sustainable.
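The bookkeeping in this argument can be checked numerically (the figures below are assumptions chosen for illustration; only the symbols r, k, and s come from the text):

```python
# Numeric check of the argument above; the figures are illustrative
# assumptions. An economy holding capital k and a reserve s, where s is
# consumed to produce the interest payment r*k, cannot actually be
# growing at the promised rate r.

k = 100.0   # borrowed capital
s = 25.0    # additional resources, consumed to generate the interest
r = 0.05    # the promised "risk-free" rate

interest = r * k                 # what must be paid at maturity: r*k
required_gain = r * (k + s)      # gain needed for the economy to grow at r
realized_gain = interest         # only r*k is produced, and s is used up

r_prime = realized_gain / (k + s)   # realized growth rate rk/(k + s)

print(required_gain, realized_gain, r_prime)   # 6.25 5.0 0.04
assert r_prime < r   # the promised rate overstates actual growth
```

Whatever values are chosen, as long as s > 0 the realized rate rk/(k + s) falls short of r, which is the sense in which the "risk-free" rate is unsustainable.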
The above discussion might be elaborated by what is known as
Stevin’s principle, after the polymath Simon Stevin, a contemporary
of Galileo, who was one of the early scientists to recognize the impos-
sibility of perpetual motion. The principle states that a frictionless
machine cannot produce net work unless it undergoes net change. If
the initial and final states of the machine are indistinguishable, the
machine cannot produce work without external energy; otherwise, it
is a perpetual motion machine (PMM) (Schumacher, 2010, Lecture 3;
Simanek, 2012). To generate a riskless return, the capital must at the end of the period be
in the same state at which it started. But this is impossible without
external energy — capital cannot generate work without undergoing
change. If it can, it becomes a PMM.
From the above discussion, we can see why the economy as a
whole cannot have a perpetual, and therefore riskless, growth. The
economy simply cannot make something out of nothing. To grow,
it must rely on additional resources, and the process of finding new
resources and innovating new technologies to economize on existing
ones cannot be risk-free. We cannot avoid the fact that growth is
risky, and that “risk-free” return is inconsistent with the universe in
which the conservation law prevails.
To link this result with that of the previous section, we note that
if the economy is growing over time at a riskless rate, this means that
it is growing simply by virtue of the mere passage of time. If this is
the case, then the system is not time-invariant, that is, the system
is not preserving its identity over time, which violates the LI. The
breach of the LI necessarily implies the breach of the LNC, since the
two are equivalent, as discussed earlier. It follows that riskless growth
breaches the conservation law, thus admitting perpetual motion.
Furthermore, from the previous section, we find that:

consistency ⇐⇒ conservation of energy ⇐⇒ time symmetry.

This means that neoclassical theory breaches the conservation law
from both directions: The theory admits paradoxical zero-sum games,
which make it inconsistent. This leads to violation of the conservation
law. On the other hand, the theory admits perpetual growth, by
which the system fails to be time-invariant. By Noether’s theorem,
time symmetry means that the conserved energy is time-independent
(Rosen, 2010, p. 65). But, if the system grows by the mere passage
of time, this means that energy is not conserved.

Is risk quantifiable?
Is it possible to quantify the risks facing a growing economy? Can
we systematically compute the distribution of resources, output, and
returns, so that we can calculate the likelihood and magnitude of
economic expansions and downturns?
Based on the work of da Costa and Doria (2005), among others,
we argue that, in principle, this is not possible (see Al-Suwailem et al.,
2016). Even if we have the most comprehensive mathematical model
of the economy with the most comprehensive and accurate data and
the most powerful computers, Gödel’s incompleteness theorem
shows that there are fundamental limits to our ability to predict
the future and therefore to quantify the risks ahead. There is no
systematic way to compute and quantify risk.
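The computability barrier invoked here can be illustrated with the standard diagonal argument (a textbook sketch of our own, not da Costa and Doria's construction): any candidate "predictor" of program behavior can be defeated by a program that consults the predictor about itself and does the opposite.

```python
# Classic diagonalization sketch (the standard halting-problem argument,
# our illustration -- not da Costa and Doria's construction): no total,
# always-correct predictor of program behavior can exist.

def make_contrarian(predicts_halts):
    """Build a program that defeats the given halting predictor."""
    def contrarian():
        if predicts_halts(contrarian):  # ask the predictor about ourselves
            while True:                 # predicted to halt? loop forever
                pass
        return "halted"                 # predicted to loop? halt at once
    return contrarian

# Whatever a predictor answers on its own contrarian, it is wrong:
pessimist = lambda program: False       # claims nothing halts
print(make_contrarian(pessimist)())     # prints "halted", refuting it
# (Calling make_contrarian(lambda p: True)() would loop forever,
# refuting the optimistic predictor -- we do not run that case here.)
```

Since no mechanical predictor survives this self-reference, there can be no general algorithm that computes, for every economic model, the behavior (and hence the risk) it will generate.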
Another way to see this result is that, for a growing economy,
if we were able to quantify systematically the risks facing the econ-
omy, then, in a complete world akin to that of Arrow (1964), we
could design a perfect insurance system to hedge these risks. This would
allow the economy to achieve perpetual growth and generate riskless
returns. But the latter is impossible. It follows that risk cannot be
fully quantified. We will not be able to build an insurance system that
fully neutralizes risk. Risk is inevitable.

Risk–return relationship
A well-documented economic phenomenon is the positive correlation
between risk and returns. “The tradeoff between risk and expected
return is the most fundamental notion in modern finance” (Poitras,
2010, p. 44). The phenomenon might be explained based on the laws
of thermodynamics. We have seen that the first law, the conser-
vation law, implies that growth cannot continue forever, and thus,
there is always a risk emanating from the impossibility of perpetual
motion.
If we take the second law into account, then there is another
source of risk: the wear and tear of capital and physical systems.
The second law states that it is impossible in real-world systems to
transform energy entirely to work; some of it will be transformed into
heat and useless energy. Put differently, efficiency can never reach
100% in real-world processes. There will always be a leakage, called
entropy. Entropy, the measure of useless energy, always tends to rise
in any real-world process of energy transformation.
Faster processes, however, tend to produce more entropy per unit
of energy per unit of time than slower processes, other things being
equal. Hypothetically, an infinitely slow process produces no entropy.
Such processes are perfectly reversible. Real-life processes produce
entropy and are, therefore, irreversible. The faster the process is, the
higher the entropy it produces per unit of energy per unit of time
(Handscombe and Patterson, 2004; Schmitz, 2007). “Haste makes
waste” is an age-old wisdom that probably reflects the nature of
entropy (Rifkin, 2011, p. 207).
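The speed-waste tradeoff has a textbook electrical analogue, sketched here with assumed numbers (our illustration, not the cited authors'): pushing a fixed charge through a resistor dissipates energy inversely proportional to the time taken, so the faster the transfer, the greater the waste.

```python
# Textbook electrical analogue (our illustration, with assumed numbers):
# moving a fixed charge Q through a resistor R in time t requires current
# I = Q/t and dissipates E = I**2 * R * t = Q**2 * R / t, so the faster
# the transfer, the more energy is wasted as heat.

def dissipated_energy(charge, resistance, duration):
    current = charge / duration
    return current ** 2 * resistance * duration

slow = dissipated_energy(charge=1.0, resistance=10.0, duration=100.0)  # ~0.1
fast = dissipated_energy(charge=1.0, resistance=10.0, duration=1.0)    # 10.0

assert fast > slow   # 100x faster transfer wastes ~100x the energy
# As duration grows without bound, dissipation tends to zero: the
# idealized, infinitely slow, reversible limit mentioned above.
```

The same Q²R/t form captures the text's point: only in the infinitely slow limit is the process reversible, and any real, finite-speed process pays an entropy cost that grows with its speed.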
From an economic point of view, faster production implies lower
efficiency. Also, with fast production, a larger proportion of capital is
diverted toward activities that are not directly related to production
but rather to the delivery of the product in a shorter period. Hence,
for a given amount of capital, there will be a higher chance that
output will not be sufficient to recover the capital and the expected
rate of return. The well-established risk–return relationship might
therefore very well originate from the laws of thermodynamics.
If entropy correlates with risk, then there is an additional reason
why we cannot have a riskless system. The third law of thermody-
namics states that we can never reach absolute zero. This means that
entropy of a system cannot be zero. Although we may hypothetically
have a zero-entropy process, that is, a process that produces zero
entropy, the total entropy of the system nonetheless will not be zero.
Since entropy correlates with risk, the third law clearly prohibits
having a zero-risk system.
If we take into account that entropy indicates missing informa-
tion (Pierce, 1980; Schumacher, 2015), then a zero-entropy system
does not have any potential for new information whatsoever. All the
information has been utilized. The third law prevents this outcome.
Unfortunately, neoclassical theory assumes agents (and economists)
to have complete and full information of the system. A riskless rate
or price carries no new information. It is only in the presence of risk
that prices provide new information, and thus, contribute to market
efficiency. New information means surprises, and in a riskless system,
there are no surprises. A riskless system is a stagnant system, and
such a system, by definition, cannot grow. Wealth creation and eco-
nomic growth necessarily require information inflow, and therefore,
risk. As econophysicist Sornette (2012) remarks7:

Risk is life, life is risk. Only death corresponds to sustainable equi-


librium. Any interesting out-of-equilibrium system (biological, eco-
logical, societal, economic and so on) is prone to downside as well as
upside risks, the non-certain outcomes that give rise to disruptions
as well as opportunities.

7. Synopsis
We have so far presented two sets of characteristics of science: (1)
explanation and prediction; and (2) conservation laws. These two
sets are interconnected. As already discussed, failure to satisfy con-
servation of energy implies the failure of the system to be time-
invariant, and thus the system becomes inherently unstable. Without
the framework provided by the principles of invariance and conser-
vation, crashes and crises seem unexplainable, let alone predictable.
Conservation laws, therefore, provide us with a framework that
helps us explain and predict, qualitatively at least, major economic
events.
As has been argued, there seems to be a deep connection between
conservation laws in science and consistency in formal systems. To
get something out of nothing is equivalent to having 1 = 0, a
contradiction that renders the system inconsistent. Consistency in
mathematics and conservation in physics, therefore, seem to have a
common thread. As Richard Feynman (1965, p. 28) writes: “Nature
uses only the longest threads to weave her patterns, so each small
piece of her fabric reveals the organization of the entire tapestry.”

7
www.er.ethz.ch.

If we follow the thread of consistency and invariance to economics,
there would be a lot to learn that would help to make economics a
true science.
The heavy use of mathematics in economics is not sufficient to
make it a science. Historically, astrology used to involve heavy
mathematics (Levinovitz, 2016). But that didn’t make it a science. The
mathematics employed in astrology, while valid in itself, was applied in
an un-scientific manner and for improper objectives. Stewart (2012,
p. 314) asks the following question on the role of Black and Scholes
option pricing formula in the financial crisis: “Was an equation to
blame for the financial crash, then?” He answers:

An equation is a tool, and like any tool, it has to be wielded by
someone who knows how to use it, and for the right purpose. The
Black–Scholes equation may have contributed to the crash, but
only because it was abused. . . . The blame for the failure of tools
should rest with those who are responsible for its use.

Economics as a discipline was developed to explain the wealth of
nations. The main concern of leading economists before the World
War II was production and distribution of real goods and services.
Subsequent mathematical models were developed, by Arrow and
Debreu, among others, with this objective in mind. Gradually, how-
ever, these models were applied to zero-sum financial bets. The
two domains assume inconsistent paradigms, as the experiments of
Vernon Smith and Cars Hommes indicate. Further, this misappli-
cation of models is self-defeating. As we have seen, models assume
ideal conditions, like the transversality condition and Walras’ law.
This invites market players seeking above-average returns to take
advantage of the rest of the market and deviate from the group.
When many market players behave in this manner, not only models
break down, but the agents’ strategies become unsustainable, leading
to market turbulence and instability. There is a serious gap between
models and the way they are applied by selfish agents in a competi-
tive game. Unfortunately, economic theory fails to recognize this gap,
as most clearly reflected in rational expectations (see Frydman and
Goldberg, 2011).
Another example of misuse of models relates to the Efficient
Market Hypothesis. The hypothesis, at best, is applicable at the
microlevel, that is, at the individual-investor level. But it fails for the
market as a whole. This is known as “Samuelson’s dictum.” Accord-
ing to Paul Samuelson (Shiller, 2001/2015, pp. 256, 271; emphasis in
original):
. . . the market is micro efficient but macro inefficient. That is, indi-
vidual stock price variations are dominated by actual new infor-
mation about subsequent dividends, but aggregate stock market
variations are dominated by bubbles.
Modern markets show considerable micro efficiency . . . In no con-
tradiction to the previous sentence, I had hypothesized considerable
macro inefficiency, in the sense of long waves in the time series of
aggregate indexes of security prices below and above various defi-
nitions of fundamental values.

Samuelson’s dictum was empirically confirmed by Shiller and oth-
ers (Jung and Shiller, 2005; Bernstein, 2007a, pp. 72–75). Bernstein
(2007b) concludes: “Modern finance theory is micro at its heart. We
can’t apply it to the market as a whole.” This obviously is true for
economic theory in general:
• The well-known theorem of Sonnenschein, Mantel, and Debreu
shows that aggregate demand fails to carry over neoclassical prop-
erties of individual demand functions (Mas-Colell et al., 1995).
• The Arrow Impossibility Theorem shows that transitivity of indi-
vidual preferences fails to apply to aggregate choice (Maskin and
Sen, 2014).
• We have already seen that the transversality condition at the indi-
vidual level may fail to apply at the aggregate level.
• And we have also seen that while price can be treated as exogenous
for individual agents, collectively it is endogenous.
The divergence between individual rationality and collective behavior
leads to conflict of interest and social dilemmas, which renders the
theory internally inconsistent, as reflected in the paradoxes discussed
above.
Another example of misuse of models relates to applying models
of pricing goods and services within the market to pricing the market
itself with respect to the environment. The limits of markets with
respect to the environment cannot be decided by the market itself,
as discussed earlier. This leads to contradictions and, therefore, to
violation of conservation laws.
The above examples of misuse of models show that we need to
have a proper framework for formulating and applying economic
models. Such a framework would impose certain restrictions to obtain consistency
and invariance, which are necessary to reap the fruits of such models.
This crucial framework is missing, and this, perhaps, is a major les-
son that economics can learn from science. We know that, in nature,
restrictions are already in place to ensure consistency. Natural laws
are essentially restrictions, and these laws keep the universe intact and
in harmony; otherwise, we would not have observed the surprising
congruence between mathematics and nature. Without restrictions,
economic theory would fail to produce a consistent whole.
Gödel shows that consistency of a formal system (rich in arith-
metic) cannot be proved from within the system. It can be proved
only via a higher system with additional axioms. Tarski shows the
same on defining the truthfulness of a system. For a system to be
sound and consistent, therefore, external restrictions are required. If
the market is represented as a formal model, then meta-mathematics
requires an analogous meta-market framework to ensure the consis-
tency and soundness of the market. While economic theory utilizes
heavy mathematics to model the economy, the theory is not suffi-
ciently cautious about the meta-mathematics involved. This leads to
applying models in a self-defeating manner. Consequently, economic
models fail to be invariant. Lack of invariance is a major deficiency
of economic theory in contrast to modern science.

Metamathematics and metacognition


There might be biological reasons behind the insufficient attention
we pay to meta-mathematics. Our brains find it difficult to per-
form “metacognition” — cognition about cognition or thinking about
thinking (Cox, 2005). To the extent that the brain is an algorith-
mic system (Gödel held this view; see Wang, 1996, Chap. 6), we
know from Gödel’s theorem that the brain must face undecidabil-
ity. Metacognition involves a blind spot for the brain — the brain
simply cannot figure out systematically whether it has the neces-
sary information or knowledge concerning the decision at hand. This
is particularly true when addressing “unknown unknowns” (Broth-
erton, 2015). The brain tends to fill in blind spots using available
information, and so, it is difficult for us to identify these spots.
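The undecidability at issue can be made concrete with the classic diagonal construction (a minimal sketch; the function names are illustrative, not from any cited source): no procedure `halts` can correctly classify every program, because one can always build a program that does the opposite of whatever `halts` predicts about it.

```python
def make_diagonal(halts):
    """Given any claimed halting decider halts(prog) -> bool,
    build a program that the decider must misclassify."""
    def diagonal():
        if halts(diagonal):
            while True:       # decider said "halts": run forever instead
                pass
        return "halted"       # decider said "runs forever": halt at once
    return diagonal

# Two candidate deciders; each is refuted by its own diagonal program.
claims_all_halt = lambda prog: True
claims_none_halt = lambda prog: False

d2 = make_diagonal(claims_none_halt)
print(d2())  # the decider said d2 never halts, yet it prints "halted"
```

The same self-referential obstacle is what blocks a brain, viewed as an algorithmic system, from systematically auditing its own blind spots.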
So one reason why neoclassical theory has blind spots is that
economists, as humans, do. One important lesson we can learn from
science, therefore, is that we economists must address our own limi-
tations first.

8. Learning from science


In his Nobel Prize Lecture, Vernon Smith (2002, p. 510, fn. 24)
remarks:
I importune students to read narrowly within economics, but widely
in science . . . The economic literature is not the best place to find
new inspiration beyond traditional technical methods of modeling.

There is a lot to learn from science, for the simple reason that it is
as rich and elegant as nature. But most important is the relationship
between science and mathematics. For science, mathematics is a tool,
albeit an important and indispensable one. The scientist’s primary
objective is to understand and control natural phenomena. Smith
(2008, p. xv), again, notes that formal economic modeling has little
to do with how subjects in the lab or participants in real markets
actually perform in real time. This observation has nothing to do
with theoretical sophistication;
. . . put theorists in the experiment, as I have done, and they cannot
articulate an explanation of their own behavior interacting with
others through an institution. Moreover, their behavior is no more
or less efficacious than the typical student subject in these dispersed
private information markets.

The formal approach to economics fails to adopt this science-based
strategy (see Huber and Sornette, 2016). Mathematical modeling
becomes an end rather than a means. In line with what we have
seen earlier, mathematics, according to Weintraub (2002, p. 17),
required for most of the 19th century a connection to physical
reasoning to be considered rigorous. This connection, however, was broken by the
end of the century. When economics embraced the formal axiomatic
approach after World War II, mathematics was already disconnected
from science. This, by implication, led to the disconnection of eco-
nomic theory from reality. Gérard Debreu, in his famous The-
ory of Value (1959, p. x), writes: “Allegiance to rigor dictates the
axiomatic form of the analysis where the theory, in the strict sense,
is logically entirely disconnected from its interpretations” (emphasis
added). Former mathematical economist Glenn Loury admits that
he was drawn to economics because “one could do math while main-
taining the credible pretence of relevance by labelling variables” (Dia-
mond, 1996, p. 657). Not only did the mathematical tools become
entirely disconnected from economic reasoning, but they became the
objective of the analysis. McCloskey (1994, pp. 131, 145) remarks:
The economists, to put it another way, have adopted the intel-
lectual values of the math department — not the values of the
departments of physics or electrical engineering or biochemistry
they admire from afar . . . The economists are in love with the wrong
mathematics, the pure rather than the applied.

In science, mathematics is only a tool to understand nature, whereas
in economics it is the other way around. Samuelson (1983, p. xix)
elaborates (emphasis added):
But the fine garments [of Neoclassical theory] sometimes achieved
fit only by chopping off some real arms and legs. The theory of
cones, polyhedra, and convex sets made possible “elementary” the-
orems and lemmas. But they seduced economists away from the
phenomena of increasing returns to scale and nonconvex technol-
ogy that lie at the heart of oligopoly problems and many real-world
maximizing assignments. Easy victories over a science’s wrong
opponents are hollow victories.

Mathematical economist Morishima (1992, p. xi) notes that “post-
war mathematical economists have lapsed into the conservatism of
not adapting their theories to reality, but tending to accommodate
the reality to their theories. Like Procrustes, they have ruthlessly
excised any element of reality which fails to conform to their mod-
els.” Milton Friedman, in an interview in 1999, expressed his disap-
pointment with the evolution of economics: “What I would say is that
economics has become an arcane branch of mathematics rather than
dealing with real economic problems” (Spiegler, 2015, p. 8).
These dangers sometimes arise in science as well, but the scientific
community can resist and guard against them. In 1909, Constantin
Carathéodory, a skilled German mathematician, published a seminal
work on an axiomatic approach to thermodynamics, which allowed
a rigorous mathematical formulation of the consequences of the
second law of thermodynamics (Pogliani and Berberan-Santos, 2000).
Despite the elegance of Carathéodory’s work, it did not gain a large
audience among physicists and physical chemists. The axiomatic
method, with some exceptions, never arrived on the main pages
of widely used physics, physical chemistry, or even just thermody-
namics textbooks. Even for the exceptions, the axiomatic method is
presented more as a pure curiosity than as a general foundation for
thermodynamics (ibid.).
The main reason behind this lack of wide acceptance can be
summarized by the position of Max Planck, one of the leading physi-
cists of the 19th and early 20th century. Planck criticized the formal
method pointing out that the standard approach (Thomson–Clausius
treatment) was much nearer to experimental evidence, that is, to nat-
ural processes, which at the very end are the only ground on which
all natural laws are established (ibid.).
A more recent example might lend additional support to the
above point. String theory is considered the most advanced theory
that attempts to unify quantum mechanics and general relativity.
The attempt to find a unified theory is obviously a scientific endeavor.
However, the theory seems to have drifted toward pure mathemat-
ical modeling without any roots in reality. String theory seems to
offer no possibility of being tested, and this is its greatest weakness.
According to Smolin (2006), the situation threatens to impede the
very progress of science. Physicist Richard Muller argues that string
theory is not a true theory. A true theory, like the theory of elec-
tromagnetism or quantum physics or general relativity, must lead to
testable predictions. By a criterion developed by the philosopher of
science Karl Popper, to qualify as a theory it must admit a test
that could, in principle, falsify it (Muller, 2016a).
Proponents of string theory argue that there is no need to spec-
ify how their theories might be tested or falsified; that Popper is a
philosopher, not a scientist; that falsifiability is an old, 20th-century
criterion. Muller (2016b) responds:

. . . all of the great theories of physics, those that we now accept,
were put forth using Popper’s standards of being falsifiable, and
that when we abandon this standard, we are engaging not in physics
but in fantasy. True, it is physics/math-based fantasy, but it is
fantasy nevertheless.

So physicists are aware of the dangers of “fancy mathematics” that
has no grounds in reality. von Neumann (1947) recognized, a long time
ago, the consequences of disconnecting mathematics from reality. He
writes (p. 9):

As a mathematical discipline travels far from its empirical source,
or still more, if it is a second and third generation only indirectly
inspired by ideas coming from “reality” it is beset with very grave
dangers. It becomes more and more purely aestheticizing, more and
more purely l’art pour l’art. . . . In other words, at a great distance
from its empirical source, or after much “abstract” inbreeding, a
mathematical subject is in danger of degeneration.
Paul Romer, the chief economist at the World Bank, compares the fall
of science in physics, as in string theory, to the fall of science in
economics. He notes that “the parallel is that developments in both
string theory and post-real macroeconomics illustrate a general fail-
ure mode of a scientific field that relies on mathematical theory”
(Romer, 2015, p. 15).

9. Conclusion
The table below summarizes the main differences between neoclas-
sical economics and modern science, from which we can draw
valuable lessons.

    Science                                  Neoclassical economics
 1  Perpetual motion is impossible           Perpetual motion is admissible
 2  Conservation laws are cornerstones       No conservation laws
 3  Recognizes its own limits                Does not recognize its own limits
 4  The world is inherently incomplete       The world is inherently complete
 5  Axiomatic approach is marginal           Axiomatic approach is central
 6  Primary objective is study of nature     Primary objective is abstract rigor
 7  Adopts both reductive and collective     Methodological individualism is the
    methodologies                            primary approach

If there is a single lesson to learn from modern science, it is the
impossibility of perpetual motion. Nonetheless, the idea of perpet-
ual motion is deeply ingrained in neoclassical theory. Ponzi schemes
and bubbles are perfectly possible in a neoclassical economy. Per-
petual growth is not only possible, it is an objective of economic
policy. These two forms of perpetual motion contributed to the worst
two crises in the history of human civilization: The Global Financial
Crisis, and the impending Global Climate Crisis. Perpetual motion
involves logical paradoxes. These paradoxes are contradictions that
render the formal system inconsistent. If the system represents the
market, the market thus becomes unstable and unsustainable.
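The unsustainability of such schemes follows from simple arithmetic: returns promised on all money previously collected can only be paid out of new inflows, so the required inflow grows geometrically. A minimal sketch (illustrative numbers, not from the chapter):

```python
def ponzi_required_inflows(principal, rate, periods):
    """New money a pure Ponzi scheme must raise each period to pay
    `rate` on everything collected so far, with no real production."""
    liabilities = principal
    inflows = []
    for _ in range(periods):
        payout = liabilities * rate   # returns promised this period
        inflows.append(payout)        # must be covered by new entrants
        liabilities += payout         # ...who are then owed returns too
    return inflows

flows = ponzi_required_inflows(principal=100.0, rate=0.10, periods=50)
# flows[t] = 100 * 0.10 * 1.10**t: the scheme needs geometrically
# growing inflows and must collapse once the pool of entrants is finite.
```

Perpetual-growth targets have the same structure: any strictly positive compound rate eventually exceeds any finite resource bound.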
Consistency in mathematics, conservation in science, and scarcity
in economics all share the same basic concept: we cannot get some-
thing for nothing. While mathematics and modern science success-
fully live up to their principles, neoclassical theory fails remarkably.
Economics as a discipline was developed to explain causes of the
wealth of nations. The main concern of leading economists before
World War II was understanding production and distribution of real
goods and services. A shift, however, took place during the second
half of the 20th century, whereby economic models were developed
to provide precise quantitative predictions of market variables. The
shift from explanation to prediction (or from truth to precision) was
accompanied by a shift in another direction: From real economic
activities to zero-sum financial bets. The twin crises of the 21st
century prove that these shifts were not successful. Without scien-
tific laws governing economic activities, mathematical models can do
more harm than good.
It is fair to note that economics as a discipline has been witnessing
substantial changes in recent years. Behavioral economics, complexity
economics, experimental economics, among others, are steadily grow-
ing. Still, there is a lot to learn even for these new fields from the
gap between neoclassical theory and modern science. The first three
points in the table above are still lacking for many alternative schools
of economics. To build future economics on solid grounds, we had better
address these deficiencies early on to safeguard human civilization
against future crises.

Acknowledgments
The author is indebted to Professor Francisco A. Doria for valu-
able discussions and generous comments. The author also acknowl-
edges the helpful comments of Professors Herman Daly, University of
Maryland, USA; Cars Hommes, University of Amsterdam, Nether-
lands; M. Abdellah El Idrissi, Mohamed V University, Morocco; Nabil
Maghribi, Wakayama University, Japan; Richard Muller, University
of California, Berkeley, USA; Didier Sornette, ETH Zurich, Switzer-
land. The author, however, is solely responsible for the views pre-
sented in this chapter.

References
Al-Suwailem, S. (2010). Behavioural complexity. Journal of Economic Surveys,
25, 481–506.
Al-Suwailem, S. (2014). Complexity and endogenous instability. Research in Inter-
national Business and Finance, 30, 393–410.
Al-Suwailem, S., Doria, F. A. and Kamel, M. (2017). Is risk quantifiable? in
Complex Systems Modeling and Simulation in Economics and Finance, Chen,
S.-H., Kao, Y.-F., Venkatachalam, R. and Du, Y.-R. (eds.), Springer, Pro-
ceedings in Complexity, forthcoming.
Allais, M. (1989). Wall street has become a veritable casino. Wall Street Journal,
Oct. 26.
Allen, F. and Gorton, G. (2012). Churning bubbles. in New Perspectives on Asset
Price Bubbles, Evanoff, D., Kaufman, G. and Malliaris, A. (eds.). pp. 13–40.
Oxford University Press. The paper was originally published in Review of
Economic Studies, 1993, 60, 813–836.
Antinolfi, G. and Keister, T. (1998). Options and sunspots in a simple monetary
economy. Economic Theory, 11, 295–315.
Arrow, K. (1964). The role of securities in the optimal allocation of risk-bearing.
The Review of Economic Studies, 31, 91–96.
Ball, P. (2004). Critical Mass: How One Thing Leads to Another. Farrar, Straus,
and Giroux.
Barro, R. and Sala-i-Martin, X. (2003). Economic Growth. MIT Press.
Barrow, J. (1998). Impossibility: The Limits of Science and the Science of Limits.
Oxford University Press.
Beall, J. C. (2001). Is Yablo’s paradox non-circular? Analysis, 61(3), 176–187.
Benhabib, J. (ed.). (1992). Cycles and Chaos in Economic Equilibrium. Princeton
University Press.
Benjamin, A. (2015). The Magic of Math. Basic Books.
Bernstein, P. (2007a). Capital Ideas Evolving. John Wiley & Sons.
Bernstein, P. (2007b). Capital Ideas: Past, Present, and Future, Lecture delivered
at CFA Institute Annual Conference.
Berto, F. (2009). There’s Something About Gödel: The Complete Guide to the
Incompleteness Theorem. Blackwell.
Bezemer, D. (2011). The credit crisis and recession as a paradigm test. Journal
of Economic Issues, XLV, 1–18.
Blanchard, O. (2000). What do we know about macroeconomics that Fisher and
Wicksell did not? Quarterly Journal of Economics, 115, 1375–1409.
Blanchard, O. (2009). The state of macro. Annual Review of Economics, 1,
209–228.
Blanchard, O. and Fischer, S. (1989). Lectures on Macroeconomics. MIT Press.
Blanchard, O. and Weil, P. (2001). Dynamic efficiency, the riskless rate, and debt
ponzi games under uncertainty. Advances in Macroeconomics, 1, 1–23.
Blaug, M. (1992). The Methodology of Economics. Cambridge University Press.
Bolander, T. (2014). Self-reference. in The Stanford Encyclopedia of Philosophy,
Zalta, E. N. (ed.). https://plato.stanford.edu.
Bouchaud, J. P. (2008). Economics needs a scientific revolution. Nature, 455,
1181.
Bowman, D. and Faust, J. (1997). Options, sunspots, and the creation of uncer-
tainty. Journal of Political Economy, 105, 957–975.
Brotherton, B. (2015). The brain has a blind spot for ‘unknown unknowns’. Dis-
cover Magazine, Nov. 17, blogs.discovermagazine.com.
Brunnermeier, M. K. and Sannikov, Y. (2012). A macroeconomic model with a
financial sector. National Bank of Belgium, October.
Burmeister, E. (1980). Capital Theory Dynamics. Cambridge University Press.
Caldwell, B. (2004). Hayek’s Challenge: An Intellectual Biography of F. A. Hayek.
University of Chicago Press.
Cassidy, J. (2010). Interview with Eugene Fama. The New Yorker, January 13.
Chaisson, E. J. (2001). Cosmic Evolution: The Rise of Complexity in Nature.
Harvard University Press.
Chaitin, G. J., Doria, F. A. and da Costa, N. C. A. (2011). Gödel’s Way: Exploits
into an Undecidable World. CRC Press.
Clark, M. (2002). Paradoxes: From A to Z. Routledge.
Cook, R. (2015). The Yablo Paradox. Oxford University Press.
Coopersmith, J. (2010). Energy: The Subtle Concept. Oxford University Press.
Cox, M. (2005). Metacognition in computation: a selected research review. Arti-
ficial Intelligence, 169, 104–141.
da Costa, N. C. A. and Doria, F. A. (2005). Computing the future. in Com-
putability, Complexity and Constructivity in Economic Analysis, Velupillai,
K. (ed.). Blackwell Publishers.
Daly, H. (1996). Beyond Growth. Beacon Press.
Daly, H. and Faley, J. (2011). Ecological Economics: Principles and Applications.
Island Press.
Damodaran, A. (2008). What is Risk-Free Rate? Stern School of Business, NY
University.
Daynes, A., Pagas, P., Latimer, D. and Andrikopoulos, P. (2015). Paraconsistent
games and the limits of rational self-interest. Australian Journal of Logic,
12, 17–42.
Derman, E. (2007). Defining financial engineering. Emanuel Derman’s Blog, Will-
mott. www.willmott.com.
Derman, E. (2011). Financial engineering as a career: part 1. blogs.reuters.com.
Derman, E. (2014). Speech at commencement 2014 to Berkeley MSE grads.
www.emanuelderman.com.
Dewdney, A. (2004). Beyond Reason. John Wiley & Sons.
Diamond, A. (1996). Book review: Economics — mathematical politics or sci-
ence of diminishing returns? by Alexander Rosenburg. Journal of Political
Economy, 104, 655–659.
Dixon, R. (2008). Walras’ law. International Encyclopedia of the Social Sciences.
www.encyclopedia.com.
Elster, J. (2015). Explaining Social Behavior. Cambridge University Press.
Feynman, R. (1965). The Character of Physical Law. The Modern Library.
Friedman, M. (1953). The methodology of positive economics. In Hausman, D.
(ed.). Philosophy of Economics: An Anthology. Cambridge University Press,
pp. 149–178.
Friedman, M. (1976). Inflation and unemployment. Nobel Memorial Lecture, in
Economic Sciences, 1969–1980: The Sveriges Riksbank (Bank of Sweden)
Prize in Economic Sciences in Memory of Alfred Nobel, Lindbeck, A. (ed.).
World Scientific, pp. 267–286.
Frydman, R. and Goldberg, M. (2011). Beyond Mechanical Markets: Asset Price
Swings, Risk, and the Role of State. Princeton University Press.
Gardner, R. (1995). Game Theory for Business and Economics. John Wiley &
Sons.
Gjerstad, S. and Smith, V. (2014). Rethinking Housing Bubbles. Cambridge Uni-
versity Press.
Gottlieb, P. (2015). Aristotle on non-contradiction. in The Stanford Encyclopedia
of Philosophy, Zalta, E. N. (ed.). https://plato.stanford.edu.
Gómez-Torrente, M. (2015). Alfred Tarski. The Stanford Encyclopedia of Philos-
ophy (Spring 2015 Edition), Zalta, E. N. (ed.). https://plato.stanford.edu.
Guesnerie, R. (2001). Assessing Rational Expectations: Sunspot Multiplicity and
Economic Fluctuations. MIT Press.
Hall, C., Lindenberger, D., Kümmel, R., Kroeger, T. and Eichborn, W. (2001).
The need to reintegrate the natural sciences with economics. BioScience, 51,
663–673.
Handscombe, R. and Patterson, E. (2004). Entropy Vector: Connecting Business
and Science. World Scientific.
Hausman, D. M. (ed.). (2008). The Philosophy of Economics: An Anthology, Third
Edition, Cambridge University Press.
Heilbroner, R. and Milberg, W. (1995). The Crisis of Vision in Modern Economic
Thought. Cambridge University Press.
Hersh, R. (2006). Inner vision, outer truth. in 18 Unconventional Essays on the
Nature of Mathematics, Hersh, R. (ed.). Springer, pp. 320–326.
Himino, R. (2010). Safer instability. Risk.net, 23, 60–63.
Hofstadter, D. (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic
Books.
Hommes, C. (2013). Behavioral Rationality and Heterogeneous Expectations in
Complex Economic Systems. Cambridge University Press.
Horn, L. (2014). Contradiction. in The Stanford Encyclopedia of Philosophy,
Zalta, E. N. (ed.). https://plato.stanford.edu.
Huber, T. and Sornette, D. (2016). Can there be a physics of financial markets?
Methodological reflections on econophysics. The European Physical Journal
Special Topics, 225, 3187–3210.
Jehle, G. and Reny, P. (2011). Advanced Microeconomic Theory. Prentice-Hall.
Jung, J. and Shiller, R. (2005). Samuelson’s dictum and the stock market. Eco-
nomic Inquiry, 43, 221–228.
Kahn, C. and Krasa, S. (1993). Non-existence and inefficiency of equilibria with
American options. Economic Theory, 3, 169–176.
Kaku, M. (2008). Physics of the Impossible. Doubleday.
Kamihigashi, T. (2008). Transversality conditions and dynamic economic
behaviour. in The New Palgrave Dictionary of Economics, in Durlauf,
S. N. and Blume, L. E. (eds.). Second Edition. Palgrave MacMillan.
Kauffman, S. (1995). At Home in the Universe: The Search for Laws of Self-
Organization and Complexity. Oxford University Press.
Kauffman, S. (2000). Investigations. Oxford University Press.
Keen, S. (2011). Debunking Economics: The Naked Emperor Dethroned? Zed
Books.
Keynes, J. M. (1936). The General Theory of Employment, Interest and Money.
Harcourt.
King, M. (2016). End of Alchemy. Little, Brown.
Kirman, A. (2009). The economic crisis is a crisis for economic theory. CESifo,
Nov.
Klainerman, S. (2008). Partial differential equations. in The Princeton Compan-
ion to Mathematics, Gowers, T., Barrow-Green, J. and Leader, I. (eds.).
Princeton University Press, pp. 455–483.
Kline, M. (1959). Mathematics and The Physical World. Dover Publications.
Kline, M. (1980). Mathematics: The Loss of Certainty, Oxford University Press.
Kremer, P. (2014). The revision theory of truth. Stanford Encyclopedia of Philos-
ophy, Zalta, E. N. (ed.). https://plato.stanford.edu.
Laughlin, R. (2005). A Different Universe: Reinventing Physics from the Bottom
Down. Basic Books.
Lederman, L. and Hill, C. (2004). Symmetry and the Beautiful Universe.
Prometheus Books.
Levi, M. (2009). The Mathematical Mechanic. Princeton University Press.
Levinovitz, A. (2016). The new astrology: by fetishising mathematical models,
economists turned economics into a highly paid pseudoscience. Aeon, April
6. https://aeon.co.
Lightman, A. (2000). Great Ideas in Physics. McGraw-Hill.
Lilac, A. and Ficco, S. (2012). The relationship between sustainable growth and
the risk-free rate: evidence from UK government gilts. Europe Economics,
Staff Working Paper 2012.1.
Lucas, R. (2009). Robert Lucas on the Crisis. The Economist, Aug. 6.
Mach, E. (1911). History and Root of the Principle of the Conservation of Energy.
Open Court Publishing.
Mangelsdorf, M. (2010). Overheard at MIT: why economics isn’t like physics.
http://sloanreview.mit.edu/article/why-economics-isnt-like-physics/.
Mankiw, G. (2007). The macroeconomist as scientist and engineer. Journal of
Economic Perspectives, 20, 29–46.
Mas-Colell, A., Whinston, M. and Green, J. (1995). Microeconomic Theory.
Oxford University Press.
Maskin, E. and Sen, A. (2014). The Arrow Impossibility Theorem. Kenneth J.
Arrow Lecture Series. Columbia University Press.
Mayer, T. (1993). Truth versus Precision in Economics. Edward Elgar.
McCloskey, D. (1994). Knowledge and Persuasion in Economics. Cambridge Uni-
versity Press.
McCloskey, D. (1998). The Rhetoric of Economics, Second Edition, University of
Wisconsin Press.
Meyerson, E. (1930). Identity and Reality. Routledge.
Mirowski, P. (1989). More Heat Than Light. Cambridge University Press.
Morishima, M. (1992). Capital and Credit: A New Formulation of General Equi-
librium Theory. Cambridge University Press.
Muller, R. (2016a). What does Richard Muller think of string theory? Quora,
June 28, www.quora.com.
Muller, R. (2016b). What did Richard Muller learn at the time in cosmology
conference at perimeter in June 2016? Quora, Aug. 18, www.quora.com.
Murry, A. (2009). Paul Volcker: think more boldly. Wall Street Journal, Dec. 14.
Nagel, E. and Newman, J. (1958/2001). Gödel’s Proof, D. Hofstadter, (ed.). New
York University Press.
Nordhaus, W. (1992). Lethal model 2: the limits to growth revisited. Brookings
Papers on Economic Activity, 2, 1–59.
Nordhaus, W. (2013). The Climate Casino: Risk, Uncertainty, and Economics for
a Warming World. Yale University Press.
O’Connell, S. and Zeldes, S. (1988). Rational ponzi games. International Eco-
nomic Review, 29, 431–450.
O’Connell, S. and Zeldes, S. (1992). Ponzi games. in The New Palgrave Dictio-
nary of Money and Finance, P. Newman and Milage, M. (eds.). Palgrave
Macmillan.
Ord-Hume, A. (2005). Perpetual Motion: The History of an Obsession. Adven-
tures Unlimited Press.
Pierce, J. (1980). An Introduction to Information Theory: Symbols, Signals, and
Noise. Dover Publication.
Pogliani, L. and Berberan-Santos, M. N. (2000). Constantin Carathéodory and
the axiomatic thermodynamics. Journal of Mathematical Chemistry, 28, 1–3.
Poitras, G. (2010). Valuation of Equity Securities. World Scientific Publishing.
Popper, K. (1959/1992). The Logic of Scientific Discovery. Routledge.
Priest, G. (1997). Yablo’s paradox. Analysis, 57.4, 236–242.
Priest, G., Tanaka, K. and Weber, Z. (2013). Paraconsistent logic. The Stanford
Encyclopedia of Philosophy, E. N. Zalta (ed.). https://plato.stanford.edu.
Rajan, R. (2010). Fault Lines. Princeton University Press.
Rifkin, J. (2011). The Third Industrial Revolution. Palgrave MacMillan.
Romer, D. (2011). Advanced Macroeconomics. McGraw-Hill.
Romer, P. (2016). The trouble with macroeconomics. Commons Memorial Lec-
ture of the Omicron Delta Epsilon Society. Forthcoming in The American
Economist.
Rosen, J. (2010). Symmetry Rules: How Science and Nature Are Founded on
Symmetry. Springer.
Rosser, B. (2000). From Catastrophe to Chaos: A General Theory of Economic
Discontinuities. Kluwer Academic Publishers.
Sachs, J. (2015). The Age of Sustainable Development. Columbia University Press.
Samuelson, P. (1957). Intertemporal price equilibrium: a prologue to the theory
of speculation. Weltwirtschaftliches Archiv, 79, 181–221.
Samuelson, P. (1983). Foundations of Economic Analysis, Enlarged Edition. Har-
vard University Press.
Scherbina, A. (2013). Asset price bubbles: a selective survey. International Mon-
etary Fund, Working Paper WP/13, February.
Schmitz, J. (2007). The Second Law of Life: Energy, Technology, and the Future
of Earth As We Know It. William Andrew.
Schneider, E. and Sagan, D. (2005). Into the Cool: Energy Flow Thermodynamics
and Life. University of Chicago Press.
Schumacher, B. (2010). Impossible: Physics Beyond the Edge. The Great Courses,
www.thegreatcourses.com.
Schumacher, B. (2015). The Science of Information: From Language to Black
Holes, The Great Courses, www.thegreatcourses.com.
Simanek, D. (2012). Perpetual futility: a short history of the search for perpet-
ual motion. The Museum of Unworkable Devices, https://www.lhup.edu/
~dsimanek/museum/unwork.htm.
Shell, K. (2008). Sunspot equilibrium. in The New Palgrave Dictionary of Eco-
nomics, Durlauf, S. and Blume, L. (eds.). Palgrave Macmillan.
Shiller, R. (2013). Is economics a science? Project Syndicate, Nov. 6,
www.project-syndicate.org.
Shiller, R. (2015). Irrational Exuberance, Third Edition. Princeton University
Press (First Edition is 2001).
Smith, V. (2002). Constructivist and ecological rationality in economics. Prize
Lecture, December 8, www.nobelprize.org.
Smith, V. (2008). Rationality in Economics: Constructivist and Ecological Forms.
Cambridge University Press.
Smolin, L. (2006). The Trouble with Physics: The Rise of String Theory, the Fall
of Science, and What Comes Next. Mariner Books.
Smullyan, R. (1987). Forever Undecided: A Puzzle Guide to Gödel. Alfred
Knopf, Inc.
Smullyan, R. (1992). Gödel’s Incompleteness Theorems. Oxford University Press.
Smullyan, R. (2013). Gödelian Puzzle Book. Dover Publications.
Sornette, D. and Cauwels, P. (2014). 1980–2008: The illusion of the perpetual
money machine and what it bodes for the future. Risks, 2, 103–131.
Soros, G. (2003). Alchemy of Finance, Wiley.
Spiegler, P. (2015). Behind the Model: A Constructive Critique of Economic Mod-
eling. Cambridge University Press.
Stern, N. (2015). Why Are We Waiting? The Logic, Urgency, and Promise of
Tackling Climate Change. MIT Press.
Stewart, I. (2012). In Pursuit of the Unknown: 17 Equations That Changed the
World. Basic Books.
Stiglitz, J. (2008). Capitalist fools. Vanity Fair, Dec. 9, www.vanityfair.com.
Stuart, T. (2013). Understanding Computation. O’Reilly.
Szegö, G. (1982). New Quantitative Techniques for Economics Analysis. Academic
Press.
Tarski, A. (1933). The concept of truth in the languages of the deductive sciences.
(Polish); reprinted in Tarski, A. (1983). Logic, Semantics, Metamathematics,
Papers from 1923 to 1938, Corcoran, J. (ed.). Hackett Publishing Company.
Tirole, J. (1985). Asset bubbles and overlapping generations. Econometrica, 53,
1499–1528.
Tol, R. S. J. (2009). The economic effects of climate change. Journal of Economic
Perspectives, 23, 29–51.
Varoufakis, Y., Halevi, J. and Theocarakis, N. (2011). Modern Political Eco-
nomics, Routledge.
von Neumann, J. (1947). The Mathematician. in John von Neumann: Collected
Works, Taub, A. (ed.). vol. 1, pp. 1–9.
Wang, H. (1996). A Logical Journey: From Gödel to Philosophy. MIT Press.
Weintraub, E. R. (2002). How Economics Became a Mathematical Science. Duke
University Press.
Wieland, V. and Wolters, M. (2013). Macroeconomic model comparisons and
forecast competitions. VOX, Feb 13.
Wigner, E. (1960). The unreasonable effectiveness of mathematics in the natural
sciences. Communications on Pure and Applied Mathematics, 13, 1–14.
Wilczek, F. (2015). A Beautiful Question: Finding Nature’s Deep Design. Penguin
Press.
Yanofsky, N. (2013). The Outer Limits of Reason: What Science, Mathematics,
and Logic Cannot Tell Us. MIT Press.

Index

A
Abraham–Marsden, 15
Ackermann’s, 84
Adler, 33
Al-Suwailem, xii
Alan Turing, 99
algorithmic, ix
algorithmic information theory, 101
Anasazi project, 166
Andrey Kolmogorov, 103
arithmetical hierarchy, 44
Arnold, xiii, 52
Arrow’s, 128
Arrow–Debreu, 52
artificial neural networks (ANNs), 163
ash, 48
astrology, 250
autonomous differential, vii
axiom of choice, 7
axioms of physics, 10

B
Baker, 82
Bargmann–Wigner, 32
Becchio, x
Ben-David, 2
Bernoulli, 113–114
BGS, 82–83
BGS-like, 73
Bochvar, 117
Bohr–Sommerfeld, 19
Boole’s, 115
Borges, x
bounded rationality, 200
Brouwer, 111, 116
bubbles, 229–230, 241, 257
Burali–Forti paradox, 96
Busy Beaver, 70

C
calculus ratiocinator, 93
Carnap, 117
category theory, 194
Cauchy, 162
cellular automata, 164, 189
Chaitin, ix
chaos theory, vii
characteristica universalis, 93
Cho, 18
Christiaan Huygens, 93
Chuaqui, 21
classical logic, 168
classical logical language, x
classical mechanics, 6
climate change, 243
cohesive groups, 120
cohesive social groups, 128
complex adaptive systems (CAS), 201
configurations, 79
conservation, 227
conservation laws, 215, 229, 257
conservation principle, 228
consistency, 224, 227
consistency in formal systems, 249

267
May 11, 2017 17:46 The Limits of Mathematical Modeling in the Social Sciences 9in x 6in b2809-index page 268

268 The Limits of Mathematical Modeling in the Social Sciences

constructivist mathematics, 189 fast and frugal heuristics, 200


consumers’ sovereignty, 139 federal reserve system, 137
Coriolis, 15 Fermat’s, 53
Corson, 17 Feynmann, 20
Cosenza, 79 financial crisis, 250
counterexample function, 69, 73 financial divination, 136
financial engineering, 208
D first law of thermodynamics, 216
da Costa, viii forensic science, 143
David Hilbert, 94, 116 formal ethics, 111
Dean, x formalist ethics, 119
decision theory, 121, 129 Frege, 115
Delaunay, 19
Diophantine, 34 G
Doria, viii Gödel, 117
doubtful propositions, 126 Gödel phenomenon, viii
DSGE models, 136 Gödel-Turing phenomenon, 133
dynamical system, vii Gödel’s incompleteness theorem, 214
Gödel’s project, 198
E Gödelian dilemma, 193
Eco, 91 gauge, 6
ecology, 2 Gelfand, 19
economic growth, 229 general theory of regulation, 147
economic theory, 111 generative social science, 189
economics, x, 2, 113, 129 genetic algorithms (GAs), 164
efficient market hypothesis, 135, 213, Georg Cantor’s, 94
251 global climate crisis, 257
Ehresmann’s, 15 global financial crisis, 212–213, 257
electromagnetic theory, 6 God, 92
Emch, 19 God’s eye view, 184
emergence, 225, 227–228 golden rule of algebra, 219
endogenous business cycles, 241 Golem, 109
entropy, 248 graph theory, 194
ethics, 118–120, 129 gravitation, 6
Euel Elliott, xii great recession, 134
Euler–Lagrange, 21 Greenspan gaffe, 134
exotic formalization, 69 Guckenheimer, 13
expectations, 240
experimental evidence, 239 H
explanation, 210, 213 Hahn, 111
externalities, 243 Halevy, 2
halting function, 37
F Hamilton’s principle, 20
fallacy of composition, 219 Hamilton–Jacobi, 6
May 11, 2017 17:46 The Limits of Mathematical Modeling in the Social Sciences 9in x 6in b2809-index page 269

Index 269

Hamiltonian, 6 L
Hamiltonian mechanics, 27 Lagrangian, ix, 6
Hanh, 117 Lanczos, 14
Hari Seldon, vii Lange, 57
Hayek, 118 law of excluded middle, 116, 190
Hertz, 14 law of identity, 220
heuristic decision making, 190 law of non-contradiction, 220
heuristics, 174 Leibniz, 92
Heyting, 116 Lewis, viii
Hilbert’s sixth problem, 10 liar’s paradox, 150, 222
Hirsch’s, 14 Lie, 10, 28
Hyman Minsky Prize for limits, 3
Distinguished Performance in limits of prediction, 149
Macroeconomics, xii local observer view, 184
local observer viewpoint (LOV), 169
I logic, 111
imperative logic, 127 logic of ethics, 120
Inagaki, 57 logical contradictions, 195
incompleteness, ix, 1 logical positivists, 196
incompleteness theorem, 99, 224 logical tolerance, 117
information, 135, 248 logical-deductive method to social
information choice theory, 142 sciences, 112
intuitionistic logic, 188 Lorentz–Poincaré, 26
invariance, 225–227 Lorenz, E., vii, 13
Isaac Asimov, vii Lotka–Volterra, vii, 58
Lucas critique, 233
J Luce, 127–128
Jaskowski, 117 L
 ukasievicz, 116–117, 122
Jonathan Swift’s, 92
M
K MacLane’s, 15
Kabbalah, 92 magical thinking, 134
Kaluza, 18 Mally, 124–125
Kant, 119, 128 many-valued logic, 116
Karl Menger’s, x, 111, 114–115, market, 79
118–121, 125, 127–129 mathematical expectation, 114
Kleene, 7, 117 Maxwell’s, 16
Klein, 18 maximization, 120
Knight, 48 Mayer, 113
Koppl, x Maymin’s, 79
Kraft, 106 messenger problem, 115
Kreisel, G., 66 meta-market, 252
Kronecker, 116 meta-mathematics, 98, 252
Kurt Gödel, 97 metacognition, 253
May 11, 2017 17:46 The Limits of Mathematical Modeling in the Social Sciences 9in x 6in b2809-index page 270

270 The Limits of Mathematical Modeling in the Social Sciences

Minkowski, 24 perpetual growth, 257


Mises, 57 perpetual motion, 242, 257
Morgenstern, 113, 118, 127–128 perpetual motion machine(s), 216,
Moritz Schlick, 112 246
multiple observer description, 198 Platonic, 97
Murray, 19 Poincaré, 15
Poisson, 19
N polynomial time, 162
n-person zero-sum game, 236 Ponzi debt game, 234
n-valued logics, 111, 116, 122 Ponzi schemes, 229–230, 240, 257
Nash, viii, 33 Popper’s, 117–118
Nash equilibria, 146, 150 positive ethics, 118
negative feedback, 161 positive feedback, 161
Neumann, 19, 128 Post, 116–117, 122
Neurath, 117 prediction, 210–211, 213
new logic to social sciences, 115 Prigogine, 58
Newton’s, 93 primer, 2
Nicholas Bernoulli, 113 Principia Mathematica, 2
Nicolis, 62 prisoners’ dilemma game, 232
no Ponzi game (NPG) condition, 233 processes, 162
nonclassical logics, x PSPACE, 168
nondeterministic polynomial time, psycho-history, vii
162
nonstandard logics, 116, 129, 184 Q
normative logics, 127
quantum Bayesian, 187
O quantum Bayesianism, 170
quasi-trivial machines, 71
O’Donnell’s algorithm, 81
O’Rahilly, A., 17
R
ontology, 98
optative logic, 125 Raiffa, 127–128
oracle at Delphi, 140 rational expectations, 113
overlapping generations economy, 232 Raymond Lull, 92
realism, 122
P relational quantum dynamics (RQM),
P vs. N P , 53 186
P < N P , 84 Rice, viii, 14
PAC learning, 176 Rice’s theorem, 146
Palais, 15 Richardson, 33
paraconsistent logics, xii Riemann’s, 53
pattern prediction, 211 Riemannian geometry, 14
Peano arithmetic, 7 risk-free rate, 244
Peano’s, x, 115 road map, 2
Penrose, R., 30 Rogers, 37
May 11, 2017 17:46 The Limits of Mathematical Modeling in the Social Sciences 9in x 6in b2809-index page 271

Index 271

Ruelle, 62 T
Russell, x, 95, 115–116 Takens, 62
theology, 94
S theoretical physics, 6
theory of economic policy, 146
Samuelson, P., 33
theory of everything (see also TOE),
Samuelson’s dictum, 251
92
scarcity, 226–227, 258 theta function, 4
Scarpellini, 33 third law of thermodynamics, 216,
School of Economic, Political and 248
Policy Sciences at the University of Tonnelat, 18
Texas at Dallas, xii Topos theory, 194
Schrödinger, 6, 115 trade, 228–229
Schrödinger–Heisenberg–Dirac, 19 transversality condition, 231
scientific law, 214 Trautmann, 18
second law of thermodynamics, 216 truth value, 238
second welfare theorem, 192 Tsuji, viii
Seligman, 57 tubular neighborhood theorem, ix
Shannon, 106 Turing machine, 167, 189
Smale, 63
small world simulation, 189 U
social dilemmas, 251 uncertainty, 111, 113
social group, 120 uncomputability, 99
social sciences, ix, 121, 129 undecidability, ix
Solomonoff, 103 Unicamp, xii
universal computer, 102
something from nothing, 226
universal polynomial, 42
St. Petersburg paradox, 111–113
universal Turing machine, 192
standard logic, 111, 116 utility maximization model, 199
standard logic (the ordinary 2-valued Utiyama, 17
logic) and the n-valued logics of
Post and L ukasiewicz, 112 V
Steenrod’s, 15
2-valued logic, 122
Sternberg’s, 15
2-valued system, 116
Stevin’s principle, 245 van der Waerden, 19
Stratton’s, 16 Velupillai, viii
string theory, 256 Verhulst, 63
substitutive goods, 126 von Neumann, 98
sunspot equilibrium, 238
Suppes predicates, 2, 5 W
surplus value, 227–229 Walras’ law, 231, 233
sustainable development goals, 243 Weyl, H., 17
symmetry, 240, 246 Wightman, 19
system of norms, 120 Willard Gibbs, 16
May 11, 2017 17:46 The Limits of Mathematical Modeling in the Social Sciences 9in x 6in b2809-index page 272

272 The Limits of Mathematical Modeling in the Social Sciences

Wittgenstein’s, 112, 115, 117, 128 Z


Wolfram’s conjecture, 145 Zermelo–Fraenkel, 96
zero-sum financial bets, 258
Y zero-sum games, 222, 246
Yablo’s paradox, 237 ZFC, 96
Yang–Mills, 17
