PPTC - Jeroen Van Dijk - PSS - 24 (Searchable) - Process Studies Supplements 2017 PDF
Issue 24 (2017)
Table of Contents
1. Introduction
1.1 Getting to know process physics in terms of time, life, and consciousness
2. Time
2.1 From the process of nature to the geometrical timeline
2.1.1 Aristotle's teleological physics
2.1.2 Galileo's non-teleological physics
2.1.3 The deficiencies of the geometrical timeline
2.2 From the geometrical timeline to time-based equations
2.3 From time-based equations to physical laws
2.3.1 The flawed notion of physical laws
2.4 From geometrization to the timeless block universe
2.5 Arguments against the block universe interpretation
2.5.1 The real world out there is objectively real and mind-independent (or not?)
VAN DIJK/Process Physics, Time, and Consciousness
List of Figures
2-1: Bronze ball rolling down an inclined plane (with s-t diagram)
2-2: The earth-moon system in a temporal universe and in a block universe
2-3: Minkowski space-time diagram
3-1: Simplified universe of discourse in the exophysical-decompositional paradigm
3-2: Rothstein's analogy (between communication and measurement systems)
3-3: Steps towards Robert Rosen's modeling relation
3-4: Universe of discourse with von Neumann's object-subject boundary
3-5: From background semiosic cycle of preparation, observation, and formalization to foreground data and algorithm
4-1: Conscious observer as an embedded endo-process within the greater embedding omni-process which is the participatory universe
4-2: Subsequent stages in the evolution of the eye
4-3: Stationary and swarming cortical activity patterns in non-REM sleep and wakefulness
4-4: Varying degrees of neuroanatomical complexity in a young, mature, and deteriorating brain
5-1: Schematic representation of interconnecting nodes
5-2: Artistic visualization of the stochastic iteration routine
5-3: Tree-graphs of large-valued nodes Bij and their connection distances Dk
5-4: Emergent 3D-embeddability with "islands" of strong connectivity
5-5: [Dk-k]-diagram
5-6: Fractal (self-similar) dynamical 3-space
5-7: Gray-Scott reaction-diffusion model
5-8: Fractal pattern formation leading to branching networks
5-9: Seamlessly integrated observer-world system with multiple levels of self-similar, neuromorphic organization
5-10: Structuration of the universe at the level of supragalactic clusters
1. Introduction
This article is intended as a follow-up to Lee Smolin's Time Reborn
(2013). In his controversial, yet well-received, book Smolin tried to argue
against contemporary mainstream physics' firm belief in the unreality of
time. Following in many of his footsteps at first, we will eventually try
to continue where he left off: with the suggestion that nature evolves
according to a principle of precedence and that our physics should therefore
be routine-driven, instead of based on eternally valid static laws.
So, starting with the basics, we will first follow the historical path
from: (1) the debate between Parmenides and Heraclitus regarding whether
reality should be thought of as existing statically or as dynamically
becoming; (2) Plato's idea of nature as an imperfect reflection of the
eternal, perfect realm of ideal forms; (3) the geocentric teleological physics
of Aristotle, who thought of time as an abstracted measure of motion; (4)
the heliocentric non-teleological physics of Galileo, who turned time into
a quantifiable one-dimensional coordinate line; and (5) the mechanistic
physics of the Newtonian-Laplacean clockwork universe, with its concepts
of absolute space as a 3-dimensional geometrical volume that exists
independently of its contents (i.e., the physical constituents of the "clockwork
universe") and absolute time as an externally running chain of intervals
that pass by at the same rate for everyone and everything in the entire
universe.
Then, at the end of this list, we will find what is today almost
unanimously considered to be one of the great highpoints in the history
of physics and, thus, the absolute climax in the history of thinking about
time, namely, (6) Einstein's relativistic physics, which, due to Minkowski's
block universe interpretation, led to the now well-established belief that
nature is actually a giant 4-dimensional spacetime continuum in which
all of eternity exists together at once as a huge static and timeless expanse.
That is, following the line of reasoning in the block universe
interpretation, which argued that the relativity of simultaneity necessarily
involved the unreality of time passing by, many physicists became convinced
that our experience of time had to be illusory. Supported by the wave of
public enthusiasm that followed Arthur Eddington's confirmation of
Einstein's prediction of the bending of star light around the sun (1919),
this belief in the unreality of time grew in popularity until it reached the
status of a logical necessity-a rock-solid truth.
6 PROCESS STUDIES SUPPLEMENT 24 (2017)
box, namely Reg Cahill's process physics, arrived on the scene and ever
since it has managed to grow into a serious habit-centered alternative to
contemporary mainstream physics. As such, process physics aims to model
the universe from an initially orderless and uniform pre-geometric pre-
space by setting up a stochastic, self-referential modeling of nature. In
process physics, all self-referential and initially noisy activity patterns
are "mutually in-formative" in the sense that they are actively making a
meaningful difference to each other (i.e., "in-forming" or "actively giving
shape to each other").
Through this internal, habit-establishing "co-informativeness," process
physics is able to avoid the info-computational approach of externally
imposed "pre-coded" symbol systems that seem to cause so much trouble
in mainstream physics. Also, due to this system-wide mutual in-formativeness,
the initially undifferentiated activity patterns can act as "start-up seeds"
that become engaged in self-renewing update iterations (see section 5.3.3
for further details). In this way, the system starts to evolve from its initial
featurelessness to then "branch out" to higher and higher levels of
complexity-all this according to roughly the same basic principles as a
naturally evolving neural network.
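In Cahill's published scheme, these update iterations take the form of a stochastic matrix map, Bij → Bij − α(B + B⁻¹)ij + wij, where the Bij are relational connection strengths between nodes and wij is "self-referential noise." A minimal sketch of one such run (the matrix size, α, and noise amplitude below are illustrative assumptions, not values from the process-physics papers):

```python
import numpy as np

# Sketch of the stochastic, self-referential iteration
#   B_ij -> B_ij - alpha * (B + B^-1)_ij + w_ij
# B holds relational connection strengths; w is self-referential noise.
rng = np.random.default_rng(0)
n, alpha, sigma = 20, 0.05, 0.01   # illustrative parameters

w = sigma * rng.standard_normal((n, n))
B = w - w.T                        # antisymmetric, near-featureless start-up seed

for _ in range(100):
    w = sigma * rng.standard_normal((n, n))
    B = B - alpha * (B + np.linalg.inv(B)) + (w - w.T)

# After iterating, some entries B_ij stand out as strongly connected node pairs
print(np.abs(B).max() / np.abs(B).mean())
```

The B⁻¹ term is what makes the map self-referential: when the connections are nearly featureless, the inverse is large, so the system cannot settle into featurelessness but keeps generating structure out of its own noise.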
Because of this self-organizing branching behavior, the process system
can be thought of as habit-bound with a potential for creative novelty and
open-ended evolution. Furthermore, nonlocality, three-dimensionality,
gravitational and relativistic effects, and (semi-)classical behavior are
spontaneously emergent within the system. Also, the system's constantly
renewing activity patterns bring along an inherent present moment effect,
thereby reintroducing time as the system's "becomingness." As a final
point, subjectivity-in the form of "mutual informativeness" (which is
also used in Gerald Edelman's and Giulio Tononi's extended theory of
neuronal group selection to explain how higher-order consciousness can
emerge)-is a naturally evolving, innate feature, not a coincidental, later-
arriving side-effect.
1.1 Getting to know process physics in terms of time, life, and consciousness
In order to properly introduce process physics, an outline of our
contemporary mainstream physics and its problems must first be given.
Therefore, in Chapter 2, titled "Time," we will discuss the most important
technicalities having to do with how mainstream physics deals with time.
To be more specific, we will first take a look at the role of time in Aristotle's
2. Time
Although time has played a major role in physics ever since the early
1600s when Galileo started to specify it in terms of chronologically
arranged intervals along a geometrical, unidirectional line, there is still
no common agreement on what it actually is (see Davies, About Time,
279-283; Davies, "That Mysterious," 6-8; Smolin, Time, 240-242). Despite
the impressive theoretical and technological progress over the last four
hundred years, physicists and philosophers alike continue to be troubled
by the elusiveness of time and our incomplete understanding of it. So, for
the sake of clarity, let us first try to reconstruct how the concept of time was
historically introduced into physics, and see how it developed over the years.
In order for this to work, Aristotle argued that the regularity in the motion of
stars and planets, which could be witnessed every night when looking up
to the nocturnal sky, was a sign of the explicitly geometrical nature of
steadily rotating celestial spheres. So, although Aristotle dismissed Plato's
realm of ideal forms, he still recognized geometry as one of the most
essential sciences. Particularly, he believed the sphere to be the most
perfect of all geometrical shapes. After all, to the naked eye, the heavenly
bodies clearly seemed to move around the Earth in an explicitly circular
manner. The combination of these two was then more than enough "proof"
for Aristotle to crown the celestial sphere as the pristine, perfect, geometry-
abiding part of nature.
Through their eternal sameness and perfectly circular regularity, the
celestial spheres far outstripped the everyday imperfection of our earthly
domain where all things die and decay without having the permanence of
the heavens. It was in this sense that geometry had a central place in
Aristotle's cosmology and that its influence was passed on for centuries
on end in support of the Aristotelian cosmology and teleological physics
(see section 2.1.1).
Aristotle's body of thought eventually turned out to dominate much
of the almost two millennia that followed. With his detailed and careful
study of the behavior of falling objects, however, Galileo Galilei (1564-1642)
basically established the blueprint for what-through the later work of
Newton and Einstein, among others-has now become our modern
mainstream physics. In Galileo's days, as it had been in the ancient Greek
era, geometry was still the preeminent piece of equipment in the scientific
tool box. Therefore, the accounts of motion that were formulated by
Galileo's contemporaries would typically be based on geometry, if not
directly then at least indirectly. Accordingly, all things having to do with
motion first had to be looked at through the filter of geometry, for instance,
by comparing traveled distances of thrown projectiles or the depth of
impact pits left behind by falling objects.
In line with the naive belief in faster-falling heavy objects, Aristotle's rule
was converted into a neat quantitative expression, relating weight and
speed to each other in a proportional way: v ∝ W.
The technicalities that came with this expression arguably kept Aristotle
and his followers busy enough to overlook the fact that it was actually
quite wrong. Initially, these ratios were only used in an after-the-fact
manner. But, with time, it became apparent that falling objects started
from a resting position and had to pick up their pace upon release, instead
of immediately dropping down at full speed.
This is when it was decided that the speed of the objects would have
to depend on the distance covered. Accordingly, it was concluded that
falling objects would increase their speed the deeper they fell. What is
particularly noteworthy in this case is that the buildup of speed was not
being linked with the lapse of time, but with the distance covered.
To be able to understand the motives behind the linkage of speed with
covered distance rather than with time elapsed, we will first have to look
into Aristotle's thoughts of how movement and time were related with
each other:
[B]ecause movement is continuous so is time; for (excluding
differences of velocity) the time occupied is conceived as
proportionate to the distance moved over. Now, the primary
significance of before-and-afterness is the local one of "in front of"
and "behind." There it is applied to order of position. But since there
is a before-and-after in magnitude, there must also be a before-and-
after in movement in analogy with them. But there is also a before-
and-after in time, in virtue of the dependence of time upon motion.
Movement, then, is the objective seat of before-and-afterness both in
movement and in time; but conceptually the before-and-afterness is
distinguishable from movement. Now, when we determine a movement
by defining its first and last limit, we also recognize a lapse of time;
for it is when we are aware of the measuring of motion by a prior and
posterior limit that we may say time has passed. And our
determination consists in distinguishing between the initial limit and
the final one, and seeing that what lies between them is distinct from
both; for when we distinguish between the extremes and what is
between them, and the mind pronounces the "nows" to be two-an
initial and a final one-it is then that we say that a certain time has
passed; for that which is determined either way by a "now" seems to
Table 1: Galileo's inclined-plane data, with equal time intervals Δt

  time t    s as measured (in "points")    s/k (calculated with k = 33)    t squared
  t1 = 1      33                            1.00                           1² = 1
  t3 = 3     298                            9.03                           3² = 9
  t4 = 4     526                           15.94                           4² = 16
  t6 = 6    1192                           36.12                           6² = 36
  t7 = 7    1620                           49.09                           7² = 49

[s-t diagram: traveled distance (0-2000 points) plotted against time t]

Fig. 2-1: Bronze ball rolling down an inclined plane (with s-t diagram)
Alfred North Whitehead: "This passage of events in time and space is merely the
exhibition of the relations of extension which events bear to each other,
combined with the directional factor in time which expresses that ultimate
becomingness which is the creative advance of nature" (Whitehead, PNK,
63). Moreover: "We habitually muddle together this creative advance,
which we experience and know as the perpetual transition of nature into
novelty, with the single-time series which we naturally employ for
measurement" (Whitehead, CN, 178). Here, the word "naturally" should
perhaps better be replaced by "routinely." After all, by following Galileo's
first example of presenting such a single-time series in a table (as in Table
1, above), we are entirely taking for granted all the idealizations and
simplifications that in fact enabled him to present time as a unidirectional
geometrical line. In other words, we are thus accepting the authority of
tradition without really questioning Galileo's hidden presumptions.
Within certain contexts of use it may perhaps be quite convenient to
interpret space and time in terms of geometrical dimensions, but these
interpretations should not be taken so literally as to impose them onto the
process which is nature. We should not mistake our abstractions for reality,
so we should always remain critical towards claims that the physical real
world should sit within space and time, or exist as a 4-dimensional spacetime
continuum. At the end of the day, space and time, although all too often
interpreted as actually existing, geometrically specifiable dimensions, are
in fact artifacts of the human intellect that follow directly from the nature-
dissecting mindset on which our still well-established tradition of doing
"physics in a box" is based. This does not mean, however, that space and
time are mere illusions, but rather that what we have come to think of as
"space-time" is actually an intrinsic aspect of the process of nature and
can thus not be usefully reflected upon without taking nature's processuality
into account.
After all, while nature is all about change, process, action, evolution, etc.,
the timeline by itself is as static as can be (Cahill, "Process Physics: Self-
Referential," 2).
Furthermore, the geometrical timeline does not allow for a present
moment effect. The timeline doesn't have a unique and indisputable Now
which will automatically come to the fore during use. This lack of a
dedicated present moment is in fact a shortcoming that even Einstein had
become quite concerned about in his later years. As Rudolf Carnap reported:
Once Einstein said that the problem of the Now worried him
seriously. He explained that the experience of the Now means
something special for man, something essentially different from the
past and the future, but that this important difference does not and
cannot occur within physics .... Einstein thought that scientific
descriptions [whether they be formulated in physical or in
psychological terms] cannot possibly satisfy our human needs; and
there is something essential about the Now which is just outside the
realm of science. (Carnap 37-38)
To us, conscious human beings, things happen in a particular order.
That is, things seem to change as they pass from the present into the future,
thus leaving their past "behind" them. Accordingly, when left to its own
devices, nature typically likes to follow the path of irreversibility. Water
does not spontaneously flow upwards against the slope of a mountain;
milk will not unmix itself from the coffee in which it is poured; and, as
we will all learn in life, we do not grow younger as we age. In classical
thermodynamics-although gas particles in bulk tend towards disorderliness,
thus leading to a preferred direction of time-the microscopic laws
describing the collisions of individual particles are time-symmetric,
indifferent to any distinction between a back or forward direction of time.
Accordingly, at least until the advent of quantum mechanics, all the then
known "laws of nature" were time-reversible, and to this day most of them
still are:
The reversibility of basic physical processes comes from the time
symmetry of the laws that underlie them. This time-reversal
symmetry is usually denoted by the letter "T." You can think of T as
an (imaginary) operation that reverses the direction of time-i.e.,
interchanges past and future. Time-symmetric laws have the property
that when the direction of time is inverted the equations that describe
them remain unchanged: they are invariant under T. A good example
is provided by Maxwell's equations of electromagnetism, which are
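The T-symmetry described in this passage can be made concrete with a toy Newtonian system: integrate a frictionless oscillator forward, apply T by flipping the velocity, integrate again with the very same equations, and the motion retraces itself. A minimal sketch (the oscillator and step count are illustrative choices, not taken from the text):

```python
# Frictionless harmonic oscillator, integrated with the time-symmetric
# leapfrog (velocity Verlet) scheme.
def simulate(x, v, steps, dt=0.01, k=1.0):
    for _ in range(steps):
        v += -k * x * dt / 2          # half kick
        x += v * dt                   # drift
        v += -k * x * dt / 2          # half kick
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = simulate(x0, v0, 1000)       # run forward
x2, v2 = simulate(x1, -v1, 1000)      # T: flip velocity, run the same law again
print(abs(x2 - x0) < 1e-9 and abs(-v2 - v0) < 1e-9)   # → True
```

Nothing in the equations of motion distinguishes the two runs; the "arrow" enters only through the state we feed in, which is the point the quoted passage is making about time-symmetric laws.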
squares of the times, and this was true for all inclinations of the
plane, i.e., of the channel, along which we rolled the ball. (Galileo,
Dialogues, 178-179)
In so doing, Galileo found that the traveled distance was directly proportional
to time squared:
s ∝ t²
This expression contains two variables, s for distance, and t for time, and
says that with each elapsed time interval, the traveled distance increases
quadratically. This relation may indeed seem quite obvious, since rolling
down the ramp's entire length will take the bronze ball twice the time that
is needed for the first quarter distance. But in order to really make sure
that his hypothesis would stand the test of time, it seems that, further on
into the experiment, Galileo decided to replace the not so precise water
clock with the more reliable methodology of metronome-like transit sounds
made by strategically placed frets (i.e., "speed bumps") or alarm bells.
Because this method, due to "double calibration," ensures a high level
of accuracy in making equally long time intervals, it becomes possible to
introduce a standardized unit of time. And although this method left their
actual size unspecified-or simply one (i.e., 1.00) by default-it led to
all time intervals having the same duration with a very small margin of error:
The phrase "measure time" makes us think at once of some standard
unit such as the astronomical second. Galileo could not measure time
with that kind of accuracy. His mathematical physics was based
entirely on ratios, not on standard units as such. In order to compare
ratios of times it is necessary only to divide time equally; it is not
necessary to name the units, let alone measure them in seconds. The
conductor of an orchestra, moving his baton, divides time evenly with
great precision over long periods without thinking of seconds or any
other standard unit. He maintains a certain even beat according to an
internal rhythm, and he can divide that beat in half again with an
accuracy rivaling that of any mechanical instrument. (Drake 98)
It was because of this increased measurement accuracy that the proportionality
constant k, relating distance and time to each other, could be determined
with ample accuracy:

s = k · t²  (with k ≈ 33 in Galileo's "points" per squared time interval; see Table 1)
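The value of k can be recovered from the measurements in Table 1 by a least-squares fit of s = k·t²; a minimal sketch (distances in Galileo's "points", times in his equal intervals):

```python
# (t, s) pairs from Galileo's inclined-plane data (Table 1)
data = [(1, 33), (3, 298), (4, 526), (6, 1192), (7, 1620)]

# Least-squares estimate of k in s = k * t^2:
#   k = sum(s * t^2) / sum(t^4)
k = sum(s * t**2 for t, s in data) / sum(t**4 for t, _ in data)
print(round(k, 2))   # → 33.07
```

That all five measurements agree with a single constant to within a fraction of a percent is precisely what licensed Galileo's claim that the distance-time relation is quadratic.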
Newton was able to lay down a basic framework for describing the motion
of all of nature's physical objects within the earthly as well as the heavenly
domain. The equations dealing with the gravitation of ordinary objects
VAN DuK/Process Physics, Time, and Consciousness 29
letting them be processed by "the laws that be" is still the way to go (see
Smolin, Time, 50). And even though the strict Laplacian determinism
turned out to be untenable, the general Newtonian mode of operation
is still alive and well. Nowadays, rigorous determinism is not a realistic
expectation anymore, but empirical agreement has taken its place. That
is, when a physical equation achieves empirical agreement with the system
it is held to portray, it is typically thought to closely follow the target
system's behavior. Accordingly, the physical equation is customarily
considered to represent the system at hand-if not in a direct
chronological sense, then at least statistically (Van Dijk, "The Process").
[Figure, panel a: timeline diagram running from past to present to future]
4. Hence, there is nothing to divide the past from the future (see
Capek 507);
friendly" work has not been able to bring about any major reputation-
shattering crisis in mainstream physics. Nonetheless, these efforts should
definitely be taken seriously-if only to provide us with new angles and
ideas on how to tackle the many unresolved matters in physics and science
as a whole.
However helpful the work of the above-mentioned researchers may
be in getting a more process-oriented perspective on the physical sciences,
for now let us focus on some specific objections against the initial
assumptions, the idealizing abstractions, and also the eventual interpretation
of these abstractions as used in Minkowski's timeless block universe
framework. The line of reasoning from initial assumptions to the interpretation
of nature as a 4-dimensional block universe can be roughly summarized
as follows:
2.5.1 The real world out there is objectively real and mind-independent (or not?)
For the sake of argument, let's try to add some nuance to this seemingly
watertight step-by-step analysis. As noted, it is already quite a firm and
assertive statement to postulate an observer-independent "real world out
there." There seems to be more than enough reason to opt for another
initial assumption. For instance, quantum mechanics suggests that our
observational participation in quantum experiments plays an indispensable
role in the physical world. That is, quantum events are thought to exist in
a state of superposition when not being observed, while exposure to
observation will make this superposition "collapse" into one specific state.
Yet the more or less unanimously held assumption within the physical
sciences is that our natural universe is an entirely physical world. Minkowski
thought that this physical world had an absolute existence as a static four-
dimensional continuum. That is, although space-time allows observers to
adopt different reference frames, it exists objectively and entirely independent
of the mind. In his "world postulate" (Weinert 169), Minkowski put
forward that the four-dimensional geometry of his abstract space-time
construct perfectly matched the "architecture" of the "real world out
there"-which he simply referred to as "world" (Minkowski 83). This
point of view meant that one could basically treat the world as a Euclidean,
real coordinate space R4 with coordinates (x, y, z, t):
A four-dimensional continuum described by the "co-ordinates" x1, x2,
x3, x4, was called "world" by Minkowski, who also termed a point-
event a "world-point" .... We can regard Minkowski's "world" in a
tiny space occupied by our eyes will in some sense "contain" the goings-
on of vast portions of the cosmos-and thus, indirectly, of nature-as-a-
whole. The photons entering the pupil and hitting the retina may have
been in transit for billions of years, emitted by stars varying widely in
their historical appearance within the universe as well as in their distance
from our home planet earth. Nonetheless, all those photons come together
within the eyeball, thus "informing" and making a difference to the
conscious observer in question. As such, these photons are not just informing
the organism in a way that fits the scheme of classical information theory
(as if they were prestated signals that are fed into an input port to
eventually "inform" the end station, which is often thought of as the brain's
CPU-like center of subjectivity), but as we will see later on in Section
4.2.4, they are seamlessly integrated participants in the organism's
perception-action loops-the cyclic stream of experience-through which
the process of subjectivity is kept on the go.
The process of experience can be taken to involve the universe at
large-from its earliest beginnings to its most recent occurrences. This
does not just mean that any arbitrary conscious observer should be able
to obtain sensory information about the distribution of stars and galaxies
across the universe; rather, it amounts to much more than that. After all,
the absolutely critical characteristic of all these light-emitting stars (and
supernovae) is that they gave birth to all the chemical elements necessary
for life to be possible at all. That is, without their role as natural fusion
reactors, forming chemical elements during nucleosynthesis, the odds of
our current biological world having been able to appear would have been
zero. Not only is our natural universe the generative source of all the
chemical elements that have eventually enabled conscious life, but all
evolved conscious organisms get to sculpt their conscious nows-the
conscious twosome of self and scenery (see Sections 4.2.4 and 4.3 for a
more detailed account)-by entering into cyclic interaction with the same
realm from which they themselves have emerged.
Conscious organisms can thus be thought of as seamlessly integrated
endo-processes within a greater embedding omni-process (the universe
as a whole). As such, they can reflexively form a conscious now (see
Velmans 328) through the interplay between their own biological body
states and their embedding living environment. In this way, we are in fact
seamlessly embedded organisms-"equipped" with a memory-based,
anticipatory conscious now which enables us to navigate, to live from,
and to make sense of, the larger embedding universe that forms our home
as well as the "stuff" that plays its part in our experiences. All things
considered, we are ultimately nothing less than active participants in the
reflexive process through which nature experiences itself (see Velmans
327-328; Van Dijk, "An Introduction," 81).
Hence, instead of labeling nature as an external "real world out there,"
it is far more helpful to treat it as a fundamentally indivisible whole in
which the "inner life" of conscious organisms and the "outer world" of
natural phenomena are actually two intimately related aspects of the same
overarching psychophysical process. At the end of the day, all the above
should be more than enough to admit that subjectivity has to play a crucial
role in nature. Instead of adopting the detached and disembodied "point
observers" from Minkowski's block universe interpretation, with their
equally detached perspectives on an otherwise entirely "physical real
world out there," we should rather think of observers as seamlessly
embedded, living, and utterly involved participants. In other words, we
should consider them intimately nested endo-processes within the greater
omni-process of nature from which they ensue. Accordingly, we should
not think of ourselves as being the detached end-destination of passive
incoming information-whether empirical or sensory. Rather, we just as
much in-form, affect, give shape, and make a difference to the process of
nature as the process of nature informs and makes a difference to us.
Reminiscent of, but not entirely synonymous with, David Bohm's concept
of "active information" (Bohm and Hiley 35-37), all this can be said to
occur in an order of active mutual informativeness (see Van Dijk, "An
Introduction," 75; also "The Process").
The latter achievement was left for Galileo to flesh out. And by doing
so, he managed to put together a framework that was now thought to be
fully geometric. Thanks to Galileo's efforts, the scientific relevance of
geometry eventually grew to unprecedented heights, thereby basically
pushing the Aristotelian conception of a purposeful cosmos from the
throne. In fact, Galileo was so enthused by the outcomes of his
experiments that he practically became a "crusader for geometry." As
such, he felt that mathematics-which to him was identical to
geometry-should be crowned as the perfect and infallible language of nature:
Philosophy is written in this grand book-I mean the
universe-which stands continually open to our gaze, but it cannot be
understood unless one first learns to comprehend the language in
which it is written. It is written in the language of mathematics, and
its characters are triangles, circles, and other geometric figures,
without which it is humanly impossible to understand a single word
of it; without these, one is wandering about in a dark labyrinth.
(Galileo, "The Assayer")
Later on, when Newton elaborated on Galileo's work and came up with
his laws of motion and gravitation, the massive empirical success basically
silenced all critics (with the exception of his main opponent, Gottfried
Leibniz, and his followers). As a result, the geometrical timeline and the
associated method of time-based physical equations ended up on a high
pedestal, and once in this privileged position they could quite easily
convince virtually anyone that spatial and temporal geometry were indeed
genuine aspects of nature itself, rather than idealizing abstractions.
Minkowski's four-dimensional geometrical construct is ultimately an
elaboration of Newtonian absolute space and time, which, in turn, was
a keen extension of Galileo's seminal work. By fusing space and time
together into one, Minkowski obtained a 4-dimensional continuum that
could provide a quite straightforward framework for the relativity of
simultaneity to make sense. Despite spacetime's origination from Newton's
and Galileo's earlier geometrical understanding of space and time, the
final knockout of Newton's absolute space and time became an established
fact.
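What the fusion buys can be checked numerically: under a Lorentz boost the coordinates t and x of an event change, and simultaneity is frame-dependent, yet the spacetime interval t² − x² stays invariant. A sketch in units where c = 1 (the event coordinates and boost speed are illustrative):

```python
import math

def boost(t, x, v):
    """Lorentz boost along x with velocity v, in units where c = 1."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

t, x = 2.0, 1.0
t2, x2 = boost(t, x, 0.6)
print(abs((t**2 - x**2) - (t2**2 - x2**2)) < 1e-12)   # interval invariant → True

# Relativity of simultaneity: two events with equal t in one frame
# get different time coordinates in the boosted frame
ta, _ = boost(1.0, 0.0, 0.6)
tb, _ = boost(1.0, 5.0, 0.6)
print(ta != tb)                                        # → True
```

It is exactly this pair of facts, no invariant "now" but an invariant interval, that made the four-dimensional continuum such a natural home for special relativity.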
But Minkowski's method basically consisted of refurbishing Newton's
notions of absolute space and time, using them as the semi-finished source
materials for his end product, which is the four-dimensional spacetime
continuum. Minkowski, working from Einstein's special theory of relativity,
[Fig. 2-3: Minkowski space-time diagram, showing the Here-Now, the past causal light cone, and the "Elsewhere" regions]
that observers at, say, 10 billion light years from earth would be able to
make their "now slice" coincide with what we would call our remote
future-just by leisurely moving towards us (Greene 134-138). This,
according to him, is simply an inescapable consequence from the fact that
we live in a static block universe.
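The size of the effect Greene invokes follows from the simultaneity offset between two inertial frames, Δt = v·d/c²: at a distance of 10 billion light years, even a walking pace tilts the "now slice" by decades. A rough numerical check (the walking speed is an illustrative assumption):

```python
# Simultaneity shift Δt = v * d / c² for a slowly moving, very distant observer
c = 2.998e8                    # speed of light, m/s
light_year = 9.461e15          # meters
d = 1e10 * light_year          # 10 billion light years
v = 1.4                        # walking speed in m/s (illustrative)

dt_seconds = v * d / c**2
dt_years = dt_seconds / 3.156e7
print(round(dt_years, 1))      # → roughly 47 years
```

The effect is tiny per meter per second of relative speed, but the enormous distance multiplies it into a humanly significant span, which is what gives the block-universe argument its rhetorical force.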
There is nonetheless good reason to doubt this conclusion. The
argument for the static block universe seems to depend largely on
what Whitehead called the fallacy of misplaced concreteness: the confusion
of nature itself with our theoretical abstractions of it. In this case, the
confusion is (a) between geometric conceptions of space and time, on the
one hand, and the process of nature, on the other, and (b) between
point-events and point-observers and actual events and live observers. On
top of that, a further level of complication makes things even more
tangled. Other theoretical abstractions in special
relativity-such as ideal clocks, ideal measuring rods,48 four-dimensional
coordinate systems R4, space-time diagrams, causal light cones, photons
that basically serve as bits of information, and so on-are typically used in
further interplay with one another, thus yielding a level of meta-abstraction
which can then again be quite easily confused with the actual process of
nature as well. For instance, Alfred A. Robb argued in 1914 that:
The work of Minkowski is purely analytical and does not touch on
the difficulties which lie in the application of measurement to time
and space intervals and the introduction of a coordinate system. As
regards such measurement, one cannot regard either clocks or
measuring rods as satisfactory bases on which to build up a
theoretical structure such as is required in this subject. One knows
only too well the difficulty there is in getting clocks to agree with one
another; while measuring rods expand or contract in a greater or
lesser degree as compared with others. ... It is not sufficient to say
that Einstein's choices are ideal ones: for, before we are in a position
of speaking of them as being ideal, it is necessary to have some clear
conception as to how one could, at least theoretically, recognize ideal
clocks and measuring rods in case one were ever sufficiently
fortunate as to come across such things; and in case we have this
clear conception, it is quite unnecessary, in our theoretical
investigations, to introduce clocks or measuring rods at all. (Robb 13)
Notwithstanding criticism like this, Eddington's solar eclipse experiment
and other experiments following in its wake turned out to agree so well
with the predictions of Einstein's relativistic physics that most people in
relationships between distance and time, Newton was able to add force
and mass into the mix, and in so doing developed his more advanced
Newtonian equations of motion and his universal law of gravitation. In
this way he provided a multipurpose set of equations that could be used
not only to describe the movement of ordinary objects on earth, but also
those of heavenly bodies,53 thus leading to the famous "clockwork universe"
worldview in which everything is determined from beginning to end. This
even motivated Pierre-Simon Laplace to claim that it should in principle
be possible to calculate any future state for all of nature if only we were
given "at a certain moment...all forces that set nature in motion, and all
positions of all items of which nature is composed" (Laplace 4).
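Laplace's claim admits a compact illustration. The following sketch (my own, not from the source; all names and values are hypothetical) integrates Newton's second law for a single falling body, showing how an initial condition together with a force law fixes every later state:

```python
# Hedged illustration of Laplacean determinism: given an initial state
# (position, velocity) and a force law, Newton's second law F = m*a
# fixes every later state of this one-particle toy system.

def evolve(x, v, force, mass, dt, steps):
    """Semi-implicit Euler integration of Newton's second law."""
    for _ in range(steps):
        a = force(x) / mass   # acceleration from the force law
        v += a * dt           # update velocity
        x += v * dt           # update position
    return x, v

# Toy "law of nature": uniform gravity near the earth's surface.
g = 9.81
x, v = evolve(x=0.0, v=0.0, force=lambda x: -g * 1.0, mass=1.0,
              dt=0.001, steps=1000)
# After 1 s of simulated free fall, x and v are fully determined by
# the initial conditions and the force law alone.
```

The determinism Laplace had in mind is exactly this: nothing in the loop depends on anything but the given state and the given law, so the whole future is implicit in the present.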
Nowadays we may no longer be committed to such a strict determinism,
but when push comes to shove we still seem to think of our physical
equations as being essentially deterministic. Over the years mainstream
physics has thus developed the belief that it should in principle be possible
to predict any system's temporal evolution with perfect precision. If not
yet today, then at least not too long from now, any system's future state
should thus be attainable from its initial conditions and the laws of nature
that are thought to govern the system's behavior (Smolin, Time, 94). 54 In
this manner a system of interest is typically singled out from its environment,
then some convenient intermediate state is chosen to serve as its initial
condition, after which the system is put through the wringer of natural law.
Just as the average audience usually does not pay too much attention to
the stage-building preparatory work of the theater crew, most working
physicists typically remain quite indifferent to the very elementary acts
of decomposition that actually enable them to do physics in the first place.
Although mostly taken for granted once they have become accomplished
facts, these elementary acts of decomposition basically serve to reduce
nature to a compact and well-defined universe of discourse within which
physics can be made to work. To this end, most contemporary mainstream
physicists commit to the following elementary acts of decomposition,
usually without even realizing that they are relying on several a priori,
and thus at root hypothetical, nature-dissecting cuts (Van Dijk, "The
Process"):
• the decomposition of nature into target side and complementary subject side;
• the decomposition of the subject side into the conscious observer's
"center of subjectivity" and its observation-enabling support systems,
measurement instruments, etc.;
• the decomposition of the target side into relevant system-to-be-observed
and irrelevant system environment;
• the decomposition of the system-to-be-observed into its "constituent elements."56
Last, but not least, a good case can be made that these decompositions
are to be accompanied by some further, more controversial ones. The
controversy relates especially to the fact that our current mainstream
physics holds that nature is entirely timeless and thus basically non-
processual, whereas these remaining decompositions suggest that it is not
nature that is non-process, but contemporary mainstream physics itself.
Following this suggestion, below we will start out with the process
of nature. However, once our nature-dissecting gaze has partitioned the
process of nature into its alleged constituent elements, it can be looked at
as if there is no processuality left. That is, Galileo and Newton
basically set the stage for the expulsion of both time and processuality in
the later Einsteinian-Minkowskian block universe interpretation. After
all, by decomposing the process of nature into spatial and temporal
dimensions, they enabled Einstein to later fuse space and time together
"again" into one spacetime continuum.
However, this fusing together of space and time, when looked at from
a process perspective, may just as well be explained as a not entirely
VAN DIJK/Process Physics, Time, and Consciousness 63
successful attempt to glue together what should not have been taken apart
in the first place-namely, the undivided process of nature. Nonetheless,
the Einsteinian-Minkowskian block universe interpretation presented
nature as one giant block universe with past, present, and future frozen
solid into one static whole devoid of any unique and exclusive present
moment. Starting from the fundamental assumption that nature is inherently
processual, this would then require the following acts of decomposition:
equally dubious, because they are so intimately related with the target-
subject split:
The concepts "system," "apparatus," "environment," immediately
imply an artificial division of the world, and an intention to neglect,
or take only schematic account of, the interaction across the split.
The notions of "microscopic" and "macroscopic" defy precise
definition. So also do the notions of "reversible" and "irreversible."
Einstein said that it is theory which decides what is "observable." I
think he was right-"observation" is a complicated and theory-laden
business. Then that notion should not appear in the formulation of
fundamental theory. Information? Whose information? Information
about what? On this list of bad words from good books, the worst of
all is "measurement." It must have a section to itself. (Bell,
"Against," 34)
The following question arises: could there be a way around all those feeble
foundations and their ambiguous behavior in experimental practice? A
first clue can be found by looking at how the original unbroken wholeness
of the target world to be observed and the observing subject world is already
torn apart in the pre-measurement stage. That is, in both classical and quantum
physics, there is a well-established tradition of "apartheid" that comes so
naturally to physicists that they usually do not even think about it once, let
alone twice. This "apartheid" starts with the Galilean cut, by separating
the so-called "objective" and "subjective" aspects of observation from
each other in order to enable the explicitly quantitative way of doing
exophysical-decompositional physics.
As mentioned in Section 3.1.1, Galileo ("The Assayer," 274) paved
the way for mathematical physics by throwing overboard all subjective
and qualitative "secondary" aspects of observation (such as the colorfulness
of colors, the touch of textures, and the smell of scents). Since these aspects
cannot be quantified, tabulated, plotted against time, and turned into
mathematical relations, Galileo decided they should belong to the realm
of consciousness. None of his predecessors had ever felt the need to get
rid of all these sensory qualities; Galileo was thus the first to draw this
cut between the "world of objectivity" and that of
"subjectivity":
[T]he supposition that material objects instantiate sensory qualities,
such as colours, shapes and odours, is incompatible with their having
an entirely mathematical nature. And hence it was necessary to strip
physical objects of their sensory qualities in order to make it
intelligible to suppose that the physical world could be completely
captured in mathematics .... However, for all its virtues, physics has
never been in the business of giving a complete description of reality.
It aims to give a mathematical description of the fundamental causal
workings of the natural world. The formal nature of such a
description entails that it necessarily abstracts not only from the
reality of consciousness, but from any other real, categorical nature
that material entities might happen to have. (see Goff)
Although the embargo on sensory qualities made sure that only quantifiable
aspects (such as location, size, and weight) were taken into account, all
mathematical physics that followed after Galileo basically got burdened
with a built-in, hidden dualism. As long as we stick to Galileo's cut, all
kinds of problems related to this hidden dualism will continue to plague
physics. For instance, the necessity to set up a universe of discourse with
a dedicated subject side is fundamentally at odds with any attempt to apply
our physical equations to nature as a whole. For, in that case, although it
is typically taken for granted in routine, small-scale measurement situations,
the entire subject side (including measurement gear, such as clocks,
measuring rods, and also the conscious observer) has to be located outside
the natural universe, which is impossible (see Smolin, Time, 46 and 80).
Nonetheless, in quantum physics, not only is the Galilean cut required
to enable the mathematization of observed events, but "quantum particles"
also need to be "soaked loose" from their embedding environment (see
De Muynck 74-75, 83, 90-91, 94) before they can be submitted to
measurement by using, say, a bubble chamber, a photographic plate, or
some strategically positioned photodiodes.
Were we to retrace our steps, by trying to "re-submerge" those "quantum
particles" back into their embedding environment and undo the seminal
Galilean cut, what kind of decomposition would still enable us to get a
hold of nature? Would we be forced to fall back on the naked
eye-unprejudiced and unmediated by technological-mathematical tools?
Or would this still not enable us to stop putting nature through the filter
of our nature-dissecting intellect? To be sure, we are ourselves seamlessly
part of the same process we are trying to make sense of. We can therefore
expect that our scientific reasoning will always have an element of
subjectivity in it, no matter how hard we try to avoid it.
As Max Planck put it: "Science cannot solve the ultimate mystery of
nature. And this is because, in the last analysis, we ourselves are part of
nature and, therefore, part of the mystery that we are trying to solve"
(Planck 217).
For this reason, when setting out to solve this mystery, we first need
to find out how we, seamlessly embedded observers, get to sculpt the
world in which we live into "conscious information" and become "knowers
of knowledge"-whatever that may turn out to mean exactly. Sections
4.3 and 4.3.1 will give a more detailed discussion of how this could work.
For now, however, we will focus on the special kind of decomposition
that seems to be necessary (but perhaps not sufficient) to pull off such a
project. It is special in the sense that it can be thought of as a form of
"nondecompositional decomposition" since it pertains to the coming-to-
the-fore of seamlessly embedded endo-processes from within a greater
embedding ecosystem of background processuality:
[Figure residue: diagram of the chain "Natural system → Measuring instrument → Conscious observer."]
typically taken for granted once a certain measurement practice has reached
maturity (see Van Fraassen 138-139; Van Dijk, "The Process").
Accordingly the involvement of all kinds of subjective qualitative
choices about competing measurement theories, background assumptions,
initial conditions, etc., is thought to become largely irrelevant once the
long-aspired ideal of picture-perfect agreement between empirical data
and algorithm comes into sight. And although such a complete and absolute
empirical fit may be impossible in practice, it is widely believed that
increasing measurement precision will allow the observer to approach it
to the limit, thus progressively approximating the perfectly isomorphic
correlations that are anticipated in representational theory.
According to this representational view-which is basically inherent
to the exophysical-decompositional paradigm-empirical data can be
mined for regularity-exhibiting patterns that are thus held to be objectively
informative about the lawful regularity of nature itself; all this regardless
of any possible later conceived interpretations. In line with Robert Rosen's
analysis (Life Itself, 58-59) exophysical-decompositional physics is possible
only by adherence to the following two premises:
• First, there must be lawfulness to nature. That is, orderly causal relations
are thought to hold between nature's observable and even its
unobservable events.
• Second, it is supposed that these causal relations can be communicated,
at least partially if not entirely, by way of informational relations
that can be established between "nature as observed" and the current
conscious observer in charge.
Along these lines, these two principles of natural law boil down to nature
having an inherent orderliness associated with it. Arguably, then, this
allegedly inherent orderliness "can be matched by, or put into correspondence
with some equivalent orderliness within the self [i.e., the mind, or the
observer's 'center of subjectivity']" (Robert Rosen, Life Itself, 59). However
successful it has been in the past, this notion of natural law comes with
some not-to-be-underestimated limitations. In particular, it leads to a
worldview in which nature is being assessed exclusively in terms of its
effect on something else-for instance, on us, conscious observers, or on
measurement equipment-rather than in terms of what it is in and to itself.
Accordingly, physics will never be able to go beyond a phenomenology
[Figure residue: Rothstein's analogy between communication and measurement systems (cf. Fig. 3-2): a message is encoded into a signal and decoded back into a received message, just as a property of the system of interest is turned by the measuring apparatus and its indicator into a measured value for the observer, with an error source acting on the channel.]
whole ensemble of values, each of which could have given rise to the
one observed. An entropy can also be defined for this a posteriori
ensemble, expressing how much uncertainty is still left unresolved
after the measurement. We can define the quantity of physical
information obtained from the measurement as the difference
between initial (a priori) and final (a posteriori) entropies. We can
speak of position entropy, angular entropy, etc., and note that we now
have a quantitative measure of the information yield of an
experiment. A given measuring procedure provides a set of
alternatives. Interaction between the object of interest and the
measuring apparatus results in selection of a subset thereof. When the
results of this process of selection become known to the observer, the
measurement has been completed. (Rothstein, "Information," 172)
[Figure residue: panels of Fig. 3-3, steps towards Rosen's modeling relation, in which a natural system's causal process is linked to a formal system's inference via encoding and decoding arrows.]
Once the numerical values for the initial and all following conditions
are recorded, they are inferred to relate to one another by a mathematical
operation that closely matches this physical operation (see Fig. 3-3b). In
this way, it is supposed that the empirical data extracted from the so-called
"physical world" is represented by the mathematical world. In turn, the
mathematically calculated results will have to be verified against the next
pair of numbers in the sample of empirical data. And if sufficient agreement
between calculated and observed values cannot be found, the initially
applied mathematical operation for getting from one state to the next will
have to be revised in order to achieve empirical congruence between
natural and formal system. This basically amounts to a more detailed re-
interpretation of the sample of raw empirical data by the newly applied
mathematical operation.
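The revise-until-agreement loop described above can be caricatured in a few lines (a hypothetical toy of my own, not the author's procedure): candidate mathematical operations are checked against recorded distance-time data, and the operation that fails the empirical check is replaced by a better-fitting one.

```python
# Toy sketch of revising a "mathematical operation" until it reproduces
# the recorded empirical data. Here the data are noiseless distance-time
# pairs for a falling body, s = 4.9 * t**2; all numbers are hypothetical.

data = [(t / 10, 4.9 * (t / 10) ** 2) for t in range(1, 11)]

candidates = {
    "linear":    lambda t: 4.9 * t,       # first guess: s proportional to t
    "quadratic": lambda t: 4.9 * t ** 2,  # revised guess: s proportional to t^2
}

def misfit(op):
    """Sum of squared differences between calculated and observed values."""
    return sum((op(t) - s) ** 2 for t, s in data)

# "Revision" step: keep whichever operation best matches the data.
best = min(candidates, key=lambda name: misfit(candidates[name]))
# The linear operation fails the empirical check; the quadratic one passes.
```

Note that the loop only ever compares calculated numbers with recorded numbers; nothing in it touches nature directly, which is exactly the point made in the surrounding text.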
This complies very much with what we have already seen in Sections
3.2 to 3.2.2; namely, that the putting together of such a mathematical
representation is an act of abstraction. This means that any resulting
abstract equation should be converted back again into the empirical data
in order to confirm its agreement with measurement. After all, the raw
empirical data, if not the ultimate primary source of nature's information,
should at least be seen as the first level of information that is algorithmically
compressible. Accordingly, in the exophysical-decompositional paradigm,
it is actually those raw empirical data that make up the actual measurement
phenomenology of the process under investigation. And since we cannot
go beyond this phenomenology, we must realize that the thus achieved
empirical agreement is between "phenomenal" data and abstract algorithm,
not between nature and mathematics. This mode of perception-which
hinges exclusively on quantitative sense-perception, thereby excluding
all other, qualitative aspects of experience-was considered by Whitehead
to be a methodology that deals with only half the evidence (MT 211; see also
Desmet, "Introduction," 15). That is, metaphorically speaking, physical
science "examines the coat, which is superficial, and neglects the body
which is fundamental" (MT 211). Accordingly, the processuality of nature
(on the extreme left; corresponding with "the body") can only be approximated
or implied by mathematical inference (which corresponds with the design
plan of "the coat"). In other words, when using the method of physical
equations, nature's processuality can never be grasped in full.
Despite this implicational character of data-reproducing algorithms,
our well-established physical equations are typically still thought of as
having a representational relation with the target system whose data they
are trying to replicate. In order to keep this representational interpretation
afloat, the empirical agreement between data and algorithm should reach
a level of at least near-perfection, if not beyond that. To be able to pull
this off, there always needs to be a "post-dictive"71 measurement encoding
in which the recorded samples of raw empirical results are converted into
more polished data that can then be mathematically compressed into a
short physical equation. Indeed, this measurement encoding amounts to
implicational mathematization.
Subsequently, on the way back from abstract mathematical results to
concrete phenomenal results, there also needs to be a predictive decoding
that extrapolates from the thus achieved physical equation what the
numerical values of future measurement outcomes are expected to be (Fig.
3-3c). 72 This predictive decoding, then, amounts to imputational re-
phenomenalization, or in other words, the retranslation of abstract,
algorithmically generated numbers into concrete measurement phenomena
where the calculated numerics are imputed (i.e., attributed or assigned)
to their matching measurement results. In so doing, it has even become
common practice to treat these algorithmically generated numerical values
as if they were in principle entirely synonymous with the natural system
they are supposed to portray.
In any case, together these encoding and decoding encryptions serve
to filter out any unwanted irregularities, data-contaminating noise,
measurement errors, etc., from the original, raw measurement results (see
Robert Rosen, Life Itself, 59-62; Van Dijk, "The Process"). However,
there is no procedure by which these encodings and decodings can
themselves be derived from the data or algorithms.73 This is in fact
directly related to the impossibility of having a watertight procedure for
algorithm choice:
[B]ecause there are no neutral criteria for choosing one algorithm
over the other (Kuhn, "Objectivity"), no one algorithm can be
considered the ultimate candidate. For instance, since goodness of fit,
consistency, broadness of scope, simplicity [beauty], and fruitfulness
may be at odds with each other, any choice for ranking these criteria
according to their alleged importance, or for finding an optimal
balance between them, can never be objective but is rather based on
personal preference, intuition, educated guesses, and the like. So,
together with the abovementioned encodings and decodings, all
criteria for choosing between competing algorithms are external from
the natural system N and the formal system F. Next to these already
major externalities, there are several other external criteria,
specifications, and decisions that play their part in setting up the
relation between data and algorithm. For instance, decisions have to
be made concerning (a) the frequency of sampling; (b) how many
entries the data sample should have as a minimum [in order to qualify
as a bona fide sample of empirical data]; (c) which statistical format
should be appropriate in which case; (d) which background theories
to apply in order to provide a more meaningful context for the in-
raw sensory data around to make them into coherent pictures, much
as a computer takes millions of binary bits to make texts and pictures.
(Pąchalska et al.)
However, this info-computational view still leaves much unexplained.
For instance, although sensory information typically makes a meaningful
difference to the inner life of conscious organisms in the biological world,
incoming digital data have no meaning whatsoever to information-processing
computers. Also, this info-computational view suggests that the signals
of neurons can be turned into mental imagery in roughly the same way as
binary data can be used to make up the graphical code of a digital image
file (such as a JPG, BMP, or GIF file) that can be shown on a computer
screen. But, contrary to a computer screen which typically has a conscious
user sitting behind it, the brain cannot be thought of as having within it
such a dedicated user capable of observing, interpreting, and acting upon
the data and pictures that are thus presented. After all, comparable to the
earlier problem of the meta-observer (Section 2.3), we could ask how such
an inbuilt center of subjectivity should work and then find that another
homunculus-like center of subjectivity would have to be invoked in order
to answer this question (for more neuroscientific context, see Edelman
and Tononi 127, 220-222).
From this we can conclude that the brain is not some kind of cinematic
theater in which lifelike scenes are shown to a first-person center of
subjectivity capable of sensorimotor control. Nor is the brain a mere black
box for converting inputs into outputs; it is not a CPU-like center of
subjectivity whose job it is to turn incoming stimuli into outgoing responses;
nor should it be seen as a transmitter-receiver unit equipped specifically
to pick up and pass on in- and outbound signal traffic. Rather, as we will
see later on, from Sections 4.2.4 to 4.3.1, the mind-brain is a seamlessly
embedded member of the ultimately indivisible organism-world system
in which a highly complex culmination of mutually informative activity
patterns facilitates the emergence of higher-order conscious experience.
Hence, the mind-brain is certainly not a pre-wired data-processing switch-
box in the sense of classical information theory. Unlike a CPU or switchbox,
the mind-brain has a neuroplastic organization and cannot retain its function
without its signal traffic or in isolation from its embedding "mother
system," which is the organism-world system as a whole (see Sections
4.2 to 4.3.1 for further details).
Fig. 3-4: Universe of discourse with Von Neumann's object-subject
boundary (commonplace conception of measurement)
has made a certain (subjective) observation; and never any like this:
a physical quantity has a certain value. (Von Neumann 418-421)
In the proceedings of a 1938 conference in Warsaw (Poland), named New
Theories in Physics, Von Neumann's view on the object-subject boundary
and psycho-physical parallelism was summarized by the editors. They
gave the following rendition of his response to Bohr's presentation on
"The Causality Problem in Atomic Physics":
Professor von Neumann thought that there must always be an
observer somewhere in a system: it was therefore necessary to
establish a limit between the observed and the observer. But it was by
no means necessary that this limit should coincide with the
geometrical limits of the physical body of the individual who
observes. We could quite well "contract" the observer or "expand"
him: we could include all that passed within the eye of the observer
in the "observed" part of the system - which is described in a
quantum manner. Then the "observer" would begin behind the retina.
Or we could include part of the apparatus which we used in the
physical observation - a microscope for instance - in the
"observer." The principle of "psycho-physical parallelism" expresses
this exactly: that this limit may be displaced, in principle at least, as
much as we wish inside the physical body of the individual who
observes. There is thus no part of the system which is essentially the
observer, but in order to formulate quantum theory, an observer must
always be placed somewhere. (Bialobrzeski et al. 44)
During the era in which quantum mechanics was still in its infancy, the
majority of physicists believed-as most of them still do-that the physical
world is a causally closed realm. Although it was apparently still acceptable
for information from the physical world to somehow appear in the subjective
stream of the conscious observer (see Von Neumann 418-421; Stapp 15),
to materialistic physicists it is inconceivable that non-physical subjective
thought should be able to have a causal effect in the physical world. This
is why the idea arose that the world of subjective observation had to be
thought of as extraphysical and without a causal linkage to events in the
physical world. It was thus quite bothersome that the so-called "wave
function collapse" seemed to depend crucially on the becoming conscious
of a measurement act, whereas, prior to an observer becoming aware of
a measurement outcome, all possible quantum states were thought to exist
all together at once, in superposition with each other.
This apparent causal dependency of physical quantum states on
[Figure residue: diagrams of semiosic cycles (cf. Fig. 3-5), each showing a preparation process and an observation process within an embedding context of use, with the referent and the user indicated.]
should take into account not only how nature works, but also how our
experience of nature works (see Wolfram 547).
Furthermore, honoring Whitehead, we should aspire to a physics that
does not reduce "nature alive" to "lifeless nature" (Whitehead, MT, 173-
232; Desmet, "On the Difference," 87), or, what is basically the same
thing, a physics that does not "objectify" nature. That is, our physics
should not rid its observers of their subjectivity, because that basically
amounts to treating live conscious observers as being equivalent to lifeless
and insentient objects-a point-observer being the most extreme example.
Whenever the conscious observer becomes the "object" of interest, we
should realize that the thus presented observer can be no more than a
"phantom observer"-an "objectified observer" whose lived subjectivity
is systematically being left out of the picture. The very essence of being
an observer is not really there anymore because it has simply been stripped
away for the sake of doing physics in a box. So, in order to find out how
subjectivity can be included within a nonexophysical-nondecompositional
physics, let us first take a closer look at what it is about consciousness
that is so hard to get a grip on.
[Figure residue: stages in the evolution of the eye (cf. Fig. 4-2), panels a-h, with labels for cells, transparent protective layer, nerve fibers, optic cup, photoreceptor layer (retina), and refractive lens.]
the organism and its ecological niche all involve the laying down of
habitually grooved activity patterns (see, for instance, Barrett). They all
involve the strengthening and/or weakening of latent action dispositions.
Among the things that are relevant in the formation of an organism's
dispositional perception-action repertoires we can find, for instance, muscle
memory, perception-memory patterns, an organism's inborn instincts, its
commitments and preferences, neuroplasticity, (epi)genetically laid down
propensities, behavioral tendencies, and so on.
stellar structures and the nuclear processes within stars, which have
generated the atoms and molecules from which life itself arose, are
open systems, driven by nonequilibrium processes. We have only
begun to understand the awesome creative powers of nonequilibrium
processes in the unfolding universe. We are all-complex atoms,
Jupiter, spiral galaxies, warthog, and frog-the logical progeny of
that creative power. (Kauffman, At Home, 50-51)
The universe as a whole can thus best be thought of as a giant nonequilibrium
process (Nicolis and Prigogine; Jantsch; Smolin, The Life, 158-160;
Chaisson 15, 125-131), rather than just an enormous lifeless collection of
externally interacting material "bits and pieces" in which life came into
being as a chance side-effect of entirely physical interactions. Although
this outline is admittedly a crude simplification, the latter view tries to
understand life and consciousness in terms of what is nonliving and
nonconscious, which is impossible (Griffin, "The Whiteheadian"). The
former, on the contrary, opens up the possibility of seeing the universe as
biocentric from its earliest of beginnings.
In fact, the beginning of life on earth could only occur due to nature's
nonequilibrium processuality. The interplanetary gas clouds and dust
particles that have come to form our planet earth, as well as the chemical
elements from which, later on, more complex molecules started to form,
all originate from nucleosynthesis in stars and supernovae (see Arnett;
see also the second half of Section 2.5.1). All this eventually enabled the right
conditions for more complex chemical reactions to occur.
As Kauffman suggests, life is likely to have emerged spontaneously
from a "primordial soup" of such chemical substances. Under normal
conditions, such a primordial soup will accommodate numerous chemical
reactions among its different species of molecules. Most of these chemical
reactions are relatively slow-going, because reactions that go at a fast rate
quickly deplete their resources and will therefore typically fall into decline
about as fast as they got going (Kauffman, At Home 47-64).
However, such fast reactions do not have to mean the end of the
system's chemical reactivity. Each of the system's different species has
the potential to be a catalyst for multiple other reactions. In other words,
each chemical may be able to speed up a chemical reaction that was already
occurring within the system, although initially at a much slower rate. As
soon as the system reaches a critical diversity, those catalytically accelerated
chemical reactions will no longer deplete their resources. Instead, they
VAN DIJK/Process Physics, Time, and Consciousness 115
be engaged in, could come under the influence of light. That is, we have
to look at how autocatalytic cycles can tap into light energy to thus become
adaptively oriented towards light in a way that contributes to the organizational
integrity of the system as a whole. 96 If light stimuli manage to initiate a
chain of events that positively affects the well-being of the organism, then
this may offer the organism a whole new means to cope with the many
challenges of its precarious living-environment.
There are different scenarios that may lead to the development of such
light-sensitivity. For instance, an autocatalytic network may manage to
draw a photosensitive protein into one of its chemical cycles, or one of
the many proteins that are taking part in the chemical reaction network
of a primitive biological cell may start to switch to another state when
being impacted by light (for further details on the possible evolutionary
history of light-sensitive proteins and amino acids in animals see Fueda,
et al). In both cases, their biochemical networks are apt to develop adaptive
reaction pathways.
Specifically, when this newly acquired photo-sensitivity turns out to
facilitate the prolonged continuation of the cycles in which it is involved,
thereby leading to an increased organizational integrity, it can be said to
have "survival value" for the system as a whole. For instance, non-UV
light stimuli may trigger the unpremeditated production of reaction products
with UV-protective characteristics, thus enabling the networked cycles
to develop a defense against damaging UV radiation (see Fischer, et al.).
As an autocatalytic network develops an orientation towards light,
this may lead to protection against environmental threats, increased access
to nutrients and energy resources, proto-homeostatic regulation of the
organism's biochemistry, 97 and other adaptive benefits. In this way, it
becomes possible for the organism to keep its metabolism going, to carry
on with regenerative maintenance, and to keep investing in renewing
growth of its various life-supporting cycles. Light stimuli can in fact
acquire "somatic" meaning and become valuative as, during life, they
gradually get to be associated with repeatedly co-occurring favorable
and/or unfavorable internal states of the autocatalytic system.
For instance, when activation of a light-sensitive cycle will consistently
go hand in hand with access to nutrients, this will obviously promote the
network's metabolic well-being. Hence, through their coupling to the
organism's well-being and mal-being, environmental stimuli are basically
given body-related value and can thus become inherently meaningful to
the organism. In the words of neurobiologist Gerald Edelman: "I use the
word value to refer to evolutionarily ... derived constraints favoring behavior
that fulfills homeostatic requirements or increases fitness in an individual
species" (Edelman, The Remembered, 287-88). In his book Consciousness:
How Matter Becomes Imagination, co-written with Giulio Tononi, it is
put like this: "We define values as phenotypic aspects of an organism that
were selected during evolution and constrain somatic selective events,
such as the synaptic changes that occur during brain development and
experience" (Edelman and Tononi 88).
Accordingly, Edelman defines value systems as those constraining,
cycle-modulating parts of the organism on the basis of which it can: (1)
carve up the world of signals into re-cognizable, re-livable, somatically
relevant categories, 98 and (2) develop adaptive action repertoires (see
Edelman, "Building," 43). To an organism, value constraints enable it to
adaptively increase its fitness in the absence of pre-programmed,
quantitatively specified goals.
Unlike a boiler-heated or air-conditioned room whose change in
temperature can be initiated by turning the thermostat's temperature
selection dial up or down, the adaptive change in organisms is not directed
by such externally imposed set points. Whereas a thermostat (1) may just
as well be located outside the room whose temperature it is trying to
manipulate, and (2) is by itself just a largely passive signal-comparing
switch box, value pervasively participates in, and actively contributes to,
adaptive change in an organism's organization. Those inner-organism
cycles that, during evolution, have come to serve as "salience indicators"
for other inner-organism cycles, may thus be called value systems.
For instance, since it communicates information about environmental
light conditions to other parts of the body, the synthesis and secretion of
melatonin by the pineal gland plays a major role in the wake-sleep cycle
of human beings and other mammals. Another example can be found in
the way that the evolved shape, muscularity, and jointedness of a human
hand 99 leads to a certain repertoire of possible and impossible movements
(see Edelman and Tononi 88). This, of course, affords us hand-equipped
human beings the ability to manipulate and take advantage of environmental
opportunities in a specific way (see Gibson, The Ecological, 113, 224-
225). In turn, this may then direct further evolutionary adaptation of hand
morphology, the sensorimotor-somatosensory system, and, in the long
run, the human species as a whole.
120 PROCESS STUDIES SUPPLEMENT 24 (2017)
The various stages that the Gestalt cycle goes through during each full
turn can be roughly described as follows (see Perls 69): (1) the organism
is in a state of rest; (2) the organism senses and becomes aware of a
disturbance (which may be internal or external); (3) a Gestalt is being
formed (i.e., a meaningful foreground pattern apparent from, yet seamlessly
embedded within, its background patterning and intimately related to
the entire whole and previous history of the organism-environment system);
(4) the organism prepares for action and then follows up on it, thus taking
directed action with the aim of, (5) achieving a decrease in tension, which
should then result in, (6) the return to the desired organismic balance.
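The six stages above can be sketched as a simple state cycle. This is only an illustrative toy; the class and stage names are mine, not Perls's:

```python
from itertools import cycle

# Stage names follow the six-phase Gestalt cycle described above (after
# Perls); the class itself is just an illustrative sketch.
GESTALT_STAGES = [
    "rest",                 # (1) organism at rest
    "sensing_disturbance",  # (2) a disturbance is sensed (internal or external)
    "gestalt_formation",    # (3) a meaningful foreground pattern forms
    "directed_action",      # (4) the organism prepares for action and acts
    "tension_decrease",     # (5) directed action reduces the built-up tension
    "restored_balance",     # (6) organismic balance returns
]

class GestaltCycle:
    """Cycle endlessly through the six stages, one step at a time."""
    def __init__(self):
        self._stages = cycle(GESTALT_STAGES)
        self.current = next(self._stages)  # starts at "rest"

    def step(self):
        self.current = next(self._stages)
        return self.current

c = GestaltCycle()
trace = [c.current] + [c.step() for _ in range(6)]
# After six steps the cycle has come back around to "rest".
```

The point of the `cycle` iterator is precisely the one made in the text: the sixth stage is not an endpoint but flows back into the first.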
During all this, there is an intimate interdependence between organism
and environment to the extent that they can be seen as an inseparable
whole whose process of subjective experience does not take place exclusively
within the organism. Instead, subjectivity involves the entire bound-in-
one multiplicity of experiential organism-environment cycles. Because
the formation of a Gestalt (which can be loosely interpreted as an
"experiential foreground pattern" or "formed situation") depends on the
entire whole and history of the organism-environment system, it does not
enable the organism to see the world as it is, but to experience it in terms
of what may be called "motivational valences": 102
Valences are opportunities to engage in actions that structure a
motivated person's perception of a situation and her subsequent
actions. For a person who is motivated by hunger, a sandwich has
valences that it does not have for a sated person, but only if it is
reachable and does not belong to someone else. The key is that these
valences appear in the environment as a function of the motivations
of people, and vice versa. Valences are perceived forms that are a
function of the person's state and the environment's characteristics.
(Kaufer and Chemero 88)
Next to panexperientialism, biosemiotics, and Gestalt psychology, other
related theories of perception and conscious experience that should be
mentioned here are: J.J. Gibson's ecological psychology, enactivism
(Maturana and Varela), radical embodied cognitive science (Chemero),
Velmans's reflexive monism, James's neutral monism, and, of course,
Edelman's and Tononi's extended theory of neuronal group selection,
which drew heavily on James's "specious present" (James, Principles,
609; also Clay 167), and his view on the conscious stream of experience.
For now, however, I will not go further into their respective versions of
[Fig. 4-3 (detail): panels A-C plotting cortical activity against time (100-500), including a "B) Wakefulness Sensorimotor" panel.]
to "sense" the global system state based on local information (Hesse and
Gross 10). In a similar vein, we may say that everywhere within the system
there is a locally available, but globally distributed, "knowing-by-doing"
regarding how to remain close to overall, system-wide criticality. As such,
the system can even be thought of as having developed a primordial form
of adaptive, self-preserving behavior under the pressure of precarious
conditions, external impact events and/or the influx of matter, energy, and
information.
In addition to this list, there are also some of John Archibald Wheeler's
requirements (Wheeler, "Information," 313-315) that are well worth being
mentioned:
6. No "tower of turtles," i.e., there should be no infinite regress of would-
be elementary constituents;
7. No pre-existing space and no pre-existing time, but rather a pre-geometry.
8. No laws, but rather "law without law." 113
Although implicit in some of the other criteria, there is one final requirement
that should not be overlooked:
This last point follows the same logic as Wheeler's "law without law"
requirement. Just as there were no gears, no pinions, no engineers, and
no building plans in the earliest beginnings of the universe, there were no
true foundations in the hierarchical sense of the word. That is, a priori
5.2 Process physics as a possible candidate for doing physics without a box
Whereas doing physics in a box typically requires us to get involved
in pre-theoretical interpretation, nature-dissecting acts of decomposition,
and the like, process physics basically enables us to avoid much of this
by doing physics without a box. It does so by setting up a model that
manages to give rise to its own foundation-free foundations, so to speak.
Accordingly, the model starts out with an all-but-patternless homogeneity
that can perhaps best be likened to what in quantum field theory is called
"the vacuum state" or "quantum vacuum." Despite its name, this vacuum
state is usually not so much thought of as an entirely empty void, but
rather as a fiercely fluctuating ocean of virtual energy potential which
contains all of existence in latent form (see Dewitt 178). From this vacuum-
like stage, then, the initial uniformity in the process physics model should
get its internal pattern formation "up and going" through recursive loops
that "bootstrap" themselves into actuality from their otherwise undifferentiated
background (see Cahill and Klinger, "Bootstrap," 109).
To get this internal pattern formation going, process physics depends
on only a few general nondecompositional preconditions, namely: universal
interconnectedness, holarchic instead of hierarchic organization, self-
reference, and initial lawlessness. In our conventional way of doing physics
in a box, we typically rely on well-trusted assumptions that we have
gotten so used to that we tend to forget about their actual status
as metaphors, approximations, and idealizations.
That is, prior to writing down the physical equations for specifying
the behavior of electrons, photons, electromagnetic fields, and all other
phenomena in nature, physicists usually do not think too much about all
their pre-theoretical interpretation, acts of decomposition, and the like.
Instead, they basically start out by assuming that all these things actually
already exist as such (see Chown 25). To be fair, however, physicists most
often do not literally assume that things like electrons really exist before
they formulate physical equations about them. Instead, they typically like
to think of "elementary particles" like the electron as transient fluctuations,
manifesting from underlying quantum fields into the classical world. By
assuming these fields, however, the same problem arises all over again,
thus leading to something quite akin to Wheeler's "tower of turtles"
problem. The physicist's solution seems to be to just choose a particular
level of description, postulate it as fundamental, and then proceed from
there onwards. So, in this way, it is still a valid diagnosis to state that
physics as we know it typically presumes the existence of what it is trying
to describe.
This, however, confronts us with a foundational problem: if not by
mere postulation, how can we actually be sure that a physical equation
does indeed pertain uniquely to its intended referent? 117 On top of that,
taking into consideration that, just after the "Big Bang" many of these
referents had not even come into existence yet, we might ask ourselves
what the most elementary initial conditions of these physical equations
should be, and why this should be so. Lee Smolin was particularly worried
by these issues and he addressed them as follows:
We, in our time, are led by our faith in the Newtonian paradigm to
two simple questions that no theory based on that paradigm will ever
be able to answer: [First:] Why these laws? Why is the universe
governed by a particular set of laws? What selected the actual laws
from other laws that might have governed the world? [Second:] The
universe starts off at the Big Bang with a particular set of initial
conditions. Why these initial conditions? Once we fix the laws, there
are still an infinite number of initial conditions the universe might
have begun with. What mechanism selected the actual initial
conditions out of the infinite set of possibilities? The Newtonian
paradigm cannot even begin to answer these two enormous questions,
because the laws and initial conditions are inputs to it. If physics
ultimately is formulated within the Newtonian paradigm, these big
questions will remain mysteries forever. (Smolin, Time, 97-98)
Process physics, on the other hand, since it is not rooted in the Newtonian
paradigm of doing physics in a box, does not have these problems that
seem to be so inevitably associated with the use of physical equations.
That is, process physics simply does not avail itself of any formal system
of lawlike mathematical equations. In stark contrast with the math-based
models of mainstream physics, process physics introduces a non-formal,
self-organizing modeling of nature - based on a stochastic iteration routine
that reflects the Peircean principle of precedence (Peirce 277), rather than
on lawlike physical equations.
Thanks to its intrinsic stochastic recursiveness, the process physics
model eventually manages to evolve many features that we also find in
our own natural world: emergent three-dimensionality, emergent relativistic
and gravitational effects, non-locality, emergent quasi-deterministic
classical behavior, creative novelty, habit formation, an internal sense of
(proto)subjectivity made possible by its mutual informativeness, an intrinsic
present moment effect with open-ended evolution, 118 and more.
Without having discussed how process physics actually works, however,
it is of course still too early to label it the final cure-all for the main
problems in today's physical sciences. So, therefore, let us get down to
the finer details and explore if process physics can really be taken seriously
enough as a way of doing physics without a box to have it join forces with
our conventional way of doing physics in a box.
This noise, then, basically "blankets" the entire network with each
cycle of the stochastic iteration routine. It actually enables the initially
uniform, low-level processuality in the process physics model to self-
organize into mutually informative fore- and background patterns. So,
remarkably, the same kind of mutual informativeness that turned out to
be such a characteristic aspect in SOC-systems (see Section 4.3.3) and
that played such a crucial role in the emergence of higher-order consciousness
(see Sections 4.3.1 and 4.3.2), ends up being crucial to process physics
as well!
Although in classical information theory and in electrical engineering
noise is typically thought of as an irregular, residual distortion signal, in
process physics it is an expression of the inherent lawlessness of nature.
That is, while our long-standing tradition of doing physics in a box dictates
that we try to capture natural systems in terms of algorithmically expressed
laws of nature, there is a lot in nature that seems to be unfit to be specified
like that. For instance, although complex systems and biological systems
may indeed seem regular and predictable when looked at during short
enough time spans or in between phase transitions, their behavior eventually
cannot be compressed into concise algorithmic expressions capable of
faithfully reproducing the empirical data extracted from these systems
(see Kauffman, "Foreword: Evolution," 9-22). In the words of physicist
Joe Rosen: 122
In our effort to understand, we first search for order among the
reproducible phenomena of nature, and then attempt to formulate
laws that fit the collected data and predict new results. Such laws of
nature are expressions of order, of simplicity. They condense all
existing data, as well as any amount of potential data, into compact
expressions. Thus, they are abstractions from the sets of data from
which they are derived, and are unifying, descriptive devices for their
relevant classes of natural phenomena .... [Then again,] we do not
claim that nature is predictable in all its aspects. But any
unpredictable aspects it might possess lie outside the domain of
science by the definition of science that informs our present
investigation. (J. Rosen 40, also 36)
So, in Joe Rosen's view, science, by definition, is not meant to deal with
irreproducible and unpredictable phenomena. 123 The empirical data extracted
from any such phenomena cannot be compressed into a smaller data-
reproducing algorithm. Therefore, no empirically adequate physical
equation can be put together that may deserve the label of "law of nature."
Such phenomena, whether they are too random to find any regularity
within their data, or too unique to be reproduced, can therefore be called
"lawless." Under this banner we can not only gather complex systems like
the mind-brain (which only exhibits reproducibility in a limited sense),
but also nature as a whole. After all, the universe at large, because it cannot
be compared to any other specimen, and because it exceeds the reach of
any algorithmic compression, is utterly irreproducible:
When we push matters to their extreme and consider the whole
universe, we have clearly and irretrievably lost the last vestige of
reproducibility; the universe as a whole is a unique phenomenon and
as such is intrinsically irreproducible. (J. Rosen 72)
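Joe Rosen's compression criterion can be illustrated directly. The sketch below uses zlib as a convenient stand-in for algorithmic compression (my choice, not Rosen's): data produced by a lawlike regularity condenses to a tiny fraction of its size, whereas pseudorandom data barely compresses at all.

```python
import random
import zlib

# Lawlike, reproducible "data": a simple repeating pattern.
regular = b"0123456789" * 1000

# "Lawless" data: pseudorandom bytes (seeded only for reproducibility).
rng = random.Random(42)
noisy = bytes(rng.randrange(256) for _ in range(10_000))

ratio_regular = len(zlib.compress(regular)) / len(regular)
ratio_noisy = len(zlib.compress(noisy)) / len(noisy)

# The regular stream condenses to a small fraction of its original size;
# the pseudorandom stream stays at (or slightly above) its original length.
print(ratio_regular, ratio_noisy)
```

In Rosen's terms, the second stream admits no "compact expression" shorter than the data themselves, which is exactly what licenses calling such phenomena lawless.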
Process ecologist Robert Ulanowicz drove home a similar point in
his very engaging book The Third Window: Natural Life beyond Newton
and Darwin:
As most readers are probably aware, Kurt Gödel (1931), working
with number theory, demonstrated how any formal, self-consistent,
recursive axiomatic system cannot encompass some true
propositions. In other words, some truths will always remain outside
the ken of the formal system. The suggestion by analogy is that the
known system of physical laws is incapable of encompassing all real
events. Some events perforce remain outside the realm of law, and we
label them chance. Of course, analogy is not proof, but I regard
Gödel's treatise on logic to be so congruent with how we reason
about nature that I find it hard to envision how our construct of
physical laws can possibly escape the same judgment that Gödel
pronounced upon number theory. (Ulanowicz, The Third, 121-122)
Drawing on the work of physicist Walter Elsasser (1969; 1981), Ulanowicz
calls these complex chance events that fall outside the ken of physical
laws "aleatoric" (The Third 119-122). On this aleatoric account, such
complex chance events are irreproducible and unpredictable since they
involve a unique coincidence of nature's locally-globally actualizing
processes. By Joe Rosen's definition this makes them "lawless." Moreover,
due to the nature-wide abundance of these aleatoric events, particularly
in nonequilibrium systems, lawlessness should actually be considered the
rule, rather than the exception.
In line with Ulanowicz's above quotation on number theory and
physical law, algorithmic information theory (AIT; see Solomonoff;
(that do not seem to lead to any explicit activity other than low-level noisy
fluctuations) to very rare giant ones (that can shake up the entire action-
potentiation network, thus drastically renewing it in one go).
In the famous sand pile systems, for instance, the characteristic events
are avalanches of all sizes; from many small cascades to rare, large sand
slides or even none at all (e.g., when a single grain falls directly in a local
"pocket of potential" - see Section 4.3.3). Likewise, in the process physics
model, each noisy perturbation of the network as a whole can trigger (a)
the emergence of many small, low-level phenomena (i.e., "events,"
"actualities," or "nodes") with weak or negligible connectivity, (b) less
frequent medium-size phenomena with more robust connectivity, and (c) even
rarer phenomena with proliferating higher-order connectivity.
Universality (or, in other words, the occurrence of system-characteristic
phenomena of all scales) can cause the system's low-grade starting level
to eventually become "hidden from plain view" as the higher-order
phenomena cascade all across the emergent action-potentiation network,
thus "overflowing" any lower levels of activity (see Watkins, et al., 21-
22; Cahill and Klinger, "Self-Referential Noise and the Synthesis," parts
4, 7). The self-organized criticality in the process physics model is what
actually facilitates the possibility of "foundations without foundation"-one
of the earlier-listed requirements for doing physics without a box (see
section 5.1).
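The avalanche statistics invoked here can be reproduced with the classic Bak-Tang-Wiesenfeld sandpile; the following is a minimal sketch, with grid size, grain count, and seed chosen arbitrarily for illustration.

```python
import random

def drop_grains(size=20, grains=2000, seed=1):
    """Bak-Tang-Wiesenfeld sandpile: grains are dropped on random sites;
    any cell holding 4 or more grains topples, sending one grain to each
    of its four neighbours (grains leaving the grid are lost). Returns the
    avalanche size (number of topplings) triggered by each dropped grain."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        x, y = rng.randrange(size), rng.randrange(size)
        grid[x][y] += 1
        topplings = 0
        unstable = [(x, y)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4
            topplings += 1
            if grid[i][j] >= 4:          # cell may still be unstable
                unstable.append((i, j))
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        sizes.append(topplings)
    return sizes

sizes = drop_grains()
# Avalanches come in all sizes: most drops topple nothing at all, while a
# few trigger cascades that sweep across much of the grid.
```

A histogram of `sizes` approximates the scale-free (power-law-like) event distribution that the text calls "universality."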
Table 5-1: The indexical relation matrix - When nodes i and j are connected, they will be indexed as
having a non-zero connection strength Bij. Anti-symmetry (here indicated by matching background colors of the
matrix cells) guarantees that the strength of any self-connection (Bii) will always be zero. Positive or negative
signs of the actual Bij values depend on the direction of the arrows between nodes i and j (see Fig. 5-1).

node    1             2             3             4             5             6
1       0             B12 (= -B21)  B13 (= -B31)  B14 (= -B41)  B15 (= -B51)  B16 (= -B61)
2       B21 (= -B12)  0             B23 (= -B32)  B24 (= -B42)  B25 (= -B52)  B26 (= -B62)
3       B31 (= -B13)  B32 (= -B23)  0             B34 (= -B43)  B35 (= -B53)  B36 (= -B63)
4       B41 (= -B14)  B42 (= -B24)  B43 (= -B34)  0             B45 (= -B54)  B46 (= -B64)
5       B51 (= -B15)  B52 (= -B25)  B53 (= -B35)  B54 (= -B45)  0             B56 (= -B65)
6       B61 (= -B16)  B62 (= -B26)  B63 (= -B36)  B64 (= -B46)  B65 (= -B56)  0
Anti-symmetry then gives:

        0      -b21    -b31    ...    -bj1
       -b12     0      -b32    ...    -bj2
Bij =  -b13    -b23     0      ...    -bj3
        ...
       -b1j    -b2j    -b3j    ...     0
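The anti-symmetry constraint Bij = -Bji can be checked in a few lines of code; the matrix below is randomly generated and purely illustrative.

```python
import random

def random_antisymmetric(n, seed=0):
    """Build an n-by-n matrix with B[i][j] = -B[j][i], so that every
    self-connection B[i][i] is forced to zero."""
    rng = random.Random(seed)
    B = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            b = rng.uniform(-1, 1)
            B[i][j] = b
            B[j][i] = -b
    return B

B = random_antisymmetric(6)
# Diagonal entries are zero by construction: antisymmetry demands
# B[i][i] = -B[i][i], and only zero satisfies that.
```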
Process physics uses its connectivity matrix to model the gradually evolving
connection strengths of emergent activity patterns within the initially
uniform and orderless universe. In order to meet Wheeler's requirement
of "law without law" (see also Section 5.1), an iterative update routine is
used to enrich the network of connection strengths with system-wide
connectivity combined with a layer of system-renewing noise. As the
system continuously keeps on going through its stochastic iteration cycles,
with each such loop being indexed by the relation matrix, slowly but
surely, higher-order patterns of connectivity will emerge. 126
The iteration routine in question is in fact derived from the bilocal
field representation that is used in quantum electrodynamics - hence the
use of the symbol B in Eq. 5.1 as it refers to "bilocal" (see Cahill and
Klinger, "Self-Referential Noise and the Synthesis," section 3). By stripping
away all terms that refer to any presupposed geometrical aspects, the
following update routine is achieved:
Bij → Bij - a(B + B⁻¹)ij + wij , with i, j = 1, 2, 3, ..., 2M and M → ∞. (5.1)
Cahill has summarized his stochastic iteration routine in the following way:
The iteration system has the form Bij → Bij - a(B + B⁻¹)ij + wij. Here Bij is
... random numbers wij are included: this enables the iteration process to
model all aspects of time. These random numbers are called self-
referential noise (SRN) as they limit the precision of the self-
referencing. Without the SRN the system is deterministic and
reversible, and loses all the experiential properties of time. (Cahill,
"Process Physics: Self-Referential," 12)
To recap, the first term Bij embodies the network's entire acquired past
(up to the immediately preceding iteration) as it holds the iteratively built-
up connection strengths among connection pairs i and j. As such, it may
be called the precedence term - this entirely in line with the Peircean
"principle of precedence" (see Peirce 277; Smolin, Time, 47). The second
term -a(B + B⁻¹), which may be referred to as the cross-linkage or binding
term, facilitates universal interconnectedness by hooking up the single
matrix B with its inverse counterpart B⁻¹. Something close to a holarchic
feedback loop thus becomes active within the system. This setup requires anti-
symmetry Bij = -Bji, which is needed to ensure that self-connections Bii will
always be zero; this in conformity with the above requirement that there
is no internal subnetwork connectivity to the start-up nodes themselves.
Furthermore, the parameter a within this second term is comparable to a
tuning parameter in self-organized criticality (SOC) systems; such a
parameter does not influence the fact that SOC occurs, but it does
affect how SOC occurs. For instance, in sand pile systems it determines
the narrow region of near-critical angles in which avalanches will tumble
down the slope-but the parameter itself is non-critical since it can vary
widely without frustrating the occurrence of SOC. 127 Last but not least,
in every new iteration and for each "connection pair" i and j, the noise
term wij = -wji is an independent random variable with variance η, picked
arbitrarily from a probability distribution (see Cahill and Klinger, "Self-
Referential Noise as a Fundamental," for more detailed information, for
instance, on the size of η).
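A minimal NumPy sketch of Eq. (5.1) follows. The matrix size, the value of a, and the noise variance are arbitrary toy choices of mine, and the start-up matrix is simply antisymmetric noise; Cahill's actual runs use far larger matrices and tuned parameters.

```python
import numpy as np

def iterate(B, a=0.01, eta=0.1, rng=None):
    """One pass of the stochastic iteration routine of Eq. (5.1):
    B -> B - a*(B + B^-1) + w, where the self-referential noise w is
    antisymmetric (w_ij = -w_ji), keeping all self-connections at zero."""
    rng = rng or np.random.default_rng(0)
    n = B.shape[0]
    w = rng.normal(0.0, eta, size=(n, n))
    w = (w - w.T) / 2.0                    # enforce w_ij = -w_ji
    return B - a * (B + np.linalg.inv(B)) + w

rng = np.random.default_rng(1)
n = 8                                      # stands in for 2M with M = 4
w0 = rng.normal(0.0, 0.1, size=(n, n))
B = (w0 - w0.T) / 2.0                      # antisymmetric start-up matrix
for _ in range(10):
    B = iterate(B, rng=rng)
# Antisymmetry (and hence a zero diagonal) is preserved by every update,
# since the inverse of an antisymmetric matrix is itself antisymmetric.
```

Removing the noise term `w` makes the map deterministic and reversible, which is exactly the degenerate case Cahill warns about in the quotation above.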
[Figure (detail): the binding term -a(B + B⁻¹) giving rise to emergent higher-order connectivity among the Bij at the zoomed-out level.]
The connection strengths between all these "connection pairs" i and j are
not themselves visible features, so the above visualizations can only serve
as instructive metaphors and should not be thought of as images of nature
itself. In order to avoid the fallacy of misplaced concreteness, after all,
we should realize that the here depicted "connectivity landscape" pertains
to an indexical mapping of connection strengths. Just as Stuart Kauffman's
"fitness landscapes" (At Home, 163-180) do not depict any real landscapes,
these "connectivity landscapes" do not directly reflect nature. Rather, they
form an indexical grid of emergent connectivity on a pre-sorted layout,
analogous to using a grid of people's home addresses to index the level
of social connectivity within a community (see also Section 5.3.4).
Moreover, since the connection strengths are held to be initially practically
[Fig. 5-3 (detail): tree-graph connection distances D1 = 2, D2 = 4, D3 = 1.]
Table 5-2: The amount of connections arranged by distance and connection strength

                    low connection          medium connection      high connection
                    strength                strength               strength
short-distance      overwhelming majority   few                    scarce
medium-distance     few                     scarce                 very, very scarce
long-distance       scarce                  very, very scarce      extremely scarce
N(D, N) = (M-1)! D1^D2 D2^D3 ... DL-1^DL / ((M-N-2)! D1! D2! ... DL!)    (5.2)
After having specified this number N(D, N), Cahill and Klinger proceed:
Here Dk^Dk+1 is the number of different possible linkage patterns
between level k and level k+1, and (M-1)!/(M-N-2)! is the number of
different possible choices for the monads, with i fixed. The
denominator accounts for those permutations which have already
been accounted for by the Dk^Dk+1 factors. We compute the most
likely tree-graph structure by maximising ln N(D, N) + µ(∑k Dk - N),
where µ is a Lagrange multiplier for the constraint. Using Stirling's
approximation for Dk! we obtain Dk+1 = Dk ln(Dk/Dk-1) - µDk + ½, ...
which can be solved numerically. [Fig. 5-5] shows a typical result
obtained by starting [equation (5.2)] with D1=2, D2=5, and µ=0.9,
and giving L=16, N=253. Also shown is an approximate analytic
[Fig. 5-5 (detail): plot of k against Log10 P, followed by simulation sample panels (a)-(e).]
and the third part F(1-u) is the replenishment term (with feed rate
F = 0.0600) which is needed to replenish the chemical species U
because it gets used up in the reaction. For the second partial
differential equation, the parameters are: Dv = 1.00 · 10⁻⁵, feed rate
F = 0.0600 and diminishment term k = 0.0620. The simulation was
originally performed by the XMorphia simulation software (authored
by Roy Williams at Caltech) with partial differential equations (as
discussed in Pearson). However, samples (a) to (f) are taken from a
renewed simulation run by Robert Munafo
(see http://mrob.com/pub/comp/xmorphia/index.html for details).
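For readers who want to reproduce such samples, the underlying Gray-Scott reaction-diffusion system can be sketched as below. Dv, F, and k are the caption's values; Du = 2.00 · 10⁻⁵ and the 2.5 × 2.5 domain are assumed from Pearson's original setup, while the grid size, time step, step count, and seeding are arbitrary illustrative choices.

```python
import numpy as np

# Gray-Scott reaction-diffusion sketch (parameters per the caption above;
# Du and the domain size assumed from Pearson's setup).
Du, Dv, F, k = 2.00e-5, 1.00e-5, 0.0600, 0.0620
n = 64
dx = 2.5 / n                          # lattice spacing on a 2.5 x 2.5 domain
dt = 1.0

u = np.ones((n, n))                   # species U, replenished by F(1 - u)
v = np.zeros((n, n))                  # species V, diminished by (F + k)v
u[28:36, 28:36] = 0.50                # seed a small perturbed square
v[28:36, 28:36] = 0.25

def laplacian(a):
    # Five-point Laplacian on a periodic grid.
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
            + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a) / dx**2

for _ in range(2000):
    uvv = u * v * v                   # the autocatalytic reaction U + 2V -> 3V
    u += dt * (Du * laplacian(u) - uvv + F * (1 - u))
    v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)
# U is depleted and V grows wherever the autocatalytic reaction runs;
# over long runs this produces the patterned samples shown in the figure.
```

A full pattern-forming run takes on the order of hundreds of thousands of steps on a larger grid; the short loop here only demonstrates the update scheme.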
As the network continues to go through its update iterations, even higher-
order structures can arise from this initially low-level process. The model's
network of higher-order process-structures will thus start to exhibit all
kinds of characteristics that can also be found to occur in nature. Among
the signature features of the process physics model we can find, for
instance, nonlocality, emergent quantum behavior, emergence of a quasi-
classical world, gravitational and relativistic effects, inertia, universal
expansion, black holes and event horizons, and also a present moment
effect inherent to the system itself (see Cahill, "Process Physics: From
Information Theory," 11-12).
two features (see Ulanowicz, The Third, 112) then leads to structure
formation as found, for instance, in the optimally complex neural network
of Fig. 4-4a-b. Indeed, similar fractal pattern formation can be found in
systems as diverse as neural networks (Fig. 5-8a), tree root networks (Fig.
5-8b), river deltas (Fig. 5-8c), blood vascular networks (Fig. 5-8d), ant
foraging trails, and many more natural systems, even at the level of galactic
superclusters (see Fig. 5-10 below).
In summary, all the pattern formation in these systems thrives on
dispositional activity. Whenever branching structures, under the influence
of internal, external, local, and global contingencies and constraints, get
to become each other's "adjacent possible" (Kauffman, "Foreword: The
Open," xiii; "Foreword: Evolution," 15), they will likely hook up and,
depending on the level of mutual sustainment, get involved in a more
durable relationship or not. Simply because the probability of hooking up
increases when branching structures are (1) simultaneously active, (2)
equally strong, (3) equally durable, and (4) equally reactive, it would
certainly not be too much off-target to say that these branching structures
develop in an anticipatory, or at least a proto-anticipatory way. As all
branching structures are "biased" in the sense that they tend to connect
with resembling parts, this can also be thought of as a primitive form of
subjectivity. The network as a whole, then, will exhibit what Robert
Ulanowicz has called "ascendency" (The Ascendent; The Third 112)-the
tendency to develop towards ever-higher, and increasingly intense complexity.
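As an illustration only, the four hookup-promoting conditions listed above can be turned into a toy similarity heuristic. The attribute names and the scoring rule below are hypothetical choices for exposition, not part of the process physics model itself:

```python
def hookup_probability(a, b):
    """Toy score in [0, 1]: branching structures are more likely to hook up
    when both are active and their traits match (all names hypothetical)."""
    if not (a["active"] and b["active"]):   # condition (1): simultaneous activity
        return 0.0
    score = 1.0
    for trait in ("strength", "durability", "reactivity"):  # conditions (2)-(4)
        score *= 1.0 - abs(a[trait] - b[trait])  # traits assumed scaled to [0, 1]
    return score

x = {"active": True, "strength": 0.8, "durability": 0.6, "reactivity": 0.7}
y = {"active": True, "strength": 0.8, "durability": 0.6, "reactivity": 0.7}
z = {"active": False, "strength": 0.1, "durability": 0.9, "reactivity": 0.2}
```

Identical, simultaneously active structures score 1.0; any inactive partner scores 0.0, mirroring the "adjacent possible" intuition that resembling, co-active structures preferentially connect.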
(b) seamlessly embedded within the same natural world he, from
early life into adulthood, gets to make sense of through (c) the
workings of his mind-brain and the perception-action cycles, and
other non-equilibrium cycles that enable him to go through life (these
cycles are not depicted here; see Fig. 4-1 as a replacement).
Subsequently, (d) this brain-equipped observer, by going through his
perception-action cycles, gets to sculpt a conscious view of the
greater embedding world, which, at the supragalactic level, is also
organized in a "neural network-like" way. Finally, then, (e) process
physics shows that, at the "deepest" level of organization, the process
of nature branches out into an all-encompassing, optimally
interconnected complex network of "neuromorphic" activity patterns.
This is characteristic of a self-organizing, criticality-seeking,
complex fractal network process. All this suggests that this self-
organizing network process gives rise to habit formation, internal
meaningfulness through universal mutual informativeness, and all
experiential aspects that mainstream physics systematically
overlooks.
results directly from the iterative update routine and no dark matter
or dark energy hypothesis needs to be invoked.
This not only debunks the first main assumption of the block universe
interpretation-namely, that nature is an objectively existing, mind-
independent "real world out there"-but also the second and the third.
After all, assumption (2) can only be made to work when natural events
and living observers are being reduced to point-events and point-
observers-something that is shown to be a misleading abstraction in
Section 2.5.2. Moreover, since it turns out that relativity of simultaneity
does not hold in each and every case (see Section 2.5.3), the third assumption,
that our experience of time passing by is merely an illusion, should no
longer be considered an established finding either. To drive home the
point that the block universe interpretation is mistaken, though, it will for
now be enough to focus primarily on the flaw in the first main assumption
and keep the flaws in the other assumptions on standby. As for the first
assumption, it should be quite obvious that the long-cherished ideal of a
mind-independent "real world out there" is flatly contradicted by the
above-mentioned finding that observing organism and observed world are
ultimately one. In fact, this finding is utterly incompatible with our entire
current enterprise of doing "physics in a box" (or "exophysical-
decompositional physics," as it can be referred to as well). 136
Due to this incompatibility, and some other reasons as well, it seems
we need to (1) temporarily put aside our exophysical-decompositional
way of doing physics in a box and save it exclusively for practical pur-
poses, 137 and (2) look out for a nonexophysical-nondecompositional way
of doing physics without a box to thus be able to get a modelling method
in which mutual informativeness is an integral part of the system, so that
we no longer need to bump into the problem of information having to be
pre-coded before it can ever be imported "from the outside" 138 (see also
Kauffman, "Foreword: Evolution," 9-22) as would be required for an
observer whose exophysical center of subjectivity is processing the sense
data originating from the allegedly mind-independent "real world out there."
This problem of pre-coded information can be avoided when information
is in fact an initially unlabeled process of mutual informativeness through
which the model system is dynamically being given shape from within,
i.e., a process through which all activity patterns can make a difference
to all other activity patterns within the system, and vice versa. Without
such mutual informativeness, the above-mentioned process of perceptual
categorization would not even be possible. It is through the mutual
informativeness among and within neuronal groups in the thalamocortical
VAN DIJK/Process Physics, Time, and Consciousness 177
region of the mind-brain that conscious organisms get to carve out their
conscious "Umwelt" (i.e., their "self-centered world of significance"; see
Von Uexküll and Von Uexküll; Koutroufinis) from a less salient background
of noisy, lower-order activity patterns.
In a remarkably similar way, the mutually informative process of
"autocatalysis" is thought to have facilitated the advent of life by enabling
the emergence of initially primitive biotic networks from a nondescript
background of low-grade, slow-going chemical reaction cycles (see
Kauffman, At Home, 47-69). In both cases, a higher-order world of habit-
establishing foreground patterns is "bootstrapped into actuality" 139 through
the mutually informative cyclic activity within the system itself.
According to Reg Cahill's process physics, this mutual informativeness
is not only an essential characteristic of biological systems, but also of
nature as a whole. In the process physics model, it is the mutual
informativeness among inner-system activity patterns that gives rise to a
complex world of criticality-seeking, habit-establishing foreground patterns.
Similarly to what happens in the emergence of life and consciousness
through autocatalysis and perceptual categorization, respectively, these
foreground patterns get to "bootstrap" themselves into actuality from an
initially undifferentiated background process of noise-driven, mutually
informative activity patterns. Because of this mutual informativeness,
which enables it to avoid all the problems associated with pre-coded
information, process physics should be considered a prime candidate for
a nonexophysical-nondecompositional way of doing physics.
Process physics, by virtue of its "co-informativeness-based" 140 way
of doing physics without a box, introduces a non-mechanistic, non-
deterministic modeling of nature based on a self-organizing and noise-
driven iterative update routine. As such, process physics can be said to
work according to a Peircean principle of precedence (Peirce 277), so that
it has no need for lawful physical equations and can thus avoid the many
problems and fallacies that are associated with our conventional way of
doing physics in a box.
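The noise-driven iterative update routine mentioned here can be sketched numerically. The sketch below assumes the matrix update rule B_ij → B_ij − α(B + B⁻¹)_ij + w_ij reported in Cahill's papers, with w_ij the "self-referential noise"; the network size, α, and noise amplitude are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def iterate(B, alpha=0.3, noise=0.1):
    """One pass of the stochastic update B -> B - alpha*(B + B^-1) + w."""
    w = noise * rng.standard_normal(B.shape)
    w = (w - w.T) / 2.0                     # keep the noise antisymmetric, like B
    return B - alpha * (B + np.linalg.inv(B)) + w

n = 6                                        # even size, so B is generically invertible
M = 0.5 * rng.standard_normal((n, n))
B = (M - M.T) / 2.0                          # antisymmetric start: B_ij = -B_ji
for _ in range(50):
    B = iterate(B)
```

The B + B⁻¹ term drives the node-link values toward a self-sustaining scale (eigenvalue magnitudes near 1), while the ever-present noise keeps injecting the contingency from which, in the model, habit-like structures are claimed to condense.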
By means of its "habit-establishing, stochastic recursiveness," the
process physics model can give rise to constantly renewing activity patterns.
In contrast to mainstream physics, in which any sense of processuality
has been so worryingly absent, the thus achieved "becomingness" can be
associated with what we, in everyday life, experience as time. Instead of
ending up with an utterly timeless and non-processual world, such as the
block universe which mainstream physics claims that we live in, the
process physics model, by going through its habit-establishing iterations,
gradually gives rise to an entirely processual network of self-organizing
activity patterns that exhibit lots of familiar behaviors that can also be
found to occur in nature itself. In so doing, the process physics model will
slowly but surely start to show more and more features that are also so
characteristic of our own natural universe: non-locality; emergent three-
dimensionality; inertia; emergent relativistic and gravitational effects;
emergent quasi-deterministic classical behavior; creative novelty; inherent
time-like processuality with open-ended evolution; and more. Finally,
perhaps the most directly appealing aspect of process physics may well
be its full compliance with our best theories on life and consciousness.
ENDNOTES
9. Please note that the average speed still had to obey Aristotle's so-called
"law of motion": V ∝ F/R (with V = speed, F = motive force, and R = resistance
of the medium), which expressed Aristotle's belief that the rate of falling was
proportional to weight and inversely proportional to the density of the medium.
So, it was commonly agreed upon that air resistance and viscosity of water
would indeed slow down falling objects, thus to a certain extent affecting the
rate at which the falling speed would build up.
10. The data shown here can be found in Galileo's original working papers
on folio 107v [with "folio" meaning sheet, and v standing for "verso," which
is Latin for "back side," as opposed to r, which stands for "recto" (i.e., front
side)]. The working papers are kept in Florence, in the Biblioteca
Nazionale Centrale (the Central National Library). The 160 surviving sheets
of the working papers are now bound as Volume 72 of the Galileo
manuscripts-also known as "Codex 72" or "Manoscritto Galileiano 72."
11. Euclid's magnum opus on geometry had already been published in Ancient
Greece around 300 BCE (see Byrne).
12. The equally long time stretches could be the intervals between (1) the
ramp's warning bells, (2) the water level markings of the water clocks, (3)
the sand level markings on an hourglass, (4) the completed swing periods of
a pendulum, or (5) any other indication of time units that can be used in an
experiment.
experiment.
13. Doing physics in a box: this term, coined by Lee Smolin in his 2013 book
Time Reborn, refers to the long-established practice of isolating some aspect
of nature (or system of interest) from its surroundings and then trying to
empirically identify and mathematically capture the regularities in its behavior.
14. The term "beables," coined by John Bell (Speakable, 174), refers to those
existents purported to make up the unobservable realm "beneath" our
observation-based phenomenal world.
15. E.g., physical parameters such as height, distance, water level, or the
angular position of a clock's hand (i.e., the "time pointer").
17. One of his first spontaneous experiments was to time the swing periods
of a chandelier by using his pulse. In his later experiments, Galileo would
also exploit several other means of measuring time, such as the rising level
of a water clock, or, indeed, the increasing amount of synchronously ringing
downhill bells.
19. Depending on which experiment was being performed, the time indicator
markings in question were: (a) the water level markings, or (b) the warning
bell positions.
21. The "double calibration" consists of: (1) synchronizing the frets or alarm
bells with the back-and-forth dangles of a free-swinging pendulum; (2)
synchronizing the frets or alarm bells to each other by hearing, that is, by
listening if their consecutive sounds, triggered by the descent of a downward
rolling ball, form an even sequence.
23. Over the years, determinism has taken on a somewhat less rigorous
guise: reductionism. And although reductionism, in turn, comes in many
different flavors, its general idea is that all of nature can be brought back to
its most elementary physical foundations, which should then be expressible
in terms of a concise set of physical equations. The physical equation has
managed to boost its status from convenient, approximating tool (man-made
artifact / abstraction / simplifying idealization) to an all-encompassing, literal
representation of nature.
24. Newton's Second Law of Motion (F=ma), for instance, was thought to
pertain not just to one specific physical body. Instead, Newton deemed it
universally valid for all masses in the universe.
26. Please note that a noise factor may be built into many physical equations
so that external influences can be taken into account. However, this makeshift
procedure is not used for "laws" since so-called laws of nature are thought
to give deterministic outcomes in any case.
27. Since the term "initial conditions" is typically used for some specific,
carefully selected entry out of a larger set of temporally arranged alternatives,
the term "interim conditions" is probably more accurate.
31. Next to the philosophical branch of process thought, we may think, for
instance, of David Bohm's, Milic Capek's, and Ilya Prigogine's processual
worldviews (see Griffin, Physics).
32. For a larger list of process-minded physicists, see Eastman and Keeton.
33. Please note that this summation does not include Einstein's assumptions,
because we are here dealing with the assumptions that gave rise to the block
universe interpretation as based on Einstein's special theory of relativity, not
STR itself. See also Bros.
34. In Einstein's time it was unknown if the universe actually extended beyond
our own galaxy.
prepared to digest the food. Accordingly, from early life onwards, conscious
organisms gradually learn to value what they are undergoing by how their
body states are affected by it. Consciousness becomes a lived "anticipatory
remembered present" (see Van Dijk, "The Process-Informativeness") - i.e.,
a bound-in-one culmination of direct perception and value-laden memories
as experienced from within - which definitely has causal consequences for
physical reality. When held in the spotlight of the third-person perspective
of physical science, however, it remains notoriously elusive.
37. Next to big bang nucleosynthesis (which is the main source of hydrogen
[H] and helium [He] in the universe), there is also stellar nucleosynthesis and
supernova nucleosynthesis (synthesizing H and He into the more heavy
elements of the periodic table).
40. Please note that "abstraction" is not the act of reducing concrete, real-
world objects, events, relations, and/or phenomena to their most pure and
ideal Platonic forms. Rather, abstraction is the dissection and reduction of
the process of nature to symbols, geometric elements, algorithms, etc., that
are meaningless by themselves. They can only achieve concrete significance
when situated within a socioculturally evolved, meaning-providing context
of use. Like this, in order to make any sense at all, they need to be considered
within a semiotic process where they can form a unified threesome with an
observer-individuated referent (i.e., target system or aspect of interest) and
with an impact on the sign-interpreting observer; see Section 3.2.5.
41. For Newton, space and time were absolutes that did not depend upon any
physical goings-on. Rather, they made up the backdrop within which the
contents of nature could be accommodated. Absolute space was seen to be
unchanging and immovable. Time, on the other hand, was thought to be
absolute and universal in the sense that (a) it was supposed to be valid for all
of nature simultaneously, and (b) it was held to run its course irrespective of
any events being present to unfold "within" this absolute time.
idealizing simplification.
44. Initially, in special relativity, Einstein did not take gravitation into
account. Only with the later development of his general theory of relativity
were gravitation (and thus mass) presented as a natural consequence of the
curvature of the geometrical spacetime continuum. As an addendum to Einstein's first
(special) theory of relativity, Minkowski's geometrical spacetime construct
only had to deal with geometry, point events, point observers, and any (less
than or equal to light-speed) causal connection between them (see Papatheodorou
and Hiley).
45. This can be derived from Minkowski's formula for the constancy of the
world interval I = s² − c²(t₂ − t₁)² = constant (with c = 3·10⁵ km/sec = 3·10¹⁰
cm/sec; and with spatial interval s = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²)).
Like this, s is expressed in terms of the spatial and temporal coordinates x₁,
y₁, z₁, t₁, x₂, y₂, z₂, and t₂, which are geometrically associated with the events
E₁ and E₂. Please note that all these geometrical coordinates are specified
from the perspective of the observer whose reference frame is being applied.
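The constancy of the world interval can be checked numerically: with E₁ placed at the origin, the quantity s² − c²t² computed for E₂ should be unchanged by a Lorentz boost. The coordinate and velocity values below are arbitrary illustrative choices:

```python
import math

C = 3e5  # speed of light in km/sec, as in the note

def world_interval(t, x, y, z):
    """Squared world interval I = s^2 - c^2 * t^2, with E1 at the origin."""
    return x * x + y * y + z * z - C * C * t * t

def boost_x(t, x, v):
    """Lorentz boost with velocity v along the x-axis (y and z are unchanged)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C ** 2), gamma * (x - v * t)

t, x, y, z = 2.0, 1.0e5, 3.0e4, 7.0e3      # arbitrary coordinates of event E2
tb, xb = boost_x(t, x, v=1.5e5)            # as seen by an observer moving at c/2
before = world_interval(t, x, y, z)
after = world_interval(tb, xb, y, z)
```

Up to floating-point rounding, `before` and `after` coincide, which is exactly the frame-independence of the interval that the note appeals to.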
47. See also Weinert 184 for the link between causality (causal chain) and
the before-after asymmetry.
49. As mentioned earlier, there have also been some experiments that did not
agree with the relativity theories. Examples are the experiments that led to
the "bore hole anomaly," the "earth fly-by anomaly," and, of course, the "dark
matter and dark energy anomalies." Instead of leading to doubt about the
theories, however, these experiments are typically thought to be indications
that the data are, in one way or the other, incomplete (see McCarthy 358).
56. Please note that the system environment and observation facilities are
themselves thought to be made up from their own individual system constituents
as well. For instance, the observation-enabling support systems, among which
there are: (1) the sensory system, (2) accessories, and (3) research facilities,
may respectively be divided into: (1) the eyes, optic nerves, visual cortices,
etc.; (2) engineering tools and research equipment, such as wrenches, cloud
chambers, and photo-detectors; (3) lab buildings, cleanrooms, scientific
libraries, and so on. In turn, all this is embedded in a greater embedding
environment and set within a historically evolved context of sociocultural
and scientific use (see Van Dijk, "An Introduction," 77; also "The Process").
On the whole, however, all aforementioned systems are typically taken for
granted, neglected or left out of scope. Depending on the focus of the
investigation, as well as the personal preference and philosophical persuasion
of the chief investigator, any of the supporting subsystems on the subject side
may be handed over to the target side. A measuring instrument may itself
become part of the system-to-be-observed and the subject side will have to
trust "the naked eye" to gather its empirical data.
57. This present moment indicator, or time pointer, is external to the timeline
and moves along it at a uniform rate, or else it cannot provide the otherwise
completely static timeline with any "dynamicity" or a distinction between past
and future (see Cahill, Klinger, and Kitto).
58. Please note that, in line with Einstein's famous equation E=mc2 , this
content is typically thought of in a material-energetic sense. In the timeless
interpretation of quantum physics, it is thought that the stationary wave
function can specify all possible configurations of all the universe's material-
energetic content that is compliant with the universe's actual initial conditions:
"In quantum mechanics, [the wave function] is all that does change. Forget
any idea about the particles themselves moving. The space Q of possible
configurations, or structures, is given once and for all: it is a timeless
configuration space ... .[T]he probability density [of this configuration space
Q] has a frozen value - it is independent of time (though its value generally
changes over Q). Such a state is called a stationary state ... .All true change
in quantum mechanics comes from interference between stationary states
59. Admittedly, this is of course a crude caricature, but a telling one nonetheless.
After all, just as the shell is a crucial part of the egg that will be lost in the
process of separation, there are also various aspects of nature that will be lost
in the above process of decomposition. First of all, everything that is related
to the subject side-measurement instruments, including clocks and measuring
rods, as well as the conscious observer and all unquantifiable subjective
aspects of observation-is separated from what is held to be the entirely
physical "real world out there." Also, space, time, and mass-energy are indeed
first artificially decomposed from the undivided whole which is nature in the
raw, before it is attempted to glue them together again. But because the
initially unbroken "whole is more than the sum of its parts," any act of a
priori decomposition will cause something essential in nature to be lost. By
the way, the well-known phrase "the whole is more than the sum of its parts"
can easily be misunderstood, because in its deepest essence nature does not
contain any real "parts." That is, every "part" of nature is only a "part" in the
sense that it is subjectively singled out and linguistically labeled as such.
60. The framework of physical equations for each of those theories is subject
to all sorts of different interpretations. It is of course widely known that there
is a large number of different interpretations of quantum mechanics, among
which we can find the Copenhagen interpretation, the Bohmian hidden-
variable interpretation, Everett's many-worlds interpretation, Einstein's
neorealist interpretation, Von Neumann's extension of the Copenhagen
interpretation, and Heisenberg's potentia-actuality interpretation (see Herbert
16-29). Furthermore, next to the block universe interpretation of the theory
of special relativity, there is also a dynamic block universe interpretation, as
well as a Lorentzian and neo-Lorentzian interpretation, to name a few. Even
for the quite straightforward classical Newtonian mechanics there are at least
four empirically equivalent interpretations: (1) the action-at-a-distance
interpretation; (2) the gravitational field interpretation; (3) the curved space
interpretation; and (4) the analytical-mechanistic interpretation (see Jones).
61. In full, these acronyms are read as: "Large Hadron Collider" at the
"European Organization [formerly: Council] for Nuclear Research" and the
"Laser Interferometer Gravitational-Wave Observatory." Both research projects
have undergone several technical updates to increase their sensitivity and
measurement range.
64. DNA, for instance, has not been pre-available in the biosphere, but had
to evolve within it. In the prebiotic universe, it becomes even harder to find
something analogous to a symbol-based alphabet. In fact, the introduction of
an alphabet of symbols can be seen as part of pre-theoretical interpretation.
66. Although target side and subject side can arguably be decomposed into
an arbitrary number of constituents, this cannot be done for the measurement
interaction between those opposing sides. This is due to what may be called
"the problem of the missing meta-observer" which is inherent to the use of
an epistemic cut between target and subject system. However, when allowing
such a new meta-observer to examine the finer details of measurement
interaction, the same problem will occur all over again, albeit this time between
the newly introduced meta-observer and the initial measurement interaction
that became the new target of investigation (see Von Neumann 352; Pattee;
Van Dijk, "The Process").
68. Please note that classical information theory allows various kinds of
members-"elementary items of information," such as symbols, signs, tokens,
bits, byte-sized bit strings, syllables, or words-to be used interchangeably.
69. As apparent from the example of weighing scales, which had already been
around in ancient Greece, the basic method of proportional comparison was
already available long before Galileo. He was the first, however, to systematically
apply it to time and distance combined.
72. It must be noted that both measurement encoding and predictive decoding
will always require ample interpretation on the part of the experimentalist.
As first mentioned by Thomas Kuhn (The Structure 123), all measurement
interpretation occurs on the basis of the actually applied theory (including
all relevant background theories).
73. After all, this would only call forth the same problem all over again (i.e.,
which production rules to use for putting together the data-smoothening
encoding and decoding encryptions) and thus lead to confusing circularity
and/or infinite regress.
75. Wave function collapse: the coming into actuality of one specific
measurement outcome although the system-to-be-measured is thought to exist
in a superposition of equally probable quantum states prior to the conscious
measuring act. In absence of conscious observation the quantum states are
believed to exist all-together-at-once, this in analogy to the different time
slices in Minkowski's block universe that are thought to exist all-together-
at-once as well, thus leading to (1) the timeless view of the universe, and (2)
the arguable claim that our experience of time is completely imaginary. This
is why adherents of the block universe interpretation and relativity-inspired
interpretations of quantum physics like to dismiss consciousness as irrelevant
and illusory (see Smolin, Time, 59-64 and 80). The idea of consciousness
playing a decisive role in the collapse of the wave function has a controversial
history and has long been considered rather troublesome and unwanted by
many physicists. Hence, nowadays, mainstream physics has put its trust in
the quantum decoherence explanation in which wave function collapse can
be interpreted as being brought about by the hard-to-pin-down environmental
part of the quantum system under investigation-more or less along the lines
of Paul Dirac's idea of 'Nature making a choice' instead of consciousness.
The mathematical methodology behind the decoherence interpretation, however,
is definitely not without foundational problems either: "Under normal
circumstances ... one must regard the density matrix [i.e., the mathematical
tool that forms the main pillar underneath the decoherence approach] as some
kind of approximation to the whole quantum truth .... It would seem to be a
strange view of physical reality to regard it to be 'really' described by a
density matrix. The density-matrix description may be thus regarded as a
pragmatic convenience: Something FAPP [an acronym by John Bell that
means for all practical purposes, or, in other words, a figure of speech], rather
than providing a 'true' picture of fundamental physical reality" (Penrose,
803). And although the last word on this topic has probably not yet been said,
for now, we will round off the discussion with the argument from process
physics-namely, that the process of quantum measurement involves aspects
of decoherence as well as consciousness (see Cahill, "Process Physics: Self-
Referential," 7-9 and 24; Cahill, "Process Physics: From Information," 33).
76. Von Neumann used the term "abstract ego" as a name for the "immaterial
intellectual inner life," the "conscious mind," or the "center of subjectivity."
79. See Mara Beller's 2003 article "Inevitability, Inseparability and Gedanken
Measurement" for some more background information on how Bohr arrived
at this interpretation.
and Tononi 46-47). Because of value systems, the organism can become
capable of strengthening successful activity patterns and weakening those
that are of little to no use for survival.
83. Due to the close resemblance between the eyes of humans and octopi, the
various evolutionary stages that are hypothesized to have preceded the current
stage of the octopus eye are often thought to make up a good model for the
evolutionary development the human eye may have undergone.
84. A photodiode will typically alternate between two pre-set states that
enable it to send out a binary signal, thus communicating the detection or
non-detection of light. These states can be considered part and parcel of the
physical architecture of the photodiode.
85. See "causative" stimulus and "effectuated" nervous signaling; body and
mind; the physical world and the mental world; Descartes' res extensa and
res cogitans, etc.
86. Proteins are biomolecules that are absolutely vital to living organisms as
they participate in a vast repertoire of biological activities, such as DNA
replication, cell metabolism, biochemical signaling, and molecular transportation
(as in the blood's O2-binding protein hemoglobin).
87. Epigenetics is the study of how each organism's life events can affect the
expression of its genes as some genes are left free to act while others are
deactivated by methylation (see Phillips).
88. Although it is typically suggested that the interior of these black boxes
can be fully accounted for in a later meta-analysis, this follow-up analysis
will then inevitably bring along the same problem all over again. In this way,
just as in physics (see Section 3.3), only a pseudo-explanation is given, or
another sub-plot, of how the processing of inputs into outputs should occur.
90. See also John Pickering's papers on the relation between David Bohm's
active information and J.J. Gibson's view and on mutualism as an alternative
for conventional cognitivism for some added context.
92. From the perspective of the orthodox physicalist paradigm, life (as well
as the related phenomenon of conscious experience) seems to be utterly
otherworldly. That is, by prematurely characterizing the early universe as an
entirely physical, mechanistic, and abiotic realm, the emergence of life
automatically becomes a sudden and radical departure from the mechanistic
status quo. As a result, reductionistic explanations have remained at a loss
ever since-requiring all kinds of counter-productive measures, such as
writing off conscious experience and the passage of time as illusory, just in
order to preserve the mechanistic, reductionistic worldview. Unfortunately,
though, such measures create more problems than they solve and leave more
things unexplained than they clarify. With that in mind, perhaps it is about
time to start questioning the mechanistic, reductionistic worldview, rather
than conscious experience and the passage of time.
94. Of course, all non-equilibrium processuality will involve not only the
entire autocatalytic cycle, but also all of the (direct and indirect) in- and
outgoing flows of energy, material, and information.
97. For instance, under the influence of the day-night cycle, the organism
may develop early circadian rhythms that affect its inner biochemistry.
99. These features are the result of hominids adaptively living through their
perception-action cycles, nutrient-waste cycles, O₂-CO₂ cycles, etc., during
the course of evolution.
103. That is, different patterns of exteroceptive stimuli that are held to pertain
to the "state" of the outer-organism world, but also of proprioceptive signals
pertaining to the organism's musculoskeletal positions and movements within
that world.
104. That is, the totality of interoceptive patterns relating to the entire
homeostatic and physiological condition of the organism's body.
106. In fact, this narrows down the range of possible neural patterns to a
relatively small set, thus leading to invariability.
107. For example, influx versus dissipation; system constraints versus system
dynamics; excitatory versus inhibitory forces; synaptic growth versus decay.
108. As mentioned earlier, such a threshold can thus form a local pocket of
potential (a.k.a. potential well).
113. The most serious candidate for this "law without law" criterion seems
to be what Charles Sanders Peirce called a "tendency to take habits" (277).
114. Here Wheeler did not include an explanation of what "the boundary of
a boundary is zero" should mean. He probably meant to say that it is a
fundamental assumption in physics that various conservation laws hold in
every physical system that is properly isolated from its environment (see Von
Kitzinger 177).
118. An intrinsic present-moment effect causes the external present-moment
indicator to become redundant (see Section 2.1.3 for more details on the
external present-moment indicator).
119. Trying to model nature with the help of such supposedly fundamental
physical constituents necessarily has to rely on pre-theoretical interpretation.
And in the Cartesian-Newtonian paradigm the first task to be performed
during pre-theoretical interpretation is to draw the Galilean cut which slices
away any subjective aspects of the phenomena under investigation. However,
as has been emphasized throughout this paper, it is a mistake to think that
this would successfully divide nature into, on the one hand, "entirely physical
constituents" and, on the other hand, our "entirely subjective experiences"
of those constituents. This would amount to the undesirable bifurcation of
nature, which, once having been put into effect, cannot be undone. That is,
"nature in the raw" cannot be cut into bits and pieces and still be kept intact,
i.e., in conformity with "naked fact."
120. Please note that the word "deepest" implies a layered hierarchy of lower-
and higher-order levels of organization. However, this use of language should
be considered metaphorical rather than true to nature; in reality, it makes
more sense to think of nature in a holarchic way-with each part being a
seamlessly integrated member of the whole in which it participates, and, in
turn, with each whole itself being interpretable as such a seamlessly integrated
part as well (see Koestler). All this is characteristic of self-similar fractal
organization.
121. As already mentioned in Section 3.1.2, there are many different ways
to refer to the initially unlabeled natural world. A wide variety of names can
be used, all of which have their own context of use and are the result of a
specific set of beliefs on how nature works. Although these terms-the Kantian
"noumenal world" or "nature-in-itself," John Archibald Wheeler's "pre-
geometric quantum foam" or "pre-space," David Bohm and Basil Hiley's
"holomovement" and "implicate order," Bernard D'Espagnat's "veiled reality,"
the ancient Greek "apeiron," John Stewart Bell's world of pre-observational
"beables," or other words, like "vacuum," "void," or the Buddhist "plenary
void"-can all be used to refer to this primordial stage of nature, no one of
them can be crowned as the ultimate candidate.
122. As far as I know, Joe Rosen is not related to theoretical biologist and
biophysicist Robert Rosen.
123. Earlier on, Joe Rosen defines science as our attempt to understand the
reproducible and predictable aspects of nature as objectively as possible (30).
200 PROCESS STUDIES SUPPLEMENT 24 (2017)
124. Although physical equations are, when combined with their post-
theoretical interpretations, often thought to provide an explanation of how
nature works, they do not really do so. Just as there can be no neutral algorithm
for the choice of physical equations-i.e., for deciding which physical equation
best describes a given set of empirical data (Kuhn, The Structure, in the
Postscript written in 1969)-there can also be no finite and fairly balanced
procedure for finding the best interpretation of equation-based theories like
quantum theory or Einstein's relativity theories. Therefore, an interpretation
merely confirms the context of use within which a given physical equation
reached its mature form (see Van Dijk, "The Process"). This, then, is the
reason that no conclusive final answer can be found as to which interpretation
should be the best one.
125. Think, for instance, of a temporary wooden support on which to rest the
building bricks when constructing an archway. Although its semicircular
shape indicates where the bricks should be placed, once the arch is completed
the support is no longer needed and can be conveniently removed.
126. The noise-driven update routine has an effect that is quite similar to that
of neuromodulation (which enables brain plasticity in the initially unconditioned,
newly developing fetal brain). Analogous to self-referential noise in the
process physics model, neural noise and reentry play an indispensable role
in neuromodulation, neuroplasticity, the optimization of motor control, and
the like (see Sections 4.3 and 4.3.1). In the case of the process physics model,
however, there is no explicit, pre-developed substructure like a prewired brain.
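The noise-driven update routine mentioned here can be illustrated numerically. The sketch below assumes the iteration reported in Cahill's process physics papers, B → B − α(B + B⁻¹) + w, with B an antisymmetric matrix of connection values B_ij, α playing the role of the generic tuning parameter a, and w fresh "self-referential noise" injected at every step. The matrix size, α value, noise scale, and iteration count are illustrative choices, not values taken from the model itself.

```python
import numpy as np

def noisy_update(B, rng, alpha=0.05, noise_scale=0.1):
    """One step of the noise-driven iteration B -> B - alpha*(B + B^-1) + w,
    where w is fresh antisymmetric noise drawn anew at every step."""
    w = rng.normal(scale=noise_scale, size=B.shape)
    w = (w - w.T) / 2.0  # antisymmetrize the noise, like B itself
    return B - alpha * (B + np.linalg.inv(B)) + w

# Start from a small random antisymmetric matrix; the dimension is kept
# even so that the inverse generically exists.
rng = np.random.default_rng(42)
B = rng.normal(size=(8, 8))
B = (B - B.T) / 2.0
for _ in range(100):
    B = noisy_update(B, rng)
# Every term of the update is antisymmetric, so B stays antisymmetric;
# there is no prewired substructure, only the iteration plus noise.
```

Note that the dissipative term −α(B + B⁻¹) keeps the dynamics bounded, while the noise continually perturbs the connection values, loosely analogous to the role of neural noise in neuromodulation described above.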
127. In fact, in rice and sand pile systems such a tuning parameter can "gather
under its umbrella" the effects of various phenomena: (1) stickiness between
grains; (2) the average mass of grains; (3) the precise magnitude of the
gravitational constant (which may vary with the latitude at which the experiment
is performed); (4) the average downward velocity of the grains being dropped;
(5) possible wind shear, and so on. Accordingly, such a tuning parameter
may influence the self-organizing dynamics of the sand or rice pile system
in question. In particular, it will set the angle (or, better put, the small range
of near-critical angles) at which avalanches will be able to tumble down the
slope. The occurrence of self-organized criticality itself, however, will remain
unaffected. By the same token, all such details can be likewise covered by
one generic parameter a in the process physics model. In both cases, the
precise features of all contributing micro-factors and subnetwork activities
do not matter too much, as long as self-organized criticality is achieved.
And just as avalanches can occur at a wide range of different angles
in rice and sand pile models, many different values of the tuning parameter
a may be used in the process physics model without them affecting the ongoing
self-organized coming-into-actuality of "foreground cells" of activity patterns
(i.e., connection nodes) from a background of activity patterns with lower-
order connectivity.
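The pile behavior described above can be sketched in a few lines. The following is a minimal one-dimensional toy pile (an illustration in the spirit of the rice- and sand-pile models discussed, not the specific models cited in the text): the `threshold` parameter lumps together stickiness, grain mass, and so on, just as the generic parameter a does. Whatever its value, avalanches of widely varying sizes keep occurring once the pile reaches criticality.

```python
import random

def run_pile(width=50, grains=20000, threshold=4, seed=0):
    """Drop grains one by one onto a 1-D pile. A cell holding at least
    `threshold` grains topples: it sheds two grains, one to each
    neighbour (grains shed past the edges are lost). The number of
    topplings triggered by one dropped grain is one avalanche."""
    random.seed(seed)
    h = [0] * width
    avalanche_sizes = []
    for _ in range(grains):
        h[random.randrange(width)] += 1
        size = 0
        active = {i for i in range(width) if h[i] >= threshold}
        while active:
            i = active.pop()
            if h[i] < threshold:
                continue
            h[i] -= 2
            size += 1
            for j in (i - 1, i + 1):
                if 0 <= j < width:
                    h[j] += 1
                    if h[j] >= threshold:
                        active.add(j)
            if h[i] >= threshold:
                active.add(i)
        avalanche_sizes.append(size)
    return avalanche_sizes

sizes = run_pile()
```

Running this with different `threshold` values changes the pile's steady-state height (the analogue of the critical angle), but not the occurrence of avalanches themselves, which is the point of the analogy: the micro-details are absorbed into one parameter without affecting self-organized criticality.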
128. The images depicted here are white-noise frequency spectra (see Bourke),
used for educational and aesthetic purposes only.
132. "The requirement that the clock that measures time in quantum mechanics
must be outside the system has stark consequences when we attempt to apply
quantum theory to the universe as a whole. By definition, nothing can be
outside the universe, not even a clock. So how does the quantum state of the
universe change with respect to a clock outside the universe? Since there is
no such clock, the only answer can be that it does not change with respect to
an outside clock. As a result, the quantum state of the universe, when viewed
from a mythical standpoint outside the universe, appears frozen in time"
(Smolin, Time, 80).
137. I.e., practical purposes like the design and manufacture of computer
chips; the sending of spacecraft on missions into space; the deployment
of a properly working GPS system, and so on.
incapable of including ultraviolet, infrared, etc., in the picture. In other words,
pre-coded or pre-stated information necessarily leads to incomplete representations
since our tools of observation and alphabets of expression can
only denote so much-they always have upper and lower limits beyond which
they cannot go.
139. This "bootstrapping" refers to the Baron von Miinchhausen, who allegedly
used his own bootstraps to pull himself out of the deadly swamp. A bootstrap,
then, is the handgrip at the backside of a boot that can be used to pull it up.
For "bootstrapping" in the context of the emergence of autocatalysis and
higher-order consciousness, see Kauffman, The Origins, 373; and Edelman
and Tononi 173, 205, respectively.
142. The organism's internal milieu involves, among others, the state of its
sensory apparatus and life organs, its homeostasis (as well as derived feelings
and emotions), the kinesthetics and position of limbs and joints, muscle
tension, and so forth.
143. During use, neural connections and muscle tissue are constantly engaged
in a process of strengthening and weakening through brain plasticity,
neuromuscular memory path formation, etc. (see Edelman and Tononi 46, 79-95).
145. Please note that world and organism should not be seen as truly separate.
The concept of "world" includes all within-nature organisms, while, in tum,
each organism is fully embedded within the natural world in which it lives.
WORKS CITED
Haber, Shimon, Alys Clark, and Merryn Tawhai. "Blood Flow in Capillaries
of the Human Lung." Journal of Biomechanical Engineering 135.10
(September 20, 2013).
Harris, Michael G. "Optic and Retinal Flow." In A.T. Smith and R.J. Snowden.
Eds. Visual Detection of Motion. London: Academic P, 1994: 307-332.
Heisenberg, Werner. Physics and Philosophy: The Revolution in Modern
Science. London: George Allen and Unwin, 1958.
Herbert, Nick. Quantum Reality: Beyond the New Physics. New York: Anchor
Books, 1985.
Hesse, Janina, and Thilo Gross. "Self-Organized Criticality as a Fundamental
Property of Neural Systems." Frontiers in System Neuroscience 8 (23
September 2014): 166.
Hunt, Tam. Eco, Ego, Eros: Essays in Philosophy, Spirituality and Science.
Santa Barbara: Aramis P, 2014.
James, William. "Percept and Concept: The Import of Concepts." In Some
Problems of Philosophy: A Beginning of an Introduction to Philosophy.
New York: Longmans Green, 1911.
_ . The Principles of Psychology, Vol. 1. New York: Cosimo Classics, 2007.
Jan, James E., Russel J. Reiter, Michael B. Wasdell, and Martin Bax. "The
Role of the Thalamus in Sleep, Pineal Melatonin Production, and Circadian
Rhythm Sleep Disorders." Journal of Pineal Research 46 (2009): 1-7.
Jantsch, Erich. The Self-Organizing Universe: Scientific and Human Implications
of the Emerging Paradigm of Evolution. Frankfurt: Pergamon P, 1980.
Jensen, Henrik J. Self-Organized Criticality: Emergent Complex Behavior in
Physical and Biological Systems. Cambridge: Cambridge UP, 1998.
Jones, Roger. "Realism about What?" Philosophy of Science 58 (1991): 185-202.
Jung, Peter, et al. "Noise-Induced Spiral Waves in Astrocyte Syncytia Show
Evidence of Self-Organized Criticality." Journal of Neurophysiology 79
(1998): 1098-1101.
Käufer, Stephan, and Anthony Chemero. Phenomenology: An Introduction.
Cambridge: Polity P, 2015.
Kauffman, Stuart A. The Origins of Order: Self-Organization and Selection
in Evolution. New York: Oxford UP, 1993.
_ . At Home in the Universe: The Search for the Laws of Self-Organization
and Complexity. Oxford: Oxford UP, 1995.
_ . "Foreword: The Open Universe." In Robert Ulanowicz. Ed. The Third
Window: Natural Life beyond Newton and Darwin. West Conshohocken,
PA: Templeton Foundation P, 2009.