
On the Nature of Computing

(Notes for a Lecture at the Heidelberg Laureate Forum,
September 20, Heidelberg.
Draft, do not distribute)

Joseph Sifakis

Joseph.Sifakis@epfl.ch, Joseph.Sifakis@imag.fr

Abstract

Computing is a domain of knowledge. Knowledge is truthful information that, embedded into the right network of conceptual interrelations, can be used to understand a subject or solve a problem. According to this definition, Physics and Biology, but also Mathematics, Engineering, Social Sciences and Cooking, are all domains of knowledge. This definition encompasses both scientific knowledge about physical phenomena and engineering knowledge applied to design and build artefacts. For all domains of knowledge, Mathematics and Logic provide the models and their underlying laws; they formalize a priori knowledge which is independent of experience. Computing, together with Physics and Biology, is a basic domain of knowledge. In contrast to the other basic domains, it is rooted in a priori knowledge and deals with the study of information processing – both what can be computed and how to compute it.

To understand and master the world, domains of knowledge share two common methodological
principles. They use abstraction hierarchies to cope with scale problems. To cope with complexity,
they use modularity. We point out similarities and differences in the application of these two
methodological principles and discuss inherent limitations common to all domains.
In particular, we attempt a comparison between natural systems and computing systems by addressing
two issues: 1) Linking physicality and computation; 2) Linking natural and artificial intelligence.

Computing and Physical Sciences share a common objective: the study of dynamic systems. A big
difference is that physical systems are inherently synchronous. They cannot be studied without
reference to time-space while models of computation ignore physical time and resources. Physical
systems are driven by uniform laws while computing systems are driven by laws enforced by their
designers. Another important difference lies in the discrete nature of Computing that limits its ability
to fully capture physical phenomena. Linking physicality and computation is at the core of the
emerging Cyber physical systems discipline. We discuss limitations in bridging the gap between
physical and discrete computational phenomena, stemming from the identified differences.

Living organisms intimately combine intertwined physical and computational phenomena that have a
deep impact on their development and evolution. They share common characteristics with computing
systems such as the use of memory and languages. Compared to conscious thinking, computers are
much faster and more precise. This gives computers the ability to successfully compete with humans in solving problems that involve the exploration of large spaces of solutions or the combination of predefined knowledge, e.g. AlphaGo's recent winning performance. We consider that Intelligence is
the ability to formalize human knowledge and create knowledge by applying rules of reasoning. Under
this definition, a first limitation stems from our apparent inability to formalize natural languages and
create models for reasoning about the world. A second limitation comes from the fact that computers
cannot discover induction hypotheses as a consequence of Gödel's incompleteness theorems.

We conclude by arguing that Computing drastically contributes to the development of knowledge through cross-fertilization with other domains as well as through enhanced predictability and designability. In particular, it complements and enriches our understanding of the world with a constructive and computational view, different from the declarative and analytic view adopted by physical sciences.

1. Introduction

There is currently a lack of recognition of Computing as a discipline. It does not enjoy the same
prestige as natural sciences and mathematics do. Public opinion, policy and decision makers but also
famous scientists have a poor opinion of our discipline. Computing has a secondary status in K-12
teaching curricula in most countries.

Physics (and physicists) dominated scientific thought until the end of the 20th century. For decades the importance of Computing and Information has been underestimated or overlooked by a strongly reductionist view of the world: understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things. Any phenomenon can be understood as an emergent property of a game of elementary particles. Here is a sample of characteristic statements from famous scientists illustrating this fact:
• “My task is to explain elephants and the world of complex things, in terms of the simple things that physicists either understand, or are working on”
• “The capacity to do word-processing is an emergent property of computers”
• “Artificial intelligence could end mankind”
• “Brain could exist outside body”

Even within the computing community, there is no agreement about the scope, the perimeter and the
very nature of our discipline, or even its name. I remember diverging discussions at the Turing Centenary Celebration Event organized by the ACM in San Francisco in July 2012. Nonetheless, everybody agrees today that “Computing is no more about computers than astronomy is about telescopes” (E. Dijkstra).

I understand that talking about the nature of Computing and its relationship to other domains of
knowledge may raise deep and subtle philosophical issues. The paper is not intended to provide
definite answers to long-open questions but rather to clarify and propose a methodological framework
for comparing Computing with other basic domains of knowledge. It also aims at understanding what can really be achieved by computers as they are today and what their limitations are, and at contributing to a much-needed, useful and fruitful intra-disciplinary and inter-disciplinary dialogue.

2. Computing and Information

Initially, Computing was considered rather as an auxiliary technology of mathematics, electrical engineering or science, depending on the observer. Over the years, it resisted absorption back into the fields of its roots and developed an impressive body of knowledge [1,2].

Computing deals with the study of information processing – both what can be computed and how to
compute it.

Information can be defined as a relationship between the syntax and the semantics of a given language.
The semantics define the denotation of symbols in terms of concepts of a semantic domain. In the
opposite direction a representation function goes from the concepts to the symbols (Figure 1).
According to this definition, information is not just symbols but symbols that can be interpreted by a mind or a machine.

Information is in the Mind of the Beholder. An undeciphered script provides no information, while an ancient Greek text is information accessible to Hellenists, Maxwell’s equations have meaning for physicists, and seeing an apple evokes the meaning of “apple”.

Figure 1: Information

Information is by its nature non-material. It is an entity different from matter/energy and thus it is not
subject to time-space constraints. It is important to notice that models of computation ignore physical
time. Time in simulation programs is a state variable – time progress is explicitly handled by the
programmer.
These remarks justify the fact that Computing is a discipline in its own right, not overlapping with the natural disciplines.
It is important to note that computers do not create information. Information is created by programmers or is predefined by the semantics of programming languages through data types. Computers transform information as specified by algorithms, effective methods expressed as finite lists of well-defined instructions for calculating a function. When formalized, algorithms express the way a model of computation, e.g. a Turing machine, computes the function.
The theory of computing is rooted in prior knowledge about computation based on mathematics and logic. So far, the focus has been on designing computing systems: the dominant paradigm is synthesis rather than analysis for understanding and predicting phenomena.

Syntactic Information:
Information as defined above should not be confused with syntactic information, a quantity measuring the number of symbols, pixels or bits needed for a representation. According to Shannon’s theory, it characterizes the content of a message, not its meaning: it is n·log2(b), the number of yes/no questions one would have to ask to completely resolve the ambiguity for a word of length n over an alphabet of b symbols. Kolmogorov’s algorithmic information of an object is the length of the shortest program that
can produce the object.
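To make the contrast concrete, here is a minimal sketch (in Python, with an illustrative message and alphabet) of the quantitative notions just mentioned; the Kolmogorov part only hints at the idea, since algorithmic information is not computable:

```python
import math

# Shannon-style syntactic information of a word of length n over an alphabet of
# b equiprobable symbols: n * log2(b) yes/no questions resolve all ambiguity.
def syntactic_information(word, alphabet):
    n, b = len(word), len(alphabet)
    return n * math.log2(b)

print(syntactic_information("GATTACA", "ACGT"))   # 7 * log2(4) = 14.0 bits

# Kolmogorov's algorithmic information is the length of the shortest program
# producing the object; it is not computable, but a short generator for a very
# regular string hints at why "aaa...a" carries little algorithmic information.
regular_object = "a" * 1000
short_description = '"a" * 1000'                    # a tiny program reproduces it
print(len(regular_object), len(short_description))  # 1000 symbols vs. 10 characters
```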

Many books about information focus on this quantitative definition. Without denying the interest of this concept, which finds important applications in Computing, e.g. data compression, it should be emphasized that it captures only quantitative aspects of information.

3. Computing as a Domain of Knowledge

Computing does not fit the commonly accepted definition of Science. The Oxford dictionary says
science is “The intellectual and practical activity encompassing the systematic study of the structure
and behaviour of the physical and natural world through observation and experiment”. Similar
definitions are provided by other dictionaries. Webster says science is “a branch of study concerned
with the observation and classification of facts, especially with the establishment and quantitative
formulation of verifiable general laws.”

According to these definitions, science is concerned with the discovery of facts and laws. It
encompasses only the branches of study that relate to the phenomena of the physical universe and their
laws. So, while physics and biology are deemed sciences, mathematics and Computing are not.
Science in Computing is limited to natural computing that studies computational paradigms that are
abstracted from natural phenomena e.g. self-replication, the functioning of the brain, and
morphogenesis.

To understand the nature of Computing, a more pertinent concept is that of domain of knowledge.
Knowledge is truthful information that, embedded into the right network of conceptual interrelations, can be used to understand a subject or solve a problem.
According to this definition, scientific theories, but also Mathematics, Engineering, Social Sciences and Cooking, are domains of knowledge. Thus the Pythagorean Theorem, the law of conservation of energy, algorithms and cooking recipes are all knowledge.

Knowledge acquisition and development involve a cycle:

1. Formulate a question (a problem) in an adequate framework involving concepts, adopted hypotheses and models;
2. Develop results, e.g. a scientific theory, an artefact, a logical proof;
3. If applicable, experimentally test the obtained results; go back to step 1 in case of discrepancy.

The application of the third step is necessary for a posteriori knowledge, which depends on experience or empirical evidence, e.g. Natural Sciences, Engineering disciplines, Medicine, Cooking, Economics. Testing of results is necessary to check that hypotheses are consistent with observation and measurement. On the contrary, a priori knowledge is independent of experience, e.g. Mathematics, Logic, the Theory of Computing, and need not be tested.

A priori knowledge is absolute. It is valid for a given theoretical framework defined by axioms and
rules. On the contrary, a posteriori knowledge is falsifiable and comes in degrees. Its validity may
differ in testability, the degree of abstraction and the way in which it is developed. Astronomy has
limited possibilities of experimental testing, and this is even worse for “social sciences”. Such a limitation is not a sufficient reason for not striving for the development of knowledge. On the contrary, knowledge should be pursued humbly, taking such limitations into consideration.

Considering domains of knowledge avoids sterile discussions focusing on the scientific or non-
scientific nature of disciplines. It encompasses both the development of theory and its systematic
application. The latter is not only essential for validation purposes. It can give rise to new problems
and trigger novel theoretical developments. It is also indispensable for mastering the world and for meeting specific material and spiritual needs. What really matters is that
knowledge has been acquired and experimentally tested.
Today modeling and simulation techniques allow the study of complex physical phenomena such as
lava flow or the behavior of artefacts such as web-based systems. The models used may not be directly
related to observation; they may be completely ad hoc or built by combining theoretical results and
empirical knowledge. What really counts is that simulation results do agree with the phenomena they
are intended to describe and that they contribute to their understanding and prediction. The physician,
for example, must articulate a problem (say, a person with an odd set of symptoms) and then generate
hypotheses (in this case, possible diseases to explain the observed symptoms). Each hypothesis
(diagnosis) is tested until one fits all the observed data, fulfills certain predictions, and is supported by
additional data. Then it is acted upon. One does the same sort of thing when attempting to solve a
financial problem. Thus expanded, scientific knowledge involves any ideas about the world which are
based on inductive reasoning and which are open to testing and change.
Furthermore, the starting point in the pursuit of knowledge need not be observation. The Theory of
Relativity was motivated by a series of thought experiments rather than direct observation. The
development of Computing as a scientific discipline started from prior knowledge about computation
based on mathematics and logic. Non-compliance with the standard development model of the natural sciences is not a sufficient reason for questioning its scientific status. Today it is widely recognized that Computing transcends the realm of computers. Physical phenomena admit a computational interpretation (e.g. DNA replication, quantum computing). If Computing had emerged through the study of such phenomena, would it have been deemed a “true” science?
Knowledge acquisition and development intimately combines two interdependent types of activities:
Science and Engineering.

Science is mainly motivated by the need for understanding the physical world. It privileges the
analytic approach by connecting the physical phenomena through abstractions to the world of concepts
and mathematics.
Engineering is predominantly synthetic. It is motivated by the need to master and adapt the physical
world. It transforms scientific results into concrete, trustworthy and optimized artefacts and deals with
our ability to mold our environment in order to satisfy material and spiritual needs. Interaction and
cross-fertilization between Science and Engineering is key to the progress of scientific knowledge as
shown by numerous examples. A great deal of the foundations of physics and mathematics has been
laid by engineers. Today, more than ever, Science and Engineering are involved in an accelerating
virtuous cycle of mutual advancement. Scientists build experiments in order to study while engineers
study theory in order to build.

The World can be understood as the combination of phenomena of the material world and their mental representations, the concepts we usually call ideas. Some ideas and data can be represented as information, in particular in digital form, and information is the real issue. Some types of information can be characterized as knowledge (information that can be reused in an appropriate context to solve problems), and in particular as scientific knowledge. Thinking is an information-processing activity. The theory of computation is deemed knowledge that can be implemented and experimentally validated by using computers (Figure 2).

Figure 2: The virtuous cycle of scientific and engineering knowledge

4. Domains of Knowledge
4.1 Common Paradigms

There exist four main domains of knowledge: Mathematics and Logic, Physics, Computing, and
Biology. Mathematics and Logic provide the models and theoretical tools to reason about the world.
Physics deals with the study of phenomena involving matter/energy and its motion through space and
time – both discovering the underlying laws and building artefacts. Computing deals with the study of
information processing – both what can be computed and how to compute it. It is not a branch of mathematics; it relies on mathematical models exactly as Physics does.

Biology deals with the study of life and living organisms. Biological phenomena are a combination of
intertwined physicochemical and computational processes. All other domains of knowledge rely on
composite knowledge from these basic domains.

Understanding the world involves problems of scale and problems of complexity. Domains of
knowledge rely on two complementary methodological simplifications to cope respectively with each
one of them: abstraction layering and modularity [3,4].

4.1.1 Abstraction Layering

The world, living or inanimate (including artefacts), has breadth and depth. We deal with phenomena at scales ranging from 10^-35 m (the Planck length, the smallest measurable quantity) to 10^25 m (the size of the observed universe).
To cope with problems of scale we study the physical world at different levels of abstraction. Note that
abstraction is a holistic way to break complexity by revealing relevant features of the observed reality.
As E. W. Dijkstra says, “Being abstract is something profoundly different from being vague … The
purpose of abstraction is not to be vague, but to create a new semantic level in which one can be
absolutely precise.”

To understand complex systems we use hierarchies of models where each layer is related by adequate
abstraction relations. As we move up in the abstraction hierarchy, the granularity of observation
becomes coarser. The abstraction relation should establish correspondences between the laws and
properties at one layer with laws and properties of the upper layers. The obvious demand here is
integration of layered models to come up with a Theory of Everything that coherently integrates the
knowledge from each layer for a given domain of knowledge.
Figure 3 depicts a tentative hierarchical decomposition of three basic domains of knowledge.

Figure 3: Abstraction hierarchies

4.1.2 Modularity

Modularity is used to break the complexity at a given layer of abstraction. It considers that complex systems can be built from a relatively small number of types of components (bricks, atomic elements) and glue (mortar) that can be considered as a composition operator. The application of this paradigm
has greatly contributed to the advancement of knowledge in Physics and physical systems engineering.
Complex systems can be built by composing predefined components characterized by their dynamic
behavior. Composition implies restrictions to the behavior of the composed components expressed as
constraints on their state variables. For instance, electrical circuits can be built out of four different types of components. Their behavior can be characterized by a set of equations including the equations
describing the behavior of the involved components and the equations expressing constraints induced
by the connections between the components.

Modularity relies on two basic assumptions:

• The behavior of each component can be studied separately.
• The behavior of a composite component can be inferred by composing the behaviors of its constituents.

An implicit assumption in the application of this paradigm is that the behavior of the atomic elements is not altered when they are plugged into a given context. This assumption is valid in classical physics but it fails for bio-systems and linguistic systems.

The requirement to deal with a small number of types of components is essential for the formalization of the composition rules between components. Frameworks with a large number of types of components are not easily amenable to formalization.
A specific problem for computing systems is component heterogeneity. While hardware systems can be built out of a limited number of types of components, e.g. memory, bus, ALU, multiplexer, it seems hard to find a common component model for software systems. This is a key limitation to the development of theories for mastering the component-based construction of software [5].

4.2 Common Limitations


4.2.1 More is Different – Emergent Properties

An important question is whether we can achieve knowledge unification across the abstraction layers
using a compositionality principle: knowing the properties of components at one abstraction layer, is it
possible to infer global properties of composite components at a higher abstraction level? For
example,
• inferring the properties of water from the properties of hydrogen and oxygen atoms and the rules for their composition;
• inferring the properties of application software from the behavioral properties of the components of the HW platform on which it is running.

These two problems are of the same nature, and will probably find no satisfactory answers, as explained in the seminal paper of Philip Anderson, “More is Different” [3]:
“The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means
imply a "constructionist" one: The ability to reduce everything to simple fundamental laws does not
imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary
particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to
have to the very real problems of the rest of science, much less to those of society”

This limitation can also be understood by the observation that new properties may emerge due to the
interaction of a number of simple entities in an environment, forming more complex behaviors as a
collective. It implies in particular that unification of knowledge in a domain is problematic.

4.2.2. Predictability

Understanding and predicting the behavior of the world is limited by the conjunction of two factors.
• One is our ability to model and experimentally validate the observed phenomena. This usually involves a complexity (hardness) that we call epistemic complexity.
• The other is our ability to analyze experimentally validated models and infer their properties. This is limited by the computational complexity of the analysis problem.

Experimental validation ensures that the models are faithful: whatever holds for the model holds for
the system under study. Note that the same phenomenon may admit models of different complexity.
Newtonian models are an approximation that has been improved by Relativity Theory. Statistically based quantum mechanics, as formulated a hundred years ago, seems to be a caricature of a more complex reality.

We consider that the predictability of a studied system or phenomenon can be characterized by a quantity that is inversely proportional to the product of epistemic complexity and computational complexity (Figure 4). For example, computing the execution times of application software running on a given execution platform involves two steps:
1) Build a faithful model of the program code running on the execution platform. To reflect precisely the dynamics of the execution, such a model may be extremely complex for modern chips, as they have inherently non-deterministic behavior. Execution times of even simple instructions cannot be precisely estimated due to the use of memory hierarchies and speculative execution.
2) Estimate execution times by using static analysis techniques. Notice that well-known non-computability limitations do not allow the precise computation of execution times for a given model. Convergence of analysis algorithms involving fixpoint computation is enforced by abstract interpretation to compute upper bounds [6], as sketched below.
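As a toy illustration of step 2 (not the method of [6]), assume upper bounds on the execution time of each basic block have already been extracted from a platform model; a static analysis can then bound the execution time of a loop-free control-flow graph by a longest-path computation. All names and numbers below are illustrative.

```python
from functools import lru_cache

# Per-block upper bounds (in cycles) assumed to come from a model of the platform,
# and an acyclic control-flow graph. Real WCET analyzers also handle loops,
# caches and pipelines; this sketch only shows the longest-path bound.
block_wcet = {"entry": 2, "test": 1, "then": 8, "else": 3, "exit": 1}
successors = {"entry": ["test"], "test": ["then", "else"],
              "then": ["exit"], "else": ["exit"], "exit": []}

@lru_cache(maxsize=None)
def wcet(block):
    tails = [wcet(s) for s in successors[block]]
    return block_wcet[block] + max(tails, default=0)

print(wcet("entry"))   # 2 + 1 + 8 + 1 = 12: an upper bound, not an exact time
```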

The above example adequately illustrates the difficulties encountered in producing knowledge about complex phenomena or systems, as well as the interplay between precision of modeling and precision of analysis. Precise and detailed models are much harder to analyze, and thus only approximate solutions are tractable. On the contrary, for more abstract models the analysis complexity is lower. Nonetheless, a possible gain in ease of analysis may be outweighed by the low information content of the model.

Figure 4: Predictability and Designability

4.2.3 Designability

The rigorous design of artefacts involves two distinct steps [5]. One step deals with formalizing and validating user requirements expressed in natural language. The second step deals with the synthesis, from the formalized requirements, of a model from which an implementation of the artefact can be directly derived.
Our capability to design artefacts is limited by the conjunction of two factors:
• One factor is related to our ability to formalize and experimentally validate requirements about the artefacts to design. This involves a complexity that we call linguistic complexity. The requirements to be formalized are usually expressed in natural language and are declarative by their nature. For some well-established domains of knowledge, the formalization of requirements may be relatively easy, e.g. for building electric circuits or bridges. On the contrary, for computing systems the formalization and validation of requirements seems to be much harder to achieve.
• The other factor is related to the ability to synthesize implementable models meeting the formalized requirements. This is limited by the computational complexity of the synthesis problem.

The designability of an artefact can be defined as a quantity that is inversely proportional to the product of the linguistic complexity of the formalization of its requirements and the computational complexity of solving the corresponding synthesis problem (Figure 4).

A simple example illustrating this concept is the design of a controller meeting given requirements. It
involves two steps:
1) Formalizing the requirements e.g. by using temporal logic. The hardness of this task depends
on the type and the complexity of the requirements.
2) Synthesizing the controller from the requirements and an executable model of the system to be
controlled. The complexity and the tractability of the synthesis problem depend on both the
type of requirements and the complexity of the controlled system.

Both predictability and designability are abstract concepts that show how Computing contributes to the development of both scientific knowledge and engineering knowledge. They also determine the corresponding limitations.

5. Linking Physicality and Computation


5.1 Seeking Commonalities

Recently, the hypothesis that the Universe is a digital computer has motivated a collection of
theoretical perspectives called Digital Physics. The key point of view is that the universe can be
conceived of as either the output of a deterministic or probabilistic computer program, a vast, digital
computation device, or mathematically isomorphic to such a device.
In parallel, the term Computational Thinking was brought to the forefront of the Computing community initially by Seymour Papert in 1980 [7] and again by Jeannette Wing in 2006 [8]. The key idea is that integrating computational paradigms and theory in other disciplines is a new way of understanding the world and solving problems.
Linking the realms of physical systems and computing systems requires a better understanding of
differences and points of contact between them. How is it possible to define models of computation
encompassing quantities such as physical time and resources? Significant differences exist in the
approaches and paradigms adopted by physical and computing systems engineering.

We discuss below problems raised by the emerging discipline of cyber physical systems which tightly
integrate computation, networking, and physical processes. We consider electromechanical processes
that can be modeled in classical Physics.
Cyber physical systems design flows should consistently combine component-based frameworks for
the description of both physical and cyber systems. The behavior of components for physical systems
is described by equations while cyber components are transition systems. Furthermore, connectors for
physical components are just constraints on flows, while for cyber components they are synchronization events (interactions) with associated data transfer operations. Physical systems are
naturally parallel and dataflow while computational models are built out of interacting components
that are inherently sequential.

Physical systems models are primarily based on continuous mathematics while Computing is rooted in
discrete non-invertible mathematics. Physics relies on the knowledge of laws governing the physical
world as it is, while Computing is rooted in a priori concepts. Physical laws are declarative by their
nature. Physical systems are modeled by differential equations involving relations between physical
quantities. They are governed by simple laws that are, to a large extent, deterministic and predictable. We know how to build artefacts meeting given requirements (e.g. bridges or circuits) by solving the equations describing their behavior. This approach is hardly applicable to computing systems. State equations of very simple computing systems, such as an RS flip-flop, do not admit linear representations in any finite field. Computing systems are described in executable formalisms such as programs and machines. Their behavior is intrinsically non-deterministic. For computing systems, synthesis is in general intractable. Correctness is usually ensured by a posteriori verification. Undecidability of their essential properties implies poor predictability.

Despite these differences, both physical and computing systems can be studied as dynamic systems described by equations of the form X' = f(X, Y), where X' is a “next state” variable, X is the current state and Y is the current input of the system.
For physical systems, variables are functions of a single real-valued time parameter. For computing systems, variables range over discrete domains. The next-state variable X' is typically dX/dt for physical systems, while for computing systems it denotes the system state in the next computation step.
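The two readings of X' = f(X, Y) can be made concrete with a small sketch; the particular transition function and differential law below are illustrative only.

```python
# Discrete dynamics: X' is the state reached at the next computation step.
def next_state(x, y):
    return (x + y) % 7              # an arbitrary transition function on integers

x = 1
for y in [3, 5, 2]:                 # inputs drive the successive steps
    x = next_state(x, y)
print(x)                            # -> 4

# Continuous dynamics: X' = dX/dt; the state is a function of physical time,
# approximated here by explicit Euler integration with step dt.
def derivative(x, y):
    return -0.5 * x + y             # an arbitrary first-order physical law

x, dt = 1.0, 0.01
for _ in range(1000):               # 10 seconds of simulated physical time
    x += derivative(x, y=0.2) * dt
print(x)                            # approaches the equilibrium 0.2 / 0.5 = 0.4
```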

Figure 5 shows a program computing the GCD of two integer variables and a mass-spring system. The operational semantics of the programming language associates with the program a next-state function, while the solution of the differential equation describes the movement of the mass.
The set of reachable states of the program is characterized by the invariant GCD(x,y) = GCD(x0,y0), where x0, y0 are the initial values of x and y, respectively. This invariant can be used to prove that the program is correct if it terminates.
In exactly the same manner, the law of conservation of energy, (1/2)k*x0^2 - (1/2)k*x^2 = (1/2)m*v^2, determines the movement of the mass as a function of its distance x from the origin, its initial position x0, and its speed v.

Figure 5: Behavior and laws characterizing a GCD program and a spring-mass system
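The parallel drawn in Figure 5 can be checked on a small sketch: the GCD program preserves its invariant at every step, just as the spring-mass system conserves its energy along a (numerically integrated) trajectory. The subtraction-based GCD variant, the integration scheme and the parameter values below are illustrative assumptions.

```python
import math

# The GCD program: at every step the designer-enforced law (invariant)
# GCD(x, y) = GCD(x0, y0) is preserved, which proves partial correctness.
def gcd_run(x0, y0):
    x, y = x0, y0
    while x != y:
        assert math.gcd(x, y) == math.gcd(x0, y0)     # invariant holds
        if x > y:
            x -= y
        else:
            y -= x
    return x                                          # equals GCD(x0, y0)

print(gcd_run(12, 18))                                # -> 6

# The spring-mass system: the law of conservation of energy
# (1/2)k*x0^2 = (1/2)k*x^2 + (1/2)m*v^2 holds along the motion.
k, m, x, v, dt = 4.0, 1.0, 1.0, 0.0, 1e-4
e0 = 0.5 * k * x ** 2
for _ in range(100_000):                              # 10 s of motion
    v += -(k / m) * x * dt                            # semi-implicit Euler keeps
    x += v * dt                                       # the energy nearly constant
assert abs(0.5 * k * x ** 2 + 0.5 * m * v ** 2 - e0) < 1e-3 * e0
print(x, v)
```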

This example illustrates remarkable similarities and also highlights some significant differences.
Programs can be considered as scientific theories. Nonetheless, they are subject to specific laws (invariants) that are enforced by their designers and are hard to discover. Finding program invariants is
a well-known non-tractable problem. On the contrary, all physical systems, and electromechanical
systems in particular, are subject to uniform laws governing their behavior.

Another important difference is that for physical systems, variables are functions of a single time parameter. This drastically simplifies their specification and analysis. Operations on these variables are defined on streams of values, while, as a rule, operations on program variables depend only on the current state.

5.2 Modular Simulation

In contrast to physical system models that have inherently time-space dynamics, models of computation, e.g. automata and Turing machines, do not have a built-in notion of time. For modeling/simulation purposes, logical time is represented by state variables (clocks) that must increase monotonically and synchronously. Nonetheless, clock synchronization can be achieved only to some degree of precision and is computationally expensive. This notion of logical time as a state variable explicitly handled by execution mechanisms significantly differs from physical time, modeled as an ever-increasing time parameter. In particular, logical time may be blocked or slowed down.

Physical synchrony intrigued Leibniz, who tried to logically explain connections between phenomena in the world. He was amazed that we can in theory have two clocks that are perfectly synchronized, and he suspected that there is a hidden relation (speaking of a pre-established harmony in the natural world). This “miracle” is naturally explained by the existence of fields in physics. Accurate synchronization of distant computing processes (distributed systems) involves a lot of computational overhead if it is to be achieved with good precision.
The problem of modular simulation of physical systems was initially discussed by Richard Feynman [9]:

“The rule of the simulation that I would like to have is that the number of computer elements required
to simulate a large physical system is only proportional to the space-time-volume of the physical
system. I don't want to have an explosion. That is, if you say I want to explain this much physics, I can
do it exactly and I need a certain size computer. If doubling the volume of space and time means I'll
need an exponentially larger computer, I consider that against the rules.”

Today, for efficiency reasons, simulators of physical systems compile the systems of equations describing their components and the constraints induced by their composition into a global system of equations. This limits the capability of simulators to handle complex systems. An alternative would be to adopt the distributed modular simulation principle depicted in Figure 6.

To simulate the behavior of the circuit consisting of components R, L and C, one method consists in solving the system of equations describing the dynamics of the components and the constraints on currents and voltages induced by the connectors (represented by bullets). An alternative approach that would avoid the construction of a global model is to generate code by connecting computers that execute the simulation programs PR, PL and PC for R, L and C respectively, and to use a protocol for their coordination. The protocol plays the role of the connectors and should enforce equalization of voltages and currents as specified by the interconnect constraints.

Figure 6: Modular simulation
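As a toy illustration of this modular principle, the sketch below co-simulates a series RLC circuit driven by a constant source: each component runs its own small program, and a coordinator plays the role of the connectors by exchanging only port variables (the common current and the component voltages) at every step, so that Kirchhoff's voltage law is enforced. The circuit topology, names and values are illustrative assumptions, not the setup of Figure 6 or a real co-simulation framework.

```python
# Each component is simulated by its own "program"; the coordinator exchanges
# only port variables and enforces v_R + v_C + v_L = v_src at every step.

class Resistor:
    def __init__(self, R): self.R = R
    def voltage(self, i, dt): return self.R * i       # v = R * i

class Capacitor:
    def __init__(self, C): self.C, self.q = C, 0.0
    def voltage(self, i, dt):
        self.q += i * dt                               # integrate the current
        return self.q / self.C                         # v = q / C

class Inductor:
    def __init__(self, L): self.L, self.i = L, 0.0
    def advance(self, v_across, dt):
        self.i += (v_across / self.L) * dt             # di/dt = v / L
        return self.i

def coordinator(v_src, R, C, L, dt=1e-4, steps=1000):
    """Plays the role of the connectors: collects the component voltages for the
    shared current and hands the inductor the remaining voltage (KVL)."""
    r, c, l = Resistor(R), Capacitor(C), Inductor(L)
    i = 0.0
    for _ in range(steps):
        v_r = r.voltage(i, dt)
        v_c = c.voltage(i, dt)
        i = l.advance(v_src - v_r - v_c, dt)           # KVL closes the loop
    return i

print(coordinator(v_src=1.0, R=10.0, C=1e-3, L=1e-2))  # current after 0.1 s
```

Only a few port values cross the component boundaries at each step; the cost of exchanging them between physically separate simulators is exactly the communication overhead discussed below.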

An interesting open question is whether modular simulation is at all possible with satisfactory precision, as the communication overhead seems prohibitive even for simple systems. If not, this would bring an extra argument showing that natural computers cannot be rooted in Turing-based computation.

5.3 Discretizing Time


5.3.1 The Synchrony Assumption

Discretization of continuous models often makes a very strong implicit assumption regarding the speed of the discretized system with respect to its environment. This assumption, known as the synchrony assumption, says that the input x (the external environment) does not change, or does not change significantly, between two successive integration iterations. If this assumption is not respected, then the simulation is not faithful.

The importance of the synchrony assumption can be understood when we try to find an automaton that is behaviorally equivalent to a function as simple as a unit delay. A unit delay is specified by the equation y(t) = x(t-1). For the sake of simplicity, we consider that x and y are binary variables, functions of time t. The behavior of a unit delay can be represented by the timed automaton in Figure 7 with four states, provided that there is at most one change of x in one time unit. The automaton detects rising-edge (↑x) and falling-edge (↓x) events of the input x and produces the corresponding outputs one time unit later. Reaction times are enforced by using a clock τ. Notice that the number of states and clocks needed to represent a unit delay increases linearly with the maximum number of changes allowed for x in one time unit.
delay if we do not make an assumption on the upper bound of input changes over one time unit! If the
synchrony assumption holds then the provided automaton is behaviorally equivalent to the unit delay
function.

Figure 7: Timed automaton representing a unit delay
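A small discrete-event sketch shows what the synchrony assumption buys: if x changes at most once per time unit, a single pending edge per change suffices and the delayed output can be produced by a fixed-size mechanism. The representation of signals as lists of timed changes is an illustrative assumption.

```python
# The unit delay y(t) = x(t - 1) on a binary signal given as a sorted list of
# (time, new_value) changes: every change of x is replayed one time unit later.
def unit_delay(changes, horizon):
    return [(t + 1.0, v) for (t, v) in changes if t + 1.0 <= horizon]

# Under the synchrony assumption (at most one change of x per time unit), at most
# one edge is pending at any moment, matching the four-state automaton of Figure 7.
# If k changes may occur per time unit, k pending edges must be remembered, which
# is why the number of states and clocks grows with k.
print(unit_delay([(0.3, 1), (1.7, 0), (2.9, 1)], horizon=5.0))
```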

The above example also shows that mathematically simple functions may not have computationally
simple implementations.

5.3.2 Coping with Zenoness

Another consequence of time discretization is the emergence of so-called Zeno behavior, when an
infinite number of events happen within a finite time span. The classical example models a ball falling from an altitude h and losing a percentage p of its speed due to a non-elastic shock upon bouncing on the ground. The infinite sequence of moments at which the ball bounces off the ground converges to a finite moment, called the Zeno point.

Notice that the behavior of the bouncing ball model is not defined beyond the Zeno point. Thus, for a
correct simulation to be possible, the simulation process must comprise two modes: before and after
the Zeno point. Nonetheless, the problem remains of detecting Zeno behavior and deciding when the
transition between the modes can be taken without the risk of considerable deviation from ideal
behavior. Zeno properties are not computable, so detecting Zenoness in practice requires human assistance. Although this problem has been extensively studied, existing simulators leave, not surprisingly, to the model designer the responsibility of providing additional information, such as a precision tolerance or patterns of energy dissipation, which allows deciding when a transition is to be taken. Although for simple examples, such as the bouncing ball, guessing Zeno points is relatively easy, doing so for complex realistic models is not practical. Furthermore, to a considerable extent, this defeats the purpose of simulation, which consists, precisely, in discovering such information.
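For the bouncing ball, the Zeno point can in fact be computed in closed form, which is what makes this example easy; the sketch below compares the closed-form limit with a naive event-driven simulation of the bounce instants. The numerical values are illustrative.

```python
import math

# Ball dropped from altitude h; at each bounce it loses a fraction p of its speed.
# The bounce instants form a geometric series that accumulates at the Zeno point.

def zeno_point(h, p, g=9.81):
    v = math.sqrt(2 * g * h)                 # speed at the first impact
    t = math.sqrt(2 * h / g)                 # time of the first impact
    r = 1.0 - p                              # fraction of speed retained per bounce
    return t + (2 * v / g) * r / (1 - r)     # closed-form limit of the bounce times

def bounce_times(h, p, n, g=9.81):
    v, t = math.sqrt(2 * g * h), math.sqrt(2 * h / g)
    times = [t]
    for _ in range(n):
        v *= (1.0 - p)                       # non-elastic shock
        t += 2 * v / g                       # flight time between two bounces
        times.append(t)
    return times

print(zeno_point(h=1.0, p=0.2))              # the Zeno point
print(bounce_times(h=1.0, p=0.2, n=50)[-1])  # the simulation approaches it
```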

5.4 Time Predictability of Computing Systems

The study of the dynamic behavior of computing systems reveals important differences with physical systems. It is important to note that there is no theory for predicting the dynamic behavior of application software running on a given platform. Empirical methods involve the construction of a timed model and an analysis to derive properties of the model.

Time and resources in models of mixed HW/SW systems are modeled as state variables. Each action
needs a certain amount of resources for its execution including time, memory, energy etc. Upon
completion it may release non-consumable resources.
Besides the problems related to the predictability of the resources needed for each action (discussed in 4.2), logical resource modeling should faithfully reflect resource dynamics in the real system. For example, parallel modification of time variables in a model should be consistent with physical time, which increases monotonically and uniformly. Physical time progress cannot be stopped.
For logical models the programmer is the “master of time”. He may instantaneously block time progress to simulate the urgency of actions. Thus model time may block or can involve Zeno runs (for models with continuous clocks, such as timed automata). Manifestations of deadline misses are deadlocks or time-locks, and we should be careful to distinguish these anomalies from other functional anomalies of the same type.

Another difference between physical systems and computing systems is robustness. Physical system
analysis techniques applied by practitioners assume resource robustness: small changes of resource parameters in some range of values entail a commensurate change of performance. A civil engineer is sure that if he replaces a material by a stronger one with otherwise similar mechanical characteristics, the resistance of a building will improve.
Unfortunately, computing systems are not time-robust. One would expect a system to exhibit its worst performance for the worst-case execution times of its actions. In fact, performance degradation may be observed for increasing speed of the execution platform. Non-determinism is one of the identified causes of such counter-intuitive behavior, known as timing anomaly [10].
We lack a theory for guaranteeing resource robustness, i.e. that performance changes monotonically with resources. Only in this case do analyses for worst-case and best-case values of the resource parameters suffice to determine performance bounds.

6. Linking Natural and Artificial Intelligence

Artificial Intelligence was born in the mid-60s as a branch of computer science that "studies and designs intelligent agents". Over the years, it has had its ups and downs. Initially the AI community was profoundly optimistic about the future of the field: "machines will be capable, within twenty years, of doing any work a man can do" and "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".
Later, the general problem of simulating or creating intelligence has been broken down into a number
of specific sub-problems such as Reasoning, Problem solving, Knowledge representation, Planning,
Learning, and Natural language processing.
Over the past ten years, we have observed a strong regain of interest in AI, especially due to breakthroughs in learning and data analytics. This is accompanied by an inevitable exacerbation of hype and speculation, not only about the importance of AI but also about its potential dangers. According to prominent scientists and engineers such as Elon Musk, Bill Gates, Stephen Hawking and many others, AI poses a threat to humankind.
The potential impact of AI is currently overestimated. It is important that the Computing community react to this mixture of obscurantism and hype, guaranteed to win favor with the media, and propose a lucid assessment of AI perspectives based on rational and scientific grounds.

6.1 Seeking Commonalities

Living organisms intimately combine interacting physical and computational phenomena that have a
deep impact on their development and evolution. They share several characteristics with computing
systems: the use of memory; the analogy in the distinction between hardware and software and the
distinction between brain and mind; the use of languages. Nonetheless, some essential differences
exist. Computation in living organisms is robust, has built-in mechanisms for adaptivity and, most
importantly, it allows the emergence of abstractions and concepts.
Computers surpass conscious human thinking in that they compute much faster and with much higher precision. This gives them the ability to successfully compete with humans in solving problems that involve the exploration of large spaces of solutions or the combination of predefined knowledge. This idea is applied by systems such as IBM’s Deep Blue and Watson, as well as Google’s AlphaGo.

Defeating human intelligence in such a manner makes people believe that computers exhibit intelligence and that they are even superior to humans in that respect. This idea of a behavioral comparison between machines and humans goes back to A. M. Turing. The Turing Test involves three players: a player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator C is limited to using the responses to written questions to make the determination.
Such a behavioral test may be criticized for several reasons. Searle’s Chinese Room Argument is a
thought experiment which shows that understanding the meanings of symbols or words – what we will
call semantic understanding – cannot simply amount to the processing of information. Furthermore,
Turing test may be diverted from its original purpose if the interrogator asks questions such as
“compute a digital expansion of length 100 for p”. The computer beats the human player. No surprise.
Computers are much faster than humans in performing any well-defined computation!

6.2 Understanding Human Intelligence

The current state of the art allows building intelligent systems that are specialized, e.g. for playing games, driving cars, or creating knowledge for a specific domain.
We need systems that exhibit general intelligence. The best route for achieving this may start with a better understanding of human intelligence.

Humans combine perception and reasoning using a semantic model of the external world that has been progressively built in their mind through learning and by consistently integrating knowledge acquired over their lifespan.
Consciousness is the ability to “see” the Self interact with this semantic model of the world
contemplating possible choices and evaluating the consequences of actions. The role of perception is
to interpret sensory information in terms of concepts in order to represent and understand the external
environment. Learning plays an important role in updating knowledge that is used in particular to
make decisions and choose objectives.

Recent progress in learning shows that computers are able to generate knowledge and predict with some success. Big data analytics allows the discovery and interpretation of meaningful patterns defined by specialists. This leads to the creation of a new type of knowledge that allows prediction with limited or even no understanding.

For computers to “understand”, they should be equipped with a semantic model of their external environment. Intelligent behavior requires algorithms that are not just recipes for the syntactic manipulation of symbols. In order to be able to build such a model, we need to analyze natural language and create semantic networks (taxonomies) involving hierarchies of disjoint categories (concepts) representing knowledge about the world. Furthermore, we need to define rules for updating and enriching the knowledge used by the model.
Despite research efforts over more than fifty years, very little progress has been accomplished so far in
that direction [11]. Humans are much superior to computers in understanding situations and using
common sense knowledge. For instance, the instantaneous interpretation by a human of the sequence of frames of Figure 8 as an aircraft crash requires a combination of implicit knowledge and rules of reasoning that is hard to make explicit and formalize.

Figure 8: Common sense reasoning

In our opinion, for computers to approach human intelligence we should successfully overcome the
limitations inherent to linguistic complexity and study methods for building semantic models even for
subsets of the natural language e.g., domain-specific languages for requirements specification.

Finally, regarding formal reasoning, a key limitation of computers comes from Gödel’s incompleteness theorem. Among the three basic rules of reasoning (deduction, abduction and induction), computers cannot discover induction rules. If they could, then program verification would be a tractable problem. For instance, to prove partial correctness of a program we need an invariant J used in the following induction principle: base case: J(s0), i.e. J holds at the initial state s0; inductive step: if J(s) holds then, for any statement st, st(s)=s' implies J(s'). Similarly, to prove that a program terminates, we use a ranking function V: S → N such that for any state s and any statement st, st(s)=s' implies V(s) > V(s').
Of course, computers can apply induction rules discovered by their programmers but this is an entirely different matter.
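The division of labor can be made concrete with a minimal sketch: for a toy loop program, the invariant J and the ranking function V are supplied by a human, and the machine only checks the base case, the inductive step and the decrease of V over a finite box of states. All names and the program are illustrative.

```python
# Toy program: while x > 0: x, y = x - 1, y + 1

def step(s):
    x, y = s
    return (x - 1, y + 1) if x > 0 else None          # None: the loop has terminated

s0 = (5, 0)
J = lambda s: s[0] >= 0 and s[0] + s[1] == sum(s0)    # human-supplied invariant
V = lambda s: s[0]                                    # human-supplied ranking V: S -> N

assert J(s0)                                          # base case: J(s0) holds
box = [(x, y) for x in range(-2, 10) for y in range(-2, 10)]
for s in box:                                         # inductive step and termination
    s1 = step(s)
    if J(s) and s1 is not None:
        assert J(s1), f"invariant not preserved at {s}"
        assert V(s1) < V(s), f"ranking does not decrease at {s}"
print("J is inductive and V decreases on the explored states")
```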

This tentative comparison between artificial and human intelligence should take into account that
computers are artefacts based on models of computation rooted in a priori knowledge. Boolean
functions may be implemented by hardware consisting of relays, electronic tubes, transistors or any
switching devices. Artificial computational processes are discrete and sequential; they can only
approximate physical computing processes which are intrinsically continuous and parallel. Natural
computing e.g. neuromorphic computing, may have different underlying computation models not
subject to the limitations of Turing-based computing.

Another important remark is that human intelligence is the result of the combination of two types of
thinking [12]: 1) slow conscious thinking that is procedural and applies rules of logic; 2) fast
automated thinking that is used to solve computationally hard problems such as speaking, walking,
playing the piano, etc. Clearly, fast thinking is a key factor of human intelligence. Over our lives we nurture, through conscious learning, the automated fast faculties that are essential components of our intelligence.
Mathematics and Logic, as creations of conscious procedural thinking, capture and reflect its internal laws. These laws are implemented in computers, which are thus adapted to model this type of reasoning.
Natural computing, which is based on physical computational processes, may be more adequate for studying fast thinking. It is robust, as the effects of small changes in physical systems are usually commensurably small. It is parallel and (probably) not subject to the current complexity limitations of Computing. Discreteness makes robustness practically impossible for discrete models of computation. Unfortunately, as fast thinking is non-conscious, it is impossible to understand and analyze the underlying mechanisms and laws, as we did for slow thinking.

These differences delimit a gap between artificial and natural computing systems. There exist today
many opportunities for interactions and cross-fertilization between Computing and Biology. For
instance, results from neuromorphic and cognitive computing could inspire new non von Neumann
paradigms. Conversely, synthetic biology could benefit from existing CAD and systems engineering
technology.

An interesting question is: which functions are physically computable? Is it physically possible to do better by directly using the computational capabilities of natural systems, e.g. analog computing, quantum computing and neural computing? These questions are bound up with the foundations of physics.

6.3 Adaptive Systems

Adaptivity is a novel and realistic vision that uses “intelligence” as a means to enforce correctness in the presence of uncertainty through control-based techniques.

Increasing systems integration inevitably leads to systems-of-systems of mixed criticality that are
geographically distributed with heterogeneous sub-systems and various communication media. There
exist several incarnations of this systems-of-systems vision. The most general one is to build the
Internet of Things (IoT) intended to develop global services by interconnecting smart devices through
a unified networking infrastructure that will be used to send data to the cloud. Using big-data analytics
techniques, the collected data will be analyzed to allow enhanced predictability and globally optimal
management of physical resources. Reaching this technological vision requires a new type of systems
that heavily rely on knowledge-based techniques and exhibit adaptive behavior to meet increasingly
antagonistic trustworthiness and performance requirements in the presence of uncertainty and
exploding complexity.

Uncertainty can be characterized as the difference between average and extreme system behavior.
Non-determinism of physical environments increases uncertainty. Furthermore, execution platforms have varying execution times due to layering, caches, speculative execution, and variability due to manufacturing errors or aging.

This makes classic critical-systems engineering techniques, based on worst-case analysis of all the potentially dangerous situations, technically and economically unfeasible. Static reservation, at design time, of all the resources (memory, time) needed for safe operation leads to over-provisioned systems. The amount of physical resources may be some orders of magnitude higher than necessary. This incurs high production costs but also increased energy consumption.

Adaptive systems are equipped with controllers which manage system resources globally and
dynamically so that critical properties are met in any case. The remaining resources can be used to
satisfy best-effort properties. This avoids costly static resource reservation and does not require a fine
worst-case analysis to determine potentially dangerous situations.

An adaptive controller monitors the state of a system and steers its behavior to meet given
requirements specified as a set of objectives including critical properties such as deadlines as well as
constraints expressing the optimal use of resources (Figure 9). It combines three hierarchically
structured functions: Learning, Objective Management and Planning.
The Objective Management function is a multi-objective optimization program that, depending on the state of the controlled system, chooses objectives that meet critical properties and best fit the resource optimization criteria. A selected objective is passed to the Planning function, which computes a corresponding execution plan driving system evolution so as to achieve the objective. The Learning function is used to cope with uncertainty. It computes on-line estimates of parameters involved in the constraints handled by the Objective Manager, e.g. worst-case and average values of parameters such as execution times and throughput.

Figure 9: Adaptive system
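A schematic sketch of the control loop of Figure 9 is given below: the Learning function estimates an uncertain execution-time parameter on line, the Objective Management function picks the largest workload that still meets a deadline (the critical property), and the Planning function turns it into a trivial sequential plan. The plant model, names and numbers are illustrative assumptions.

```python
import random

class AdaptiveController:
    def __init__(self):
        self.est_time = 1.0                       # learned estimate of one job's execution time

    def learn(self, observed_time):
        # exponential smoothing: on-line estimate of the uncertain parameter
        self.est_time = 0.9 * self.est_time + 0.1 * observed_time

    def manage_objective(self, jobs_left, time_left):
        # choose the largest workload (best effort) that still meets the deadline
        affordable = int(time_left // self.est_time)
        return min(jobs_left, max(affordable, 0))

    def plan(self, n_jobs):
        return ["run_job"] * n_jobs               # trivial sequential execution plan

def simulate(total_jobs=40, deadline=30.0, rounds=20):
    ctrl, time_left, jobs_left = AdaptiveController(), deadline, total_jobs
    for _ in range(rounds):
        if jobs_left == 0 or time_left <= 0:
            break
        for _ in ctrl.plan(ctrl.manage_objective(jobs_left, time_left)):
            t = random.uniform(0.5, 2.0)          # uncertain execution time of one job
            ctrl.learn(t)
            time_left -= t
            jobs_left -= 1
            if time_left <= 0:
                break
    print(f"jobs completed: {total_jobs - jobs_left}, time left: {time_left:.1f}")

simulate()
```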

Adaptive control was initially studied in Control Theory [13]. It finds today an increasing number of
applications e.g. in robotics, quality control in multimedia systems, throughput control in networks,
and self-maintenance and recovery in distributed systems.
An important issue in adaptive systems design is keeping the overhead due to monitoring and control low. This should be compensated by a much higher gain in quality and trustworthiness assurance.

7. Discussion

Computing is a distinct domain of knowledge, a broad field that studies information processes, natural and artificial, as well as methods for building computing artefacts.

Computing should be enriched and extended to encompass physicality – physical quantities such as
memory, time, and energy should become first class concepts. We should study natural computing
processes (such as DNA translation) that seem like computation but do not fit the traditional
algorithmic definitions.

Computers have a deep impact on the development of science and technology, similar to that of the discovery of mechanical tools and machines. As mechanical tools help us do work by multiplying force and speed, computers multiply our mental faculties by extending our ability for fast and precise computation. They allow us to handle and generate new knowledge to address planetary challenges such as the global management of resources and the prediction of critical events. They also allow us to master the construction of complex artefacts that exhibit intelligent behavior. Nonetheless, as an aircraft is not a bird, a computer is not a mind. To make computers more intelligent we should better understand how our mind works and cope with the linguistic complexity of natural languages.

A central problem in Computing is the study of design as a formal process leading from requirements
to correct systems [5]. Providing design with scientific foundations raises several deep theoretical
problems including the conceptualization of needs using declarative languages, their proceduralization
and implementation. Design is at least as important as the quest for scientific discovery in physical
sciences and nicely complements the effort for scientific progress and its fruition.
Computing drastically contributes to the development of knowledge through cross-fertilization with all domains as well as through enhanced predictability and designability. In particular, it complements and enriches our understanding of the world with a constructive and computational view, different from the declarative and analytic view adopted by physical sciences.

References

[1] Peter J. Denning “Is Computer Science Science?” CACM, April 2005/Vol. 48, No. 4
[2] Vinton G. Cerf “Where is the Science in Computer Science?” CACM, October 2012, vol.55, no.10.
[3] P. W. Anderson “More Is Different - Broken symmetry and the nature of the hierarchical
structure of science” Science, New Series, Vol. 177, No. 4047. (Aug. 4, 1972), pp. 393-396.
[4] H.A. Simon “The Sciences of the Artificial”, 3rd edition, 1996. MIT Press.
[5] J. Sifakis. System Design Automation: Challenges and Limitations, in Proceedings of the IEEE,
vol. 103, num. 11, p. 2093-2103, 2015.
[6] C. Ferdinand, R. Heckmann, M. Langenbach, F. Martin, M. Schmidt, H. Theiling, S. Thesing, and
R. Wilhelm “Reliable and Precise WCET Determination for a Real-Life Processor” EMSOFT 2001,
LNCS 2211, pp. 469–485, 2001.
[7] S. Papert “An exploration in the space of mathematics educations”. International Journal of
Computers for Mathematical Learning. 1(1996).
[8] J.M. Wing, “Computational thinking” CACM. 49 (3): 33 (2006).
[9] R. Feynman, “Simulating Physics with Computers” International Journal of Theoretical Physics,
Vol 21, Nos. 6/7, 1982.
[10] J. Reineke, B. Wachter, S. Thesing, R. Wilhelm, I. Polian, J. Eisinger, and B. Becker, “A
definition and classification of timing anomalies,” in Sixth International Workshop on Worst-Case
Execution Time (WCET) Analysis, Dresden, Germany, July 4 2006.
[11] E. Davis, G. Marcus “Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence” CACM, September 2015, Vol. 58, No. 9.
[12] D. Kahneman “Thinking Fast and Slow” Farrar, Straus and Giroux, October 2011.
[13] K. J. Astrom, B. Wittenmark “Adaptive Control” 2nd edition, Addison-Wesley Longman Publishing Co., Boston, MA, USA, 1994.
