
This letter supports Zadeh’s nomination for the Turing Award, claiming that his Generalized Theory of Uncertainty [1] (from which the quotes below are taken) is even more influential for the post-industrial era than his seminal papers on fuzzy sets (1965) and on possibility theory (1978) were for the industrial era.
The assertion rests on two cardinal paradigmatic breakthroughs: a) moving from “information is
statistical in nature” to “information is a generalized constraint”; b) setting as target “achievement of
NL-capability, [...] because much of human knowledge, including knowledge about probabilities, is
described in natural language”. Both are at least as valuable for service-oriented engineering as the
move from crisp to fuzzy sets was for product-oriented engineering. Both have had less visible
success than their last-century counterparts because they are judged according to the reigning
paradigm(s), not according to their possible open-ended scientific value.
The letter’s paradigmatic stance is that artificial intelligence is reflected by word-oriented,
Turing-test-like, anthropocentric interfaces (aimed at managing situations, as in “The Imitation
Game”), rather than by number-oriented, Turing-machine-like algorithmic software (aimed at solving
problems, in line with the Church-Turing thesis). Corollary: since services are processes assessed in
NL, according to “NLbis” (where NL stands for the Natural Logic underpinning it), the practicability of AI
in post-industrial engineering can be substantiated only by systems with high NL-capability. (Only
when providing complex services involves a simple human-computer interface, as in the case of
search engines, can fuzziness succeed with NL-capability achievable via algorithmic software.
Google is archetypal: the NL-capability its interface needs is reduced to a single phrase:
“About … results”.)
From the standpoint above, the fundamental role of GTU is illustrated by summarizing, in order of
increasing paradigmatic gap, the pillars of the algorithmic/mechanistic stance (still ruling in system
modelling and design) that it shakes because of their lacking or reduced NL-capability (in brackets
are hints at the causes that impair NL-capability, as perceived from the other side of the gap,
through “NLbis”):
- Probability. “There is a deep-seated tradition in science of dealing with uncertainty [...]
through the use of probability theory. Successes of this tradition are undeniable. But as we move
further into the age of machine intelligence and automated decision-making, a basic limitation of
probability theory becomes a serious problem. More specifically, in large measure, standard
probability theory, [...], cannot deal with information described in natural language.” (Two decades
after Dubois, Prade, and Smets won the challenge on undeniable mathematical grounds, defending
possibility theory in GTU was still helpful, underlining that probability is inapplicable in real-world,
word-oriented decision making. Indeed, “0” and “1” are used at most as placeholders for truth values,
not as real numbers. As regards model theory, after Carnap reinstated the rights of meaning in
mathematical logic, through semantics, no logician felt the need for a universe of “probable worlds”.
A minimal contrast of the two calculi is sketched right after this list.)
- Precision. “Basically, a natural language is a system for describing perceptions. Perceptions
are intrinsically imprecise, reflecting the bounded ability of human sensory organs, and ultimately the
brain, to resolve detail and store information. Imprecision of perception is passed on to natural
languages.” (In his 1999 paper launching “computing with words”, asserting that humans are able to
perform mental tasks “without any measurements and any computations”, Zadeh goes beyond
“tolerance for imprecision”. In fact, he adds, albeit implicitly, to his “Rationale 2 for granulation:
precision is costly” a “Rationale 3: precision is unnatural”, from a bounded rationality stance in the
very sense of Simon.)
- Bivalence. “[B]ivalence is abandoned throughout GTU, and the foundation of GTU is shifted
from bivalent logic to fuzzy logic. As a consequence, in GTU everything is or is allowed to be [...]
fuzzy. [...], all variables are, or are allowed to be granular. [T]he conceptual structure of bivalent logic
[...] is much too limited to allow a full development of the concept of precisiation”. (The attempt to
reconcile the Boolean infrastructure of IT with fuzzy set theory yielded frustrating side effects:
“Kelvin-number-oriented” scientists moulded complex theories where, instead of dealing with two
integers, programmers must handle the continuum of reals. Conversely, there is not even a
“third IF value” expressing uncertainty as a generalized constraint according to the meaning
postulate; a toy sketch of graded versus bivalent truth is given after this list.)
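To make the first pillar’s contrast concrete, here is a minimal comparison of the two calculi; the axioms quoted are the standard textbook ones, not taken from [1]. For mutually exclusive events A and B:
\[
  P(A \cup B) = P(A) + P(B), \qquad \sum\nolimits_i p_i = 1   % probability: additivity
\]
\[
  \Pi(A \cup B) = \max\bigl(\Pi(A), \Pi(B)\bigr), \qquad \sup\nolimits_x \pi(x) = 1   % possibility: maxitivity (Zadeh, 1978)
\]
Maxitivity is what lets a possibility distribution be induced directly by the membership function of an NL term such as “young”, whereas additivity forces precise numbers that must sum to 1.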
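Likewise, for the bivalence pillar, the gap between the two-valued IF and a graded truth value can be hinted at with a toy sketch; the scenario and degrees are invented for illustration only, while the connectives are Zadeh’s standard min/max/complement:

    # Toy illustration: graded truth values in [0, 1] with Zadeh's standard
    # connectives, versus the two-valued IF of Boolean IT.

    def fuzzy_and(a: float, b: float) -> float:   # t-norm: min
        return min(a, b)

    def fuzzy_or(a: float, b: float) -> float:    # t-conorm: max
        return max(a, b)

    def fuzzy_not(a: float) -> float:             # standard complement
        return 1.0 - a

    # "road is slippery" to degree 0.7, "traffic is heavy" to degree 0.4,
    # "visibility is good" to degree 0.9:
    risk = fuzzy_or(fuzzy_and(0.7, 0.4), fuzzy_not(0.9))   # 0.4: a degree, not a bit
    # The bivalent counterpart, after thresholding each degree at 0.5:
    boolean_risk = (True and False) or (not True)          # False: the nuance is lost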

If challenging the pillars above involves a tolerable move away from Bayes, Kelvin, and
Chrysippus respectively (all three shifts having famous and conquering precedents), the two pillars
below, left untouched but obviously exposed to criticism, are still perceived by scientists as taboos
and treated accordingly (with enthymematic NL-capability):
- Temporal dimension(s). NL-capability is essential in modelling living systems and a sine qua
non for model interpretability in decision making. However, conventional models circumvent the
crucial role of NL, despite the well-known examples of Lewis Carroll and Kleene showing that AND,
perceived as a noncommutative operator, suggests succession in irreversible (Bergsonian) time.
Moreover, ignoring the failure of CYC-like ontologies, and even the hurdles met by “precisiated-
domain” ones, models are still either atemporal or have only a rudimentary temporal dimension,
reflecting in algorithmic LOOPs a reversible (Newtonian) closed time. (The effectiveness of such time
in automation and robotics has no probative force, since robots need NL-capability at most in
interfacing with users.) The principle of cointensive precisiation is hardly applicable to future
contingents. (For instance, what mh-precisiands, where m stands for i-meaning, may be
user-NL-interpretable when the precisiend is an unhappened event, even one expressed
intensionally?) Such caveats of GTU are left “in NL-innuendo” by DSS designers unwilling to accept
that human decision-making is perceived and treated as risk management. Humans sense risk in
Bergsonian time, as made obvious by insurance policies: companies compute probabilistically (their
operational risk) but pay possibilistically (according to the NL-perception of future contingency).
- Shannonian uncertainty. Dertouzos questioned the universally bit-oriented interpretation of
information, distinguishing between “Information as noun” (static knowledge, mirrored in
object-oriented software) and “Information as verb” (know-how, mirrored in process-oriented
software). GTU goes further, suggesting that the problem lies not merely in the fact that logarithms of
probabilities lack NL-capability: defining information as a generalized constraint, and focusing on
i-meaning, implies intensional assessment at the user’s end, not extensional measurement at the
provider’s end. Using the term pair “user-provider” instead of “receiver-sender” has a twofold
rationale: it fits the post-industrial “client-server” computing paradigm, transcending both the
industrial “producer-consumer” and the archaic “sender-receiver” ones; and it underlines the GTU
idea that the fundamental concept of information cannot rely on formulae introduced in the late
1940s, when information was just transmitted, not yet processed. In short, using Eco’s phrases:
intentio lectoris is paramount and bears only a weak relationship to a statistical account of intentio
auctoris. A well-known example: the illocutionary force of a very predictable, only three-phoneme-long
“Yes” at a wedding ceremony is not captured by the number of bits needed to transmit it, no matter
the distance (see the back-of-envelope count after this list).
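To make that last count explicit (the probability figure is illustrative, not taken from [1]): if the ceremony makes the answer “Yes” predictable with, say, probability 0.99, its Shannon self-information is
\[
  I(\text{``Yes''}) = -\log_2 0.99 \approx 0.0145 \ \text{bits},
\]
so, Shannon-wise, the most consequential utterance of the day is almost devoid of information; its illocutionary force lives entirely at the user’s end, in i-meaning.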
The last two fundamental aspects challenge the Spinozian memeplex of “ordine geometrico
demonstrata”, still regarded as the unique intellectual foundation of modern science (in spite of
Heisenberg, Gödel, Wiener, and Zadeh). Even deeper, hidden in “preferring words to numbers” is the
subliminal message that the Aristotelian propensity for certainties was not shared by his (fuzzy)
contemporary Laozi. Even by superseding the hint at the “Western” cultural tradition in his 1997 paper
with the neutral “deep-seated” quoted above, Zadeh could not make GTU more palatable. Hence, most
computer scientists, uncomfortable both with GTU and with challenging it, are reluctant to boost
support for its principles and, above all, for its huge unused potential.
This is hardly an excuse for overlooking the fruit of over half a century’s work, consolidated,
refined, and accomplished in GTU.

[1] Zadeh, L.A. Generalized Theory of Uncertainty (GTU) – Principal Concepts and Ideas.
Computational Statistics & Data Analysis 51(1), 15–46, 2006.

Name: Boldur E. Bărbat


Title: Professor
Affiliation(s): None. (PhD advisor at “Lucian Blaga” University of Sibiu until
2012.)
Contact information: E-mail address: bbarbat@gmail.com; Phone: +40761606900.
