
MetaTrends:
of RISC and Reward
“In times of crisis and change, Knowledge is Power.”
— John Fitzgerald Kennedy — 1961

“Predicting is hard — especially the future.”
— Niels Bohr (Copenhagen) — 1938

On Foreknowledge

Wouldn’t it be wonderful to know the future — before it happens?

At the very least, it could be immensely profitable — to know the next
number to come up on the roulette wheel or the winner of tomorrow’s
Daily Double. Such simple greed is more than the grist for the writers of
old Twilight Zone or new Back to the Future II scripts. After all, it was the
roll of the dice and the spin of the wheel that prompted Blaise Pascal to
develop the first mathematical theories of Probability and Statistics in the
mid-Seventeenth Century and the lure of the track and the table that
seduced the first computer programmer, the Countess Ada Lovelace, in
the mid-Nineteenth.

A century later, the lure of a peek at the future inspired the Dow
Theory, the quantitative analysis of the peaks, the valleys and the
opportunities of the securities trading floor, an application area far more
profitable and hence far more respectable (until recently) than the casino
floor. The raw data fueling this engine of risk and reward have been
published each trading day since the 1920’s — duly accompanied by the
eponymous Industrial Average — in the Dow-Jones Corporation’s ever-
respectable Wall Street Journal, the Racing Form of the more modern
breed of speculator.

As we near the end of the Twentieth Century’s literally exponential
growth of science, technology and the changes wrought by them, those
most critically dependent on the ambiguous oracles of change are those
that stand to gain or lose the most on the roll of the technological die. It is
the investor, the entrepreneur and the user of these complex patterns of
information — whether inside or outside the giant corporate and
government entities — who most vitally depend on the new art and
science of managing complexity.

It is the thesis of this author and of his company — from experience in
researching and forecasting the future for those entities and individuals —
that the current evolutionary trend towards “open systems” is not an end in
itself. Indeed, we believe it is even more than a means for managing the
inevitable complexity of this “information age.” Instead, we see it
as the tip of an emerging iceberg, a manifestation of an inescapable
“Darwinian” force, the fabled “Invisible Hand,” reshaping our global
economy and society into a new connective topology or, in current
parlance, “a new paradigm” — or returning to a very old, even prehistoric
paradigm. It is a practical consequence of our thesis that the shape of the
world under this fundamental new paradigm can be foreseen, anticipated
and acted upon.
On Connections

“If I have seen far it is because I have stood on the shoulders of giants.”
— Isaac Newton (London) — 1703

“In other disciplines, people make progress by standing on each other’s shoulders —
in Computing, it seems, we make progress only by standing on each other’s toes.”
— Edsger Dijkstra (ACM Pacific Conference, San Francisco) — 1975

As the video-journalist-historian-philosopher James Burke has so ably
pointed out, the entire history (and, perhaps, prehistory) of the human race
is a history of connections. Human speech and its descendant forms of
communication make possible the cooperation that is the basis of our
societies, from the first protohuman tribes in paleolithic Africa to the
satellite-linked world economy of today. With the development of the
written word and number, communication in space was augmented by
communication in time — from past to present to future — and human
culture and knowledge began their exponential growth, building
upon the knowledge of the past rather than reinventing it each generation.

For much of the last five hundred years, from the Age of Exploration
onward, the physical aspects of connection — transportation more than
communication — have captured more of the attention of
society and had more visible impact on the economy. Just as in the
popular view, The Wheel is the most significant early invention, so the
roads of the Roman Empire, then the sailing ships of the various colonial
Empires, the steamship, the railroad, the automobile and the aircraft have
shaped our society both in deed and name. We’ve had the Steam Age,
the Auto Age, the Jet Age and the Space Age — all within a single recent
lifetime. There seems to have been no comparable reference to the
Printing Age, the Telegraph Age, the Telephone Age or any information-
based equivalents until the currently referenced Computer Age or
Information Age — each implying something more tangible, less transitory,
less ephemeral than the abstraction of pure communication.

In analyzing this elusive societal phenomenon — the reluctance of the
average human mind to recognize or acknowledge that which it cannot
grasp — we have found as many explanations as there are experts, as
many opinions as there are observers. But from a purely practical and
directly utilitarian stance, at least one viewpoint recommends itself as a
successful technique in predicting the future, at least as it may be
extrapolated in a credible (if counterintuitive) way, from the past. The
trend, or perhaps a Metatrend (with apologies to John Naisbitt), that
emerges from this analysis is: ‘The place to watch for fundamental
transforming changes is at the “interface” between Goods and Services,
where new enabling technologies permit values once added by Services
now to be added by “Products.”’
On Forecasting

“No more than 500,000 automobiles will ever be owned by families in the United States,
as this is the greatest number of households which can afford the requisite chauffeurs.”
— Unknown Market Predictor — 1906

“If telephone usage in these United States were to continue growing at its current pace,
in fifty years the entire population would need to be employed as telephone operators.”
— Anonymous Forecasting Wit — 1928

“Of course I am interested in the future — I plan to spend the rest of my life there.”
— Isaac Asimov (New York) — 1963

Among the less pretentious and self-satisfied members of the market
research and technological forecasting communities (possibly an empty
set), the paraphrased predictions above stand as excellent examples of
the many dangers inherent in “linear extrapolation.” Modern forecasters
— be they academic economists, government statisticians or private
prognosticators (i.e., financial analysts and employees of syndicated
market research firms) — may smile at these naive efforts. But on semi-
log paper (or its Lotus 1-2-3 equivalent) our complex formulae for
CAAGRs (Compound Aggregate Annual Growth Rates) often plot to
equally boring straight lines, bland linear extrapolations transposed into
exponentials. Past events, on the other hand — the histories of actual
shipments, installations and use — rarely look so clean or well-behaved;
they look bumpy and irregular by contrast.
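
As a brief aside (ours, not in the original argument, though implicit in it), the algebra behind that straight line takes one step. Assume a quantity N grows at a constant compound rate g per period:

$$N(t) = N_0\,(1+g)^t \quad\Longrightarrow\quad \log N(t) = \log N_0 + t\,\log(1+g)$$

On semi-log axes the plot is therefore a straight line of slope log(1+g); every tidy straight line on such a chart is simply the assumption of constant compound growth, extended indefinitely.
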
This is not to say that this persistent error of the professionals is readily
avoidable or that Economics is called the dismal science because of the
dismal record of accuracy its predictions have compiled. The amateur
predictor, whatever his stripe, is on the whole even worse: the only truly
consistent Dow-Jones datum predicting the way the market will not go is
the odd-lots trading figure, the paw-print of the small investor. The
convoluted sine waves of the seesawing market — where to be even
slightly out of phase, even slightly behind the information curve, is to be
perpetually wrong — are only one form of the complex exponential
which the unaided human mind seems predisposed to “linearize.”
Mathematically, the same tendency to flatten the emotionally
incomprehensible and ever-steepening exponential growth curve compels
us to draw straight lines through some point in the near future. Thus we
always overestimate progress for the near future but grossly
underestimate the possibilities of the long run. The latter accounts for the
popular press’s amusing technological predictions of decades past —
personal commuter helicopters for intracity travel, ultra-streamlined 200-
mph cars for highways, 500-mph transcontinental trains and twelve-
propeller flying boats or atomic-powered airships for intercontinental trips.

Most significantly, both of the “grossly underestimating”
prognostications regarding the automobile’s and the telephone’s future
markets were right in essence but wrong in interpretations based on their
“metalinear” mode of thinking — from which we have no scientific escape
even today, only the artistic flight of the true visionary. In reality, both of
the “predictions” were correct in principle, but the essential “value-added”
component of each of these two industries was transformed from
reliance upon raw “service” functions to reliance on
“products plus a support infrastructure,” specifically:
• Henry Ford’s economic advances in mass production and economies
of scale were paralleled by the technological advances of the electric
starter, synchromesh gears, the pneumatic tire and automatic spark
advance, and later market-expanding enhancements like the
automatic transmission and power steering, coupled with the
infrastructure of paved highways and refined petroleum, followed by
the automotive fuel and service network and the interstate highway
system, all converging to create an environment wherein we have all
become our own chauffeurs.
• Bell Telephone Laboratories’ comparable advances in telephony,
beginning with the stepping relays that displaced the switchboard
and culminating in today’s digitized, computer-routed, satellite-
relayed, Touch-Tone direct international dialing, have now
transformed the vast majority of the original “service” functions into a
global network of mass-produced “products” all interconnected by a
vast support structure in which we have all become our own
telephone operators.

The forgotten forecasters may indeed have the last laugh on their
latter-day critics, for thoughts and deeds based on linear or even
metalinear extrapolation are subject to the penalties set forth in The Origin
of Species. The American railroads once constituted a rich, inbred
oligopoly that looked no farther for its competition than the other side of
the tracks. In the passenger markets, they never looked upwards to the
unfriendly skies of primitive air travel; nor, in the freight transport markets,
did they recognize the burgeoning network of publicly funded highways
shared by long-haul trucks with an explosion of private autos. The Swiss
watchmakers’ arrogance in their early dismissal of the externally digital watch
nearly cost them an entire industry when quartz technology moved in to
replace the timepiece’s internals.

Lest we be too quick to judge, in our own backyard, the U.S.
semiconductor industry in the ’70s virtually abandoned the “slow” high-
density, low-powered CMOS technology to the Japanese — for
calculators, watches and other “toy” applications. Meanwhile, we pursued
the bipolar and ECL leading edge of computer speed and power for nearly
a decade — until stopped by the twin bottlenecks of ultra-large, ultra-high-
speed systems: the thermodynamic limits of “speed-power product” and
the physical limit of the speed of light. Somewhat belatedly we noticed that
power consumption is ultimately only the flip side of power dissipation, and
device density is only the obverse of device-to-device path length. Closer
still to home, most of us remember how the giant mainframes of IBM were
blind-sided by Digital’s first minicomputers, which the King of Computing
once took no more seriously than, say, Digital took the first personal
computers two decades later. And so, ad infinitum.
On Complexity

“It is not the case that our problems are so complex, but that we have
such small heads.”
— Edsger Dijkstra in “The Magic Number 7, plus or minus 2” — 1974

“It isn’t ignorance that gets us in trouble — it’s what we know that ain’t
so.”
— Will Rogers (Tulsa, Oklahoma) — 1932

To restate our primary thesis: the critical transitions in the
history of technology — the greatest impacts on industries, economies and
societies — occur when “service-based” functions are transformed into
“product-based” functions. In the Industrial Age context of the past few
centuries, mechanization transformed predominantly “human-based” tasks
into primarily “machine-based” ones — more accurately “machine-
augmented,” since machine power was still under the control of human
intelligence. In the present “information age” context, where automation
seeks to provide machine augmentation of human intelligence, the
success of these attempts often depends upon the system’s underlying
philosophy — whether the design seeks to augment or to replace human
capabilities.

Virtually without exception, only the augmentation approach achieves
success: even the autoteller is less an automated replacement for a human
bank teller than an augmented and simplified system interface for the bank
customer. This redefines the basic design task (and the problems
encountered) from a computation focus on “information management,” to a
communication focus on “complexity management.” Communication problems
— whether between humans and programs, programs and systems, systems
and machines, or between machines in a network, programs in a system, or
humans in an organization — are all problems in complexity. Consider five
arenas of competition in our industry, five points of concern, taken in sequence
from “bottom to top,” beginning at the hardware level, namely:
• RISC: Reduced Instruction Set Computing — Impact on computer
speed & standards

• Open Software: Portability, scalability & interoperability of systems,
software & data

• Open Systems: Impact & exploitation of networking,
multiprocessing & parallelism

• GUIS: Graphic User Interface Systems — Impact on human user
speed & standards

• Systems Integration: All of the above + managing the complexity
they necessitate

Fundamentally, each of these five topics is either the solution to a
specific complexity management problem or an example of such a
problem — or both. While this analysis could treat each of the five topics,
space and time permit us to examine in detail only the first — the
problems, challenges and opportunities of RISC — with an eye to tracing
the common thread connecting all this conference’s topics, namely the
management of complexity.
If complexity is the common problem, then is software the common
solution — or just another problem? From the viewpoint of a hardware
engineer or corporate management, software is at best a black art and at
worst a black hole, devouring countless millions of man-months and
woman-weeks each year with no guarantees of delivery on time, within
budget or even as specified. Endless analyses of the “software
bottleneck” all cite the need for better tools and greater discipline. Rarely
do they address the possibility that we may be tackling problems of
excessive complexity, those beyond our psychological limits on the
number of items and interrelations we can mentally manipulate: the seven
(plus or minus two) cited by computer-science savant Edsger Dijkstra. In
short, the first step towards our goal may be a change in the way we think of
software (in particular) and complexity (in general).

Were we seeking an alternative definition of the RISC buzzword at a
more general level, Reduced Intrinsic System Complexity would fit
perfectly with Gerald Weinberg’s popular writings on general systems
theory, including An Introduction to General Systems Thinking and Are
Your Lights On?, but really beginning — most appropriately — with The
Psychology of Computer Programming in 1971, when he was an IBM
fellow. One of his basic premises is that predictability occurs at either end
of the spectrum of complexity — Newton’s Laws of Mechanics holding
sway in simple domains with few elements, Boyle’s and Boltzmann’s Laws
of Statistical Mechanics guaranteeing outcomes in innumerably complex
domains — with “Murphy’s Laws” governing the intervening domain of
“complex system behavior.”

In examining the four critical technologies besides RISC (original
definition), we find that all five approaches attempt to circumvent Murphy’s
Laws — a goal that may seem like thwarting the Second Law of
Thermodynamics. In reality each methodology attempts to reduce the
number of possible states the system can assume — in Information
Theory terms, “to minimize its entropy” (see the brief arithmetic aside
following this list) — to a complexity level our little Size 7±2 heads can manage:
• In Open Software, the IEEE 1003 “POSIX” standard and its
progeny — the N.I.S.T. Federal Information Processing Standard
used in United States government contracts, the X/Open Common
Application Environment embraced by European agencies and the
ISO Joint Technical Committee 1, Subcommittee 22 Working Group
proposal that hopes to grow up to be a Standard for all Seasons (and
reasons) — are primarily Application Program Interface
specifications. Each seeks to spell out “clearly” and precisely what
functions and services an application can expect and an operating
system must offer — the closest software equivalent to a hardware
bus specification. Each frees the user from the complexities of how
a program was written: for which system (portability), of what size
(scalability), or even when (maintainability). The converse
(interoperability) treats similar complexities for data rather than
programs.

• In Open Systems, the celebrated seven-layer ISO Open Systems
Interconnect model guarantees to each level — more critically, to
the programmers writing for it — a consistent and relatively simple
interface to the level below it and above it and the right to ignore
everything else — as if each lower level were well-behaved
hardware. The ultimate goal, particularly in the related areas of
multiple-processor and parallel execution, is not only to free the user
from the complexity of where in the network the processing occurs or
data resides, but from which or how many processors are used.

• In Graphic User Interface Systems, the portability-scalability-
interoperability motif (lower case, again) is extended to managing the
complexity of the human interface:

• a consistent “look and feel” makes expertise portable across
applications and reduces to a humanly manageable number the
skills needed to use the system;

• a progressive “command and control” interface makes skills
scalable across levels of expertise, leading from simplicity for
the novice to power for the expert;
• a regular repertoire of data models makes information
interoperable between applications, from simple “cut-and-paste”
to sophisticated interchange formats;

• a modular user interface makes systems maintainable,
presenting a fixed “level of abstraction” independent of the
underlying application, system or platform.

• In Systems Integration, arguably the most lucrative opportunity in
the computer industry today, managing complexity is not the means
to an end — it is the end itself. As H. Ross Perot proved with EDS
and its billion-dollar facilities-management empire, “To people with
headaches, sell aspirin.” The standard dismissal of this opportunity
by manufacturing companies is that the field is a labor-intensive
Service industry with finicky customers, fickle employees, faulty
software and failure-prone hardware. But Perot realized that there
are economies of scale possible in serving a large number of similar
customers with fundamentally the same needs, that while it may not
be feasible economically to build a complex set of tools, procedures
and skills for one site, it may very well be practical to create such
“products” if amortized across many customers.
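
The arithmetic behind “minimizing entropy” deserves a one-line aside (our illustration, not part of any of the standards above). If a system may sit in any of N equally likely states or configurations, its entropy is

$$H = -\sum_{i=1}^{N} p_i \log_2 p_i = \log_2 N \ \text{bits when } p_i = 1/N,$$

so every specification, layer or interface that halves the number of configurations a programmer must consider removes exactly one bit of complexity; the standards above succeed to the degree that they shrink N until the remaining choices fit within a Size 7±2 head.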

The RISC revolution offers a prime example of the turmoil of an
industry in transition — ours. Indeed, the computer industry is naturally
fractious, both inter- and intra-company — an understatement of the first
magnitude — and this alone could almost account for the way in which
several large companies’ transition to RISC has nearly torn each of them
asunder. IBM lost savants Joel Birnbaum (to Hewlett-Packard’s RISC
design leadership), Glen Henry and Charlie Sauer (to Dell Computer’s
secret task force) and even team-leader Andy Heller (to Kleiner/Perkins
and even more secret services) and top manager Frank King (to Lotus).
Digital lost VMS-creator Dave Cutler (to Microsoft) in its internal RISC
wars, and the fabled loyalty of H-P customers was put to the ultimate test
in the years of delay for Spectrum/PA. Still, it appears that something
more fundamental is going on in the dark places of the mind.
On Management

“‘A slow sort of country!’ [said the Red Queen to Alice.] ‘Here, you see, it takes all the
running you can do just to keep in the same place. If you want to get somewhere else,
you must run at least twice as fast as that!’”
— Lewis Carroll’s Through the Looking Glass — 1872

Managing complexity seems to be the common thread weaving
together today’s critical problems, the key issues of our industry —
perhaps in all industries or perhaps in the world. If that sounds like a
grandiose, self-serving projection of our own concerns onto the face of the
planet, could not one legitimately define the failure of the Communist
system in Eastern Europe as a failure of central planning itself, an inability
to manage the intrinsic complexity? In the recent hostilities in Iraq, were
not the first targets attacked the command and control centers, the
enemy’s “complexity management systems”; were not a substantial
fraction of the Allied casualties from “friendly fire” a failure to manage
War’s incredible complexities?

From the Challenger disaster to the Hubble myopia, can we infer that
the complexity of coordinating the best and the brightest individual and
corporate talents is beyond NASA? Is not much of the “yellow peril” cited
by the American Automotive, Steel and even High-tech industries really our own
inability to match Japan’s uses of “just-in-time” inventory management, of
statistical quality control systems, of advanced manufacturing lines lending
themselves to robotics and automation, of dealer feedback systems tied
directly to manufacturing design?

Are these management problems an inevitable outcome of the
complexity of our age? Three classical approaches to reducing
complexity to manageable dimensions are neither unique to our times nor
to our industries, but are found in the works of three “managers” who
operated in the complex and competitive environments of more than
twenty centuries past:

• Divide and Conquer: Julius Caesar, in De Bello Gallico, begins his
classic description of the conquest of Gaul, ancient France, in the 50s BC,
with a simple declarative sentence: “All Gaul is divided into three parts.”
Even today, satellite photography would not yield such an insight; it
was Caesar's understanding of the problem of conquering Gaul as
divisible into three simpler problems that was part of his “managerial”
genius.

• Know Thy Adversary: Sun Tzu, in The Art of War, his definitive
treatise from around the fifth century BC, favors intelligence (military and otherwise) in
approaching the problem. He ranks an understanding of the
environment — weather, terrain and logistics — as comparable in
strategic value to a knowledge of the strength and disposition of
opposing forces.

• Keep It Simple & Straightforward: Alexander the Great, in legend,
resolved the complexity of the Gordian Knot by the simplest
approach: a bold stroke of his sword. (It is rumored that even B-
school graduates learn some variant of the KISS acronym.)
On RISC

“RISC — which is typically parsed as ‘Reduced Instruction-Set Computer’ —
might be better understood if parsed as ‘Reduced Instruction-Set Complexity’.”
— Dan Prenner (IBM, Yorktown Heights) — 1989

RISC — the technology of the Reduced Instruction Set Computer in its
various forms — seems to have flashed upon the hardware talk circuit as
the hottest topic of recent years, generating at least as much fire and
smoke — controversy and confusion — as it has heat. This is not to say
that this new technology is not truly “hot”: measured in the raw MIPS —
the equally ambiguous acronymic units of Millions of [computer]
Instructions Per Second — RISC processors blaze past everything but the
supercomputers they internally resemble. Rather, it is that RISC, in and
of itself, is neither particularly new, novel nor even necessary except as
the latest solution to a recurring problem — once again, managing
complexity.

Reduced Instruction-Set Complexity — the more liberal interpretation
espoused by one of IBM’s RISC pioneers in the quote above — is only the
most recent in a long sequence of hardware solutions to a problem of
software complexity management that seems to emerge “inevitably” once
a computer company reaches a “critical mass” or threshold complexity. At
the end of the 1950s, with a decade-and-a-half’s worth of mutually
incompatible system architectures and associated software to support,
IBM was the first to reach this threshold — or at least the first to recognize
the problem and attempt to solve it. The solution began with an analysis
of the mix of machine instructions actually used in running programs —
virtually the same analysis that led to RISC two decades later. Then,
however, the usage “pie chart” suggested a single, multipurpose, “full-
circle” (360-degree) virtual architecture. This “democratic” instruction set
was ultimately implemented on a whole family of machines of that name
by the all-too-common technique of each of the machines emulating the
target instruction set — but by design rather than default, using a “control
store” or microprogram.
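
That instruction-mix analysis is easy to picture. The sketch below is ours (the opcode names and the toy trace are invented, not IBM’s or Berkeley’s data); it tallies how often each opcode appears in a hypothetical execution trace and reports how few distinct opcodes cover most of the executed instructions, the empirical observation behind both the 360’s microprogrammed “full-circle” set and, two decades later, the RISC argument for trimming it.

from collections import Counter

def instruction_mix(trace):
    """Given a list of executed opcode mnemonics (a hypothetical trace),
    return (opcode, share_of_executions) pairs, most frequent first."""
    counts = Counter(trace)
    total = sum(counts.values())
    return [(op, n / total) for op, n in counts.most_common()]

def opcodes_for_coverage(mix, fraction=0.90):
    """Return how many distinct opcodes are needed to account for the given
    fraction of all executed instructions; a small answer is the RISC rationale."""
    running = 0.0
    for i, (_, share) in enumerate(mix, start=1):
        running += share
        if running >= fraction:
            return i
    return len(mix)

if __name__ == "__main__":
    # Toy trace: loads, stores, branches and adds dominate, while the more
    # exotic instructions appear rarely (the shape such studies reported).
    trace = (["LOAD"] * 300 + ["BRANCH"] * 220 + ["STORE"] * 180 +
             ["ADD"] * 150 + ["CMP"] * 90 + ["MUL"] * 40 +
             ["DIV"] * 10 + ["TRANSLATE"] * 5 + ["EDIT"] * 5)
    mix = instruction_mix(trace)
    print("Most frequent opcodes:", mix[:4])
    print("Opcodes needed for 90% of executions:", opcodes_for_coverage(mix))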

At just about that time, Digital Equipment Corporation, lacking the
“golden albatross” — the large installed base to service, loyal customers
to support and market to cannibalize — burst upon the scene with the
minicomputer. But by the end of their own decade and a half, with more
than a half-billion dollars in annual sales and more than 30,000 copies of
the PDP-8, PDP-11 and siblings installed worldwide, DEC similarly found
themselves with the unmanageable complexity of too many machine types
and operating systems. (The PDP-11 alone harbored RT-11, RSTS, RSX-
11D, RSX-11M and variants.) They, too, examined the instruction set and
leapt into the 32-bit domain with the VAX (Virtual Address eXtension)
— the classic Complex Instruction-Set Computer. Arguably, of several
hundred proprietary CISC systems offered in 1976, only those of today’s
first- and second-place manufacturers (the IBM 360 [extended] and Digital
VAX architectures) can survive into the next century.

While it was half a decade later that the RISC principles (and acronym)
were formally set forth by U.C. Berkeley’s David Patterson, the concepts
were not new: Digital’s 12-bit PDP-8 minicomputer and, a decade later,
Intel’s 8-bit 8008 microcomputer each had an instruction set reduced to fit
their tiny brains — the same problem noted in Dijkstra’s quote. At the
opposite end of the spectrum, supercomputer pioneer Seymour Cray had
stripped the Control Data 6600 and 7600 instruction sets of all
unnecessary complexities, facilitating “instruction execution overlap”
(better known today as pipelining) even before 1970. Thus by 1986, when
Hewlett-Packard had passed its own 15-year adolescence in the computer
industry, the young but proven RISC approach — “Spectrum” or “Precision
Architecture” — was the replacement of choice for the aging HP-1000,
-2000, and -3000 instruction sets.

Early that same year, RISC pioneer IBM — fearing its golden albatross
might be blown off course by the gale-force processor born in IBM
temporary building “801,” but wary of the rising winds of change —
launched the RT/PC as a (safely underinflated) trial balloon into emerging
workstation markets. There it sank, hovered, drifted into commercial
markets and ultimately rose above all but the very peaks of that terrain —
but never quite reaching the lofty slope of ideal IBM sales curves. Soon
the semiconductor houses joined the fray: not only second-tier Fairchild
and AMD but also Intel and Motorola placed their dominant microprocessor
market shares at RISC. In each case, whether RISC pioneer or
risk-averse follower, whether carried by covered wagon or bandwagon,
the companies involved were — in their own eyes, at least —
unquestionably hardware manufacturers. With the advent of Electronic
CAD and CAE (Computer-Aided-Design and -Engineering, respectively)
they now had tools to build tiny devices of incredible complexity and/or
speed, and each designer recognized the need to “use it or lose it”
competitively. But to a hardware designer that “AND/OR” was an “illogic
gate,” the near-mystical rate-limiting “software bottleneck.” To a hardware
vendor, the requisite software to manage this new hardware complexity
was not within its control — possibly not within anyone’s control.

For despite the much-heralded birth of CASE (Computer-Aided
Software Engineering) — as Mark Twain might have opined, “the rumors
of its birth were greatly exaggerated” — the principal problems to be
solved were no longer in the clean and pure realm of hardware but
among the dark and murky black arts of software. The precipitating
rationale for which the 85%-commercial/15%-technical IBM 360 family was
conceived, the original reason the ultra-CISC VAX was born and the initial
reason the H-P Spectrum saw the light of day was the same: to solve what is
actually a software problem — the never-ending saga and ever-increasing costs of
maintaining multiple OS’s. Not surprisingly, the hardware vendors’
solution was not to address the software complexity problem directly but to
move the complexity downward into a domain they did understand.
Following Henry Ford’s engineering solution to the “one-color problem,”
they slashed the Gordian knot of software’s uncertainty, complexity and
flexibility by moving it into the hardware.

Ironically, for well over a decade there had already been an alternative
solution that captured the very essence of the VLSI revolution, the
advantages of designing around standard, modular components
interconnected in a uniform and controlled fashion, without anywhere near
the loss of flexibility, expansibility and power. Today such approaches are called
open systems or (less ambiguously) open software (lower case, please)
— but at that time it was simply called the UNIX operating system. Even more
ironically, the RISC revolution, which was largely a delayed but “equal
and opposite” reaction to the hardware overcomplication and speed limits
imposed by the CISC solution, pushed the responsibility for complexity
back into the software, specifically into the compiler, the unquestioned
forte of the UNIX OS.

Most ironically, the number-one proponent and provider of advanced
UNIX system software outside of Bell Laboratories by early 1988 was
next-generation enfant terrible Sun Microsystems. Though barely six
years old, and already exceeding a billion dollars per year in revenue, they
saw the impending catastrophe by the time they had begun to implement
their second machine architecture, the Intel-based 386i. In full
accordance with politically correct “open thinking,” they preemptively
developed and published the SPARC binary design specification, a
general-purpose Scalable Processor Architecture instruction set, a RISC
for all reasons that everyone could (and properly should) employ, or so
they sincerely believed. The true irony is that this move may very well
have been a step backwards into the “hardware” solution by the most
successful champions of the UNIX operating system. Despite
misconceptions of UNIX systems being for engineers and scientists, the
UNIX forte is its standard arsenal of software tools — compilers, editors,
analyzers, debuggers, specialized languages, utilities — the best software
solutions to complex software problems.

A glib diagnosis of Sun’s apparent aberration would attribute it to “PC
envy” — a morbid fixation on the market size of the IBM PC and its clones.
The difficulty arises not from simple ambition but from the single-minded
explanation of the PC’s success almost entirely in terms of its “plug-and-
play, one-size-fits-all” binary compatibility. This simplification manages to
neglect such factors as relative market sophistication (or the PC market’s
lack thereof), total cost of ownership and usage, or the appropriateness of the
level of complexity to the task at hand. Realistically, PCs are priced and
promoted as truly general-purpose devices with the all-too-subtle benefits
of reducing complexity by reducing capability. When experienced
computer users purchase a system — from supercomputer to workstation
— they have a fairly good idea of the machine’s intended use. When a
naive user buys a PC, even a sophisticated corporate user may have only
a partial idea of the ends and purposes to which the PC will ultimately be
put. This market difference, more than any technical limitation,
determines that the majority of PC software will be obtained in the
“aftermarket,” with the volumes characteristic of such products. A more
serious armchair psychoanalysis of the corporation — if such a thing is
possible — suggests that the “corporate self-image” is that of a hardware
company seeking hardware solutions wherever possible, even though fully
85 percent of the technical staff (and at least that fraction of the value-
added and differentiation from rival products) is in software.
On Resolution

“When you have eliminated the impossible, whatever remains, however improbable,
must be the truth.”
— Mr. Sherlock Holmes in A. Conan Doyle’s The Sign of Four — 1890

Our analysis suggests that perhaps the turbulence we are
experiencing signifies the boundary of a new domain, a new kind of
“sound barrier,” a transition from the classical physics of incompressible
fluids, from the controllable complexity of the Hardware Paradigm. Far from
an impassable “wall in the sky,” the complexity barrier may be a gateway
to a new domain where the laws of nature are simply different — not
better or worse. It may be that in this new domain complexities can no
longer be handled sequentially and linearly but must be solved in parallel
and nondeterministically, with all the uncertainty, chaos and surprise that
nonlinear systems imply. Thus when (not if) the function of software —
today very much a “service” — is sufficiently understood and sufficiently
valued to be treated as a “product” — as hardware is today — then not
just the computer industry but the world itself may be a very different place
indeed. But where shall we find the key to this transformation? How will it
come about if we continue to solve the “easy” problems we understand
instead of the hard problems we don’t? And most certainly, the software
problems are the hard problems. Just like the inebriate searching under
the lamppost for the keys lost in the bushes, we continue “to look where
the light is better.”

Ultimately, the economic and intellectual capital must be ventured, the
design tools sharpened, and the disciplines imposed. But first the
fundamental way we think about software must undergo a transformation,
a paradigm shift back a dozen millennia, to before the comfortable, linear,
predictable, risk-averse world of agriculture and citification. We must
become familiar once again with the chaotic yet “strangely attractive”
world of the hunter-gatherer, the exploratory nature mapped into our
genes. Without yielding to fatalism or mysticism we may have to learn to
accept the impossibility of absolute predictability once a critical level of
complexity is reached, to “go with the flow” of uncertainty rather than
attempting to conquer it. Even if we are uncomfortable with the idea of
“God playing dice” at the macroscopic level, we may need to forgo strict
determinism — as Dijkstra proposes in his “guarded command language”
— to give up procedurally instructing the machine how to accomplish our
goals, specifying only what objectives and ends we wish to accomplish,
leaving successively lower levels, each of decreasing complexity, to
determine the means. Whether achieved by applicative “fourth-
generation” languages, by object-oriented programming techniques, by
standardized application, intersystem and human interfaces or (most
likely) “all of the above” combined with ideas yet undreamed, we need to
learn the larger lesson inherent in the RISC phenomenon — and the X-
windows phenomenon and the Open Systems and Software phenomena
— as applied to managing complexity.
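
To make the point about specifying what rather than how concrete, here is a deliberately small illustration of our own, written in a modern notation this essay predates: the same goal stated procedurally, step by step, and then declaratively, leaving the means to a lower level of the system.

# Goal: the total of the even values in a data set.
data = [3, 8, 15, 4, 23, 42, 16]

# Procedural: we instruct the machine step by step (how), with an explicit
# loop and explicit mutable state.
total = 0
for value in data:
    if value % 2 == 0:
        total += value

# Declarative: we state the objective (what); iteration order, accumulation
# and any opportunity for parallel evaluation are left to the runtime.
total_declarative = sum(value for value in data if value % 2 == 0)

assert total == total_declarative == 70
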
That lesson is that, increasingly, it is insufficient to escape from the
linear only to project our trend lines into a geometric or even an
exponential future. We must make a fundamental shift in the dimension of
our thinking if we are to have any hope of capturing the metatrends that
allow us to stand on the shoulders rather than the toes of our intellectual
forebears. One examination of a trend in our own industry should at least
illustrate this concept. Consider, literally, the evolution in dimension of
the relation between human and computer:

• Dimension 0 (circa 1960): a point punched in a card in a deck
slipped through the slot to a white-robed high priest of the batch-
oriented mainframe, responding to one’s JCL commands in hours
or days and using a simple linear file system;

• Dimension 1 (circa 1970): a line of text typed interactively on a
Teletype on-line to a time-shared minicomputer with response in
seconds or minutes, a command-line interpreter, a simple line-
editor and a more powerful hierarchical database;

• Dimension 2 (circa 1980): a screen of text randomly accessible
on a single-task personal computer with a response in fractions of
seconds or seconds, a menu-bar command structure, character-
oriented word processor and relational database;

• Dimension 3 (circa 1990): multiple “overlapped” windows of
graphics and text on a multitasking workstation with a response in
milliseconds, an object-oriented icon-based command
structure, a WYSIWYG “desktop publishing system” and an
entity-object database seamlessly incorporating text, graphics and
images;

• Dimension 4 (circa 2000): ...left as an exercise for the reader’s own capabilities...

–––––––––––––––––
Dr. Brian Boyle, currently Director of Research for NOVON Research
Group in Berkeley, California, holds an MD (Board Certified in Radiology,
Specialty in Nuclear Medicine and Medical Imaging) and a PhD in
Medical Information Science from the University of California, San Francisco,
an MSEE in Computer Science from U.C. Berkeley, and a BS in
Biophysics and in Experimental Psychology from Harvey Mudd College.
Prior to founding NOVON in 1980 to investigate prospects for distributed
“Non-Von Neumann” architectures, Dr. Boyle held positions in academia
and industry and consulted for the scientific, commercial and government
sectors. As Manager of Varian Associates’ Instrument Data Systems
Division, he has worked with the UNIX® Operating System since 1974.
He investigated its usage and future while managing the Systems and Software
Group of McGraw-Hill’s high-technology market research arm until 1985,
when he returned to NOVON for its independent status as a research,
consulting and publishing organization with the charter of “exploring the
next generation of systems and software” technologies. An active member
of the ACM, AAMI, IDBMA (Pick OS), IEEE, Usenix and Uniforum, he has
participated in the creation of the IEEE 1003 “POSIX” standard since 1981
and founded (in 1983) the Uniforum Internationalization Technical
Advisory Committee, efforts he continues today as a frequent invited
speaker at academic, technical and trade conferences worldwide.
