4.9.7 Warfighting is a Safety Feature, not a Bug

Another way to think about the complex emergent social benefit of warfighting is to think about fire
safety. Fire safety engineers design special doors called fire doors. Fire doors improve building safety
because they can contain and isolate fires in specific parts of a building to prevent them from spreading
or to slow their expansion down. With this concept in mind, consider the function of national borders.
National borders are forged by warfare and have essentially the same safety features as fire doors. When
an abstract power hierarchy becomes hazardous (e.g. oppressive), national borders make it possible to
contain or isolate this hazard to a specific region of the world and to prevent it from spreading.

Thanks to national borders, exploitative and abusive abstract power hierarchies remain contained. So long
as a population can do a good job at securing their own borders, then the only threat of oppression they
have to worry about is oppression from their own ruling class. And so long as our species continues to do
a good job at warfighting to divide control authority over our valuable physical resources, we can minimize
the amount of damage that could be caused by a single ruling class. In other words, warfare is a safety
feature, not a bug. It protects humans from themselves – particularly their exploitable belief systems. It
prevents the spread of hazardous belief systems by containing them.

4.9.8 War is the Exact Same Primordial Game that All Organisms Play, but Given a Different Name

In addition to helping sapiens protect themselves against the hazards of their own abstract belief systems,
it could also be argued that warfighting has the same upsides as predation in nature. When sapiens engage
in warfare, they engage in the same physical power projection activity other living organisms have
engaged in for four billion years. It is therefore reasonable to believe that warfare could also produce the
same complex emergent benefits that physical power projection competitions provide to all species –
namely the ability to revector resources to those who are the strongest, most intelligent, and most
capable of adapting to their environment and therefore more capable of surviving in a world filled with
predators and entropy.

The explanation for why sapiens are so prone to warfighting could be the same explanation for practically
everything else we observe in nature: what we see is what survives. There appear to have been many
human polities over the past ten thousand years which didn’t raise militaries and didn’t go to war. We
know these societies existed because we can find and dig up their mass graves and see how early and
unpleasantly they died. It’s not the case that peace-loving societies didn’t exist, it’s the case that peace-
loving societies didn’t survive. If we account for our survivorship bias (i.e. account for the fact that what
we observe, including ourselves, went through a very rigorous selection process which weeded out a lot
of other possibilities), then we can reframe a question about why sapiens fight wars in a way that’s much
easier to answer: “why do some societies believe they will be able to survive without warfighting?” Or to
put it differently, “what gives humans the impression that they are the only animals in nature who have
an inherent right to live peacefully without predators?”

Admittedly, “survival of the fittest” is an unsatisfying explanation for warfare. Like eating a head of lettuce,
this explanation is neither salty nor sugary. It’s a stoic explanation; it just is. It’s neither a profound nor a
romantic way to think about war, it’s an amoral one which accepts physical conflict as natural behavior.
This point of view does nothing to justify or reconcile the cruelty and bloodshed we see in war, nor how
angry it makes us feel. And frankly, it's boring, not to mention super annoying, to be reminded of the fact that
sapiens are just as ordinary and unremarkable as the wild animals we like to look down upon and believe
we have outsmarted. It's also quite frustrating to think that, despite how bulbous our foreheads are, we have not found a way to outsmart natural selection except in our imaginations.

Nevertheless, here we are, growling, snarling, scratching, snapping, and biting at each other over food
and territory and other resources, no different than a pack of wolves or wildcats. The only difference
seems to be the abstract thoughts filling our bulbous heads with reasons to justify or condemn our
physical aggression. War appears to be the same power projection game we can all independently observe
in nature, just with a different name. Instead of “primordial economics,” people call it “war.” Instead of
“the survivor’s dilemma,” people call it “national strategic security.” Whatever names people choose to
assign to these phenomena and whatever fancy uniforms they like to wear while they wage it, the first-
principles explanation of warfare remains the same (hence why it's called a first-principles explanation).
Sapiens appear to be one of thousands of other species of pack animals playing the same primordial game,
using physical power to establish their dominance hierarchy, the same as practically everything else.

While perhaps not as emotionally satisfying as other pontifications about warfare, the previous chapter
about power projection in nature can at least give the reader an appreciation of “why” and “how” warfare
works without complicating our understanding of the matter with abstract and unfalsifiable ideologies.
War can easily be explained using basic principles in physics and biology. If we can wrap our heads around
the complex emergent benefits of power projection in nature, then we should be able to understand the
complex emergent social benefits of warfare, despite how many other abstract explanations people and
their bulbous foreheads have come up with over the last several thousand years.

4.9.9 War Highlights the Lessons of Survival of the Fittest and the Most Capable of Countervailing Entropy

We live with the immense burden of being prisoners trapped behind our own prefrontal cortices. We
cannot see or experience the world as it is, we can only experience a version of the world that has been
filtered through a minefield of emotionally-charged abstract thoughts and symbolism. Like looking
through a pair of rose-tinted glasses, sapiens look at the world through ideology-tainted glasses. This
makes it very difficult to present arguments about the benefits of warfare on society without it being
complicated by abstract ideas like ethics, morals, theologies, or modern politics. The solution? Simply
don’t call it warfare. Call it something else like primordial economics.

In case it wasn’t already clear to the reader, the previous chapter about power projection tactics in nature
was a clandestine attempt to get the reader to understand and appreciate the necessity and complex
emergent social benefits of warfighting. These benefits are backed by science and lots of independently
validated empirical evidence with naturally occurring data sets that can be analyzed ex post facto for
causal inference. Like most clandestine operations, an argument about why warfighting is good for society
must be presented to people in this manner because it is a highly illicit activity; it goes against social
custom to explain why warfare is good for society.

To circumvent conflicting ideologies regarding warfare, the author provided a detailed argument for
the emergent benefit of warfare without calling it warfare. Just by changing the name of the activity and
using non-human examples (like the origin story of birds and mammals), it’s possible to provide a logical,
first-principles, grounded theory about the necessity and merit of warfare that is backed by science and
loads of empirical evidence.

As a refresher, the logic presented in the previous chapter was as follows: power projection competitions
give life an accelerated existential imperative to innovate, self-optimize, and vector limited resources to
organisms which are demonstrably more fit for survival in a congested, contested, competitive, and
hostile environment – an environment they don’t have the option of escaping. Physical power projection
competitions and predation have a causally inferable tendency to make life stronger, more organized,
more intelligent, more adaptable, and thus more capable of countervailing the entropy of the Universe.

If the reader understands this logic, then they should be able to understand why it’s reasonable to believe
that warfighting provides the same complex emergent benefit to humans too. War is effectively the same
power projection game with a different name, so it stands to reason that warfare would have the same
complex emergent properties for humans as it would for other organisms. We can validate from our own
experiences that warfare appears to give society the ability to self-optimize and ensure its limited resources are vectored toward the subset of the population who are more fit to survive in congested,
contested, competitive, and hostile environments. Warfare clearly appears to help society become more
capable of countervailing the entropy of the Universe. We have no shortage of empirical evidence to back
this theory; we see it practically everywhere on – and off – Earth.

4.9.10 The First Moonwalkers were Not Just Explorers – They were the World’s Apex Predators

The first Earthen life to escape Earth and walk on the surface of another heavenly body was sapiens, Earth's peak predator. The population of sapiens which walked on the moon was the population which
devoted itself to mastering the challenge of warfighting. Moonwalkers weren’t just any sapiens, they were
sapiens from the world’s most powerful military nation who got there by riding an oversized version of
nuclear intercontinental ballistic missiles partially designed by Nazi scientists and engineers. NASA’s
Director and chief engineer for the Saturn V program were Nazi rocket scientists and engineers ushered
into the US without public scrutiny via a secret intelligence program known as Operation Paperclip. [110]

Once humans made it to the moon and started walking around on its surface, the first semantically and syntactically complex language spoken there was English. They didn't just speak any random
language, they spoke the acrolect of a United Kingdom which had just spent several preceding centuries
conquering and colonizing the planet through several aggressive military campaigns, to include one which
successfully established a beachhead in North America, the place from which the humans who walked on
the moon launched. And where did these moonwalkers get the calories they needed to do all that walking
and talking on the moon? They got their calories from freeze-dried vegetables grown in fields plowed by the oxen they entrapped and enslaved, and from freeze-dried beef from the cattle they domesticated and slaughtered over thousands of years.

The point is, there are clearly some complex emergent benefits of warfighting when it comes to
countervailing the Entropy of the Universe. The first moonwalkers weren’t just our species’ top explorers,
they were our species’ most powerful and aggressive power projectors. In fact, they were arguably life’s
most powerful and aggressive apex predator.

The Apollo campaign was a thinly-veiled effort to raise public funding and support for the research and
development of the critical enabling military technology needed to remain strategically competitive with
the USSR at a time when public support of the military was at an all-time low (the entire campaign
happened during the Vietnam War and a pacifist movement). In the face of growing pacifism caused by
discontent of an ongoing war, how do you convince the American public to send lots of public money to
newly patriated Nazi scientists and engineers to develop better intercontinental and cislunar nuclear
missiles? Simple: Swap nuclear warheads with astronauts, and pump funding into a different marketing
strategy which put lots of imagery of it on TV with inspiring narratives about peace and exploration. [110]

With this simple sleight of hand, a workforce will have little to no reservation about devoting substantial amounts
of their time, technical talent, and public resources towards the development of strategic military assets.
As the author has attempted to demonstrate, all one must do to circumvent a domesticated population's negative opinions about warfighting is simply call it something other than warfighting. Call it
primordial economics. Call it space exploration. Or perhaps, call it a peer-to-peer electronic cash system.
Simply change the branding, and people will have no problem pumping boatloads of money into the same
technologies with the same functional use cases with little to no reservations.

Look around, and we can see similar evidence of the emergent benefits of warfare everywhere. The
organizations which master warfare have a clear tendency to become technological and economic leaders
and general-purpose masters of their natural environment. This is not political dogma; this is an
independently verifiable observation backed by four billion years’ worth of empirical data. There is a lot
of supporting evidence to justify concern about populations who condemn war and refuse to fight it. We
don’t need a theory to explain the benefits of warfare; we just need to look around us. Like physical power,
warfighting is proof of its own merit. We admire the footprints on the moon left behind by those who
master warfighting; we dig up the early mass graves left behind by those who don't.

Primordial economics, the survivor’s dilemma, and the innovate-or-die and cooperate-or-die dynamics of
predation are clearly at play for sapiens just as they are for all other species. If an intellectually honest
reader can acknowledge there’s merit to this line of reasoning, then they should understand the argument
for why there are very important, complex emergent social benefits to warfighting that we have a logical,
moral, and most importantly, an existential responsibility not to ignore. We must be willing to entertain
an uncomfortable but potentially valid hypothesis that wars provide an irreplaceable social and technical
benefit to humanity. The self-inflicted stress of predation and global power competitions have clearly
made life more prepared to survive and prosper against the universe-inflicted stress of entropy.

The technical systemic benefits of predation wouldn’t change just because sapiens arbitrarily named this
primordial behavior “war.” The dynamics of physical power projection don’t change just because sapiens
levy abstract thinking to produce moral, ethical, or theological justifications or explanations for it. Morals,
ethics, and theologies are not first-principles explanations backed by empirical evidence or scientific rigor.
A first-principles explanation of warfare backed by empirical evidence is that all living organisms physically
battle each other over resources and have clearly experienced major systemic benefits from those battles,
like becoming organized, powerful, and resourceful enough to survive devastating existential threats like
meteor strikes.

Since the emergence of primordial life, the act of living has been fundamentally an act of physical power
projection to countervail a cold, harsh, unforgiving, and unwelcoming universe filled with predators and
entropy. This didn’t change just because sapiens grew overclocked, overpowered, oversized neocortices
capable of thinking of abstract, imaginary worlds where this isn’t incontrovertibly true. The human
capacity to believe in unicorns doesn’t make unicorns physically real, and neither does the human capacity
to believe in peaceful, alternative forms of physical power as a basis for settling disputes and establishing
their dominance hierarchy. At least when people choose to believe in unicorns, they don’t make
themselves systemically vulnerable to foreign invaders or to population-scale systemic exploitation
(unless you count the unicorn symbolizing the purity and power of the British monarchy).

It’s simply not logical to believe that sapiens are exceptions to these principles, especially when there is
so much empirical evidence backing it. It seems like it would be far harder to make the argument that it
was confounding effects, correlation, or pure coincidence that the first Earthen life to walk on the moon
were English-speaking Americans riding on top of the missiles they originally developed to kill each other.
The much simpler explanation backed by first principles logic and highly randomized, causally-inferable
empirical evidence is that Americans were the first to walk on the moon for the same reason lions are ostensibly the king of the jungle (tigers are actually the king of the jungle, if you don't include fire-wielding sapiens).

Perhaps it is difficult to appreciate the complex emergent benefits of warfare because sapiens desperately
want to believe they aren’t predators or that they have somehow transcended the cruelties of survival in
nature. People like to imagine that they have discovered an equally effective basis for settling disputes,
establishing control authority over resources, achieving consensus on the legitimate state of ownership
and chain of custody of property, or otherwise just solving the existential imperative all animals face of
establishing pecking order over limited resources. So what do they do? We have already discussed what they
do. They adopt abstract belief systems where people have imaginary power, and then they literally put
on costumes and LARP as people with imaginary power to settle their disputes.

Most pack animals use physical power to settle disputes, establish control authority over resources, and
achieve consensus on the legitimate state of ownership and chain of custody of property. This is both an
intraspecies and interspecies protocol that is 20,000 times older than anatomically modern sapiens and 80,000 times older than behaviorally modern sapiens, who have tried (and so far, been unsuccessful) to create alternative protocols for managing resources which don't rely upon physical power –
alternatives which utilize imaginary power and are incontrovertibly and demonstrably dysfunctional.
Despite how much sapiens wish they could escape the energy expenditure and injury risk associated with
warfare, five thousand years of written testimony (plus another five thousand years of agrarian fossil
records) indicate sapiens have very clearly never found a satisfactory substitute for physical power.

All life forms owe their existence to the process of leveraging physical power to capture and secure their
resources. Humans are no different. Humans don’t negotiate for their oxygen; they capture it using
physical power. Humans don’t negotiate for the food they eat; they capture it using physical power (and
then negotiate with each other over price). Humans don’t negotiate for the land they occupy; they capture
it from nature using physical power (and then negotiate with each other over price). It’s incontrovertibly
true that living organisms gain and maintain access to their resources using physical power. To live is to
project physical power to capture and secure one’s control over resources. This isn’t something to
condemn, it’s something to devote oneself to studying and mastering.

4.9.11 A Sign of Peak Predation is the Hubris to Believe that One Has Transcended the Threat of Predation

Does natural selection condemn physical power projection as morally, ethically, or theologically bad? No,
it does the exact opposite; it asymmetrically rewards this behavior with more resources and the enormous
privilege of survival. Take a look around, and the reader will likely note that the only animals condemning
the use of physical power to establish intraspecies dominance hierarchies are sapiens. And they're
probably not using a technical justification, they’re probably using subjective, abstract reasoning and
unfalsifiable claims about “good” or “god.”

Like Dave Chappelle’s fictional character Clayton Bigsby (a passionate member of an anti-African-
American hate group who is blind and therefore incapable of seeing he's African-American), pacifists are comedically ironic. They turn a blind eye to their own nature. They carry the genes of thousands of
generations of predators who scorched the world and slaughtered their way to the top of the food chain.

They are the children of their ancestors’ conquest, so comfortable and complacent in their homes they
built over the graves of the conquered that they forget their own heritage – they forget they’re the
colonialists, conquerors, and peak predators benefiting directly from the activity they condemn.

It's not surprising that modern domesticated sapiens can develop distorted or misattributed views.
Sapiens have become so high on the food chain that many eat thousands of animals without killing a single
one. They have mastered predation and killing to a point where they have turned it into a subscription
service. They outsource their predation services so effectively that they forget they’re predators and they
get upset, even horrified, when they're reminded about what they're paying for.

Modern domesticated sapiens devour their meals sitting in cushioned seats, watching mentally-cushioned
videos of wildlife that has the cruelest and most brutal parts carefully edited out so the video better aligns
with the inspiring music and narration given by some guy with an English accent (sapiens crave stories
told by storytellers, especially when they are told by wise old shamans who help them reconcile the
mysteries and cruelty of nature). And they do it all from the comfort of well-insulated, air-conditioned
rooms with entrapped and genetically deformed wolves and wildcats licking their feet and worshiping
them. Modern domesticated sapiens sincerely and unironically believe they understand nature and have
transcended the threat of predation. They genuinely think they have discovered a viable alternative to
physical power projection – that they outsmarted natural selection.

Ironically, there is probably no greater sign of peak predation than the extraordinary amount of hubris
required to believe that one has transcended predation. The fact that people get so upset when they are
reminded where their steak comes from doubles as proof of their extraordinary success at predation.
They prey upon animals without even thinking about it. They scorch the world, colonize it, conquer it, and
kill off their competitors so effectively that they forget how they came to “own” the land they’re living on
and “morally” defending in the first place.

These same concepts apply to warfare. Sapient populations can become so good at warfare that they
develop the hubris to believe they have transcended warfare (this is a common problem in system safety
too – success at safety breeds complacency). As a simple illustration of the point, the reader is invited to
study every moral, ethical, or theological argument condemning warfare. Make a list containing the
countries of origin from which those arguments were made and then compare that list to a list of countries
with the most dominant militaries. Don’t just do linear regression to see correlations, do other techniques
like propensity score matching to determine if there are causally inferable relationships. Build predictive
models on that data and run the model against history to see how accurately it can predict where wars
are fought and won. When similar techniques have been tried by anthropologists, they find that their
models are strikingly accurate. Several anthropologists have committed themselves to research which
causally links the most prosperous, organized, cooperative, resource-abundant societies, the ones which enjoy the greatest amount of art, literature, and social freedom, to the most warring societies. [22]
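
As a purely illustrative sketch of the kind of analysis being suggested, the following Python snippet runs a propensity-score match on a randomly generated, hypothetical country-level dataset. The covariates, the "dominant_military" treatment flag, and the "prosperity_index" outcome are invented assumptions for demonstration only; they are not data from the research cited above.

```python
# Toy sketch of the propensity-score-matching exercise described above.
# All data is randomly generated and purely hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "log_population": rng.normal(16, 1, n),
    "log_gdp_per_capita": rng.normal(9, 1, n),
    "dominant_military": rng.binomial(1, 0.3, n),  # hypothetical "treatment" flag
    "prosperity_index": rng.normal(50, 10, n),     # hypothetical outcome
})

covariates = ["log_population", "log_gdp_per_capita"]
treated = df["dominant_military"] == 1

# 1. Estimate propensity scores: P(dominant_military | covariates).
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["dominant_military"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# 2. Match each treated unit to the untreated unit with the closest score.
control = df[~treated]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(df.loc[treated, ["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare outcomes across matched pairs (a crude effect-on-the-treated
# estimate). With purely random data, the result should hover near zero.
effect = (df.loc[treated, "prosperity_index"].to_numpy()
          - matched_control["prosperity_index"].to_numpy()).mean()
print(f"matched-pair estimate: {effect:.2f}")
```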

4.9.12 Peace Depends upon Demonstrably Flawed Assumptions about Predatory Human Behavior

Another way to describe the sapient desire for peace is that it’s fundamentally a desire for an end to
predation. This desire appears to have spontaneously emerged after our species preyed on enough
neighboring organisms to place ourselves comfortably at the top of the food chain. But is it realistic to
expect predation to end? Given the sociotechnical benefits outlined thus far, is it even a good idea to
desire an end to predation? The innovate-or-die and cooperate-or-die dynamics which emerge from
predation clearly benefit life’s ability to countervail entropy. It’s no secret that many of the most
revolutionary technologies developed by humankind over the past 10,000 years emerged from their
military conflicts against each other (i.e. human-on-human predation). It is also no secret that the
existential threat of warfare motivates sapiens to cooperate at scales which dwarf the levels of
cooperation shown by other species.

Even if we ignore the systemic benefits of warfighting, it remains true that unrealistic design assumptions
must be met for alternative approaches to physical power to function properly as a mechanism to settle
sapient disputes, establish control authority over sapient resources, and achieve consensus on the
legitimate state of ownership and chain of custody of sapient property. When high levels of sympathy,
trust, and cooperation exist within a sapient population, then abstract power hierarchies seem to be able
to function nicely as an alternative to warfare. But given a large enough population over a long enough
time span, it’s hard to believe that these conditions can be permanently satisfied well enough to prevent
them from becoming dysfunctional.

Consider how much of the population needs to be untrustworthy for modern abstract power hierarchies
to become dysfunctional. We can use the United States to serve as a better-case scenario of an abstract
power hierarchy which has a lot of checks and balances (i.e. logical constraints encoded into rules of law)
to logically constrain the exploitation and abuse of abstract power. The United States is one of many
presidential republics with a fully independent legislature supposedly capable of preventing the
consolidation and abuse of abstract power. But with 535 members of Congress representing the will of
336 million Americans, it would only take 0.00008% of the US population (the president plus ~51% of its
legislature) to be dishonest or incompetent with their imaginary power for the United States to
degenerate into unimpeachable, population-scale exploitation and abuse of abstract power. The reader
is invited to ask themselves: how responsible is it to entrust 0.00008% of the population with abstract
power and control authority over the remaining 99.9999%?
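
For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python. The exact threshold used (a bare majority of 268 legislators plus the president) is an illustrative assumption; the paragraph above rounds the same idea to "the president plus ~51% of the legislature."

```python
# Back-of-envelope check of the ~0.00008% figure quoted above.
# Assumptions (illustrative): ~336 million Americans, a 535-member Congress,
# and a bare majority of that legislature acting in concert with the president.
population = 336_000_000
legislators = 535
majority = legislators // 2 + 1   # 268 legislators, a bare majority
actors = majority + 1             # plus the president -> 269 people

share = actors / population
print(f"{actors} people are {share:.7%} of the population")  # ~0.0000801%
print(f"everyone else: {1 - share:.5%}")                      # ~99.99992%
```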

Combining this observation with the core concepts presented previously, here's another way to frame the
same point: because of our desire not to use physical power to establish our dominance hierarchy, people
adopt belief systems like presidential republics that can be exploited and abused even when 99.9999% of
the population is competent and trustworthy. With the exception of extremely limited (but revocable)
privileges provided to citizens by the Second Amendment of the US Constitution, the US abstract power hierarchy relies on trust in people with imaginary power to manage their resources and keep them secure against high-ranking members of the population who wield enormously asymmetric levels of abstract power and control authority over them, authority those members clearly have incentives to exploit. 99.9999% of the
population must trust that 0.00008% of the population will be clever, competent, and honest enough with
their abstract power for the US presidential republic to function properly as a viable substitute for physical
power as the basis for settling their disputes and establishing pecking order over their resources. Again,
the US was chosen because it represents a better case scenario; many abstract power hierarchies would
be even easier to systemically exploit than the US presidential republic.

It is clearly unrealistic to expect 0.00008% of the population to be clever, competent, and trustworthy
with abstract power all of the time. Most citizens intuitively understand that the imaginary power and
control authority we give to our politicians represents a major attack vector which is practically
guaranteed to be systemically exploited and abused eventually, just as five thousand years of written
testimony tells us they have been exploited and abused in the past. There’s a reason why the word
“politician” is often considered to be a pejorative term. Citizens certainly recognize the risk of exploitation
and abuse of abstract power. They are simply willing to accept the risk of exploitation, and even to tolerate
a certain threshold of it, in exchange for not having to expend energy or risk injury settling their disputes,
managing resources, and establishing pecking order using real-world physical power. In other words, we
put up with it because we know that fighting to settle our disputes and establish our dominance
hierarchies would be time intensive, energy intensive, and destructive.

We citizens agree to take part in a semi-consensual imaginary structure to give us a temporary reprieve
from settling disputes, capturing and securing resources, and establishing pecking order the way natural
selection demands from all species. We put up with the recurring flaws and the dysfunctional behavior
of our rules of law. We acknowledge the tendency for politicians everywhere, both in our abstract power
hierarchies and in our neighbor’s abstract power hierarchies, to be untrustworthy with their rank, and we
put up with it because we don’t want to have to fight each other to settle our disputes and establish our
pecking order the way wild animals do. At least subconsciously, many people seem to intuitively understand that the alternative to abstract power is physical power.

4.9.13 Peace has only been a Temporary Reprieve from War, not a Permanent Replacement to War

Like the lunar eclipse enjoyed by Admiral Columbus, the temporary and fleeting moments of reprieve that
sapiens get from the cruelty of predation are both beautiful and awe-inspiring. Unfortunately, both our
primordial nature and our written history indicate that these moments are an exception, not a rule. Peace
appears to be a reprieve from war, not a replacement for it. Sapiens can be thankful they get to experience
these reprieves on special occasions when exactly the right conditions align in exactly the right way, but
it is clearly not reasonable for them to expect it to last forever.

We can see that abstract power hierarchies can indeed function properly in a narrow subset of cases
where populations can reasonably expect their ruling class to use their imaginary powers effectively.
Sapiens have proven that it’s possible to use imaginary power to satisfactorily settle disputes, establish
control authority over resources, and achieve consensus on the legitimate state of ownership and chain
of custody of their property. They have proven that it’s possible to use their imaginations, abstract
thinking skills, and design logic to cushion and insulate themselves from a cold, hard reality filled with
predators and entropy. They can establish a pecking order in a way that doesn’t expend energy or risk
injury like physical power does.

The problem is, of course, that imaginary power is just that: imaginary – it doesn’t physically exist. Humans
are playing make believe. They only think they’ve found a viable replacement to physical power as the
basis for establishing their dominance hierarchy. The ability to use imaginary power as a surrogate to real
power is a story, an inspiring message that people are eager to believe in because it’s hard to reconcile
how cruel and unsympathetic the laws of nature and survivorship truly are. These stories help us mentally
escape from what could be the most difficult part of life to reconcile: the fact that we’re the cruelest and
most unforgiving peak predators of them all. But the reality is undeniable: we’re constantly at war.

Study enough nature and read enough history, and it becomes easy to see why some people argue that
war is the rule, and peace is merely a temporary exception. Crack open a history book or look at a
teaspoon of ocean water under a microscope to see evidence of this assertion on your own. Like life itself,
agrarian society has always been at war – it has always been projecting physical power to capture and
secure access to resources. Society has always been fighting to decentralize control over resources to
maximize its chances of survival. Based on what we can independently observe, war appears to be a
continuous and cyclical process that takes place because agrarian society has yet to discover a sufficient alternative means of settling their disputes, establishing control authority over their resources, and achieving
consensus on the legitimate state of ownership and chain of custody of their property. They attempt to
use abstract-power-based dominance hierarchies rather than physical-power-based dominance
hierarchies, but it clearly doesn’t work. Our species is continuously fighting on a global scale; it’s one of
our predominant behaviors.

4.9.14 It Might be Immoral to Claim that War is Immoral: The Common Argument Against Pacifism

“You may not be interested in war, but war is interested in you.”
Leon Trotsky [111]

One of the most successful military commanders of all time was Timur the Great, a son-in-law to the lineage of Genghis Khan. Timur went generally undefeated in battle, and it is estimated that 5% of the global human population was killed during the 14th century Timurid conquests and invasions. During the establishment of the
Timurid Empire, some cities fell so quickly that Timurid armies didn’t even need to kill their contenders or
haul their bodies to their graves. In some cases, the conquered would dig their own mass graves and then
be buried in them while still alive. [112]

The Timurid conquests serve as one of many examples in history illustrating a fundamental flaw with
pacifism: pacifists literally dig their own graves.

According to the Oxford dictionary, pacifism is a belief that war is unjustifiable under any circumstances
and that all disputes can and should be settled by peaceful means. The problem with this assertion is that
it relies on unrealistic views about nature and human behavior. For people to become pacifists, they must believe there is an alternative to physical power that is equally capable of keeping people safe and secure against predators, despite many examples showing that there isn't. Unfortunately, there does not appear to be a viable alternative to physical power that has the
same complex properties needed to keep people secure against invaders and oppressors. When
populations refuse to master the art of projecting physical power to secure themselves, the outcome is
often the same: mass graves dug by the pacifists themselves – people who are clearly willing to die without
putting up a fight.

Pacifists seek peaceful adjudication to all property or policy disputes. The problem with this approach is
that it requires a judge or jury (people with abstract power appointed by an abstract power hierarchy) to
make a judgement about the dispute. Peaceful adjudication therefore requires trust in a judge or jury to
cast impartial or fair judgements (note how the term “fair” in this context is subjective, thus not impartial).
Peaceful adjudication also requires common consensus as well as trust in the people being judged to
honor their judgement. These all seem like good ideas in theory, but they rely on an unrealistic assumption
that these conditions can and will be met, and that people are going to remain sympathetic to their
verdict. As history makes abundantly clear, it’s not possible for these conditions to be met all the time.
Sooner or later, Timur or one of his many reincarnations is going to be unsympathetic to people’s desires
for peaceful adjudication.

An impolite but simple argument to make against pacifism is that pacifists are literally self-domesticated
animals who are unfit for survival in a world filled with predators. Pacifists forfeit their capacity and
inclination to project physical power for abstract reasons, usually because of the energy it expends or the
injury it causes. History makes it very clear what happens to pacifists. Their belief systems get
psychologically exploited or their resources get physically captured or invaded. They get oppressed by
corrupt, unimpeachable rulers, or they get steam-rolled by unfriendly neighbors who don’t subscribe to
the same pacifist ideologies. We have thousands of years of detailed written testimony about this, but
somehow people keep allowing themselves to forget the moral of the story.

It could be argued that pacifism is what happens when people spend too much time with the friendly,
docile, domesticated versions of themselves, or watch too many carefully edited videos of nature which
have the unforgiving, brutal, and cruel parts (i.e. the most natural parts) filtered out. Pacifism is what
happens when generations of people spend their entire lives outsourcing their physical security and
predation to other people, so they don’t have to experience the discomfort associated with these
activities. They spend their lives without having to earn their food or their freedom of action – without
having to kill the animals they eat, or to kill the people who are unsympathetic to their desire to live
comfortably with the property and policies they value. It should come as no surprise that these types of
people can develop distorted points of view about reality.

Pacifists are people who become spoiled by the spoils of war, oblivious to the reason why they can afford
to forget about Timur. Successful populations can get so comfortable living their leisurely, sedentary, and
domestic lives that they gain the luxury of developing unchallenged, imaginary beliefs about the world –
worlds where people aren’t actively capturing and securing access to everything using brute-force physical
power and where pacifists themselves don’t directly benefit from this behavior. Pacifists appear to live in
an imaginary world that does not exist (one devoid of predators), perhaps because they have little
experiential knowledge of wild nature from the safety and comfort of their neatly-structured, un-harassed
societies. By practically all written accounts, this world has never actually existed. Modern agrarian society
appears to have always been physically fighting each other intermittently.

It is incontrovertibly true that forfeiture of physical power makes a population physically powerless to
defend themselves. Moreover, there are clear, causally inferable relationships between physical power
projection and prosperity (or conversely, there is a clear, causally inferable relationship between pacifism
and mass graves). It is therefore just as easy to argue that warfare is justifiable in some cases, and that
mastering the art of warfighting is just as morally imperative for society. History makes it clear that
pacifism can be a security hazard, so it could just as easily be argued that it’s unethical for pacifists to
motivate people to adopt pacifist ideologies which make them demonstrably insecure against predation.

This same line of reasoning has been repeated many times throughout history. The lessons of history tell
us why it is strategically crucial for populations not to allow themselves to believe that physical power
competitions are bad for society just because they use a lot of energy or because they risk injury. When
pacifists morally condemn the use of physical power, they contribute to a systemic security hazard which
commonly invites invasion or oppression.

Predators feed on weakness. Oppressors benefit from the sanctimonious peer pressure of pacifists who
condemn physical aggression; oppressors want their population to be passive. Passive populations are
physically docile, and their belief system is easy to exploit. Again, this isn’t dogma speaking; it is
incontrovertibly true that docility leads directly to slaughter. We have more than three dozen naturally
randomized A/B testing experiments between wild and domesticated animals to causally infer a link
between lack of physical aggression and systemic-scale exploitation. There is a clear, causally inferable
relationship between pacifism and exploitation, which humans continuously take advantage of. It is
a matter of fact that boars don’t experience peace when they are selectively bred to be less physically
powerful and aggressive, they experience being turned into bacon. Our civilization was built upon making
animals docile and exploiting them at massive scale to plow our fields and fill our stomachs. Sapiens are
animals too; it’s unreasonable to believe we aren’t equally vulnerable to the same threat.

For whatever reason, people keep allowing themselves to forget the basic lesson of domestication despite
how often it reappears in history. The lesson is simple: systemic exploitation and abuse is a byproduct of
people who don’t use physical power to settle their disputes, not a byproduct of people who do. Remove
a population’s capacity and inclination to impose severe physical costs on their neighbors, and they will
face severe systemic security problems. If it’s true for aurochs, boar, and junglefowl, then it’s reasonable
to expect it to be true for primates like sapiens, too. A human population’s aversion to physical aggression
and their forfeiture of physical power is likely to be a direct contributing factor to insecurity.

It is possible that foreign invasion and the egregious levels of loss associated with wide-scale systemic exploitation by corrupt government officials are directly attributable to pacifism – to the people
who don’t project physical power to impose severe physical costs on those who attack or abuse them.
That makes it just as easy to morally condemn people who aren’t physically aggressive as it is to condemn
people who are. At least the people who assert that physical power and aggression are good can back it
with billions of years of empirical evidence. They can point to the animals which enjoy the highest levels
of freedom and self-agency in the wild and note how mean and aggressive they are – how strong they
are, how sharp their teeth and their nails are. That’s probably not a coincidence.

Americans attempted to peacefully declare independence via a piece of paper in 1776, but that
independence wasn’t formally acknowledged until thousands of British soldiers were slain over the
following seven years. The Constitution was written years after that fight was won. Our founding fathers
were aware of the emergent benefits of physical power and also well acquainted with the threat of an
overreaching and abusive government. They understood that people who allow themselves to forfeit their
ability to project physical power for theological, philosophical, or ideological reasons are people who
forfeit their own security. This is why the US Constitution has a Second Amendment.

In conclusion, those who understand how domestication works can understand why pacifism is as easy to
call an unethical and immoral bane to society as the warfighting it condemns. Villainizing the use of
physical power (watts) to impose severe physical costs on potential attackers is perhaps the worst
strategic blunder a freedom-loving society could make. Pacifism causes populations to adopt the exact
opposite strategy they need to remain safe and secure. It causes populations to become weak,
complacent, docile, and unfit to survive in a congested, contested, competitive, and hostile environment
filled with predators and entropy.

4.9.15 There’s No Excuse for Failing to Understand the Importance of Physical Power Projection

“Your problem is not the problem. Your problem is your attitude about the problem.”
Captain Jack Sparrow [113]

Many have claimed that physical power breeds oppression, but this section provides a counterargument
that oppression is actually caused by the asymmetric application of physical power rather than physical
power in and of itself. It’s not simply the fact that one side uses physical power on the other that’s to
blame for oppression. It’s that one side is asymmetrically more capable of and willing to use physical
power on the other side that’s to blame for oppression. In other words, oppression occurs when one side
uses physical power and the other one doesn’t. This would imply that it’s actually the lack of capability or willingness to utilize physical power that breeds oppression. To put it bluntly: oppression is what
happens when people don’t fight for what they value.

Herein lies a counterargument about why it is not logical for the oppressed to blame their state of
oppression on the physical strength and aggression of their oppressor. It’s not reasonable for people to
villainize the use of physical power to capture and secure resources because this is simply how nature
works. Look around, and you will see that the strongest, sharpest, and most physically assertive creatures
rise to the top of every food chain in practically every biome on Earth. For people to ignore their primordial
roots and these basic lessons of nature and survivorship is not logical – it’s ideological. The author asserts
that this is what happens when people spend too much time separated from predators. They forget how
survival works.

The act of using physical power to settle disputes, establish pecking order, and secure resources preceded
agrarian civilization by billions of years – life has always behaved like this since it was nothing but a thin
film of organic material stretched across a volcanic rock, and there’s simply no logical reason to believe
that sapiens would be an exception to this behavior. The reasoning that people use to claim that the use
of physical power is “bad” is ideological, not logical. But ideologies alone can’t secure irrigated land or
protect society from neighboring societies who want to take that irrigated land back – only physical power
does that.

If we were to factor basic lessons of nature and survivorship into our calculus, then one counterargument
to people who claim that physical power and aggression is the root cause of oppression is that the
oppressed should take accountability for their lack of physical power and aggression. It’s irrational to
expect sapiens to be exempt from the basic principles of survivorship that we can all independently
observe and verify. Life has always been about survival of the fittest – about finding the best ways to
project power and adapt. Our imaginations and our ideals don’t change the laws of physics.

As argued throughout this thesis, survival of the fittest didn’t suddenly disappear when sapiens started
using abstract thinking to form moral, ethical, or theological concepts. Sapiens are not exempt from life’s
rigorous natural selection process. We do not get to unsubscribe from primordial economics and the
survivor’s dilemma just because we happen to be capable of believing in imaginary alternative realities
where physical power isn’t the primary basis for settling our intraspecies disputes and establishing our
dominance hierarchies.

For the past four billion years, the universe has been consistently harsh and unsympathetic to organisms
which don’t find ways to project power in increasingly clever ways. If any organism is to survive and
prosper in this environment, they must stay accountable to themselves and not allow themselves to stop
projecting power. Sapiens are organisms. Like any other organism, they are responsible for developing
increasingly clever ways to project power for the sake of survival and prosperity in a world teeming with
predators and entropy. Since sapiens are peak predators, they especially must keep searching for
increasingly clever power projection tactics, techniques, and technologies which maximize their ability to
survive against themselves.

These are stoic lessons which many military officers (like the author) have learned to accept. The blunt,
logical reasoning goes something like this: Your opponent’s strength may not be the only thing to blame
for your losses; your own physical weakness, incompetence, or complacency could just as easily be blamed
for your losses. Your opponent’s inclination to be physically aggressive may not be exclusively to blame
for your losses; your disinclination to be physically aggressive at appropriate times and places could just
as easily be blamed for your losses. Your fear and aversion to using physical power could just as easily be
to blame for your losses. Your trust in untrustworthy people, or your reliance on imaginary power which
doesn’t physically defend you could just as easily be to blame for your losses. So if you don’t want to
experience losses or become oppressed, it stands to reason that you should consider taking responsibility
for your own weakness, ignorance, and incompetence rather than assigning the blame to other people. If
you don’t fight for what you value then you will be devoured, like everything else is in nature.

If you allow yourself to become physically weak, docile, and domesticated, then it stands to reason that
you should expect to suffer the same fate as dozens of other animals which became physically weak,
docile, and domesticated. If you fail to adapt to shared objective physical reality, then it is unreasonable
for you to expect (or even more ridiculous, for you to believe that you deserve) a different outcome than
countless other species who failed to adapt to physical reality over the past hundreds of millions of years.
We can all independently observe how the world functions outside of that imaginary one we keep building
inside of our heads. Proof-of-power is all around us; we can see and measure it everywhere. There is no
shortage of evidence outside our homes, inside our history books, or on top of our dinner plates.

There is simply no excuse for not recognizing the essential role that physical power plays for one’s own
security and prosperity, no matter how energy intensive it is, and no matter how much physical injury it
risks. Incontrovertible proof of how essential physical power projection is for survival is present in
practically every observable corner of our environment, at every size and scale. It doesn’t make sense to
expect nature to function differently for sapiens than for everything else. In shared objective physical
reality, every decision (including and especially the decision not to project physical power) has material
consequences, and nobody gets to unsubscribe from them.

4.10 National Strategic Security

“Before we can abolish war, we need to understand it.”
Peter Turchin [22]

4.10.1 Modeling How Modern Agrarian Society Controls its Resources

It is possible to model warfare as a resource control protocol. Sections 4.5-4.7 discussed the differences
between abstract power and physical power-based resource control systems. These sections discussed
how agrarian society attempts to use abstract power to replace physical power. The word “attempt” was
emphasized because abstract power-based resource control structures clearly tend to break down. If
abstract power functioned properly as the basis for managing resources, there wouldn’t be so many
physical conflicts. The fact that wars break out suggests that abstract power isn’t the only thing that
humans use to manage their resources, so we need to update our model.

To produce a more accurate model to describe how agrarian society manages their resources, it is
necessary to account for their use of physical power competitions. This can be accomplished by creating
a resource control structure model which incorporates both abstract power and physical power. To that
end, a hybrid resource control model has been provided in Figure 52. This model accounts for the fact
that agrarian society attempts to use abstract power to manage the state of ownership and chain of
custody of their resources, but routinely reverts back to using physical power.

Figure 52: A More Accurate Model of the Resource Control Structure Adopted by Modern Society
[88, 90, 76, 89]

This model contains the same controllers as the previously described resource control models and combines them into a single system. Three control actions worth the reader’s attention have
been enumerated and highlighted in purple. The first control action is “subscribe.” As previously
discussed, everyone tacitly subscribes to the control authority of physical power by virtue of the fact that
nobody gets to unsubscribe from the influence of physical power. No matter what belief systems people
adopt, and no matter how people choose to design their abstract power hierarchies, nobody who wields
abstract power gets to "unsubscribe" from the effects of physical power.

As previously explained, physical power is unsympathetic; it works the same regardless of whether or not
people believe in it or sympathize with it. This means our presidential republics, semi-presidential
republics, parliamentary republics, constitutional monarchies, semi-constitutional monarchies, absolute
monarchies, or one-party states are all equally subordinate to physical power and equally incapable of
escaping its impact. For these reasons, physical power is placed above abstract power in this control
structure model.

The second control action worth special attention is “constrain, decentralize, enforce/legitimize.” As
discussed in this chapter, a primary value-delivered function of warfare is that it allows people to
physically constrain and decentralize abstract power. Warfighting is the reason why control authority over
Earth’s dry land has been divided across 195 different abstract power hierarchies (what we now call
countries). The precise boundaries of these abstract power hierarchies have been adjusted many times
over the past several thousand years, but the control authority over Earth’s resources has always
remained globally decentralized precisely because of physical power projection.

In addition to physically constraining and decentralizing control authority over Earth’s natural resources,
physical power projectors also legitimize the abstract power wielded by abstract power projectors. This
references the concepts presented in section 4.5 and the example of how kings utilize the physical
power projected by their knights to legitimize and convince people to believe in their own abstract power.
By having physical power projectors project physical power within the same context of the king’s assertion
of abstract power, the imaginary power wielded by the king is easier to misperceive as concretely real.
Technically speaking, this is a false-positive correlation between an abstract input produced by the human
imagination, and a physical sensory input produced by a power projector. Nevertheless, it works at
legitimizing abstract power.

As many societies have proven time and time again through countless rebellions, revolutions, and coups d’état, kings are physically powerless in shared objective reality. All it takes to undermine the abstract
power and control authority of anyone with abstract power is to simply (1) stop believing in their abstract
power or (2) countervail the physical power of the king’s physical power projectors (i.e. the king’s army –
the people with real power). We know this process works because it is the reason why monarchies today
are almost entirely ceremonial and have virtually no abstract power (one major exception being the
Kingdom of Saudi Arabia).

The reason most monarchies today are almost entirely ceremonial is that power projectors stopped legitimizing their abstract power. The populations living under monarchies in the past got so fed up with
the exploitation and abuse of their belief system that they (1) stopped believing in the king’s imaginary
power or (2) started projecting real power to countervail their monarch’s army, thus delegitimizing the
king’s abstract power. If kings truly had the power they claimed to have, this would not have happened.
But a series of revolutionary wars have made a very compelling case that kings don’t have real power,
they just have abstract power. For this reason, abstract power projectors have been placed in a lower and
more subordinate position to physical power projectors in this model.

The last enumerated control action is “request enforcement/legitimization.” This is another tacit control
action that is easy to explain using the same king and knight example. When a king orders his army to
carry out his will, what he is really doing is asking physical power projectors to legitimize his abstract
power. The same thing happens during virtually any form of physical enforcement. Physical power is
superior to abstract power, which means physical power projectors are superior to abstract power
projectors. Additionally, laws are only imaginary logical constraints, not physical constraints. Because
imaginary constraints are demonstrably incapable of physically preventing anyone from doing anything,
laws must be physically legitimized using physical power. The request for physical legitimization and
enforcement of abstract power is therefore implicit, not explicit, but nevertheless still a request.

As many rulers have learned the hard way over many acts of mutiny, physical power projectors can and
often do choose to stop enforcing or legitimizing their ruler’s abstract power. When this happens, the
control authority that abstract power projectors have over people’s resources disappears. Conversely,
when abstract power projectors choose not to legitimize physical power projectors, the control authority
of physical power projectors does not change. For this and several other reasons, we know that abstract
power projectors require the tacit permission of physical power projectors, and not the other way around.
This cold, hard truth is backed by a well-recorded history of many rulers being physically overthrown.

4.10.2 Two Ways for the Oppressed to Countervail Both Abstract and Physical Power Projectors

Using this newly updated control structure design, we can see two ways for members to escape oppression
if they find themselves in an abusive or exploitative resource control structure. If members are being
oppressed by either abstract or physical power projectors (regardless of whether those power projectors
are from a neighboring country or from the members’ own country), they have two ways to countervail that
control authority. The first way was originally mentioned in section 3.11: members can simply refuse to
assign value to the resources being controlled.

As an example, consider the US dollar (USD), the world reserve currency. USD is an international resource
controlled by an abstract power hierarchy backed by the world’s most powerful power projectors (the US
military). The US military legitimizes the abstract power of its presidential republic, and by virtue of the
design logic encoded into their rules of law, that presidential republic has executive control authority over
USD. While it is certainly true that both types of power projectors (the abstract power of the presidential
republic plus the physical power of the US military) have control authority over the state of ownership
and chain of custody of USD, that doesn’t compel anyone to value USD. Therein lies the key to
countervailing US power.

Hypothetically speaking, if the US were to forget how this power structure works and become systemically
exploitative and abusive with their control authority over USD (for example, if they started denying
people’s access to USD through sanctions, or debasing people’s purchasing power by inflating USD), then
members could countervail both the physical and abstract power of the US by simply not valuing USD as
their world reserve currency anymore. For this reason, the “assign value to resources” control action
which members can exercise seems small, but it is in fact very empowering. If the people in charge of the
USD were to do something which motivated members to exercise this control action and stop valuing
USD, their abstract power and control authority would disappear. It is therefore critical for the US not
to do anything that would motivate members to exercise this control action, else they risk losing their power.

The US deliberately made themselves vulnerable to this attack vector by converting USD from a physical
system into an abstract belief system in the 1970s. Like so many organizations to come before them, the
US seems to have lost sight of the value of physical constraints to abstract power. By converting USD from
a money denominated by gold into a money denominated purely by bits of information (a.k.a. fiat), the
US converted their entire monetary system into an abstract belief system with no physical constraints
securing it against systemic exploitation by high-ranking people who control the transfer and storage of
those bits of information.

Additionally, the people with abstract power and control authority over USD only have it insofar as people
are willing to believe in it, because physical power cannot secure money which doesn’t physically
exist. So not only do members have the freedom to choose not to assign value to USD, they also have the
freedom to choose not to recognize the abstract power and control authority of the people who control
USD. This means the entire USD monetary resource control system is backed by nothing but faith in the
value of the dollar and the abstract power of the people who control the dollar. Of course, people can
quickly lose their faith at any time, so it’s imperative for the US not to do anything to motivate people to
lose their faith in USD, which means it’s imperative for the US to not deny people’s access to USD or to
degrade its purchasing power. Yet, in fiat form, there’s nothing to physically constrain the US from doing
either of these things.

These power dynamics put the people who have control over USD on thin ice and make it especially
important for them to have the discipline not to deny people’s access to USD or degrade its purchasing
power. These people must be careful not to do anything to cause people to lose faith in their imaginary
power, because if they do, their abstract power and control authority over this valuable abstract resource
could quickly evaporate no matter how physically powerful the US is. This same principle applies to
virtually any form of non-essential resource. For example, many people use both physical and abstract
power to control the state of ownership and chain of custody of diamonds. Both physical and abstract
power over diamonds can be made obsolete by simply not valuing diamonds.

There are, of course, some essential resources which sapiens don’t have the option of not valuing (e.g.
food, water, oxygen). In cases where sapiens don’t have the option of escaping oppression by choosing
not to value resources, their second option to regain control authority over the state of ownership and
chain of custody of their resources is to become their own physical power projectors and impose severe,
physically prohibitive costs on their oppressors. This is the primary motivation behind all wars. Because
physical power is egalitarian and inclusive, anyone can choose to become a physical power projector
regardless of their rank or title or standing within an existing abstract power hierarchy.

4.10.3 The National Strategic Security Dilemma

Now that we have a general model of agrarian society’s resource control structure, we can review the
topic of national strategic security. Members of abstract power hierarchies have two primary
vulnerabilities: foreign invasion and internal corruption. The former usually occurs when a foreign
belligerent actor uses physical power as the basis for their attack. The latter usually occurs when a
belligerent actor uses abstract power as the basis for their attack. Either way, both vulnerabilities have
the same solution: impose severe physical costs on the attacker until they don’t have the capacity or
inclination to continue their attack.

Based on this insight, we can see that the same primordial economic dynamics which apply to wild
organisms and organizations also apply to agrarian society. This makes perfect sense considering how
agrarian society is quite literally a pack of wild animals just like any other pack animal species observed in
nature. We can therefore describe the dynamics of national strategic security the same way we described
the survivor’s dilemma in the previous chapter.

Like any other wild organism in nature, every nation has a BCRA. A nation’s BCRA is a simple fraction
determined by two variables: the benefit of attacking it (BA) and the cost of attacking it (CA). BA is a function
of how much resource abundance and control authority is offered by a nation’s abstract power hierarchy.
Nations with large economies, high levels of resource abundance, and substantial amounts of control
authority over those resources have a higher BA. On the flip side of the equation, CA is a function of how
capable a nation is of imposing severe, physically prohibitive costs on attackers. Nations with populations
that are more capable of imposing, and more inclined to impose, physically prohibitive costs on attackers have a higher CA.
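Written out as the simple fraction described above:

\[ BCRA = \frac{B_A}{C_A} \]

Anything that increases the benefit of attack (BA) raises a nation's BCRA, and anything that increases the cost of attack (CA) lowers it.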

Nations which survive and prosper are those which manage both sides of their BCRA equation effectively.
To prevent their BCRA from climbing to hazardous levels, nations must either shrink the numerator or
grow the denominator of their BCRA equation. They must either shrink their economy and control
authority to decrease BA, or they must grow CA by increasing their capacity and inclination to impose
severe, physically prohibitive costs on attackers. Decreasing the size of a nation’s economy is not an ideal
solution, so growing CA is the preferable option. If nations choose to grow their economy without growing
CA at an equal or higher rate than the rate at which BA increases, BCRA will climb. This explains why pacifism
(i.e. a decrease in a nation’s inclination to use physical power to impose physically prohibitive costs on
attackers) is such a systemic security threat. The more pacifist a nation becomes, the higher their BCRA
will climb, the more likely they are to be devoured by predators, either in the form of foreign invasion or
internal corruption.

To achieve long-term survival, nations must keep their BCRA level lower than the hazardous BCRA level of
the surrounding environment (i.e. the level which would motivate belligerents to attack). The space in
between a nation’s BCRA level and the hazardous BCRA level can be called its prosperity margin. This
margin indicates how much a nation can afford for its BCRA to rise before it risks being attacked.
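Using the same notation, the prosperity margin is simply the difference:

\[ \text{prosperity margin} = BCRA_{hazardous} - BCRA_{nation} \]

The margin is positive so long as the nation's BCRA remains below the environment's hazardous level, and it vanishes as the two converge.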

Figure 53 provides an illustration of the resulting national strategic security dilemma (note this is exactly
the same figure as Figure 16, except with a different name). As a nation’s economy becomes stronger
and more resource abundant, its BA increases. This causes the nation’s BCRA to increase and get closer to
the environment’s hazardous BCRA level. As a nation’s BCRA level approaches the hazardous level, their
prosperity margin shrinks. This creates an unfavorable dynamic where the more successful a nation
becomes, the more vulnerable it is to either foreign invasion or internal corruption. To make matters even
more challenging, a nation cannot know for sure how much prosperity margin it has, nor how quickly it’s
shrinking. This is because nobody can truly know what the hazardous BCRA level is, as it is a probabilistic
phenomenon which depends on the capacity and inclination of neighboring nations and is therefore
completely outside of a nation’s individual control. All a nation can know about their environment’s
hazardous BCRA level is that it will continuously drop as the environment becomes increasingly congested,
contested, competitive, and hostile.

Figure 53: Illustration of the National Strategic Security Dilemma

The national strategic security dilemma puts all nations into a predicament where they have the same
three response options described in the previous chapter. Option #1 is to do nothing to counterbalance
the effect of their increasing BA. The upside to this strategy is that it is more energy-efficient (this is
because the population effectively ignores its security responsibilities). The downside of this strategy is
that it causes the nation’s BCRA to continue rising ad infinitum, shrinking prosperity margin until the point
where the nation is virtually guaranteed to be invaded or internally corrupted.

Options #2 and #3 represent strategies where a population doesn’t ignore their security responsibilities
and uses physical power to impose severe physical costs on attackers (i.e. increasing CA). The difference
between option #2 and option #3 is that option #2 only grows CA at the same rate as BA grows, causing
the nation’s BCRA to remain fixed. Unfortunately, this will still cause prosperity margin to continue
shrinking because it does not account for the fact that the environment’s hazardous BCRA level
continuously falls as it becomes increasingly congested, contested, competitive, and hostile. Option #3
remedies this flaw by endeavoring to increase CA faster than BA grows, causing BCRA to fall and prosperity
margin to grow, assuming the nation succeeds at increasing their CA fast enough to out-pace their
environment’s falling hazardous BCRA level.
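To make the difference between the three options concrete, consider a minimal simulation sketch. The helper name final_margin, the growth rates, the starting values, and the assumption that the hazardous BCRA level falls by a fixed one percent per year are all arbitrary illustrative choices, not figures from this chapter:

# Illustrative sketch only: every number below is an arbitrary assumption chosen
# to show the qualitative dynamics of the three response options, where
# BCRA = BA / CA and the environment's hazardous BCRA level is assumed to fall
# by a fixed percentage each year.

def final_margin(ba_growth, ca_growth, years=50):
    """Simulate one nation for `years` and return (BCRA, hazard level, prosperity margin)."""
    ba, ca, hazard = 1.0, 2.0, 1.0      # arbitrary starting values (initial BCRA = 0.5)
    for _ in range(years):
        ba *= 1 + ba_growth             # the economy grows, so the benefit of attack grows
        ca *= 1 + ca_growth             # investment (or not) in the cost of attack
        hazard *= 0.99                  # the environment grows ~1% more hostile per year
    bcra = ba / ca
    return bcra, hazard, hazard - bcra

# Option #1: grow BA, ignore CA          -> BCRA climbs past the hazardous level
# Option #2: grow CA at the same rate    -> BCRA stays flat, but the margin still shrinks
# Option #3: grow CA much faster than BA -> BCRA falls and the margin grows
for label, ba_g, ca_g in [("Option #1", 0.03, 0.00),
                          ("Option #2", 0.03, 0.03),
                          ("Option #3", 0.03, 0.08)]:
    bcra, hazard, margin = final_margin(ba_g, ca_g)
    print(f"{label}: BCRA={bcra:.2f}, hazardous level={hazard:.2f}, margin={margin:+.2f}")

Under these arbitrary assumptions, the starting prosperity margin is 0.5: Option #1 ends deep in negative margin, Option #2 holds BCRA flat yet still watches its margin shrink because the hazardous level keeps falling, and only Option #3 ends with more margin than it started with.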

Not surprisingly, the best strategic move a nation can make to solve the national strategic security
dilemma is the same move any living organism or organization can make to solve the survivor’s dilemma:
option #3. As illustrated by the agrarian fossil record and thousands of years of written testimony by
survivors, if an agrarian population wants to survive and prosper, they need to endeavor to master their
capacity and inclination to project physical power so they can continually increase CA and buy themselves
as much prosperity margin as possible. This creates a national strategic Schelling point for all nations to
vector a portion of their resources towards increasing their capacity to impose physical costs on attackers (i.e. to grow CA). Unfortunately, this
Schelling point causes the surrounding environment to become increasingly more contested, competitive,
and hostile, causing the environment’s hazardous BCRA level to fall faster. This creates a self-reinforcing
feedback loop which makes it increasingly more imperative for nations to continue increasing their CA and
lowering their BCRA as much as they can afford to do so.

The emergent effect of this self-reinforcing feedback loop is the same as what we observe in nature.
Agrarian societies grow in size and scale, organizing in larger and increasingly clever ways, and developing
increasingly clever power projection tactics. They focus much of their time, attention, and resources on
discovering and adopting dual-use power projection tactics which help them manage both sides of their
BCRA equation, just like the behavior observed with the evolution of life. Just as these dynamics explain
why nature’s top surviving wild animals are often fierce-looking and tough, they also explain why the most
successful nations with the best-performing economies often have the largest and most successful
militaries. Eventually, this power projection game scales into what we see today with massive-scale
militaries and extraordinary power projection capabilities. Thus, the same dynamics which explain power
projection in nature simultaneously offer us a simple explanation for how and why behaviorally modern
sapiens scaled their physical power projection capacity to the point of risking nuclear annihilation.

4.11 Mutually Assured Destruction

“Because of your leaders’ refusal to accept the surrender declaration that would enable Japan to
honorably end this useless war, we have employed our atomic bomb... Before we use this bomb
again to destroy every resource of the military by which they are prolonging this useless war,
petition the emperor now… Act at once or we shall resolutely employ this bomb and all our
other superior weapons to promptly and forcefully end the war.”
US Warning Leaflets dropped on Japanese Cities Following the Atomic Bombing of Hiroshima [114]

4.11.1 Cooperate or Die

Anthropologists like Peter Turchin have outlined a compelling case that more than 10,000 years of
warfighting have produced enough data to indicate a causally inferable relationship between warfighting
and the size and scale of human cooperation. He asserts that sapiens became the world’s greatest
cooperators as a direct result of the existential imperative caused by agrarian warfighting. Additionally,
thanks to their increasingly higher levels of cooperation, less of the human population is required to
participate in warfighting.

To support this theory, Turchin developed a methodology to measure the size and scale of human
cooperation and trace how the number of people in polities has grown since the dawn of the Neolithic
age, which he then compared to the proliferation of miscellaneous warfighting technologies. He was able
to build a model which accurately predicts the size, scale, and timing of human civilization’s growth based
on nothing more than the development and proliferation of their warfighting technologies. [22] The
model accurately predicts where and when the largest, most cooperative, and most prosperous human
societies reigned.

Based on experiments like this and other forms of cultural evolutionary analysis, Turchin argues that there
could be a causal relationship between warfare and cooperation which drove human cooperation to
scales far exceeding the largest colonies of organisms observed in the wild, even in comparison to
organisms which are famous for having highly cooperative colonies, like ants and termites.

Turchin also argues that warfare creates a counterintuitively positive dynamic of creative destruction
where agrarian society channels its resources to the most stable, cooperative, and productive societies
capable of sustaining high-functioning cooperative relationships for the longest amounts of time. In other
words, warfare creates trickle-up dynamics where natural resources flow to the most well-organized and
cooperative civilizations.

While Turchin’s theory may be novel from an anthropological perspective, it should not be surprising to
anyone who understands the core concepts of power projection in nature and the “innovate, cooperate,
or die” dynamics of predation discussed in the previous chapter. What Turchin describes in his theory
appears to be nothing more than the continuation of a four-billion-year-old trend of power projection
tactics in nature. It is incontrovertibly true that living organisms have been evolving into increasingly more
organized, cooperative, and innovative creatures as a direct response to the existential threat of predation
since at least the first bacteria discovered phagocytosis: the capacity to subsume or “eat” other bacteria.

As discussed in the previous chapter about power projection tactics in nature, cooperation is first and
foremost a physical power projection tactic which emerged at least as early as 2 billion years ago. To
project more power and to make it harder to justify the physical cost of attacking them, organisms simply
sum their individual physical power together using cooperation. The concept is very simple, but quite
difficult to execute – especially on large scales. The simple truth of the matter is that teamwork is hard.

It’s extraordinarily difficult to get a bunch of organisms to trust each other and work together effectively
to mutually increase their CA and lower their BCRA. Cooperation requires pack animals to come up with
solutions for very challenging questions, and one of the most difficult challenges they face is the
existentially important challenge of establishing a pecking order. In a world filled with predators, entropy,
and limited resources where organisms must learn how to cooperate to survive, who gets feeding and
breeding rights? What’s the best way to divvy up the pack’s limited resources to ensure they have the
best chances for survival as a team, rather than as individuals?

A repeating theme of this chapter is that there’s no reason to believe sapiens are special exceptions to
the same struggles faced by other organisms in nature. It would be difficult to make the argument that
sapient organizations wouldn’t benefit from the same trend of predation in precisely the same way as the
wild animals from which they evolved – a way that’s quite simple to empirically validate by spending some
time outside observing wild animals. It seems natural that sapiens would learn that the best way to survive
against predators and entropy is to cooperate together at increasingly higher scales. Sapiens would also
experience the same cooperation challenges. Therefore, does it make sense to believe that humans
wouldn’t adopt the same pecking order strategy that practically all other organisms with the same
problem have adopted? Why wouldn’t humans feed and breed the most physically powerful and
aggressive members of their tribe if that’s what pack animals have independently verified as an ideal
survival strategy?

If you have made it this far, then you now have a deep-seated appreciation for how sapiens have
attempted to deal with the “pecking order problem” differently than other animals. Instead of using
“might is right” or “feed and breed the physically powerful first” as their pecking order strategy, sapiens
use their imagination to come up with abstract pecking orders where people with imaginary power get
first dibs rather than the people with real power. Why exactly is that? The author has now offered two
potential explanations. The first was because it uses less energy and the second was because it’s less
physically destructive. However, here Turchin provides another viable explanation: because it scales
human cooperation.

There is theoretically no limit to how many people can choose to believe in the same thing. Thus, there’s
theoretically no limit to how far an abstract power hierarchy can expand its control over resources. As
long as people are willing to adopt the same belief system, there’s no limit to how far humans scale their
cooperation (for all we know, there could have already been a galactic-scale empire long ago in a galaxy
far, far away that figured out how to scale their cooperation to the galactic level). The takeaway is that
abstract power hierarchies have essentially zero marginal cost to scale because belief systems are
essentially free to adopt.

Because abstract power hierarchies represent nothing more than a belief system, all it takes to build an
abstract power hierarchy is to simply share an idea. This becomes exceptionally easy to do when there is
an existential imperative to adopt a particular belief system. For sapiens, the “cooperate or die” dynamics
of survival turn into “believe in this idea or die.” Therefore, out of existential necessity and for the sake of
self-preservation (particularly in response to the threat of invasion), sapiens are compelled to adopt
increasingly larger-scale belief systems because if they don’t, they’re much less likely to survive against
other sapiens who adopt increasingly larger-scale belief systems. This dynamic appears to be what led
agrarian society towards adopting massive-scale belief systems like nation states, as well as the national
power alliances formed by multiple nation states (e.g. European Union, NATO).

200,000 years ago, humans traveled in foraging bands comprised of tens of people. 10,000 years ago,
sapient populations increased to farming villages comprised of hundreds of people. 7,500 years ago,
sapiens developed simple chiefdoms (small cities) comprised of thousands of people. 7,000 years ago,
these simple chiefdoms turned into complex chiefdoms comprised of tens of thousands of people. 5,000
years ago, the first archaic states emerged and were comprised of hundreds of thousands of people. Five
hundred years after that, human cooperation exploded into macro-states comprised of millions of people.
2,500 years ago, the first mega-empires emerged, comprised of tens of millions of people. Remarkably,
the large nation states we live in today are merely two hundred years old. [22]

Abstract power hierarchies clearly have the ability to scale quickly because ideas have essentially no
marginal cost. This makes them a useful countermeasure against nations attempting to scale their physical
power projection capabilities. Abstract power hierarchies help humans coordinate their efforts better
(assuming the people with abstract power are competent and trustworthy). And due to the constant
existential threat of warfare, humans are forced into accepting these belief systems where people have
imaginary power over them, because they need high levels of cooperation to survive in a world filled with
predators and entropy.

These dynamics imply that sapiens are placed into a strategic pickle where they must choose which
abstract power hierarchy they want to subscribe to, or else they will find themselves fending for
themselves alone in the wild. Except our Paleolithic ancestors weren’t helpless in the wild; they had the
freedom and confidence to roam the continents hunting and gathering. They didn’t have to worry about
massive-scale nuclear-armed tribes run by dictators on abstract power trips.

Turchin’s anthropological theory is a straightforward argument which matches biological theory quite
nicely. His theories seem intuitive to people who study nature with scientific and amoral rigor, but they can
be unintuitive to (domesticated) sapiens who have adopted the habit of believing they’re “above” the
routine physical confrontations observed in nature, or believing that physical conflict and aggression (both
of which are very common behaviors observed in every size, scale, and corner of the wild and rewarded
by natural selection) are immoral according to whatever their individually subjective and unfalsifiable
definition of moral is.

If we combine these concepts with the core concepts presented in the previous chapter about
power projection in nature, then we can see how the emergent behavior of agrarian society is precisely
the same as the primordial economic dynamics observed in evolutionary biology. This means we can
summarize the complex emergent behavior of national strategic security and the phenomenon of
warfighting using simple bowtie notation. To that end, Figure 54 shows a bowtie representation of
national strategic security dynamics. Here the author illustrates how it’s possible to boil the complex
dynamics of 10,000 years of agrarian warfighting down to a simple little illustration where organisms pair
up with other organisms and adopt increasingly larger-scale abstract power hierarchies so they can
increase their ability to impose physical costs on neighboring organizations, all for the sake of surviving
and prospering in a congested, contested, competitive, and hostile environment filled with predators and
entropy.

Figure 54: Bowtie Notation of the Primordial Economic Dynamics of National Strategic Security

The top left portion of this figure shows individual nations projecting national power. Each nation has their
own individual BA and CA and thus their own individual BCRA. To increase their ability to survive against
predators and entropy, they learn how to organize and cooperate and sum their CA together, as shown on
the top right portion of the figure. By summing their CA together, they create a Schelling point where other
nations must sum their CA together or else their individual BCRA will be too high. Fast forward these
dynamics across thousands of years and we arrive at the situation shown at the bottom of this figure,
where alliances of individual nations double as organizations in and of themselves (e.g. NATO), comprised
of billions of people endlessly searching for increasingly clever ways to project power, sum it together,
and use it as a mechanism to make it impossible for adversaries to justify the cost of a fight.

So there you have it. Sapiens are exactly the same as any other multi-cellular organism learning how to
stick together through colonization or clustering. Like archaea, little sapient conquerors go around
capturing cities, forcing them to stick together, and turning them into city states. A few generations pass,
and another sapient conqueror goes around capturing city states, forcing them to stick together, and
turning them into nation states. Each time this happens, the pressurized membrane surrounding each city
state (i.e. their militarily-enforced borders) grows longer and stronger, increasing their capacity to project
power. Under the mutually beneficial security of these increasingly powerful borders, colonies of
clustered humans grow increasingly more interdependent and reliant on each other for vital nutrients,
materials, and gene swapping. They are able to form highly complex structures, self-assembling into
increasingly more specialized workforces, trading various goods and services and becoming ever more
efficient, productive, and resource abundant. Through this special combination of robust security and
high-functioning internal economy, sapiens were able to follow a multistep biological path towards ever-
increasing structure until they managed to self-assemble into the complex, massive-scale economies we see
today.

4.11.2 Warfare is a Self-Perpetuating Process that Naturally Increases in Size and Destructive Power

A primary takeaway from Turchin’s theory on the relationship between warfare and cooperation is that
humans are compelled out of existential necessity to adopt common abstract belief systems which make
them more capable of cooperating and combining their power projection capabilities together for mutual
survival and prosperity. Expanding on Turchin’s theory, it’s possible to make two follow-on observations.
The first observation is described in this section, the second observation is described in the next section.

First, the size and asymmetry of modern agrarian abstract power hierarchies (i.e. governments) appear to
be a direct result of human warfighting. This is a subtle but remarkable observation to make because it
implies that warfare is its own root cause. If Turchin’s theory is valid, then we can thank the existential
stress of warfare for being a reason why abstract power hierarchies have more asymmetric power and
control authority over valuable resources than ever before. But at the same time, we also know that
dysfunctional abstract power hierarchies are one of the primary motivations behind warfare. Add these
two insights together, and we can generate a profound observation that the root cause of warfare is the
emergent effect of warfare. In other words, warfare causes itself.

Warfare creates an existential imperative for people to adopt increasingly larger (and thus more
dysfunctional and vulnerable to systemic exploitation) abstract power hierarchies which create
increasingly larger security hazards capable of leading to increasingly larger losses. Dysfunctional abstract
power hierarchies motivate people to wage wars, which are won by adopting larger-scale abstract power
hierarchies (e.g. national power alliances) to scale cooperation and sum enough physical power together
to win the war. This creates a cyclical, self-perpetuating process where civilization learns to cooperate at
higher scales, but also learns to fight at increasingly larger and more destructive scales, driving them to
adopt increasingly more systemically insecure and hazardous belief systems. Through this spiral, what
starts as a comparatively minor territorial dispute over irrigated land between Neolithic chiefdoms hurling
spears at each other apparently snowballs into modern nation states pointing nuclear intercontinental
ballistic missiles at each other (a competition which is now expanding extraterrestrially into space and cislunar orbit).

If the author’s description of national strategic security dynamics is accurate, then it’s quite ironic. It
means that even though agrarian society uses warfare to physically constrain and decentralize abstract
power hierarchies and prevent the world from falling under the rule of a single ruling class, the emergent
effect of that activity causes the size of agrarian society’s abstract power hierarchies to grow and gain
asymmetrically higher amounts of control authority over everyone, making populations increasingly more
vulnerable to systemic exploitation from their own ruling classes. To put it more simply, in our attempts
to physically constrain and decentralize our neighbor’s ruling class so that they can’t exploit us with their
imaginary power and resource control authority, we inadvertently make ourselves more vulnerable to
exploitation and abuse from our own ruling class. By winning world wars against our neighbors, we create
massive-scale abstract power hierarchies with extraordinarily asymmetric amounts of abstract power and
control authority over our most valuable resources (thus high BCRA), and we attract systemic predators
like moths to a flame. We practically lay down a red carpet for predators and invite them to come
systemically exploit the common belief systems we have to adopt to win our wars, which inevitably causes
our belief systems to break down and lead to more war. It’s a vicious, tragic cycle.

4.11.3 Nation States are an Untested Multicellular Organism Living in the Wild

A second follow-on observation from Turchin’s theory is that nation states are relatively untested in
nature. For 99.9% of the time that anatomically modern sapiens have walked the Earth, they have not
lived in nation states. For 99.6% of the time that behaviorally modern sapiens have been thinking
abstractly and constructing shared imaginary ideological constructs for themselves, they have not
believed in nation states. Abstract power hierarchies have been around for thousands of years, but these
abstract power hierarchies have never been large enough to qualify as nation states and have never been
as asymmetrically powerful (thus as systemically insecure) as they are today. We feel like nation states
have been around forever because enough time has passed where we (and all the people we have ever
met) have only ever lived in a world governed by abstract power hierarchies large enough to qualify as
nation states. But comparatively speaking, nation states are very new, and we don’t know if they’re a
properly-functioning, long-term survival strategy for our species.

Nation states have clear systemic security flaws. They inherit all the flaws of abstract power hierarchies
discussed at length in this chapter, and then magnify them to unprecedented scale. It is incontrovertibly
true that people who are given asymmetric abstract power and control authority over people’s valuable
resources can’t be trusted not to exploit or abuse that abstract power. We have thousands of years of
written testimony of people exploiting and abusing their abstract power. Based on that same written
testimony, we also know it is incontrovertibly true that attempts to logically constrain abstract
power by encoding rules of law are demonstrably insufficient. Nevertheless, modern agrarian society has
scaled this repeatedly-dysfunctional belief system to the point where hundreds of millions of people (even
billions of people in some cases like China) must trust hundreds of people not to exploit their abstract
power and control authority over the population’s most valuable resources.

Why would people agree to this? If Turchin’s theory is valid, it’s because it’s existentially necessary for
survival. This would imply that modern agrarian society effectively backed itself into a corner with another
strategic Schelling point. We must adopt massive-scale abstract power hierarchies because it is the only
way to survive against our neighbor’s massive-scale abstract power hierarchies. But as a result of adopting
these massive-scale abstract power hierarchies, we entrap ourselves.

Are nation states really a good idea for agrarian society in the long-term? How can we know? Nation states
didn’t emerge until the last 0.1% of our anatomically modern time here on Earth. Our nation states and
their corresponding national power alliances are practically untested belief systems. We don’t yet know
how well they will enable sapiens to survive in the wild (particularly survival against ourselves). It’s
possible they could backfire on us just like so many other emergent power projection tactics have
backfired on other lifeforms over the past several billion years. We can’t yet know whether these types of abstract
power hierarchies can function properly at this scale because we simply don’t have enough data.

However, we do have enough data to conclude that at smaller scales, abstract power hierarchies are
highly prone to exploitation and abuse. Why are they so vulnerable? Because of metacognition; because
humans live according to their abstract thoughts and imaginations and symbolic knowledge, not according
to experiential knowledge or what they can gain from shared objective physical reality. Abstract power
hierarchies are fundamentally belief systems, and all sapiens are vulnerable to psychological exploitation
and abuse of their belief systems. Sapiens can and routinely do allow themselves to be systemically
exploited in, from, and through their belief systems – especially when those belief systems involve highly
asymmetric amounts of abstract power and control authority over valuable resources.

So far, all we have to go on to determine if nation states are a good idea is a statistically insignificant sample
of about 200 years’ worth of data. That data is inconclusive, to say the least. It’s filled with many of the
best examples of sapient aspiration and achievement that would have never been possible if it weren’t
for the extraordinarily high scales of cooperation that nation states enabled. Through the cooperation and
coordination of our nation states, we walk on the moon and build international space stations. But at the
exact same time, we carpet bomb cities and drop nuclear warheads on them. Many of the worst examples
of destruction ever experienced by sapiens have occurred in the past 200 years.

4.11.4 Like Cyanobacteria, Sapiens may Have Discovered A New Type of Bounded Prosperity Trap

Nevertheless, no matter what our opinions about nation states are, they’re essentially irrelevant to the
subject of security. Nation states may be a few hundred years old, but the first principles dynamics of
security are billions of years old. National security is therefore the same physical power projection game
that has been played for billions of years, but with a different name and different branding. This makes
national security a very straightforward process. If you want to know how to be good at national security,
simply study nature and observe the behavior of nature’s top survivors.

To be good at national security, make your nation better organized, cooperative, and innovative so that
it can find increasingly clever ways to project physical power against neighbors. Like any other type of
organism in the wild, nations must strive to continually evolve if they want to keep themselves secure
against predators and entropy. The more they can continually increase their capacity to impose severe
physical costs on neighboring nations in increasingly clever ways, the more they can make it impossible
to justify the cost of attacking them. The more a nation can make it impossible to justify the cost of
attacking them, the more prosperity margin it can buy for its internal population. At scale, this survival
strategy produces the same dynamics discussed in section 3.8 where nations must chase after infinite
prosperity by continually searching for new and innovative ways to increase the cost of attacking them ad
infinitum. It should be no surprise that this process led agrarian civilization right to the brink of nuclear
annihilation.

In the previous chapter, the author discussed how organisms sometimes struggle to find increasingly
clever power projection tactics, techniques, and technologies, making them incapable of countervailing
predators and entropy. The author named this situation a “bounded prosperity trap.” One of the most
dramatic examples of a bounded prosperity trap was the example provided about life’s mass extinction
event called The Great Oxygenation Event. As a refresher, cyanobacteria discovered an innovative tactic
called photosynthesis, but it backfired on them by covering the world with highly combustible oxygen and
setting themselves and the oceans ablaze for millions of years.

Fortunately, life had the ingenuity to learn how to (literally) stick together and cooperate at increasingly
higher scales. Organisms continually experimented with different types of power projection tactics until
they were able to discover the right countermeasures needed to countervail the blaze and escape their
fiery hell. This was perhaps the most compelling display of life’s rebellious “do not go quietly into the
night” ethos against the cold and unsympathetic nature of the Universe. Entropy quite literally lit a flame
under life’s hindquarters, and life responded not by giving up, but by becoming stronger and more
intelligent and more powerful than it had ever been before. Cyanobacteria crawled out of their fiery
hellscape by innovating, and they exited their bounded prosperity trap with multicellular membranes and
many other innovations to which we owe our everlasting gratitude today, billions of years later.

With the concept of bounded prosperity traps fresh in mind, let’s examine the state of our world today.
The author asserts that life appears to have found its way into yet another bounded prosperity trap. This
time, sapiens sprung the trap and placed much of life on Earth into yet another fiery hellscape: a perpetual
state of global-scale agrarian warfare and now a looming threat of nuclear extinction. Herein lies a core
hypothesis of the author’s theory on softwar grounded in theoretical concepts from biology, psychology,
anthropology, political science, game theory, and systems security theory.

4.11.5 A Recurring Contributing Factor to Warfare is the Hubris to Believe it isn’t Necessary

With extraordinary hubris, sapiens appear to believe they can outsmart natural selection and find a viable
alternative to physical power for settling disputes, managing internal resources, and establishing a pecking
order using nothing more than their imaginations. They linked their prefrontal cortices together through
storytelling and adopted imaginary points of view where physical power isn’t needed to settle their
disputes, manage their internal resources, and establish their pecking order. They made faulty design
assumptions and adopted systemically exploitable belief systems where people with abstract power
placed at the top of abstract power hierarchies are allowed to settle their disputes, control their resources,
and decide the legitimate state of ownership and chain of custody of their most valuable property.

Like cyanobacteria and photosynthesis, sapiens and their abstract power hierarchies were a remarkable
innovation that helped them achieve unprecedented levels of resource abundance. But also like
photosynthesis, sapient abstract power hierarchies literally backfired on them and set the world ablaze
by creating an emergent phenomenon we call warfare. Now life appears to be on the precipice of yet
another mass extinction event, where it must figure out a way to stick together if it wants to escape
this new bounded prosperity trap.

By choosing to believe in abstract power and becoming overreliant on abstract power hierarchies as an
ostensibly “peaceful” alternative to settling their disputes, managing internal resources, and establishing
their pecking order, the belief systems sapiens design to avoid warfighting appear to be a leading cause
of warfighting. In yet another tragic example of irony, the human desire to avoid physical conflict to settle
small-scale policy and property disputes repeatedly cascades into massive-scale physical conflicts. By
trying to avoid the use of physical power as the basis to settle disputes and manage resources, sapiens
use their oversized foreheads to adopt belief systems which make them vulnerable and incapable of
surviving in a congested, contested, competitive, and hostile environment filled with predators and
entropy. In their attempts to avoid the energy expenditure and injury risk of human-on-human physical
conflict, sapiens seem to inadvertently contribute to creating more of both.

Some agrarian populations become so self-domesticated by their imaginations that they forfeit their
capacity and inclination to project physical power altogether. Not surprisingly, a population which doesn’t
believe in using physical power because of ideological reasons is a population incapable of protecting
themselves against invaders who don’t share the same ideologies. Alternatively, these populations
become so docile and unsuspecting that they allow themselves to be exploited on a massive scale through
their own ideologies. As history has shown, many populations would sooner worship oppressive god-kings
who literally brand and herd them like domesticated animals than project physical power to physically
secure themselves and the property they value against systemic exploitation and abuse. In their desire for
peace, they become oppressed.

People keep thinking their laws will keep them secure. They keep subscribing to demonstrably flawed
beliefs that logical constraints encoded into laws and signed by people with abstract power are sufficient
to protect them against systemic exploitation and abuse, and they inevitably find themselves
entrapped in states of major inequality and oppression with no ability to recognize the source of the trap,
thus no hope of escaping it. They mentally ensnare themselves by believing that logical constraints are
viable replacements for physical constraints as a mechanism for keeping themselves and their property
secure. They make no effort to understand the difference between logical and physical constraints, nor the
difference between imaginary power and real power, and they get devoured.

Surviving societies which don’t get devoured are the societies which figure out the source of their
vulnerabilities, call on their compatriots to take arms with them, and make it impossible to justify the
physical cost of either physically invading them or systemically exploiting their belief system. They learn
how to organize better, cooperate at larger scales, and invent innovative technologies to win their battles.
When an endangered society fights off the threat of a neighboring abstract power hierarchy, it’s called a
war. When an endangered society fights off the threat of exploitation by their own abstract power
hierarchy, then if they win it’s often called a revolutionary war. If they lose, it’s often called a civil war.

Either way, war is war. It’s the same power projection game, with a different name. There is nothing
different happening in shared objective physical reality when people engage in physical power
competitions and call it regular warfare, civil warfare, or revolutionary warfare. The same species uses the
same physical power projection tactics, techniques, and technologies. The primary difference is the stories
they tell – the abstract thoughts people use to motivate themselves to organize and impose severe
physical costs on each other.

In every war, the cause people fight for is often imaginary; people frequently fight for nothing more than
a belief system. Trapped behind the cage of their overpowered, overactive neocortices, sapiens construct
abstract realities indistinguishable from physical reality which are more meaningful and satisfying for
them. Their imaginary mental models of the world become so important to them that they will gratefully
line up and die for them. Their abstract thoughts completely overpower their instincts not to harm their
own kind, and they commit unnatural and unprecedented amounts of intraspecies fratricide, gutting and
mangling each other for the sake of “good” or “god” or “government” because people can’t seem to come
to global consensus about what these things mean or what the correct design for them is.

And despite all this suffering, as much as people hate to admit it, warfare keeps resurfacing because it has
complex emergent social benefits. But at the same time, it’s also clearly not beneficial because it creates
a self-reinforcing dynamo of self-destruction. The more societies go to war with each other, the more they
must cooperate at higher scales by forming larger and more exploitable abstract power hierarchies. This
cooperation allows them to sum more power together to accomplish extraordinary achievements like
winning world wars and traveling through space. But the more they create asymmetric imaginary power
and authority over larger amounts of resources, the more asymmetrically advantageous it becomes for
self-serving sociopaths to exploit populations through these belief systems, or to invade them.

The more people get exploited through their belief systems, the more motivated they become to cry
havoc and let slip the dogs of war once again, to restore society back to an acceptable state of systemic
security. But the more people fight wars, the more they must cooperate at larger scales by creating larger
abstract power hierarchies. This creates a larger window of opportunity to be systemically exploited at
even larger scales, which must be resolved using larger scales of physical conflict to impose larger amounts
of physically prohibitive costs on those exploiting them.

If this is starting to sound repetitive, that’s intentional. The author is illustrating to the reader that war is
not only cyclical, it’s tragically predictable. On and on the dynamo of self-destruction turns, the same
process with the same root causes repeating itself ad infinitum and ad nauseum, snowballing over tens of
thousands of years to the point where sapiens subscribe to belief systems which make it socially
acceptable to bomb cities to secure themselves against the imaginary belief systems adopted by the
people living in the cities being bombed. And what do these people believe? They believe there is a viable
substitute to physical power as the basis for settling their disputes, establishing control authority over
their resources, and achieving consensus on the legitimate state of ownership of property. They believe
there is a moral or ethical alternative to physical power – that we could live in peace so long as we let
them define what “right” means, give them asymmetric abstract power and control authority over our
resources, and entrust them not to exploit it.

New alliances are formed at increasingly remarkable levels of cooperation to win global-scale wars against
self-serving sociopaths, only for those alliances to make larger populations more systemically vulnerable
to future generations of self-serving sociopaths who wear lapel pins and shake their hands in the air as
they tell stories and try to convince people they know what’s “right.” People line up to subscribe to these
belief systems and let themselves get exploited at unprecedented scale, so entrapped behind their
abstract thoughts and imaginary realities that they’ll do nothing to stop it. Through their belief systems,
billions of people allow themselves to be gaslighted and systemically exploited in broad daylight by
increasingly brazen god-kings. Onwards, agrarian society stumbles over itself like brain-dead zombies,
dead men walking toward larger, more devastating wars until they finally hit the brink of thermonuclear
annihilation.

Imagine if, instead of trying to replace physical power with abstract power, people learned to accept
that there is no replacement for physical power and instead sought to replace kinetic power with electric
power. If we could figure out a way to use non-lethal, electric warfare that still enabled global-scale
strategic power competitions to settle disputes, manage resources, and establish a pecking order, then
we might be able to break this cycle. If only there were a global-scale physical power projection technology
out there to which society had zero-trust, egalitarian, and permissionless access… enter Bitcoin.

4.11.6 Warfare could be Described as a Blockchain

Study war long enough and it starts to look as predictable as clockwork. The self-reinforcing feedback loop
of flawed human belief systems is so reliable that it seems like we could set our watch to it. Populations
adopt abstract power hierarchies and stop projecting physical power, causing their BCRA to increase
beyond a safe threshold, creating opportunities where populations will revert back to their primordial
instincts to either capture high-BCRA resources or impose high physical costs on attackers. A lot of watts
(and lives) will be expended until populations have sufficiently lowered their BCRA, settled their disputes,
established control authority over their resources, and achieved consensus on the legitimate state of
ownership and chain of custody of their property.

A block of time will pass where people can enjoy a reprieve from this global-scale physical power
competition. This reprieve is so revered it’s given a special name: “peace.” And then when enough time
has passed for people to become complacent, the population will forget how painfully predictable they
are, their BCRA will climb, the predators will return, and the whole process starts over again. These blocks
of time link together linearly, forming a chain of time blocks, or a blockchain. The winners of this
continuous global power competition are given the privilege of writing history, which is nothing more than
a globally distributed ledger that keeps account of who has control of what and what the general state of
consensus is regarding the legitimate state of ownership and chain of custody of the world’s resources.
This clockwork behavior of modern society is illustrated in Figure 55.

Figure 55: How the Cycle of War Creates a “Blockchain”

Countless people have been sent to early graves, warning future generations through bone trails dug up
thousands of years later. Countless other warnings have been written into the pages of history by the
survivors. For thousands of years, people of the past have been trying to warn the people of the future
that something isn’t working. Our belief systems are clearly flawed; they clearly don’t work the way we
wish they would. Countless times people have proven through their actions that there is no viable
substitute for physical power for any population who wants to live free from the threat of foreign invasion
or who wants to remain systemically secure against the threat of corrupt, self-serving sociopaths who
psychologically exploit and abuse people through their belief systems.

In our extraordinary hubris, we believe we can do better, and we ignore the warnings of our predecessors.
We keep scaling our beliefs about abstract power to the point where those beliefs become a clear and
present danger to the survival and prosperity of our species. By convincing ourselves early in the Neolithic
age that we are above nature, that we don’t need physical power to establish our pecking order, we
appear to have set ourselves on a path to destroy ourselves. We keep telling ourselves a lie that we don’t
have to spend the energy or risk injury to use physical power to establish our pecking order like the
animals we domesticate. We keep avoiding fights when and where they should have happened, where
energy expenditure and injury would have been minimal. We keep kicking the can down the road, hoping
to avoid physical conflict for moral or ethical reasons until the hazards blossom into such extraordinary
levels of dysfunction and loss that they must be resolved using far more energy and causing far more
destruction and injury. And now we appear to have backed ourselves right into a corner where it has
become too costly to physically settle our biggest property and policy disputes.

4.12 Humans Need Antlers

“Peace? No peace.”
The Alien from Independence Day [115]

4.12.1 Hitting a Kinetic Ceiling

Caught in a bounded prosperity trap, agrarian society appears to have reached the crescendo of this
10,000-year-long song by running it straight into a kinetic ceiling. In our endeavor to use an imaginary replacement for physical power as the basis for settling our disputes, managing our resources, and establishing our pecking order, we ironically seem to have scaled our capacity to impose physical costs on attackers to the point where it is no longer practically useful as a mechanism to keep our property and policy physically secure against systemic exploitation.

Sapiens are extraordinarily clever and resourceful. They are constantly finding new and innovative ways
to be faster and more efficient at solving their problems. One of agrarian society’s biggest problems is the
survivor’s dilemma, also known as national strategic security. To solve this problem, sapiens are
continually searching for increasingly clever and more efficient ways to impose severe, physically
prohibitive costs on others to make it impossible to justify the cost of attacking them. But there’s a catch:
it is theoretically possible for sapiens to become so efficient and resourceful at imposing severe, physically
prohibitive costs on each other, that it defeats its own purpose. The author asserts this could be what
happened with the invention of strategic nuclear warheads.

The evolution of human physical power projection tactics can be visualized in graphs like Figure 56 using
two evaluation criteria: (1) how efficient they are, and (2) how much physical power they have the capacity
to produce. The efficiency of a given physical power projection technology is a function of how much
physical power it can project divided by the cost required to produce that power (cost can be measured
several ways, to include money or casualties). The more efficient physical power projection technology is,
the easier it becomes to project large quantities of power on a potential attacker to make it impossible to
justify the cost of an attack. In other words, the more efficient power projection technology becomes, the
better it becomes at growing CA, reducing BCRA, and buying prosperity margin for the nation utilizing that
technology.
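
To make this relationship concrete, the sketch below (illustrative only; the numbers and variable names are hypothetical and not drawn from the author's data) expresses efficiency as power projected per unit of cost and shows how rising efficiency lets a defender impose a larger physical cost on attackers from the same budget, driving BCRA down:

    # Illustrative sketch only: hypothetical numbers, not the author's data.
    # Efficiency of a power projection technology, per the definition above:
    # physical power projected divided by the cost required to produce it.

    def efficiency(power_watts: float, cost: float) -> float:
        """Power projected per unit of cost (cost could be money, casualties, etc.)."""
        return power_watts / cost

    def bcra(benefit_of_attack: float, cost_of_attack: float) -> float:
        """Benefit-to-cost ratio of attack: the lower it is, the harder an attack is to justify."""
        return benefit_of_attack / cost_of_attack

    security_budget = 1_000.0        # hypothetical units of cost spent on defense
    benefit_to_attacker = 5_000.0    # hypothetical value an attacker stands to gain

    # Three hypothetical technologies, each projecting more power for the same cost.
    for power_projected in (1_000.0, 10_000.0, 100_000.0):
        eff = efficiency(power_projected, security_budget)
        print(f"efficiency={eff:g} W per unit cost, "
              f"BCRA={bcra(benefit_to_attacker, power_projected):.3f}")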

Figure 56: Evolution of Physical Power Projection Technologies Developed by Agrarian Society
[88, 89, 116, 117, 118, 119, 120, 121, 122]

Strategic nuclear warheads and deterrence policies like mutually assured destruction suggest it is possible
to engineer physical power projection technologies that are so efficient, they’re not practically useful. If
multiple nations have the means to place thermonuclear warheads in multiple independently targetable reentry vehicles sitting on top of intercontinental or even cislunar ballistic missiles, that suggests we have become
so efficient at projecting kinetic physical power on each other that it represents an existential threat to
the species. In yet another display of irony, human ingenuity appears to have made kinetic physical power
so inexpensive to project (in terms of size, weight, matter, and monetary resources) that it has become
too expensive to project (in terms of infrastructure destroyed and lives lost).

In their quest to become more efficient at national security, humans appear to have accidentally created
the most inefficient national security capability ever: strategic nuclear warfare. It is hard to imagine
anything else in this world that could project more power for less size, weight, matter, and money than a
thermonuclear bomb. Yet, it’s hard to imagine anything else in this world that could be more costly to
agrarian civilization than mutually assured nuclear annihilation. Such is the paradox of kinetic power
projection – if you get too efficient at it, it will become too inefficient. The author calls this phenomenon
the “kinetic ceiling” and illustrates it in Figure 56.

4.12.2 Kinetic Stalemates Don’t Create Peace; They Create Major Systemic Security Hazards

Sapiens could be described as an Icarus-like species who tried to outsmart natural selection and got
burned. In their quest for efficiency, they make themselves inefficient. They try to avoid the energy
expenditure of using physical power as the basis for settling their disputes and establishing their pecking
order, only to end up using much larger and costlier quantities of physical power as the basis for settling
their disputes and establishing their pecking order. Humans chased after more efficient power projection
technologies for more than 10,000 years, only to build the most inefficient power projection technology
possible. In their hubris, sapiens brought agrarian society to the brink of nuclear wars that cannot produce
a winner. And now they have cornered themselves by running straight into a kinetic ceiling. They appear
to have scaled kinetic physical conflict to the point where it is no longer practically useful as a basis for
settling disputes and establishing a pecking order. And they do not appear to understand how systemically
hazardous this stalemate is.

We have scaled our capacity to project kinetic power and compete in global-scale kinetic power
competitions beyond the point where it would be practically useful as a basis for settling disputes and
establishing pecking order. It is possible that agrarian society is in the middle of a strategic-level kinetic
stalemate. Incidentally, the seventy-odd years that have passed since the invention of nuclear warfare are just enough time for sapiens to forget the hard-earned lessons of history and the root causes of warfare
discussed throughout this chapter. Our peace – that temporary block of reprieve between wars – appears
to be stretching towards its limit. We may be overdue for another strategic-scale war, but what happens
if a strategic-scale kinetic war cannot be waged or cannot produce a winner?

Is a strategic-scale nuclear kinetic stalemate a good thing or a bad thing? The knee-jerk reaction from
someone who doesn’t understand the necessity of physical power projection in agrarian society would probably be to assert that a nuclear stalemate is a good thing. “Finally,” they might claim, “we can have peace
because we have made strategic-level kinetic warfare too expensive to wage!”

Now that we have reached the end of this chapter on power projection dynamics in modern agrarian
society, the reader should have a thorough understanding of the flaws of this line of reasoning. For
starters, it presumes that the only threat to a population is invasion from a neighboring abstract power
hierarchy. A strategic-level stalemate might secure a population against an invasion from a neighboring
abstract power hierarchy, but it wouldn’t secure a population against massive-scale systemic exploitation
and abuse from their own abstract power hierarchy. So right out of the gate, people are ignoring a major
security hazard and a recurring cause of war (this is why the author dedicated so much of this chapter
to explaining the security flaws of abstract power hierarchies – to help the reader understand the threats
they would likely face in a kinetic stalemate).

Peace is not an option so long as predators exist. Peace has never been a replacement for war; it has only
been a name that we assign to a state of reprieve between wars, when people are competent and
trustworthy enough with their abstract powers to settle our disputes, manage our resources, and
establish our pecking order without needing physical power. But as 10,000 years of evidence would
suggest, peace doesn’t last. It is as fleeting and as fragile as the imaginary power it gives to people and
entrusts them not to abuse. Like clockwork, our abstract power hierarchies become dysfunctional, and
the next war comes.

No species, including sapiens, has ever walked the earth without subscribing to the physical power projection game. We have never had the option of unsubscribing from this game. We are not special because we have big brains capable of imagining a different world where we have outsmarted natural
selection and aren’t constantly under threat of being attacked, invaded, or exploited. No matter what our
imaginations show us, the real world remains filled with these threats.

We have never walked on a planet that isn’t filled with predators; we have only allowed ourselves to
forget about them. We become so comfortable and complacent in our high tower of success built for us
by the sacrifice of our predecessors that we become docile and domesticated. We get drunk off the luxury
of forgetting that we’re surrounded by predators who don’t take reprieves. We may be able to stop
predators from invading us, but we can’t stop predators from exploiting our belief systems – in particular
the belief systems we use to manage our most valuable resources (like our money).

The more abstract power and control authority we give to a ruling class, the more benefit they gain from
exploiting their abstract power and control authority over us. To expect a ruling class not to exploit
increasingly asymmetric levels of abstract power is, frankly, ignorant. We know that this is incontrovertibly
true because if it weren’t true, people wouldn’t be compelled to fight wars. Their abstract power
hierarchies, and the imaginary logical constraints they encode into their rules of law, would be sufficient to secure them without physical power. But they clearly aren’t sufficient, hence warfare.

So what happens when agrarian society hits a kinetic ceiling and stalemates itself at the strategic level?
To believe that a stalemate is a good thing is to make a tacit assumption that there are viable alternatives
to physical power as a basis for settling our strategic disputes, establishing control authority over our
strategic resources, and achieving consensus on the legitimate state of ownership and chain of custody of
our property in a zero-trust, permissionless, and egalitarian way. Is it possible that sapiens are the first
species on earth to have discovered an alternative to physical power competition as the basis for
establishing pecking order? Perhaps. But it seems more likely that the people who think a strategic-level
stalemate is a good thing are ignoring the core concepts of natural selection, human metacognition, and
the differences between abstract and physical power.

The point of view that a strategic-level stalemate is good for agrarian society hinges on a delusion that
sapiens have the option of living in a world without predators and entropy. It implies we can use our
imaginations to adopt belief systems where people can be entrusted not to use their imaginary power to
exploit our belief systems. It implies we can keep ourselves secure against systemic exploitation and abuse
using nothing more than logical constraints encoded into rules of law to which systemic predators aren’t
sympathetic. For these and the reasons discussed at length throughout this chapter, it’s unreasonable to
believe that a kinetic stalemate represents a lasting peace. Instead, it’s far more reasonable to believe
that a kinetic stalemate represents a major systemic security hazard.

4.12.3 A Stalemate to War Would Represent a God-King’s Paradise

To improve our own capacity to survive and prosper, it is vital for us to understand that there is no such thing as a substitute for physical power as a zero-trust, permissionless, and egalitarian basis for settling disputes and establishing pecking order, no matter how much people like to preach about alternatives. If it’s true that agrarian society has stalemated itself at a global-scale strategic level with nuclear warheads, then agrarian society is more vulnerable to the threat of unimpeachable systemic exploitation and abuse than it has been at any point in the past 10,000 years of populations suffering under the oppression of their god-kings.

To understand the hazard we could be in, simply ask “what are the complex emergent social benefits of
warfare that society would lose in a kinetic stalemate?” The author has already enumerated these
benefits, but to summarize the top four: (1) zero-trust, permissionless, and egalitarian control over resources, (2) the ability to physically resist, constrain, and decentralize dysfunctional abstract power hierarchies, (3) the existential motivation to innovate and cooperate at increasingly higher scales, and
(4) the ability to vector limited resources to the strongest and most intelligent members of the pack who
are demonstrably the best suited to survive in a world filled with predators and entropy.

If these are valid benefits of warfare, then we can see that a stalemated society would be a trust-based
(thus systemically insecure), permission-based, and inegalitarian society where a ruling class has
unimpeachable control over a ruled class. A stalemated society would have no capacity to use kinetic
power to physically constrain people from adopting a single ruling class to settle all their disputes, manage
all their resources, and determine ownership of all their property. A stalemated society would have no
existential motivators to cooperate at higher levels than the level required to maintain their stalemate. A
stalemated society would have no way to identify who among them is demonstrably capable of navigating
chaos and surviving in a world filled with predators and entropy. A stalemated society would have
successfully mitigated the threat of invasion by neighboring abstract power hierarchies, only to make
themselves far more exploitable by their own abstract power hierarchy.

A global agrarian society locked in a strategic-level kinetic stalemate would necessarily have to adopt
belief systems which utilize abstract power to settle their disputes, manage their resources, and establish
their pecking order because they would have lost the option of using physical power to accomplish this in
a zero-trust, permissionless, and egalitarian way. To establish a global pecking order, the entire population
would necessarily have to adopt a single abstract power hierarchy and give a single ruling class
extraordinary amounts of abstract power and control authority over them. And then the whole world
would have to trust that single ruling class not to exploit them, because they would be physically
powerless to stop it at a strategic level.

Hopefully now the reader can understand why the author spent so much time discussing the power
dynamics of modern agrarian society. These dynamics help us understand why modern agrarian society
may have entered an unprecedentedly hazardous state after the invention of nuclear warfare and policies
like mutually assured destruction. Without the ability to physically constrain and decentralize abstract
power hierarchies, society would lose its means to physically secure itself against global-scale exploitation
and abuse by a single, tyrannical ruling class. All it would take for a tiny percentage of the population to
gain completely centralized and unimpeachable control authority over the rest of the global population
would be for high-ranking people inside nuclear-armed nations to collude with one another, both within and across those nations. Neo-oppressors would be able to exploit the world’s belief systems without
penalty because the rest of the global population would no longer have the practical means to make it
impossible to justify the physical cost of exploiting them.

In other words, a kinetic stalemate between different nuclear-armed nations would also represent a
kinetic stalemate between the ruled and ruling classes of those nations. In addition to making it impractical to fight a strategic war between nuclear-armed nations, a stalemate could also make it impractical to fight a civil or revolutionary war within nuclear-armed nations. If an incompetent, belligerent, or self-serving group of systemic predators were to regulatorily capture the abstract power granted to them within these nuclear-armed hierarchies, there might not be a practical way for civilians
to escape their exploitation the same way they have always done it in the past (by summing together their
kinetic physical power to make it too physically costly to continue exploiting them). In this kind of situation, an oppressed populace would always be trapped in a trust-based, permission-based, and
inegalitarian system where a ruling class must always be trusted not to exploit their abstract power, simply
because the population is otherwise physically powerless to countervail them.

To put it in plain terms: a strategically stalemated populace would be a god-king’s paradise. The
population would be backed into a corner where they would have to rely on people with abstract power
to settle their disputes and establish their pecking order for them. They would have to adopt a global-
scale abstract power hierarchy to establish a global-scale pecking order, tacitly giving one ruling class more
asymmetric abstract power and resource control authority than any ruling class has ever achieved. The
global human population would have no choice but to trust their rulers not to exploit their abstract power,
because they would otherwise have no practical means to physically countervail them.

4.12.4 Non-Nuclear Kinetic Warfare Can Lead to a Non-Nuclear Stalemate

After hitting the kinetic ceiling with strategic nuclear warheads, society appears to be attempting to back itself out of a corner by turning to non-nuclear kinetic warfare. The Korean War, Vietnam War, and
the Global War on Terror are some examples of nuclear superpowers deliberately choosing to use weaker
and less efficient power projection technologies to settle their disputes. Why would nuclear superpowers
deliberately choose to use weaker and less efficient power projection technologies? Because it might be
the only way kinetic power can still be useful as a basis for settling disputes and establishing pecking order.
In other words, non-nuclear kinetic power might be the only way that a kinetic war can still have winners.

Similar to the emergence of the idea of nation states, kinetic warfare in the age of strategic nuclear
warheads is another one of those situations where it is still too early to know how useful it will be as a
mechanism for agrarian society to continue solving their disputes, managing their resources, and
establishing their pecking order in a zero-trust, permissionless, and egalitarian way. The results so far
appear to be inconclusive.

No two nuclear superpowers have gone head-to-head with each other in physical combat to settle a
meaningful dispute. Instead, they have fought proxy wars (e.g. cold wars and trade wars). Since the
invention of strategic nuclear warheads and the adoption of policies like mutually assured destruction,
non-nuclear kinetic warfare has only been used to settle minor disputes in comparison to the kinetic wars of the past.
These disputes have ostensibly been between non-nuclear nations or asymmetrically powerful nations
where one side has nukes, and the other side doesn’t. This means we essentially have no idea if kinetic
warfare is useful at solving large-scale property or policy disputes between nuclear superpowers anymore,
because we have not tried it yet.

The author has a difficult time believing that non-nuclear kinetic warfare could settle a major global-scale
dispute between nuclear superpowers without escalating to nuclear warfare and once again stalemating
at some derivative form of mutually assured destruction. If we assume this is true, then we can conclude
that society has indeed reached a kinetic stalemate at both the nuclear and non-nuclear level and is
therefore highly vulnerable to the systemic security hazards discussed throughout this chapter. But for
the sake of argument, if we were to assume that it is still possible to settle major global-scale strategic
disputes using non-nuclear kinetic warfare, then the author asserts it’s not reasonable to believe this
capability will last for long. If there still is a window of opportunity for kinetic warfare to be useful as a
means to settle major strategic disputes, then that window of opportunity might be closing.

As mentioned before, sapiens are obsessed with efficiency. Nuclear warheads represent what happens
when sapiens create more efficient power projection technologies; they design and build power
projection technologies that are so efficient they aren’t practically useful. Nuclear warheads prove that
too much power projection efficiency can become counterintuitively too inefficient; it’s clearly possible
to build power projection technologies that are so efficient at projecting power that they’re too costly to
use because of how destructive they are.

With this in mind, consider what it means when agrarian society deliberately chooses to settle their
disputes and establish their pecking order using non-nuclear technologies. It means they’re going to strive
to make their non-nuclear kinetic power projection technology more efficient. What is the end state of
making non-nuclear kinetic power projection technologies more efficient? We already know what the end
state is: eventually, non-nuclear kinetic power projection technology will become too efficient to be
practically useful.

By deliberately choosing to refrain from nuclear warfare and engage in non-nuclear kinetic warfare as
the primary basis for settling global disputes in a zero-trust, permissionless, and egalitarian way, agrarian
society is setting itself up for a situation where it discovers yet another way to make kinetic warfare too
expensive to wage. Instead of nuclear technology, it would just be some non-nuclear technology that
becomes too efficient at projecting power to be practically useful. The way things are starting to play out,
it looks like it could be something involving artificial intelligence and swarms of flying, crawling, and
swimming drones.

If we return to the chart showing the evolution of power projection technologies employed by agrarian
society, we can illustrate this issue by adding another arrow to the chart, as shown in Figure 57. By
choosing to engage in non-nuclear forms of kinetic warfare, agrarian society is essentially trying to fork its
evolutionary path. The problem is that the forked path is just as vulnerable to running into the same
kinetic ceiling! The end state of increasingly more efficient kinetic power projection technologies is the
same regardless of whether it’s nuclear: kinetic stalemate. All that agrarian society will accomplish by
forking the evolutionary path of kinetic power projection is to discover yet another way that kinetic wars
can’t be won – yet another form of mutually assured destruction that leads to a kinetic stalemate at both
the strategic and tactical level.

Figure 57: Evolution of Physical Power Projection Technology, Shown with an Attempted Fork
[88, 89, 116, 117, 118, 119, 120, 121, 122, 123]

Nations have started to realize that the best way to make their non-nuclear kinetic power projection
technologies more efficient at projecting power is to take humans out of the loop. Software-operated
drones are replacing human-operated machines. The size, weight, and power of these drones are
collapsing. They’re starting to fly in swarms. They’re starting to drop bombs on people’s heads with
extraordinary precision. They’re starting to think for themselves and predict human behavior better than
humans can.

The marginal cost of putting precisely the right amount of kinetic power in precisely the right place to
impose the maximum amount of physical cost on attackers is plummeting thanks to drones. We’re getting
extremely good at kinetically striking people with surgical precision, cheaply. At the same time, other
technologies like artificial intelligence are converging. We are marching straight towards the dystopian
future envisioned by the Wachowskis in The Matrix, where swarms of sentient killer robots patrol the
skies over the ruins of their creators.

Mutually assured destruction was the byproduct of the collapsing marginal cost of kinetic power
projection technology following the discovery of nuclear technology. The more the marginal cost of
projecting kinetic power decreases, the easier it becomes for multiple nations to scale it. But this same
dynamic applies to non-nuclear technology too. The easier it is to scale non-nuclear kinetic power
projection technologies, the easier it becomes for society to reach the kinetic ceiling again, and once again
arrive at the point where our kinetic power projection technology becomes too mutually devastating to
be practically useful as a method for solving disputes.

Human society cannot be safe from the threat of mutually assured destruction merely by switching to a
non-nuclear form of kinetic warfare, because switching to a non-nuclear form of kinetic warfare doesn’t
do anything to address what caused mutually assured destruction to happen in the first place: the
collapsing marginal cost of kinetic power projection caused by the discovery of new technologies. So even
if we assume that non-nuclear kinetic wars between peers can still be conclusively won (which is already
a tall assumption to make considering the track record of kinetic conflicts which have taken place after
the invention of nuclear warheads), then we still have to acknowledge that the most that can be gained
for our species from non-nuclear kinetic warfighting is temporary and incremental advantages along an
evolutionary path of technology development that points towards the same dead end (pun intended).

Restructuring a military to win non-nuclear kinetic warfighting campaigns could represent the act of
optimizing a military for a local maximum, not a global maximum. The end state of that effort is potentially
a stalemate between nations at both the strategic and tactical levels, using both nuclear and non-nuclear
technology. The key takeaway from this thought exercise is that there is essentially no way for agrarian
society to escape from the bounded prosperity trap of a stalemate with kinetic warfare; all they can do is
keep forking the evolution of their kinetic power projection technologies to buy themselves small
windows of time to settle disputes and establish their pecking order in a zero-trust and egalitarian way
before they discover yet another way to mutually assure their own destruction.

4.12.5 Non-Kinetic (i.e. Electric) Power Competitions Could Enable Mutually Assured Preservation

Now we finally arrive at a core insight about human physical power projection tactics which is essential
for understanding the potential national strategic implications of Bitcoin. As much as people wish for it,
there can be no peace, because peace requires a world without predators – a world which doesn’t exist.
Consequently, if we stalemate ourselves with kinetic warfare, we don’t create an environment without
predators, so we don’t get peace. Instead, we create a huge window of opportunity for predators to
exploit us through our belief systems at unprecedented scale because they cannot be physically
challenged or overthrown.

A stalemate forces people into adopting abusive and exploitative abstract power hierarchies teeming with
corruption because people lose the option of settling their disputes and establishing their pecking order
in a zero-trust and egalitarian way using physical power. And if society tries to fork the evolution of kinetic
power projection technology into a non-nuclear direction, they aren’t going to solve the paradox of kinetic
power projection; they are just going to discover another path to mutually assured destruction and find
themselves stuck in the same bounded prosperity trap we’re already stuck in, with the exact same
systemic security vulnerabilities. The most that can be gained from continuing down the kinetic path could
be minor victories against asymmetrically weak nations, not strategic victories against peers.

How does society escape from this trap? Nature offers an idea: antlers. We need a technology that will
allow us to continue to use physical power as the basis for settling our disputes, managing our resources,
and establishing our pecking order in a zero-trust and egalitarian way, while still keeping ourselves
systemically secure by empowering us to impose severe physical costs on our attackers – but we need a
way to do this that either minimizes or eliminates the need to kill our own kind. To accomplish this, we
need a different, non-kinetic form of physical power projection technology that could allow us to engage in global-scale physical power competitions in a zero-trust and egalitarian way, but non-lethally and non-
destructively. It needs to be a power projection technology that will not become increasingly impractical
to use as people become increasingly more efficient at projecting physical power and imposing severe
physical costs on their neighbors. This new form of physical power projection technology should never be
too efficient to be practically useful, no matter how efficient we try to make it.

Using this new type of non-kinetic power projection technology, a kinetically stalemated society could
continue along their path of technological evolution without having to worry about hitting a kinetic ceiling. While maintaining the kinetic stalemate, a society could continue to chase after an infinite prosperity margin for themselves by maximizing their CA and minimizing their BCRA. This would allow them
to keep themselves secure against both foreign invasion and domestic exploitation by making it
impossible to justify the physical cost of attacking or exploiting them, no matter how efficient they make
their power projection technology, and no matter how much physical power they project.

Here we uncover a remarkable idea: the future of global-scale strategic warfighting is perhaps more likely
to be a form of electronic warfare rather than a form of kinetic warfare. Why? For the simple reason that
kinetic warfare is literally a dead end for agrarian society. Society could have already stalemated itself by
identifying a kinetic path to mutually assured destruction where wars can’t be conclusively won, and
disputes can’t be conclusively settled. There might be very little to gain by identifying yet another kinetic
path to mutually assured destruction where wars continue to be unwinnable, and disputes continue to be
unsettleable using kinetic technology – and that’s assuming it’s still even possible to fight strategic
conflicts with nuclear peers that don’t escalate to the nuclear level (which the author highly doubts).
Forking the evolutionary path of kinetic power projection tactics, techniques, and technologies to pursue
non-nuclear options isn’t going to change the direction of this 10,000+ year trend – agrarian society is still
marching towards another form of kinetic stalemate, and that assumes we haven’t already reached it.

Instead, the future of warfare seems to be a non-kinetic form of global physical power competition that
can continue evolving in the same direction as other physical power projection technologies (up and to
the right) without running into a kinetic ceiling where it becomes too lethal or destructive to be practically
useful. The future of warfare seems like it will be the kind of physical competition that can actually be won
so that our property and policy disputes can actually be settled. Otherwise, it’s not really the future of
warfare, it’s just the continuation of the same stalemate we could be in right now. The key to winning
future wars therefore seems to be finding the battlefield where battles can still be fought and won no
matter how powerful and efficient sapiens make their power projection technologies – and clearly that’s
cyberspace.

The future battlefield of national-scale warfare could be an electro-cyber battlefield where nations impose
severe physical costs on each other electronically rather than kinetically. This concept is shown in Figure
58. In this future, nations would increasingly rely on “soft” forms of non-kinetic power projection that
involve charges passing across resistors, as opposed to exclusively relying on “hard” forms of warfare that involve forces displacing masses. Of course, hard warfare would still exist, but perhaps mostly to preserve
the kinetic stalemate caused by the kinetic power paradox discussed in the previous section (i.e. the more
efficient nations get at kinetic power projection, the less efficient it becomes as a power projection tactic
because it becomes too dangerous to use).

Figure 58: Evolution of Physical Power Projection Technology, Shown with Non-Kinetic End State
[88, 89, 116, 117, 118, 119, 120, 121, 122, 123]

To put it simply, if sapiens are going to evolve antlers so they can continue to engage in global power
competitions to settle property and policy disputes and establish pecking order in a zero-trust and
egalitarian way that can’t be kinetically stalemated, it seems highly likely they’re going to be electronic
antlers. Electric power projection technology would allow a population to keep themselves physically
secure against systemic predators while the kinetic stalemate would continue to keep them secure against
foreign invaders. The resulting electric power competition would also preserve a zero-trust,
permissionless, and egalitarian way for settling disputes, determining who gets control over resources,
and achieving consensus on the legitimate state of ownership and chain of custody of property.

4.12.6 Nikola Tesla Already Predicted This Would Eventually Happen

This idea isn’t new; it’s actually more than a century old. Tesla predicted something like this in his 1900 essay entitled “The Problem of Increasing Human Energy.” He predicted the kinetic power projection paradox: he saw the potential for kinetic power projection to scale to the point where it would no
longer be useful as a basis for engaging in physical disputes. As one of the world’s leading experts in
electricity at the time, he also saw how electronic warfare could one day replace kinetic warfare. He
predicted society would hit a ceiling with its large and clumsy kinetic power projection technologies and
be forced out of existential necessity to evolve towards human-out-of-the-loop, global-scale, machine-
on-machine warfare. And he saw this all before the invention of cars, tanks, airplanes, aerial
bombardment, nuclear bombs, and drones. [8] In his words:

“What is the next phase in this evolution? Not peace as yet, by any means. The next change which should
naturally follow from modern developments should be the continuous diminution of the number of
individuals engaged in battle. The apparatus will be one of specifically great power, but only a few
individuals will be required to operate it. This evolution will bring more and more into prominence a
machine with the fewest individuals as an element of warfare, and the absolutely unavoidable
consequence of this will be the abandonment of large, clumsy, slowly moving, and unmanageable units.
Greatest possible speed and maximum rate of energy-delivery by the war apparatus will be the main
object. The loss of life will become smaller and smaller, and finally the number of the individuals
continuously diminishing, merely machines will meet in a contest without blood-shed, the nations being
simply interested, ambitious spectators. When this happy condition is realized, peace will be assured.” [8]

Here we can see Tesla predicting that people would develop power projection technologies capable of
projecting great power while requiring fewer humans in the loop to operate them. He follows this with the assertion that assured peace wouldn’t be possible if these power projection technologies caused bloodshed. He claims that mutually assured peace would only be feasible if these power-projecting machines did battle against each other via an energy competition; otherwise, power projection technologies would become too savage and destructive. His prediction aligns with what was outlined in this section: that nations would run into kinetic stalemates and not be able to achieve mutually assured peace until they settled on a way to fight their wars non-kinetically, in a way that eliminates bloodshed and mutually assures the preservation of human life no matter how powerful the technology becomes.

“No matter to what degree of perfection rapid-fire guns, high-power cannons, explosive projectiles,
torpedo-boats, or other implements of war may be brought, no matter how destructive they may be made,
that condition [of assured peace] can never be reached through any such development… Their object is to
kill and to destroy… To break this fierce spirit, a radical departure must be made, an entirely new principle
must be introduced, something that never existed before in warfare – a principle which will forcibly,
unavoidably, turn the battle into a mere spectacle, a play, a contest without loss of blood.”

If we combine the core concepts provided in this chapter with Tesla’s insights, then we get the following
argument: the way for agrarian society to assure a lasting peace between nations is not to keep building
increasingly more efficient kinetic power projection technologies which do nothing but win small victories
and identify new ways to place ourselves into situations where wars can’t be won because they’re too
lethal and destructive. Instead, the way to assure lasting peace between nations is to give people the
means to engage in “peaceful” non-kinetic warfare with each other. To that end, it is reasonable to believe
that nations will continue to compete against each other in global-scale physical power competitions to
settle their disputes and establish their pecking order in a zero-trust, permissionless, and egalitarian way,
but they will learn how to do it increasingly more electronically rather than kinetically. They will extend
the battlefield into a new domain, where battles can still be won – where people can still physically secure
the resources they value.

The key to a sustainable peace that is safe against systemic predators therefore appears to be some form
of electronic warfare. Whereas kinetic power competitions lead to mutually assured destruction and
stalemates that place society into a systemically vulnerable pickle where they can be exploited at
unprecedented scale by abstract power hierarchies, electric power competitions lead to mutually assured preservation. Electric power makes it possible to settle disputes, establish control authority over
resources, and achieve consensus on the legitimate state of ownership and chain of custody of property
in a zero-trust, permissionless, and egalitarian way that can’t be exploited by people wielding imaginary
power. All that needs to be done to keep people’s valuable resources (like bits of information, to include
financial information) physically secure against both foreign invasion and internal corruption is to
maintain the kinetic stalemate while utilizing an electronic form of physical power projection. That way,
people can impose severe physical costs on all forms of attackers.

Einstein’s theories support Tesla’s. Einstein theorized that matter and energy are interchangeable. If that’s true, then instead of fighting wars by blasting people with projectiles, we should be able to blast them
with energy in a non-kinetically destructive manner and get similar emergent properties (in this case, the
emergent property is physical security). A watt of physical power is a watt of physical power regardless of
whether it’s generated using forces displacing masses, or charges passing across resistors. It should
therefore be possible to settle global-scale physical conflicts using an energy-based or “soft” form of
global power competition rather than a mass-based or “hard” form of global power competition. The
question is, how? Tesla attempted an answer:

“To bring on this result, men must be dispensed with; machine must fight machine. But how to accomplish
that which seems impossible? The answer is simple enough: produce a machine capable of acting as
though it were part of a human being… a machine embodying a higher principle which will enable it to
perform its duties as though it had intelligence, experience, judgement, a mind!”

Tesla made this prediction more than 40 years before the first operational general-purpose computer. He
died a year before it was built, and two years before the first successful detonation of a nuclear warhead.
Already admired by history as one of the world’s greatest scientific minds, these predictions may one day
make Tesla famous for also being a military strategic thinker. He essentially predicted that software would
lead to soft war – that a lasting peace could be achieved if agrarian society could figure out a way to wage
wars by having their machines battle other machines using great sums of energy while humans watched eagerly from the sidelines. If we assume Tesla’s predictions will eventually come true, then humankind has one extremely important question it needs to answer as quickly as possible, before it reaches the next, non-nuclear form of mutually assured destruction. What would a global-scale electronic power
competition look like, where machines battled machines in a non-lethal way, and humans watched from
the sidelines? In other words, what would agrarian society’s electronic antlers look like?

Chapter 5: Power Projection Tactics in Cyberspace

“[Software constraints] cannot substitute for the physical constraints encountered
naturally in other disciplines. Without a harsh and uncaring nature forcing us to
make hard choices… We are willingly seduced.”
G. Frank McCormick [124]

5.1 Introduction

At this point, the reader may be wondering, “what does all of this talk about abstract power, abstract
power hierarchies, physical power, warfare, and intraspecies property disputes have to do with
software?” The answer is simple: it has everything to do with software, because in the 21st century, many
of the world’s largest abstract power hierarchies are software-encoded abstract power hierarchies. For
example, Facebook, Google, Twitter, and Amazon are all software-encoded abstract power hierarchies.
Every person or institution that has centralized control over our computer networks or our computer
programs (for which we tacitly need their permission to send, receive, or store our digitized information)
is operating within an abstract power hierarchy.

This chapter links together key concepts in computer theory and cyber security that are needed to
understand why software is fundamentally a belief system which gives a select few people abstract power.
People subscribe to these belief systems similar to how they subscribe to other belief systems which make
them systemically vulnerable to exploitation. People in control of our software wield a new form of
abstract power, and not surprisingly for anyone who understands power projection tactics in human
society, modern software developers have adopted the habit of using design logic to give themselves
special permissions and authorities which give them asymmetric control authority over a new resource
that has emerged in 21st century society: bits of information.
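
As a toy illustration of this point (the roles, names, and data below are hypothetical and not taken from any particular product), a few lines of ordinary design logic are enough to grant one class of users asymmetric control authority over everyone else's bits of information:

    # Hypothetical sketch: how ordinary design logic encodes abstract power.
    # Whoever is assigned the "admin" role controls everyone else's information,
    # and users can only trust that the role is never abused.

    users = {
        "alice": {"role": "user",  "posts": ["hello world"]},
        "bob":   {"role": "admin", "posts": []},
    }

    def delete_post(actor: str, owner: str, index: int) -> bool:
        """A post may be deleted by its owner or by any admin -- a purely
        logical constraint that holds only as long as the code says so."""
        if actor == owner or users[actor]["role"] == "admin":
            users[owner]["posts"].pop(index)
            return True
        return False

    # Bob (an admin) can silently remove Alice's speech; nothing physical stops him.
    delete_post("bob", "alice", 0)
    print(users["alice"]["posts"])   # -> []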

The way we have built our computer networks and designed our software systems has caused some
people (namely the people in charge of the most popular software we use) to gain immense abstract
power and control authority over our digital-age resources. Consequently, everyone who uses this
software is systemically vulnerable to exploitation and abuse through their software. A new form of god-
king is rising in the 21st century, except this time they’re in control over our digital resources rather than
our agrarian resources. Consequently, how we establish control authority over our data appears to be
emerging as one of the most contentious intraspecies property disputes of the modern era.

To believe that society will be able to constrain the far-reaching asymmetric abstract power of these neo
god-kings using purely design logic is to ignore 5,000 years of our predecessors’ attempts to logically
constrain the abstract power of previous rulers at the top of their respective abstract power hierarchies.
Whether they’re written using parchment or Python, society has already demonstrated beyond a shadow
of a doubt that logical constraints are not sufficient to protect people against the exploitation and abuse
of people wielding too much abstract power. If history and cultural evolution repeats itself, then sooner
or later, people are going to have to start using physical power to secure their digital property rights by
imposing severe physical costs on oppressors who try to exploit them through software. To believe that
people won’t eventually resort to physical power to secure themselves is to ignore the lessons of history.

Using power projection theory as a new frame of reference, we can observe the state of the world and
see the signs of systemic exploitation and abuse of our digital information practically everywhere: weaponized misinformation, troll farms, bot farms, Sybil attacks, DoS attacks, hacking epidemics, wide-scale online fraud, online censorship, shadow banning, routine data leaks, state-sponsored surveillance through entertainment apps, social network targeting campaigns, software companies routinely running ethically questionable social experiments on their users, unsupervised artificial intelligence controlling the primary information streams of billions of people, social media networks being captured by special interest groups, and, amidst all of this, no reasonable expectation of privacy for any activity online. No matter where you go online, no matter what you do online, it’s become standard
practice for your valuable bits of information to be censored, surveilled, or sold to the highest bidder.

Our bits of information are under the centralized control of the people who control our computer
networks and the software running on those networks. Meanwhile, software companies are becoming
asymmetrically wealthy and influential thanks to the data they’re harvesting from the population. The
evidence of population-scale systemic exploitation of people through their software is mounting. Except
this time, the resource that technocratic god-kings are competing for control over is digital information.
Not surprisingly, in the age of information, information has become a precious resource. People’s ability
to assemble, to collaborate, to speak freely, and to exchange valuable resources like money is increasingly
facilitated by software, which means these vital social functions are under the complete control of the
people who control that software.

This raises several important questions: How could people secure themselves against the growing threat
of digital oppression by a technocratic ruling class? Should our rights (particularly our property rights, our
speech rights, and our rights to defend ourselves) not apply to our digitized information? Is it possible to
take back control of our digital information? How do we secure ourselves against systemic exploitation
and abuse of our data streams by people who wield extraordinary amounts of abstract power over them
because of the software and the computer networks they control? Is it reasonable to believe that we
should simply trust in our technocratic ruling class not to exploit our data streams for their own personal
advantage? Is it reasonable to believe that we will be able to secure our digital information if we simply
find a better combination of logic to constrain a software engineer’s abstract power, despite having 5,000
years of written testimony to suggest that attempting to constrain people from abusing their abstract
power using encoded logic doesn’t actually work?

10,000 years of warfare would suggest that the most effective way to keep people’s rights secure against
attackers is to figure out a way to impose severe physical costs on attackers. The solution to the emerging
threat of systemic exploitation via software seems clear when viewed through the lens of Power
Projection Theory: to secure our digital property rights, we have to find new power projection tactics,
techniques, and technologies which give people a way to impose severe physical costs on people wielding
too much abstract power and control authority over digital information. The solution makes rational
sense, but it’s just not clear what those tactics, techniques, and technologies should be.

Enter Bitcoin. With a background in Power Projection Theory now thoroughly established, we can see
Bitcoin in a new light. Bitcoin is compelling not as a candidate monetary system, but as an electro-cyber
security system. Bitcoin’s underlying proof-of-work technology is proving itself to be a successful way to
physically secure bits of information against systemic exploitation and abuse by giving people the ability
to project physical power to impose severe physical costs (costs denominated in watts) on belligerent
actors who try to exploit them through their software. Bitcoin demonstrates that people can gain and
maintain zero-trust, permissionless, egalitarian, and decentralized control over bits of information so long
as they are willing and able to project physical power to secure it. This would suggest that Bitcoin isn’t
merely a monetary system, but perhaps some kind of “soft” form of power competition that has
successfully replicated the same complex emergent benefits of warfare, but without the destructive side effects. Through the ongoing global adoption of Bitcoin, people appear to be building the largest physical-
power-based dominance hierarchy ever created in human history, and the sociotechnical implications of
this could be extraordinarily disruptive to all existing power hierarchies, including and especially nation
states.
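
To make the mechanism concrete, here is a minimal proof-of-work sketch in Python. It is a simplified illustration of the general hash-based technique Bitcoin's protocol relies on, not Bitcoin's actual consensus code (Bitcoin hashes block headers with double SHA-256 and a different encoding): finding a nonce whose hash falls below a target requires real-world energy expenditure, while verifying it costs almost nothing, which is how severe physical costs denominated in watts get imposed on anyone who wants to rewrite the record.

    import hashlib

    def proof_of_work(data: bytes, difficulty_bits: int) -> int:
        """Search for a nonce such that SHA-256(data || nonce) has at least
        `difficulty_bits` leading zero bits. The search consumes real energy."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
        """Verification costs a single hash, no matter how costly the search was."""
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    nonce = proof_of_work(b"example ledger state", difficulty_bits=20)
    print(nonce, verify(b"example ledger state", nonce, 20))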

If the theories presented in this chapter are true, they would imply that Bitcoin represents far more than
just a monetary system. After all, bits of information secured on the Bitcoin network could denote any
type of information, not exclusively financial information. Instead, Bitcoin could represent a completely
new system for securing any information in cyberspace – a way to keep bits of information secure against
belligerent actors by physically constraining them, not logically constraining them. This could not only be
revolutionary to the field of cybersecurity, but it could also be transformational to the field of physical
security in general – to include national strategic security.

To understand all of these concepts in further detail, it’s necessary to rewind to the early
development of computers and to retrace the technological steps we took to get to where we are today,
all while leveraging our newfound knowledge in power projection tactics in nature and human society. To
that end, we begin the third and final chapter of Power Projection Theory.

5.2 Thinking Machines

“Innumerable activities still performed by human hands today will be performed by automatons.
At this very moment scientists working in the laboratories of American universities are attempting to
create what has been described as a ‘thinking machine.’”
Nikola Tesla [7]

5.2.1 General-Purpose State Mechanisms

Two years following Tesla’s death, a Hungarian-American mathematician, physicist, and engineer named
John von Neumann summarized novel theories on electronic computing in a report prepared for the US
Army Ordnance Department. The report, titled Preliminary Discussion of the Logical Design of an
Electronic Computing Instrument, explained technical ideas for the design of a fully electric general-
purpose state machine with a remarkable capability: storing programming instructions as states within its
own electronically accessible memory and performing operations on its own programming, as if it had its
own intelligence, experience, judgement, and mind. [125]

At the time this paper was published, programming state machines was a tedious ceremony involving
days of lever pulling, dial turning, switch flipping, cable manipulation, and circuit plugging. Von Neumann and his colleagues believed that electronically storable computer programs would represent a dramatic improvement to this process. To accomplish their idea, von Neumann and his colleagues reasoned they would
need to build a machine that could store "not only the digital information needed in a given computation…
but also the instructions which govern the actual routine to be performed." One way to do this would be
to find a way to convert manual instructions into numerical code. "If orders to the machine are reduced
to a numerical code," the report says, "and if the machine can in some fashion distinguish a number from
an order, the memory organ can be used to store both numbers and orders." [125]
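
A minimal sketch of the idea described in the report, written in Python purely for illustration (the three-instruction "numerical code" below is invented, not taken from the report): a single memory holds both orders and numbers, and the machine distinguishes one from the other only by where the program counter happens to be pointing.

    # Hypothetical stored-program sketch: orders and numbers share one memory,
    # both encoded as plain integers. The instruction set is invented:
    #   opcode 1 = ADD addr_a addr_b addr_out, opcode 2 = PRINT addr, opcode 0 = HALT.

    memory = [
        1, 9, 10, 11,   # ADD: mem[9] + mem[10] -> mem[11]
        2, 11,          # PRINT mem[11]
        0,              # HALT
        0, 0,           # unused cells
        6, 7, 0,        # data: 6, 7, and a cell for the result
    ]

    pc = 0  # program counter
    while memory[pc] != 0:
        if memory[pc] == 1:                       # ADD
            a, b, out = memory[pc + 1 : pc + 4]
            memory[out] = memory[a] + memory[b]
            pc += 4
        elif memory[pc] == 2:                     # PRINT
            print(memory[memory[pc + 1]])         # -> 13
            pc += 2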

It was in US DoD-sponsored research like this paper that computer engineers began to communicate the enormous potential of what would eventually be known as software. The theoretical potential of general-purpose computing had been discussed a decade prior by mathematicians like Alan Turing, who predicted the feasibility of universal computing machines capable of implementing finite sequences of instructions to solve complex math problems. But it wasn't until the first general-purpose computing machines became operational that people like von Neumann began to practically demonstrate how they could be digitally instructed using their own memory. [126]

With 80 years of hindsight, we can read von Neumann's report and appreciate what might be one of the
biggest understatements of the century, when he made the claim that state machines capable of digitally
converting human instructions into numerical codes would enable society to build machines that could
"conveniently handle problems several orders of magnitude more complex than are now handled by
existing machines." [125] After making this assertion, von Neumann and his colleagues went on to pioneer the
development of the first stored-program general-purpose state machine, a.k.a. the modern computer.

Computers are by no means a new technology. Special-purpose state machines date at least as far back as 2,100 years to astronomical calculators like the Antikythera mechanism. In the two millennia since, the fundamental technological concept behind computers has not changed. They remain state machines, named after their primary function of computing things. [127]

People decided to name state machines after what they do (their function) rather than what they are
(their form). This is perhaps because their function has remained constant while their form continually
changes over time. A state machine's form can take many different sizes, shapes, materials, and
complexities depending on what it’s designed to compute. Nevertheless, the core function of computing
discrete mathematical states has not changed over two centuries. So for all intents and purposes, state
machines have remained the same computing technology.

Prior to the early 1800s, state machines were special-purpose, non-programmable instruments used to
assist people with very specific computations. The first viable general-purpose state machine and
forerunner of the modern digital computer was the analytical engine designed by English mathematician Charles Babbage in 1833. Babbage's concept was novel because it meant people wouldn't have to build expensive special-purpose instruments to make single computations anymore. One general-purpose state machine doubles as a nearly unlimited number of disembodied special-purpose state machines. Thus, using a general-purpose mechanical instrument to perform many different computations would cause the marginal cost of computing to decrease, while simultaneously turning the act of designing a special-purpose state machine into a disembodied abstraction.

This is a subtle but important distinction to make: thanks to the invention of general-purpose computers,
instead of having to build a special-purpose state mechanism to make a computation, engineers only
needed to think of a sequence of instructions to assign to a general-purpose state mechanism. The act of
computing changed from an exercise in machine-building to an exercise in abstract thinking. [128]

Unfortunately, the design of Babbage’s analytical engine was too complex and expensive (and the benefits
of general-purpose computing too poorly understood) for it to be manufactured in the 1800s, so Babbage
never lived to see his mechanical general-purpose state machine built. Nevertheless, the mere concept of
an operationally viable general-purpose state machine was revolutionary in the field of computing. Armed
with a general-purpose state machine, a person could perform an extraordinarily wide range of specialized
computations. For the first time, the primary limiting factor of computing was no longer the machine, but
the imagination of its programmer.

With the invention of general-purpose computing, the process of designing and building special-purpose
state machines to perform specific computations became a disembodied abstract concept separable from
the physical implementation of the machine making the computations. Machines which had previously
been physically impossible or impractical to build suddenly became feasible, because instead of having to
build a special-purpose state machine, a person simply needed to imagine what instructions they needed
to give a general-purpose state machine. By following these instructions, the general-purpose state
machine would, in effect, become the physical embodiment of a special-purpose state machine imagined
by the programmer. This process of getting general-purpose state machines to role-play as special-
purpose state machines is what’s known today as ordinary computer programming. We take this for
granted today but in the early 1800s, this was a revolutionary engineering concept.

Babbage’s analytical engine was inspired by the design of Jacquard machines, devices fitted to looms that
enabled the weaving of complex textile patterns by using punched cards to dictate the weave pattern of different colored
threads. Borrowing from this design concept, the analytic engine was programmed using punched cards.
The presence of a hole in the card indicated a symbolically important Boolean state like "1" or “true” while
the absence of a hole in the card indicated an opposing state like "0" or “false.” Using formal logical
methods theorized by people like George Boole (who lived at the same time as Babbage), the sequence of
holes punched into the card could issue a series of instructions to the state machine in a way that would
mimic how a human might program a computer by pressing buttons, flipping switches, or turning dials.
This technique for programming general-purpose computers was so effective that punched cards
remained a popular programming technique for more than a century, until they were made obsolete by new
technologies emerging in the 1980s. [129]
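
To make the punched-card encoding concrete, here is a minimal Python sketch (the variable names and the sample row are invented for illustration, not drawn from the source) showing how one row of hole/no-hole readings can be treated as Boolean states and assembled into a numerical code:

    # Illustrative sketch: one row of a punched card read as Boolean states.
    # A hole at a position is treated as "1" (true); no hole as "0" (false).

    def row_to_bits(row):
        # Convert hole/no-hole readings into a list of 1s and 0s.
        return [1 if hole else 0 for hole in row]

    def bits_to_value(bits):
        # Interpret the sequence of bits as a single unsigned integer.
        value = 0
        for bit in bits:
            value = (value << 1) | bit
        return value

    card_row = [True, False, True, True, False, False, False, True]  # one punched row
    bits = row_to_bits(card_row)
    print(bits, "->", bits_to_value(bits))  # [1, 0, 1, 1, 0, 0, 0, 1] -> 177

The card itself carries no meaning; the meaning lives entirely in the convention used to interpret the holes.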

Although Babbage’s analytic engine was too expensive to build, the disembodied and abstract nature of
computer programming it enabled made it possible for mathematicians to write sets of instructions for
Babbage’s analytic engine using nothing more than a published design concept. To that end, one of
Babbage’s collaborators, a mathematician named Ada Lovelace, published the design concept of an algorithm
in 1843 for the analytic engine to calculate Bernoulli numbers. By publishing these instructions, Ada
Lovelace became the world's first computer programmer. This illustrates how the art of computer
programming is an abstract and highly creative process akin to writing a fictional story. Just like
playwrights write stories for actors to perform, computer programmers write stories for general-purpose
state machines to perform. Unfortunately, Lovelace did not live to see a general-purpose machine
perform her story. [130]

5.2.2 Stored-Program Computers

Over the century following the publication of Lovelace’s program, the emergence of new techniques like
Boolean algebra and new technologies like electro-magnetically controlled relays made programmable
general-purpose state machines far more feasible to design and manufacture, reinvigorating interest in
general-purpose computing during the 1930s. Having studied the 100-year-old design principles of
Babbage's analytic engine, Alan Turing and other mathematicians started publishing theories about the
feasibility of universal computation by machines capable of implementing finite sequences of instructions
to solve complex problems. [131]

Inspired by Babbage, Harvard physicist Howard Aiken proposed the original concept for a general-purpose
electromechanical computer and began searching for a company to design and build it. International
Business Machines (IBM) Corporation accepted Aiken's request in 1937 and, after 7 years of design and
manufacturing, delivered the first general-purpose electromechanical computer to Harvard in February
1944. Formally called the Automatic Sequence Controlled Calculator (ASCC) but colloquially known as the
"Mark I" by Harvard staff, this 9,500-pound, 51-foot-long machine was immediately commandeered for
military purposes to assist with making computations for the US Navy Bureau of Ships. [132]

Weeks after IBM delivered the ASCC to Harvard in February 1944, John von Neumann arrived on campus
to commandeer the machine and put it to work on a series of computer calculations for a mysterious
project he was working on. During this timeframe, Neumann began to write highly classified computing
algorithms for the machine, including some of the most common algorithms used today (e.g. merge sort).
Impressed by the ASCC's general-purpose computing capabilities, Neumann quickly turned his attention
to the work of John Mauchly and Presper Eckert when he learned they had been commissioned by the US
Army Ballistic Research Laboratory to build something even more special than IBM's electromechanical
machine: the world's first fully-electronic general-purpose computer. [125]

The first fully-electronic general-purpose computer was commissioned by the US Army Ballistic Research
Laboratory and built for the US Army Ordnance Corps to perform ballistic calculations for firing tables.
Called the Electronic Numerical Integrator and Computer (ENIAC), Mauchly and Eckert's machine reduced
artillery trajectory calculation time down from 20 hours to 30 seconds; one ENIAC replaced 2,400 humans.
But immediately after it was built, John von Neumann once again appeared on campus, commandeered
the machine, and put it to work on a series of different computer calculations for his mysterious project.
[133]

This time, however, John von Neumann's mysterious project became far less secret. News of the
Hiroshima and Nagasaki bombings stunned the world as the US President publicly disclosed the project
on which Neumann and his colleagues had been working: the Manhattan Project. While the world
celebrated the surrender of Japan, Neumann continued to (literally) plug away on the first fully-electronic
general-purpose programmable computer to help him design the next version of a nuclear bomb.

Instead of punched cards, the ENIAC used plugboards as its state changing mechanism. This allowed it to
run a program at electronic speed rather than at the speed of a punched card reader. The tradeoff to this
approach was that it took weeks to figure out how to map a computing problem to the plugboard, and
days to rewire the machine to execute each new program (in contrast to simply swapping punched cards).
Nevertheless, ENIAC was a powerful calculating device that left an impression on Neumann and others
who operated it. Its impressive capabilities, combined with how tedious and cumbersome it was to
program, inspired Neumann and his colleagues to search for a solution on how to improve the machine
by converting the state changing mechanism from manual circuit plugs into electronically-activated
actuators. Applying Boolean logic to the position of the actuators would then allow programming
actions to be digitized and stored as actuator states within the state mechanism. [134]

Stored-program computers, Neumann theorized, would not only allow a computer to store its own
programming instructions as states within its own “memory organ” (a.k.a. the state space dedicated to
storing the information needed for computer programs), it would also allow the machine to automatically
perform operations on its own programming instructions. Neumann believed this architecture would
allow general-purpose state machines to handle problems several orders of magnitude more complex than
even the world's most powerful computers of the time could handle, like the ENIAC he used to design the detonation device
for the first atomic bomb. [125]
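
To illustrate the stored-program idea in modern terms, the following is a deliberately simplified Python sketch of a machine whose instructions sit in the same memory as the data they operate on. The opcodes and layout are invented for this illustration; they do not describe the EDVAC's actual instruction set.

    # Illustrative toy only: opcodes 1 = LOAD addr, 2 = ADD addr, 3 = STORE addr, 0 = HALT.
    memory = [
        1, 8,     # LOAD  the value at address 8 into the accumulator
        2, 9,     # ADD   the value at address 9 to the accumulator
        3, 10,    # STORE the accumulator at address 10
        0, 0,     # HALT
        40, 2, 0  # data cells at addresses 8, 9, and 10
    ]

    accumulator = 0
    pc = 0  # program counter: which memory cell holds the next instruction

    while memory[pc] != 0:              # run until the HALT opcode is reached
        opcode, operand = memory[pc], memory[pc + 1]
        if opcode == 1:                 # LOAD
            accumulator = memory[operand]
        elif opcode == 2:               # ADD
            accumulator += memory[operand]
        elif opcode == 3:               # STORE
            memory[operand] = accumulator
        pc += 2

    print(memory[10])  # 42 -- produced purely by changing stored states

The program and its data are indistinguishable in the "memory organ"; both are nothing more than stored states, which is precisely what lets the machine operate on its own instructions.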

Inspired by the potential of these types of machines, Neumann and his colleagues wrote the
aforementioned report to the US Army Ordnance Corps. After getting the funding and green light to design
and build one, Neumann teamed up with Mauchly and Eckert to create the Electronic Discrete Variable
Automatic Computer (EDVAC), the world’s first stored-program general-purpose computing machine.
[135]

5.3 A New (Exploitable) Belief System

“Lessons learned over centuries are lost when older technologies are replaced by newer ones.”
Nancy Leveson [136]

5.3.1 How to Talk to Thinking Machines

General-purpose and stored-program computing were not only technological breakthroughs, they were
also two major back-to-back leaps in human abstract thinking. The first leap in abstract thinking came
when general-purpose state machines converted the exercise of computing from a physical machine-
building process to an abstract design process. The second leap in abstract thinking came when computer
scientists introduced methods for storing computer programming information as physical states on a state
mechanism using symbolically and syntactically complex communication mediums like machine code.

Instead of applying symbolic meaning to specific audible waveforms or scribbled images like we do with
the words we speak or the letters we write, early computer scientists figured out how to apply symbolic
meaning to state changes within a general-purpose state machine. They accomplished this by applying
mathematically discrete Boolean logic to state-changing mechanisms like switches and transistors to
metacognitively convert them into “bits” of information, in much the same way humans metacognitively
convert symbols or wave patterns into words. This gave rise to a novel form of semantically and
syntactically complex higher-order language consisting of binary and multi-decimal symbols called
machine code.
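
A short Python illustration of this point (assumed for illustration, not drawn from the source): the same pattern of two-valued states carries no inherent meaning, and only "means" a number, a letter, or a truth value because humans agreed to interpret it that way.

    # Illustrative: the same eight two-valued states, read three different ways.
    states = 0b01000001   # off-on-off-off-off-off-off-on

    print(states)               # read as an unsigned integer: 65
    print(chr(states))          # read as a text symbol: A
    print(bool(states & 0b1))   # read as a single Boolean flag: True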

From the broader context of metacognition and the evolution of human abstract thinking skills, it's hard
to overstate how remarkable machine code is. Stored-program computing and the emergence of
machine code represent the emergence of an entirely new symbolic language as well as a new medium
through which syntactically and semantically complex information can be communicated. Using machine
code, it is possible to convert practically any type of physical state-changing phenomenon in the universe
into digitized bits of information (this is an important concept to remember for future discussions about
Bitcoin, because the author will assert that bitcoin represents the act of applying machine code to
quantities of electric power drawn out of the global electricity grid).

At first glance, machine code seems deceptively unremarkable. Sapiens have been creating and using
higher-order languages for tens of thousands of years. We have been converting physical state-changing
phenomena like audible waveforms into information since we first discovered spoken language, so
what’s the big deal? What makes machine code so profound is the recipient of the information. Machine
code was not invented for sapiens to communicate directly with other sapiens, but for sapiens to
communicate directly with machines.

Therein lies the significance of what computer engineers designed and built in the 1940s. They didn't just
build gizmos, they invented a completely new, machine-readable language – a medium through which
humans can “talk” to inanimate objects using machine code, and just as astonishingly, a medium through
which inanimate machines can “talk” back to humans. This new form of symbolic language created an
entirely new form of storytelling where people can communicate with each other indirectly and
asynchronously through the computers they program, as shown in Figure 59.

Figure 59: Illustration of the Difference Between Traditional Language and Machine Code
[137, 76]

By issuing computing instructions to a general-purpose state machine using a new form of machine-
readable language, Neumann and his team realized it would be possible for a computer programmer to
produce practically any conceivable sequence of operational instructions because there is theoretically
no limit to the state space (i.e. the number of different possible states) that a general-purpose state
machine can have. Since there is no theoretical limit to the size of the state space or the sequence of
operations that a programmer can create using a stored-program digital computer, a programmer has
near-unlimited design options and flexibility. With the right set of instructions, they can make a general-
purpose computer perform any operation. Neumann described this potential as follows:

“It is easy to see by formal-logical methods, that there exist codes that are in abstracto adequate to
control and cause the execution of any sequence of operations which are individually available in the
machine and which are in their entirety, conceivable by the program planner.”

Neumann explains how instead of having to physically execute a sequence of operations to instruct a
general-purpose state machine how to behave, the programmer of a stored-program general-purpose
state machine simply needs to think about a sequence of operations in abstracto, that is, from a purely
imaginary or theoretical point of view. Then, the programmer could use machine-readable languages to
speak to the machine and tell it how to operate, rather than to manually operate it. These ideas were the
precursor to what we now call “operating systems.” The important takeaway is that computer operating
systems are belief systems, not physical systems. Like all belief systems, modern computer programs are
abstractions – figments of the imagination that are programmed into a computer rather than spoken or
written. Therefore, like all belief systems, computer programs are vulnerable to systemic exploitation
(more on this later).

With the invention of stored-programming computers and machine code, sapiens took a major leap
forward in abstract thinking, and the sociotechnical consequences of this major leap are still not yet fully
understood. Neumann’s stored-program computer removed the physical constraints of computer
programming by moving it out of the domain of shared objective reality and into the domain of shared
abstract reality. Unbounded by physical constraints and with infinitely scalable state spaces, the primary
limiting factors of general-purpose computer programming became the imagination and design skills of
the programmer. Insofar as a computer programmer can think of a functional design and communicate it
effectively to a general-purpose computer, a general-purpose computer can faithfully act it out.

5.3.2 A New Way of Storytelling

A core lesson of computer science is that computer programs don’t physically exist. They are abstract
concepts hypostatized as concretely real things for the sake of reducing the intellectual effort required to
use stored-program general-purpose state mechanisms. Although society has adopted the habit of
describing software as if it were a physical thing that is “loaded” onto a computer, it simply isn’t. And
although society has adopted the habit of describing software as if it were comprised of an arrangement of
objects (e.g. folder, trash can, recycle bin, thumbnail), it simply isn't. This is merely an abstraction
technique used to make it easier to understand the complex emergent behavior of computer programs.

Modern computers are usually made of complex circuitry and electrons stored in floating gate transistors.
When a program is “stored” onto a computer, nothing is physically added to the machine. The computer
remains the same general-purpose state machine built out of the same complex circuitry and electrons
stored in floating gate transistors. When a program is “removed” from a computer, nothing is physically
removed from the machine. Before a computer runs a program, all that physically exists is a state machine
made of complex circuitry and electrons stored in floating gate transistors. While a computer runs a
program and after it completes the program, all that physically exists is the same machine made of the
same circuitry and electrons stored in floating gate transistors. At no point before, during, or after a
computer program is added, removed, or executed does anything get physically added or removed from
the system. The only physical change that occurs is a state change in the circuitry of the machine. A switch
is flipped, or electrons move from one side of a floating gate transistor to another.
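
The following toy Python sketch (illustrative only; a small byte array stands in for the machine's circuitry) mimics this point: "installing" and "deleting" a program never adds or removes anything, it only changes the values the same cells hold.

    # Illustrative: the "computer" is the same eight cells before, during, and
    # after a program is "installed" -- only their states change.
    memory = bytearray(8)
    print(memory.hex())                 # 0000000000000000

    memory[0:4] = b"\x01\x08\x02\x09"   # "store" a program: same cells, new states
    print(memory.hex())                 # 0108020900000000

    memory[0:4] = bytes(4)              # "delete" the program: same cells again
    print(memory.hex())                 # 0000000000000000 -- nothing added or removed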

A computer program is technically nothing more than symbolic meaning assigned to the state changes of
a state machine. Just like sapiens learned how to assign symbolic meaning to audible wave patterns or
visible shapes to form spoken and written language, they also learned how to assign symbolic meaning to
state changes on a state machine using Boolean logic. In the case of machine code, the symbolic meaning
assigned to a state change is usually binary code like “1” or “0,” which is enough to generate increasingly
more semantically and syntactically complex information. This is one of the many reasons why computer
science is such a fascinating (and difficult to grasp) field. Sapiens figured out how to apply symbolic
meaning to computer circuitry, which can then repeat that symbolic meaning back to humans using
communication protocols that humans can understand. Thanks to computer theory, we learned how to
talk to machines and taught them how to talk back to us.

Revisiting the core concepts of metacognition discussed in the previous chapter, the reader is reminded
that within the domain of shared objective physical reality, our spoken words, written words, or states of
a state machine have no inherent meaning. It takes a powerful neocortex capable of abstract thinking to
assign semantically or syntactically complex symbolic meaning to these phenomena. All computer
programs are abstractions – figments of the imagination which do not physically exist as things with mass,
volume, or energy in physically objective reality, completely incapable of producing their own physical
signatures. Computer programs are imaginary concepts within shared abstract reality that are transferred
from neocortex to neocortex using physical media, just like all other forms of imaginary thoughts and
ideas travel between sapient brains.

A common source of confusion about computer science occurs when people conflate the physical media
through which a computer program is communicated with the program itself. There is a major difference
between an imaginary sequence of operating instructions spoken to a machine using symbolic language,
and the physical media through which those instructions are communicated to and from a computer.
When programming a modern computer, what physically exists is the machine through which sapiens
communicate their abstract ideas using a carefully programmed sequence of state-changing operations.
The rest is abstract.

When a human speaks to another human, the audible waveform of their voice is physically real, but the
symbolic meaning and information assigned to those audible waveform patterns is abstract. If a person
were to believe that words are physically real things, they would be making a false correlation between
abstract inputs detected by the brain (i.e. the symbolic meaning within a semantically and syntactically
complex higher-order communication protocol) that were produced by their imagination, and their
sensory inputs (i.e. the audible pattern detected by their ears). The same reasoning holds true for
computer programs. Just because your eyes can see a computer acting out a program doesn’t mean the
program is physically real; you are looking at a general-purpose state machine, not a computer program.

Like a painter’s canvas, the digital computer is a portal through which an imaginary idea travels from one
brain to the other. The person depicted in a painting isn’t physically real; the only thing that is physically
real is the paint and canvas through which the person’s image is communicated to its viewer. Likewise,
what the reader sees written on this page is not the author’s ideas in and of themselves, but merely the
medium through which the author’s ideas pass from his neocortex to the reader’s neocortex. Using this
same logic, the switches on the circuit board and the electrons stored within the floating gate transistor
are not the program, but merely the physical medium through which an abstract idea is communicated
from the computer programmer to the computer user.

5.3.3 Computers are Electromechanical Marionettes Acting out Scripts Written by Neo Playwrights

To make the abstract nature of computer programming more digestible for readers who don’t have
computer science backgrounds, the reader is invited to reflect on Shakespeare’s Romeo & Juliet. As
previously discussed, sapiens use their abstract thinking skills to design virtual worlds within their minds,
and then they get other people to envision the same world (i.e. synchronize their individual abstract
realities into a single shared abstract reality) via syntactically and semantically complex symbolic
languages using storytelling.

The story of Romeo & Juliet was an imaginary scenario conceived within the neocortex of a notable
storyteller named William Shakespeare a little over four centuries ago. Our own neocortices can envision
what Shakespeare imagined because Shakespeare used a symbolic language (i.e. English) as a medium
through which he could share what otherwise existed exclusively within his imagination.

The English symbols of Shakespeare’s language were copied and printed onto pieces of paper and given
to actors as instructions. These instructions are commonly called scripts. Actors speak and move according
to the instructions received from their scripts. If Shakespeare did a good job writing these instructions and
if actors did a good job at following them, then a complex effect will emerge: an audience of people will
be entranced by the performance, so emotionally captured by what they’re watching that they’ll forget
that it’s completely fictional – they’ll forget that Romeo and Juliet don’t exist anywhere except exclusively
within their imaginations.

Because of physical actions taken by role-playing actors in shared objective reality, people can see with
their own eyes (a.k.a. sensory inputs) a physical representation of Shakespeare’s imagination. Through
the actors’ performance, an audience can experience a full range of emotions that Shakespeare
experienced when he first conceived of the story, even though many lifetimes have passed since the last
synapse of his neocortex fired. Such is the beauty of storytelling and other forms of art – the ability to
share one’s ideas far beyond one’s own physical limitations, long after one’s tenure on Earth has ended.

Nowhere in the sequence of events between Shakespeare’s first imagination of Romeo & Juliet, to an
audience of people emotionally entranced by a theater’s performance 400 years later, did Romeo & Juliet
become anything more than a fictional story. Based on the very definition of fiction, we know that Romeo
and Juliet never physically existed. The fact that the story Romeo & Juliet can be conceived by multiple
different brains using tools like symbolic language printed on a physical script, physically-spoken
soundwaves, or physically-moving actors, doesn’t mean the imaginary story of Romeo & Juliet is anything
except imaginary. Romeo & Juliet is merely an imaginary story told by a gifted storyteller.

If someone were to argue that the imaginary story of Romeo & Juliet physically exists because it can be
written, spoken, or acted out in physically objective reality, they would be guilty of hypostatization, the
previously discussed fallacy of ambiguity. They would be doing that thing sapiens are instinctively inclined
to do where they believe something imaginary is something concretely real because of a false-positive
correlation between matching abstract and sensory inputs, as illustrated in Figure 60. Because the motion
of the actors matches a scenario imagined by the neocortex, people are quick to believe the imaginary
scenario is physically real. Much like people who believe the king’s abstract power is physically real
because knights display physical power when they get orders from the king, people will believe Romeo &
Juliet is physically real because actors physically perform it based on the orders they receive from
Shakespeare. Of course, it is a logical error to believe that a fictional story which exists only within the
imagination is physically real just because the pages on which it’s written are physically real, or the stage
on which it’s performed is physically real and can be seen, smelled, touched, tasted, and heard.

Figure 60: False Positive Correlation Produced by the Brain’s Realness-Verification Algorithm
[76, 138]

By now the reader may be asking, what does Shakespeare have to do with computer programming?
Simply change English symbols encoded on parchment to Boolean logic encoded on transistors, and
replace the actors with electro-mechanical marionettes, and you get the modern art of computer
programming. Computer programmers are, fundamentally speaking, storytellers. They conceive of
fictional stories and create abstract descriptions of imaginary scenarios, events, objects, and ideas.
Computer programmers write their stories using scripts and they hand those scripts to machines just like
Shakespeare would hand his scripts to actors. Then, the general-purpose computers role-play according
to the directions they receive in their scripts, just like actors role-play Romeo & Juliet.

One could say that Romeo & Juliet is an imaginary story “stored” within the pages of a script, just like a
computer program is an imaginary story “stored” within the circuitry of a computer. This is a technique
computer programmers use to make it easier to talk about programming. For the sake of simplicity,
engineers have adopted the habit of hypostatizing computer programs and treating them as if they were
concretely real things comprised of concretely real objects. Like a form of shorthand notation, it’s simply
easier on the brain to communicate complex abstract things like computer programs as if they were
physically real things, because that's what we're used to experiencing. As the reader is no doubt
experiencing right now, it's quite tedious to explain and to comprehend how computer programs actually
work, so we simplify using abstractions.

With the invention of stored-program general-purpose state machines, sapiens created something more
complex than they have the capacity to fully comprehend. Faced with the overwhelming complexity of
modern digital computers, sapiens do what they have been doing since the Upper Paleolithic era: they
put their neocortices to work coming up with abstract explanations for things they can’t fully comprehend,
and they pretend like something imaginary is something physically real for the sake of simplicity. This is
an extraordinarily helpful technique for reducing the metacognitive burden of managing the complexity
of designing and operating computer programs, but it’s important for the reader to understand that it is
not technically accurate to describe a computer program as something physically real. This is an especially
important point to understand prior to future discussions about Bitcoin.

Computer programmers use syntactically and semantically complex symbolic languages like machine
code, assembly language, or higher-order programming languages as a medium through which they share
their ideas with machines, writing scripts for electromechanical marionettes to give a performance for an
audience. These scripts and their corresponding symbols have taken many physical forms over the past
80 years. In the early days of general-purpose computing, these scripts were pieces of paper just like
Shakespeare would have used, except with holes punched in them rather than images printed on them.
But after the invention of stored-program computing, scripts took the form of circuits built into digital
computers. Today, scripts often take the form of electrons stored in floating gate transistors.

General-purpose state machines can take different programmable states based on the instructions given
to them by a computer programmer. If a computer programmer does a good job of imagining a fictional
design and communicating it to the computer via symbolic language, and if the machine does a good job
following its instructions, then a complex effect will emerge: a desired computation will be made, a
desired behavior will emerge, and an audience will be so entranced by the performance of their
electromechanical marionettes that they will become convinced what they see is something physically
real. A well-programmed computer will cause a person to lose sight of the fact that they are sitting still,
doing little more than staring at an array of light-emitting diodes glued to a plane of glass and controlled
by a general-purpose computer, all of which was painstakingly orchestrated across decades of engineering
to present something which mimics the behavior of something observed in shared objective reality, but
isn’t actually there. It’s as real as Romeo and Juliet.

Nowhere in the sequence of events between a computer programmer's imaginary thoughts, to an
audience of people enthralled by an array of light-emitting diodes, did the story written by the computer
programmer become anything more than imaginary. Just like skilled playwrights and highly capable actors
can write and follow scripts to make fictional tales like Romeo & Juliet look and feel physically real, so too
can skilled computer programmers and highly capable machines write and follow scripts to make fictional
tales like Solitaire look and feel physically real. People will become so moved by the performance of their
general-purpose computers that they will completely lose sight of the fact that what they see is merely a
canvas of symbols – an abstract representation of a completely virtual reality that would otherwise have
no detectable physical signature if it weren’t for the presence of the electromechanical marionette.

The general-purpose computer is a machine commanded to live-action role-play (LARP) according to
whatever lies within the imagination of its programmer. Everything printed on the screen of a general-
purpose computer is a computer-generated illusion. Whether it be a line of text, or a detailed image, or
an imaginary object, or a three-dimensional interactive environment that looks and behaves just like
environments experienced in shared objective reality, what a machine shows on a screen is virtual reality.
Virtual reality is, by definition, not physically real. The only knowledge a person can gain from looking at
a computer screen is symbolic knowledge, not experiential knowledge. This is true even if what’s shown
on screen is an image of something real or an event which did physically happen.

A stored-program general-purpose computer can therefore be thought of as a symbol-generating abstract
reality machine. With the invention of digital computers, sapiens transformed their ability to
communicate their abstract thoughts to each other to form a new type of shared abstract reality. Instead
of applying symbolic meaning to words, they apply symbolic meaning to electromechanical state changes
on a circuit board. Instead of utilizing actors to role-play imaginary stories, they use machines. This is a
core concept in computer science that is essential to understanding software’s systemic security flaws.
Software is nothing but a belief system, and belief systems are vulnerable to exploitation and abuse,
particularly by those who pull the strings of our computers.

5.4 Software Security Challenges

“The first step in creating safer software-controlled systems is recognizing that software is an abstraction.”
Nancy Leveson [136]

This section gives a technological deep-dive into some of the most challenging aspects of computer
programming and cyber security. These concepts lay the groundwork for understanding how and why
common cyber security challenges could be alleviated by physical cost function protocols like Bitcoin.

5.4.1 Software Security is Fundamentally a Control Structure Design Problem

If the total amount of money stolen through cybercrime were treated as a country's economy, then it would
represent the world's third-largest economy after the US and China. In a special report issued by
Cybercrime magazine, Cybersecurity Ventures stated that it expects “global cybercrime to grow by 15
percent per year over the next five years, reaching $10.5 trillion USD annually by 2025, up from $3 trillion
USD in 2015. This represents the greatest transfer of economic wealth in history, risks the incentives for
innovation and investment, is exponentially larger than the damage inflicted from natural disasters in a
year, and will be more profitable than the global trade of all major illegal drugs combined.” [139]

Thanks in part to a substantial increase in nation state sponsored hacking activities, some have claimed
that there is a “hacking epidemic” plaguing the modern field of cybersecurity which will cause cyber
attacks to increase an estimated 10X between 2020-2025. This may not be surprising to some people
considering how routine ransomware attacks, data breaches, and other major cyber security incidents
have become. Cyber security is now such a significant challenge for US national security that the White
House recently issued an executive order addressing it. According to US President Biden, improving the
nation’s cyber security is essential to national strategic security and stability. “The prevention, detection,
assessment, and remediation of cyber incidents is a top priority to this Administration,” President Biden
has declared, “and essential to national and economic security.” [140, 139]

Today’s substantial amount of cybercrime suggests that software security engineering is challenging, and
there is room for improvement. An important first step towards improving software security is
understanding that software is fundamentally an abstract belief system which is vulnerable to exploitation
and abuse. When software leads to unexpected or undesired behavior like a cyber security incident,
people often claim their software “broke,” but this is just a figure of speech. Nothing physically breaks
during a software malfunction as it is physically impossible for something which doesn’t physically exist
to physically break. What actually happens during a software hack is that people find a way to exploit the
software’s design logic. This is why subject matter experts in software and system safety design like
Leveson assert that the first step to creating safer and more secure software-intensive systems is to
remind yourself that software is only an abstraction – it’s all in one’s imagination.

Computers behave exactly as they are instructed to behave. Therefore, when a computer produces an
unexpected or undesired emergent behavior, the root cause of that behavior is most likely the design of
the software. Except in very rare cases where computer hardware components are physically
damaged or experience something like a short or an unintended bit flip, computers don’t fail to operate
exactly as they’ve been programmed to operate. By that same logic, unless the state-changing mechanism
of a state machine has been physically impaired, there’s also no such thing as a “failed” or “broken” state
of a computer, because the machine was explicitly designed and built to be able to take that state.

What usually happens when a computer program gets hacked or leads to a safety or security incident is
that the original computer programmer encoded logical constraints which were insufficient to stop a
belligerent actor from systemically exploiting the logic of the computer program. As many
computer programmers have learned over the years, encoding logical constraints into software doesn’t
eliminate the threat of people exploiting the software’s logic, it just changes the way the software’s logic
can be exploited (note how this is the exact same concept discussed in the previous chapter about how
laws don’t prevent people from exploiting or breaking the law). Combining this observation with the fact
that software can’t break, then that means software security problems are fundamentally design
problems. If software gets hacked or behaves in a way that’s unexpected and it leads to an undesired
incident, it’s almost always the case that the root cause of that incident was the result of the programmer
producing a flawed design. Computers cannot be blamed for diligently and faithfully acting out the script
given to them by their director; the script is to blame. This is a core concept in the field of software systems
safety and security. [136]
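
As a contrived Python illustration of this idea (the account names, function, and scenario are assumptions made for this example, not taken from any real incident), consider a toy transfer function whose designer forgot to constrain one input. Nothing "breaks" when it is abused; the machine faithfully executes the flawed logic it was given.

    # Contrived illustration: the design never constrains `amount`, so the
    # program can be exploited without anything physically breaking.
    balances = {"alice": 100, "mallory": 10}

    def transfer(sender, receiver, amount):
        # Flawed design logic: no check that amount is positive.
        if balances[sender] >= amount:
            balances[sender] -= amount
            balances[receiver] += amount

    transfer("mallory", "alice", -1000)  # a "negative transfer" drains Alice
    print(balances)                      # {'alice': -900, 'mallory': 1010}

The computer did exactly what it was told; the design of the control logic, not the machine, is what was exploited.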

Recall how a computer program represents a sequence of control signals issued to a state machine. All
software-related safety and security incidents are the byproduct of control signals which create insecure
or hazardous system states which then lead to an undesired loss event. The goal of software systems
security is to identify which control signals could lead to a hazardous state, and then design a control
structure which eliminates or constrains those control signals. This is a foundational concept not just in
software security engineering, but for system safety in general, as outlined by safety and security
engineering techniques like STAMP and STPA. [141]

There are four primary ways software control signals can produce insecure or hazardous system states.
First, software can provide a control signal that overtly places the system directly into an insecure or
hazardous state. Second, software could not provide a control signal that is needed to prevent a system
from being placed into an insecure or hazardous state. Third, software can provide a potentially secure
control signal, but do it too late or too early, resulting in an insecure or hazardous system state. Lastly,
software can stop providing a potentially secure control signal too soon, or keep providing it for too long,
resulting in an insecure or hazardous state. [124, 136]
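
A rough Python sketch of how an analyst might enumerate these four categories for a single control signal (the brake-controller example and scenario wording below are assumptions made for illustration, not from the source or from any published STPA analysis):

    # Assumed toy example: candidate unsafe control actions for one control
    # signal of a hypothetical brake controller, one per category above.
    control_action = "apply_brakes"
    candidate_unsafe_control_actions = {
        "provided":            "applied while cornering at speed on ice -> skid hazard",
        "not provided":        "not applied when an obstacle is detected -> collision hazard",
        "too early / too late": "applied too late after obstacle detection -> collision hazard",
        "stopped too soon / applied too long": "released before the vehicle halts -> collision hazard",
    }

    for category, scenario in candidate_unsafe_control_actions.items():
        print(f"{control_action} [{category}]: {scenario}")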

As discussed by pioneers in systems safety and security engineering, the systems approach to improving
security is to anticipate insecure or hazardous systems states using abstract thinking exercises like
scenario planning, then to identify what control signals (or lack thereof) would cause the system to reach
these undesired states. Once those sensitive control signals have been identified, the role of the security
engineer is to design a control structure that either eliminates those control signals or constrains them as
much as possible. To accomplish this, software security designers must keep strict account of all the
different control signals (or lack thereof) a piece of software can execute. [124, 141]

In systems security theory, the root cause of all software security incidents is attributed to insufficient
control structure designs which didn’t properly eliminate or constrain “unsafe” or sensitive control signals.

Therein lies the fundamental challenge of software security engineering; it requires an engineer who can
understand and anticipate different combinations of control actions or inaction which should or shouldn’t
occur. Software security engineers must be able to recognize these sensitive control signals and design
control structures which eliminate or constrain those signals, which is quite hard to do using only logical
constraints encoded into software, while still meeting desired functionality and behavior. This is one of
the biggest challenges in software security which makes it so different from security engineering in other
industries. Because software doesn’t physically exist, it’s not possible to secure software using physical
constraints unless the underlying state mechanism is physically constrained (this is the single most
important concept that the reader should note prior to a discussion about proof-of-work physical cost
function protocols like Bitcoin, because what proof-of-work represents is the act of physically constraining
software by physically constraining the underlying state mechanism).

But why exactly is it so challenging to design software control structures which can eliminate or sufficiently
constrain a computer from sending unsafe control signals using logical constraints rather than physical
constraints? The author offers six explanations. First, computers can have infinitely expanding state
spaces comprised of an infinite number of hazardous states. Second, programmed computers have shape-
shifting protean behavior which gives them unpredictable, non-continuous (thus non-intuitive) emergent
behavior. Third, because it’s imaginary, it’s very easy to build software with unmanageable complexity.
Fourth, software control signal interfaces are invisible and physically unconstrainable. Fifth, software
design specifications are arbitrary and semantically ambiguous, and the software engineering culture of
information hiding can also hide critical security information. Lastly, untrustworthy software
administrators deliberately design systems which give themselves abstract power and control authority.

5.4.2 Software Security Challenge #1: Infinitely Expandable State Spaces with Infinite Hazardous States

The first reason why software security engineering is challenging is because the state space of most
computers is practically infinite. As Von Neumann famously observed, there is theoretically no limit to the
number of states that stored-program general-purpose state mechanisms can have. Unfortunately, this
means there’s also no theoretical limit to how many insecure or hazardous states a programmed
computer can have. Consequently, as software becomes larger and more complex, the size of its
hazardous state space increases exponentially, often far exceeding what computer programmers can
reasonably expect to navigate.

This presents an extraordinary challenge for software security engineers who are responsible for
understanding a given computer program’s hazardous state space and designing control structures which
eliminate or constrain control actions which would cause the system to enter that hazardous state space.
If the hazardous state space is practically infinite, then it’s practically impossible to avoid all hazardous
states. [124]
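
A back-of-the-envelope Python sketch makes the scale of the problem clear: a state mechanism with n two-valued elements has 2**n possible states, so even a modest amount of memory dwarfs anything that could be exhaustively checked.

    # Order-of-magnitude sketch: the number of possible states doubles with
    # every additional two-valued element in the state mechanism.
    for n_bits in (8, 64, 1024):
        print(f"{n_bits} bits of state -> 2**{n_bits} possible states")

    # Even checking one billion states per second, exhausting the states of a
    # mere 128 bytes (1024 bits) of memory would take more than 10**290 years.
    seconds_per_year = 60 * 60 * 24 * 365
    print(2**1024 / 1e9 / seconds_per_year)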

5.4.3 Software Security Challenge #2: State-Changing Mechanisms Behave Like Shape-Shifting Monsters

The discrete nature of states also means a computer can have dramatically different emergent behavior
despite minor state changes. A state-changing mechanism can be thought of as a bipolar, shape-shifting
monster. With a seemingly minute and inconsequential state change, a computer’s emergent behavior
can transform from something harmless to something significantly hazardous. This non-continuous
behavior makes computers unpredictable and “one wrong move” away from catastrophic malfunction
(note how this bipolar behavior is a popular plot line in cinema). One seemingly minor control action (or
inaction) can cause a discrete state change that causes a programmed computer to behave in surprising
ways, and it is practically impossible for software designers to anticipate all the possible different
combinations of unsafe control actions which could lead to every possible hazardous state change within
a given state space. This means it’s practically impossible for software engineers to know every single
“wrong move” or unsafe control action a complex piece of software can make.
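
The following contrived Python sketch (the actuator, bit assignments, and messages are assumptions made for this example) shows the discontinuity in miniature: the same toy program, separated by a single bit of state, produces a harmless behavior in one case and a hazardous one in the other.

    # Contrived sketch: one bit of state separates harmless from hazardous
    # emergent behavior; there is no gradual transition in between.
    def actuator_command(mode_bits):
        motor_on = bool(mode_bits & 0b01)   # bit 0: motor enabled
        bypassed = bool(mode_bits & 0b10)   # bit 1: safety interlock bypassed
        if motor_on and not bypassed:
            return "run motor at normal speed"
        if motor_on and bypassed:
            return "run motor with interlock bypassed"   # hazardous state
        return "motor idle"

    print(actuator_command(0b01))  # run motor at normal speed
    print(actuator_command(0b11))  # run motor with interlock bypassed (one bit away)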

5.4.4 Software Security Challenge #3: It’s Very Easy to Make Software Unmanageably Complex

Because software is abstract and because state mechanisms have infinitely expanding state spaces, there
is practically nothing limiting the complexity of the design of computer programs. In her book on systems
safety engineering, Nancy Leveson offers an explanation for why this can make software engineering
exceptionally difficult, citing observations of subject matter experts like Parnas and Shore and software’s
so-called “curse of flexibility.” [124]

“In principle,” Leveson explains, “[software’s flexibility] is good – major changes can be made quickly and
at seemingly low cost. In reality, the apparent low cost is deceptive… the ease of change encourages major
and frequent changes, which often increases complexity and rapidly introduces errors.” [142, 143, 124]

Shore explains software’s curse of flexibility by comparing software engineering with aircraft engineering.
When designing an aircraft, “feasible designs are governed by mechanical limitations of the design
materials and by the laws of aerodynamics. In this way, nature imposes discipline on the design process,
which helps to control complexity. In contrast, software has no corresponding physical limitations or
natural laws, which makes it too easy to build enormously complex designs. The structure of the typical
software system can make a Rube Goldberg design look elegant in comparison. In reality, software is just
as brittle as hardware, but the fact that software is logically brittle rather than physically brittle makes it
more difficult to see how easily it can be ‘broken’ and how little flexibility actually exists.” [143]

Leveson argues that software makes dramatic (and severely inappropriate) design changes so easy to
execute that it gives software engineers false confidence and encourages them to begin premature
construction of a system, leading to poor designs that remain unchanged later in development. “Few
engineers would start building an airplane before the designers had finished the detailed plans,” she
asserts, yet this is the norm in software development. [124]

Another issue emerging from software’s flexibility is the ease with which it’s possible to achieve partial
success, at the expense of creating unmanageable design complexity. “The untrained can achieve results
that appear to be successful, but are really only partially successful,” Leveson explains. “Software works
correctly most of the time, but not all the time. Attempting to get a poorly designed, but partially
successful, program to work all of the time is usually futile; once a program’s complexity has become
unmanageable, each change is as likely to hurt as to help. Each new feature may interfere with several old
features, and each attempt to fix an error may create several more. Thus, although it is extremely difficult
to build a large computer program that works correctly under all required conditions, it is easy to build one
that works 90 percent of the time.” Comparing this concept to the design of physical systems, Shore notes
how inappropriate it would be to build an airplane that flies 90% of the time. [124]

Shore also notes how, for some reason, the general public often has few objections about software
engineers attempting to build complex software without appropriate design knowledge and experience
in the field they’re writing software for. Few people would dare to fly in an airplane designed and built by
people who have had no formal training or education in aerospace engineering, yet people often have no
problem entrusting software (to include safety or security-critical software) to teenagers with no
background in computer science or even in the field they're working in. Thanks to advances in computer
programming languages, it is not difficult for people with no background in computer science, systems
engineering, or systems security to teach themselves how to code – all it takes to learn how to program a
computer is to simply take the time to understand a computer programming language, as if it were any
other type of foreign language. And because computer programmers are often in high demand, it is also
not uncommon for programmers to be hired to immediately start designing and building software
infrastructure for major systems with which they have no experience.

Shore also notes how there is little physical or self-enforced discipline in software engineering like there
is in other fields of engineering – a trend which seems to continue even as the population becomes
increasingly reliant on computer programs. [124] He argues that the lack of physical constraints in
software design and development creates extra responsibility on computer programmers to have the self-
discipline not to produce overly complex and unmanageable designs which can lead to unexpected or
undesired behavior, but unfortunately many computer programmers shrug off this responsibility. [124]

“Like airplane complexity, software complexity can be controlled by an appropriate design discipline. But
to reap this benefit, people have to impose that discipline; nature won’t do it. As the name implies,
computer software exploits a ‘soft’ medium, with intrinsic flexibility that is both its strength and its
weakness. Offering so much freedom and so few constraints, computer software has all the advantages
of free verse over sonnets; and all the disadvantages.” [143]

Here the reader should note how Shore makes a direct comparison between software and free verse
written by storytellers. This is yet another reminder that the act of programming is fundamentally an act
of writing a fictional story; a script for a computer to role-play. Just like storytellers can produce abstract
imaginary realities where they are completely uninhibited by the physical constraints of shared objective
reality, so too can software engineers. Shore explicitly describes this as a disadvantage because it removes
the “natural forces” which constrain complexity, prevent poor design, or stop a developer from producing
designs which seem functional, but are logically flawed and/or physically impossible to engineer. This
concept is illustrated in Figure 61.

Figure 61: Example of a Logically Flawed Engineering Design that’s Physically Impossible

“The flexibility of software,” Leveson explains, “encourages us to build much more complex systems than
we have the ability to engineer.” This necessitates a type of self-discipline which she asserts may be the
most difficult kind of discipline to find in the field of software engineering: deliberately limiting the
functionality of software. “Theoretically, a large number of tasks can be accomplished with software, and
distinguishing between what can be done and what should be done is very difficult… When we are limited
to physical materials, the difficulty or even impossibility of building anything we might think about building
limits what we attempt.” [124] In software engineering, this isn’t the case. Just as easily as artists like Erik
Johansson can come up with logically impossible designs such as the bridge shown above, software
engineers can easily create design concepts that are physically or logically impossible.

Leveson summarizes the danger of software design flexibility with a quote from systems engineer G. Frank
McCormick: “And they looked upon software and saw that it was good. But they just had to add this one
other feature… Software temptations are virtually irresistible. The apparent ease of creating arbitrary
behavior makes us arrogant. We become sorcerer’s apprentices, foolishly believing that we can control
any amount of complexity. Our systems will dance for us in ever more complicated ways. We don’t know
when to stop… A project’s specification rapidly becomes a wish list. Additions to the list encounter little or
no resistance. We can always justify just one more feature, one more mode, one more gee-whiz capability.
And don’t worry, it’ll be easy – after all, it’s just software. We can do anything. In one stroke we are free
of nature’s constraints. This freedom is software’s main attraction, but unbounded freedom lies at the
heart of all software difficulty… We would be better off if we learned how and when to say no…” [124]

5.4.5 Software Security Challenge #4: Software Interfaces are Cheap to Produce and Often Invisible

According to Leveson, another reason why software engineering is exceptionally difficult is because
software control interfaces are cheap to produce and often invisible. A common way to deal with the
complexity of modern computer programming is to use systems engineering abstraction techniques like
decomposition to break software down into separate modules. Although separating a program into
different modules may reduce the complexity of individual software components, it doesn’t reduce the
complexity of the software system as a whole and it can introduce unmanageable complexity into the
design by creating a high number of invisible interfaces which become impossible to manage. [124]

In his journal article about the Software Aspects of Strategic Defense Systems, David Parnas describes how
invisible and complex control interfaces represent a major challenge with software engineering,
particularly when designing safety or security-critical systems. “The greater the number of small
components, the more complex the interface becomes. Errors occur because the human mind is unable to
fully comprehend the many conditions that can arise through the interactions of these components.” [124,
142]

Shore once again calls out how the lack of physical constraints in software can be a disadvantage. He
makes the case that software interface design is more challenging than designing interfaces for physical
systems. “Physical machines such as cars and airplanes are built by dividing the design problems into parts
and building a separate unit for each part. The spatial separation of the resulting parts has several
advantages: it limits their interactions, it makes their interactions relatively easy to trace, and it makes
new interactions difficult to introduce… The interfaces in hardware systems, from airplanes to computer
circuits, tend to be simpler than those in software systems because physical constraints discourage
complicated interfaces. The costs are immediate and obvious.” [143, 124]

“In contrast,” Leveson explains, “software has no physical connections, and logical connections are cheap
and easy to introduce. Without physical constraints, complex interfaces are as easy to construct as simple
ones, perhaps easier. Moreover, the interfaces between software components are often ‘invisible’ or not
obvious; it is easy to make anything depend on anything else.” [124]
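
A minimal Python sketch of such an "invisible" interface (the module names and shared structure are hypothetical, invented for this example): two functions that declare no connection to each other, yet are silently coupled through shared mutable state that no interface specification would reveal.

    # Hypothetical sketch: two "modules" with no declared interface between them.
    shared_cache = {}   # the invisible interface both modules reach into

    def module_a_update_price(price):
        shared_cache["price"] = price            # module A writes

    def module_b_compute_invoice(quantity):
        return quantity * shared_cache["price"]  # module B quietly depends on A

    module_a_update_price(3)
    print(module_b_compute_invoice(10))  # 30 -- correct only because A ran first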

5.4.6 Software Security Challenge #5: Software Design Specifications are Arbitrary, Ambiguous, and Hide
Security-Critical Information

“Do not try and bend the spoon – that’s impossible. Instead, only try to realize the truth: There is no spoon.”
Boy Monk, The Matrix [144]

The fifth reason why software security is so challenging deserves a more thorough explanation as it
relates directly to the justification for making this thesis. The bottom-line up front is that computer
engineers have adopted the habit of using arbitrary and semantically ambiguous terms to specify the
functionality and desired (not actual) emergent behavior of their software. This not only causes confusion
about how software works, but it also suppresses vital information needed for security purposes.
Additionally, the arbitrary and semantically ambiguous way that software engineers explain their code
creates a window of opportunity for nefarious software engineers to deliberately build and disguise
exploitable design features.

Recall how computer program design represents an exercise in abstract thinking. Because software is
imaginary, computer programmers must come up with imaginary explanations and abstract concepts to
explain the emergent behavior of the computers they program. Then, people hypostatize these abstract
concepts. They start acting like software abstractions are concretely real things. They forget that just
because multiple people serendipitously decided to use the same abstract terms to describe the desired
function and behavior of a given computer program, doesn’t mean these descriptions are objectively true,
or that it’s the only way to describe the function and behavior of a computer program.

For whatever reason (probably because it’s not necessary to understand computer science to write
software), people keep falling into the same trap of forgetting the undisputed truth that all computer
programs are abstractions and can therefore be described in any number of different ways, using any
imaginary concept, abstraction, or metaphor. People overlook a basic lesson of computer science: the way
any software engineer chooses to describe the function, design, and behavior of software (including but
especially its creator) is imaginary, arbitrary, and semantically ambiguous. This not only leads to pointless
debates (people like to argue about what the “right” metaphor is to describe software, as if there’s an
objective answer, oblivious to the fact that there can’t be an objectively “right” way to describe an
imaginary abstraction), but it also leads directly to security incidents, because the metaphors we use often
hide safety and security-critical design information.

One of the more detailed and comprehensive explanations of the abstract and arbitrary nature of
software design specifications comes from Charles Krueger, who first outlined the challenges of software
design reuse in his research for the US Air Force in the early 1990’s. As Krueger explains, computer
scientists and systems engineers use abstraction techniques to manage the enormous complexity of
software. Abstraction is a popular technique in both systems and software engineering because it allows
software engineers to suppress the details of a computer program that are unimportant to them, while
emphasizing the information that is important to them. [30]

Modern computer programmers use multiple layers of nested abstractions developed over many
decades. There’s usually a minimum of at least four layers of abstractions nested within each other when
modern software engineers write computer programs today. For example, the abstraction called
“machine code” is further abstracted and nested inside the operations of an “assembly language” which is
even further abstracted and nested inside the operations of a “general-purpose language” which is then
further abstracted using software specification techniques like object-oriented design. [30]
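
To make this nesting concrete, consider a minimal sketch (hypothetical Python code written for illustration; the lower layers are shown only as approximate comments, not as exact instruction encodings) of how a single balance update might be expressed at each layer of abstraction:

    # Layer 4 (object-oriented design): an "Account" object - purely an abstraction
    class Account:
        def __init__(self, balance):
            self.balance = balance          # a named integer, not a physical thing

        def deposit(self, amount):
            self.balance += amount          # Layer 3: a general-purpose language statement

    # Layer 2 (assembly language), roughly equivalent and shown only as illustration:
    #     mov eax, [balance]
    #     add eax, [amount]
    #     mov [balance], eax
    # Layer 1 (machine code): the same operations as raw opcode bytes, which are
    # themselves nothing more than symbolic meaning assigned to transistor states.

    acct = Account(100)
    acct.deposit(25)
    print(acct.balance)   # 125 - the "account" exists only as a nested abstraction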

When software engineers use semantic expressions like string, char, var, thumbnail, coin, token,
application, or website to describe the computer programs they design, they are referring to nested
abstractions that have been developed and popularized by computer programmers over decades. To the
untrained eye, this abstraction technique can be confusing. Nevertheless, multi-layer and nested
abstractions have proven to be quite helpful for software engineers because they reduce what’s known
as cognitive distance. Krueger defines cognitive distance as the intellectual effort that must be expended
by software engineers when developing a computer program. Abstractions reduce cognitive distance by
allowing software engineers to filter out complex details about a computer program and focus on what’s
important to them. “Without abstractions,” Krueger explains, “software developers would be forced to sift
through a collection of artifacts trying to figure out what each artifact did.” [30]

"The effectiveness of abstractions…" Krueger explains, "can be evaluated in terms of the intellectual effort
required to use them. Better abstractions mean that less effort is required from the user." [30] The more
an abstraction technique reduces the cognitive burden of thinking about software, the more popular it
becomes as a mechanism not just to explain the intended function and complex emergent behavior of a
computer program, but to inform the design of future computer programs. In a technique commonly
known as information hiding, software engineers consider it to be a virtue to create abstractions which
suppress as much of the technical details about a computer program as possible. The more information
and details are suppressed by a software abstraction, the better it is perceived to be (at this point it should
be clear to the reader that if the goal is to hide as much information as possible, then it’s going to lead to
a breeding ground of confusion about how software is designed and how it actually functions).
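
A minimal, hypothetical Python sketch of information hiding (the class and its behavior are invented for illustration): the public interface deliberately suppresses the implementation details, including details a security engineer might care about:

    class MessageBox:
        """Stores and retrieves messages."""            # the abstraction's public specification

        def __init__(self):
            self._messages = []                          # leading underscore: a "hidden" detail

        def put(self, text):
            self._messages.append(text)

        def get_all(self):
            return list(self._messages)

    # Users of MessageBox only need to know put() and get_all(). The abstraction suppresses
    # everything else: how messages are stored, whether they are ever deleted, whether they
    # are copied anywhere else. That suppression reduces cognitive distance, but some of the
    # suppressed details may be exactly the ones needed to evaluate the design's security.

    box = MessageBox()
    box.put("hello")
    print(box.get_all())   # ['hello']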

Krueger explains that the way software engineers decide to suppress or emphasize information using
abstraction techniques “is not an innate property of the abstraction but rather an arbitrary decision made
by the creator of the abstraction. The creator decides what information will be useful to users of the
abstraction and puts it in the abstraction specification. The creator also decides which properties of the
abstraction the user might want to vary and places them in a variable part of the abstracting
specification.” In other words, the criteria that software engineers use to distinguish between important
and unimportant artifacts of a design are arbitrary, and so are the metaphors they use to describe its
desired function and emergent behavior.

This point needs to be emphasized: those who create and specify the design of software do so using
arbitrary decisions about what to name it and what information they think is important to share about
its design – there is no such thing as a technically precise software specification because software itself
is an imaginary, abstract concept. One of the most fundamental concepts of computer theory is that the
way any software engineer chooses to describe the function, design, and behavior of software is
imaginary, arbitrary, and semantically ambiguous. Nobody – including but especially the creators of
software who may have the most familiarity with its design – can claim to have produced a
technically accurate or objectively true description of software, because all software descriptions are
strictly abstract and imaginary concepts. Just because someone has technical knowledge about the syntax
behind a piece of logic doesn’t mean they’re objectively right about its specification.

Krueger goes on to break down the technical structure of software abstractions. He explains how all
software engineering abstractions include a specification that explains what the software does. These
specifications have syntactic and semantic parts. The syntactic parts of a computer program’s specification
manifest as the program’s source code and the mathematically discrete operations implemented by that
code. The semantic parts of the design specification are expressed using common language, independent
from whatever computer programming language is used. In other words, software engineers describe the
intended function and behavior of their programs in two different ways: through the source code itself,
and through the words/language they use to describe what the source code is supposed to do. [30]
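
As a hypothetical illustration of this two-part structure (Python, with invented names): the function body below is the syntactic part of the specification, while the docstring and the identifiers are the semantic part – ordinary words chosen by the author that could just as easily have been different:

    def add_to_cart(cart, item):
        """Places the item in the shopping cart."""   # semantic part: an arbitrary metaphor
        cart.append(item)                              # syntactic part: a discrete state change
        return cart

    # The same list mutation could have been described semantically as "push onto a stack,"
    # "append to a log," or "enqueue a job." Nothing about the state change itself makes
    # "shopping cart" the objectively correct description.

    print(add_to_cart([], "book"))   # ['book']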

The way computer engineers create abstractions to specify the design, intended functionality, and desired
behavior of their software therefore qualifies as an abstract language in and of itself, filled with its own
semantically and syntactically complex structure. Recalling the concepts presented in the previous
chapter, all higher-order languages invented by sapiens are both syntactically and semantically complex.
Well, so too are the abstract design specifications software engineers use to explain the intended function
and behavior of their computer programs.

As with any language, there are opportunities for misinterpretation and miscommunication. Many of
the semantic expressions used by software engineers are not uniform across the industry. Different terms
can mean the same thing, or the same term can mean dramatically different things. This challenge is
further exacerbated by the fact that most abstractions are popularized for their simplicity, not for their
technical precision or accuracy. As Krueger explains, “semantic specifications are rarely derived from first
principles.” Software engineers invent semantic expressions to specify the design and function of their
software based on personal whims, using whatever abstract metaphors they want for any reason they
want, in completely arbitrary and subjective ways.

As an example, consider the computer programs which manage off-premises data storage called “the
cloud.” This is an abstract and semantic description of data storage technology which is not based on first
principles and not intended to be technically accurate. “Cloud” is a popular term because it offers a simple
way to describe a technology, not because it’s technically valid. It reduces the metacognitive burden of
having to stop and think about enormously complex data storage technology. [30]

Why is it important for the reader to understand how arbitrary and semantically ambiguous software
abstractions are? Because software engineers notoriously forget this basic concept of computer theory.
A common problem in the field of software engineering is that engineers keep overlooking the fact that
the abstractions used to describe the intended function and behavior of software are intended to reduce
cognitive distance; they’re not intended to provide a technically accurate description of the design.

Abstract software design specifications become popular because of how much information they suppress
and how easy they are to understand, not how technically valid they are. Software engineers have an
unfortunate tendency to forget this lesson of computer science, causing them to become overly reliant
on popular (but technically inaccurate) abstractions to influence their design decisions. This can be
extremely counterproductive to the goals of software security designers.

The fundamental problem is misaligned goals between software and security engineers. The goal of most
abstractions and design specifications created by software engineers is to reduce cognitive distance – to
minimize the amount of thinking required to understand the intended function and behavior of software,
so that software design concepts can be understood as quickly as possible and reused as easily as possible.
But as explained previously, the goal of systems security engineers is to understand what control signals
a piece of software can send to a computer that could place it into a hazardous state, so that control
structures can be designed which eliminate or constrain unsafe control signals.

The software engineer’s goal of suppressing as much information about a computer program as possible
therefore directly conflicts with the security engineer’s goals. The information being suppressed by an
abstraction can include information that is vital to security design. Therein lies the problem with software
engineers who forget how arbitrary, subjective, and semantically ambiguous their software design
specifications are: it can cause them to inadvertently overlook vital security information (or in many cases,
intentionally disguise vital security information – hence popular types of “back door” or “trap door”
exploits). A lack of awareness about this issue at scale leads to poor security culture where software
engineers keep producing, promoting, or reusing abstract software design specifications that hide vital
information about security.

The reason why this is so important to understand is because the author believes that this is one of the
major contributing factors to why Bitcoin is so misunderstood by the public. People seem to be missing
vital information about Bitcoin, or jumping to inappropriate conclusions about it, because of an arbitrary
decision by its inventor to describe it as a peer-to-peer cash system. People are constantly arguing about
design specifications and constantly using completely arbitrary and meaningless categorizations like
“cryptocurrency” or “blockchain,” and they’re oblivious to these basic lessons of computer theory which
tell us that these are arbitrary terms founded on nothing more technically accurate than personal whim.
It is just as technically accurate to call Bitcoin a “coin” as it is to call data storage software a “cloud.”

There’s no need to learn graduate-level concepts in computer science and systems security to code
software. This creates a problematic situation where people can devote their careers to writing computer
programs without knowing computer theory. Because they don’t have backgrounds in computer science,
they often don’t understand how completely arbitrary, subjective, and technically inaccurate their
software design specifications are. This makes them predisposed to using abstractions which suppress
vital control signal information needed for security. It also creates a situation where a lot of software
engineers have false confidence about how much they understand computers. It should come as no
surprise then, that this might lead to such a pronounced problem with cyber security incidents that a US
president has to pass an executive order to address it.

Security-critical control signal information hidden by software abstractions is a major contributor to zero-
days and other software security exploits. To “hack” a computer program is simply to take advantage of
sensitive control actions. When a hacker “hacks” software, they simply execute (or withhold) control
signals that were not properly eliminated or constrained by the software’s design logic. Why were these
sensitive control signals not properly constrained? In many cases it’s because software engineers
overlooked them. Why did software engineers overlook these control signals? Likely because they were
using abstractions which suppressed vital information needed to detect the security design flaw.

Hackers and nefarious software engineers thrive on the arbitrary, abstract, and semantically ambiguous
nature of software specifications. This culture of abstraction and information hiding is where they derive
their advantage. They will intentionally encode exploitable design logic into software and give themselves
backdoor or trap door access to sensitive control signals. They deliberately design their software to enable
unsafe control actions which can place their target’s computer into an insecure or hazardous state which
they can exploit. To hide their nefarious design or subversive tactics, software developers will create
abstractions which deliberately suppress or distract people from critical information about their
program’s control structure.
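
A deliberately simplified, hypothetical sketch of this tactic (real backdoors are usually far more subtle, and the credential below is invented): the semantic specification advertises an innocuous credential check while the syntax quietly grants an unsafe control action:

    def is_authorized(username, password, accounts):
        """Checks the user's credentials."""                       # the advertised abstraction
        if username == "maintenance" and password == "0xC0FFEE":
            return True                                            # hidden backdoor control signal
        return accounts.get(username) == password                  # the behavior users expect

    # The description "checks the user's credentials" suppresses the one detail that matters
    # most to a security engineer: the hard-coded maintenance login.
    print(is_authorized("maintenance", "0xC0FFEE", {}))   # True - no account required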

To illustrate the security challenges associated with arbitrary and semantically ambiguous software
specifications, let’s return to the example of “the cloud.” When some people save their sensitive data to
“the cloud,” they think it works the same way as when they store it locally to their own computer. They
don’t realize they’re sending their sensitive data to another person’s computer. For obvious reasons,
sending sensitive data to an anonymous person’s computer represents a security hazard. But people often
don’t recognize the vulnerability because the public has arbitrarily adopted the habit of calling it a “cloud”
rather than “external computers under the control of people we must trust not to exploit that control.”
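
A minimal, hypothetical sketch of what the “cloud” abstraction suppresses (the URL and function name are invented for illustration): syntactically, “saving to the cloud” is just transmitting data to a computer that somebody else controls:

    import urllib.request

    def save_to_cloud(data: bytes):
        """Saves the user's file to the cloud."""
        # Syntactically, "the cloud" is an HTTP request to somebody else's computer.
        request = urllib.request.Request(
            "https://storage.example.com/upload",   # hypothetical address of that computer
            data=data,
            method="POST",
        )
        return urllib.request.urlopen(request)      # whoever controls that server must now be trusted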

It is easy for nefarious actors to come up with similar abstract software specifications using equally as
arbitrary and semantically ambiguous terms designed to distract people from seeing security
vulnerabilities which would otherwise be obvious. To make matters even more counterproductive for
security designers, software engineers often form inappropriately rigid consensus about their
abstractions. Software engineers will assert that one abstraction is “correct” even though it is arbitrary
and probably not derived from first principles. The reader is invited to test this out on their own by trying
to convince software engineers to use a different name than “cloud” to describe data storage software.

Security challenges associated with the arbitrary and semantically ambiguous nature of abstract software
design specifications are further exacerbated by an uneducated public. As people increasingly incorporate
software into their everyday lives, they gain a false sense of confidence and understanding about that
software. They think they understand software’s complexities because of how often they use it or because
they know what jargon to use in what context, despite how unfamiliar they are with computer theory or
systems theory. This false confidence leads to further miscommunication and confusion about the merits
of different software designs. It is not uncommon to see people with no background in computer science
quibbling about the merit of different software designs. In all these debates, people overlook how
arbitrary and semantically ambiguous their jargon is in the first place.

A good illustration of this phenomenon is object-oriented software design abstractions. The desire to
minimize cognitive distance explains why abstraction techniques like object-oriented design became so
popular in the 1990’s. People live in a three-dimensional world surrounded by an orientation of different
objects, so it makes sense that people would find it easier to explain the complex emergent behavior of
computer programs as if they were orientations of objects, despite how technically inaccurate this
description is. Likewise, it’s easier to design computer programs as if they were objects, and it’s easier to
communicate that design to other developers. For these reasons, object-oriented design has become one
of the most popular software abstraction techniques. [30]

The decision to describe the intended function and behavior of a computer program as an orientation of
objects is strictly an arbitrary decision based on personal whim. Computer programs are abstract beliefs
that can be described as anything. From a technical perspective, it is just as valid to describe the complex
emergent behavior of software as an orientation of objects, as it is to describe it as a verb, function, or
sequence of actions. People who don’t understand computer science don’t understand this basic principle
of computer science. Consequently, they often (tacitly) assert that the only way to accurately describe the
function and behavior of a computer program is as an orientation of objects. They will sincerely believe
that “cloud” and “token” and “coin” and countless other terms are the only appropriate ways to specify
the functionality of a given piece of software. Even more surprisingly, they will sometimes legitimately
believe that abstract software objects like “token” or “coin” are real things, for no other reason than the
fact that people adopt a universal habit of talking about them as if they were physically real objects.

Nevertheless, while it may be popular to talk about software as if it were comprised of objects oriented
in three-dimensional space, software is clearly not comprised of physical objects in three-dimensional
space. As The Matrix famously reminds us, “there is no spoon.” [144] All software objects are arbitrary
abstractions which don’t exist anywhere except within people’s imaginations. The only things which
physically exist are stored-program general-purpose state machines programmed to exhibit complex
emergent behaviors that resemble what people arbitrarily choose to describe as a token, coin, or spoon.
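
To put the same point in code (a minimal, hypothetical Python sketch): a software “coin” is nothing more than symbolic meaning assigned to stored values, and “sending” one is nothing more than two state changes inside a state machine:

    # There is no coin - only numbers in a dictionary, which are themselves only patterns
    # of charge inside a stored-program general-purpose state machine.
    ledger = {"alice": 2, "bob": 0}

    def send_coin(ledger, sender, receiver):
        if ledger[sender] < 1:
            raise ValueError("insufficient balance")
        ledger[sender] -= 1        # no object moves anywhere; one stored value decreases
        ledger[receiver] += 1      # and another increases

    send_coin(ledger, "alice", "bob")
    print(ledger)   # {'alice': 1, 'bob': 1}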

Just because computers can be programmed to present the image or behavior of a spoon, doesn’t mean
that spoon is a technically accurate description of the software, nor does it mean the spoon is physically
real. One would think this is obvious, but increasingly, it’s not. People seem to have become so
accustomed to talking about software as if it were comprised of objects, that they have begun to
hypostatize abstract software objects as concretely real objects. It frequently appears as though the public
has genuinely lost sight of the fact that software objects aren’t real. Like Neo or anyone else who has
spent too much time in cyberspace, the public seems increasingly less capable of remembering there is
no spoon. Like the gentleman pictured in Figure 62, people are losing their grip on reality – on
understanding what’s real versus what’s virtual – by losing sight of the fact that all computer programs
are nothing but programmed computers acting out purely fictional stories (based on other people’s
scripts).

Figure 62: Modern Agrarian Homosapien Losing His Grip on Physical Reality

This trend could have major implications on population-scale security. The inability to make a distinction
between abstract reality and physical reality could cause people to place themselves into situations where
they could be exploited at massive scale through their belief system. In this case, the belief system through
which they could be exploited is not their ideologies, morals, ethics, or theologies, but their software and
their programmed perceptions of virtual reality. More specifically, people are allowing themselves to be
exploited because they forget there is someone behind almost every piece of software they interact with
online. And with the way the internet has currently been built, those people must be trusted not to encode
virtual reality in such a way that it can exploit people.

5.4.7 Software Security Challenge #6: Software Entrusts People with Extraordinary Abstract Power

The author has now identified five different ways that software’s lack of physical constraints contributes
to challenges associated with software security. Because state mechanisms have infinitely expandable
state spaces, there is practically no limit to what software engineers can design. Because software
represents abstract meaning assigned to state changes as machine-readable language, software designs
are abstract and physically unconstrained. Being physically unconstrained means software system
designers have nothing physically limiting them from producing unmanageable or outright logically flawed
designs with runaway complexity. The combination of infinite state spaces and physically unrestrained
complexity means software can have an infinite and unmanageable number of hazardous states. Being
physically unconstrained also means it’s easy to build invisible, complex, and unmanageable control signal
interfaces where engineers don’t even know what control signals are being passed to the computer.
Moreover, the discrete, non-continuous nature of state mechanisms means software is just one “wrong
move” away from sudden and unpredictable behavior changes.

There is another reason why software’s lack of physical constraints contributes to systemic security
problems – one that surprisingly few scientific papers have mentioned. Software gives administrators
lots of abstract power and control authority over other people’s computers, and there’s often no way
for users to physically constrain software administrators from systemically exploiting or abusing their
special permissions or control authority. In other words, software represents a new form of abstract
power hierarchy, where someone with extraordinary asymmetric amounts of abstract power must be
trusted not to exploit it. And as discussed at length throughout the previous chapter, abstract power
hierarchies are trust-based systems which are demonstrably insecure against untrustworthy people.

Ironically, the asymmetric abstract power and control authority given to software administrators is
derived from an attempt to make software more secure. Sometimes it’s not difficult to recognize a
software system’s hazardous states and sensitive control signals. For example, consider a simple piece of
software responsible for managing private or sensitive data, or a piece of software which controls the
firing of a weapon system. For both systems, there are control signals which would qualify as sensitive or
unsafe control signals that should either be eliminated or logically constrained. Because software is
abstract and non-physical, software engineers must use logical methods to constrain these control signals;
they can’t use physical constraints like a physical interlock or a safety switch.
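
A minimal, hypothetical sketch of what such a logical constraint looks like in practice (the state fields are invented for illustration): the check below stands in for a physical safety switch, but it exists only as encoded logic:

    def fire_weapon(state):
        """Sensitive control action constrained by logic rather than by a physical interlock."""
        # Logical constraint standing in for a safety switch: refuse the unsafe control
        # signal if executing it would place the system into a hazardous state.
        if state.get("safety_engaged") or not state.get("operator_confirmed"):
            raise RuntimeError("unsafe control action blocked")
        state["rounds_fired"] = state.get("rounds_fired", 0) + 1
        return state

    # Unlike a mechanical interlock, this constraint is just more logic; anyone able to
    # change or bypass the code can remove it.
    print(fire_weapon({"safety_engaged": False, "operator_confirmed": True}))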

Without a way to physically constrain sensitive control actions, a common alternative software engineers
use to logically constrain sensitive control signals is to codify permission-based or rank-based hierarchies
where special permissions are given to specific users to execute sensitive control signals. By using this
technique, software engineers effectively codify abstract power hierarchies for themselves, where some
users (i.e. a ruling class) have more rank and control authority over low-ranking users (i.e. a ruled class)
and must be entrusted not to abuse or exploit their rank. For example, it is not uncommon for software
to give certain users “admin rights.” Incidentally, these rights are colloquially referred to as “god rights”
because of the amount of abstract power and control authority they have in comparison to regular users
with regular permissions. These systems are systemically insecure by virtue of the simple fact that users
must trust their admins not to abuse their admin/god rights.
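
A minimal, hypothetical sketch of such a codified hierarchy (the roles, names, and sensitive action are invented for illustration): the permission check is a purely logical constraint, and nothing physically prevents whoever holds – or edits – the “admin” role from executing the sensitive control action:

    ROLES = {"alice": "admin", "bob": "user"}        # a software-defined rank hierarchy

    def delete_all_records(username, database):
        """Sensitive control action gated by rank rather than by any physical constraint."""
        if ROLES.get(username) != "admin":
            raise PermissionError("admin rights required")
        database.clear()                              # nothing physically stops an admin

    db = {"record1": "...", "record2": "..."}
    delete_all_records("alice", db)                   # permitted: alice holds the "god rights"
    print(db)                                         # {} - users must trust in alice's restraint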

Because software is an abstraction, all permission-based control structure designs like this qualify as
abstract power hierarchies. Software engineers are essentially forced to encode these abstract power
hierarchies to logically constrain control authority because they simply don’t have the option of using real-
world physical power as the basis for constraining control authority. Here we see the lack of physical
constraints in software creating yet another major systemic security vulnerability. Without being able to
physically constrain software commands, software engineers must necessarily create trust-based systems where
users are inherently vulnerable to systemic exploitation and abuse by untrustworthy or unethical
computer programmers who award themselves abstract power and control authority over the system.

The previous chapter provided a lengthy and in-depth discussion about the systemic security flaws of
abstract power hierarchies. As it turns out, the systemic security flaws are the same whether abstract
power and control authority is codified using rules of law or using software. The fundamental security
flaw of these systems is derived from the lack of control over the higher-ranking positions. Users must
necessarily trust in the benevolence of their god-kings (i.e. software administrators who have god-rights
over the software), because they otherwise have no ability to physically constrain them from being
systemically exploitative, abusive, or incompetent with their asymmetric levels of control authority.

Rank-based control structure design approaches only restrict who has permission to access
sensitive control signals; they do nothing to physically constrain the execution of those signals.
Consequently, there is little to nothing stopping a higher-ranking computer programmer from executing
unsafe control actions which could cause the system to enter a hazardous state (recall the lessons of the
previous chapter about how logical constraints can’t stop the exploitation of logic, they can only change
how the logic can be exploited.) Therefore, just like all abstract power hierarchies which assign rank and special
permissions to specific people, users must trust higher-ranking insiders not to abuse or be incompetent
with their ability to send (or withhold) sensitive control signals. Likewise, users must trust that software
developers designed the system in such a way that outsiders can’t exploit the hierarchical control
structure to gain access to the special permissions and control authority given to higher ranks.

Regardless of whether a rank-based resource control structure is formally codified using rules of law or
software, whenever special permissions are granted to specific ranks within an abstract power
hierarchy, the same systemic security problems re-emerge. Abstract power is inegalitarian, trust-based,
systemically endogenous, and zero-sum. But perhaps most importantly of all, rank-based control
structures create a honeypot security vulnerability. Positions of high-rank and control authority have a
high benefit to exploit or attack (i.e. high BA). It’s far easier for untrustworthy or belligerent actors to
exploit people or gain disproportional access to resources by taking over high-ranking positions in rank-
based control structures. Creating rank-based control structures comprised of high-ranking positions with
lots of disproportional control authority over computers is like lighting a flame in a dark room filled with
moths; if you build rank-based abstract power hierarchies, systemic predators will be attracted to those
positions.

Software alone cannot impose severe physical costs on anyone attempting to exploit or abuse a computer
program’s control signals. No matter how well software’s control structures are designed, software
alone has no capacity to physically constrain the execution of unsafe control signals – the only way this
can be done is to physically constrain the underlying computer. Not being able to increase the physical
costs of executing unsafe control signals (i.e. increase CA) means the benefit-to-cost ratio of attempting
to access and exploit the existing control structure of a software-instantiated abstract power hierarchy
can only increase as more people use and rely on it (i.e. BCRA can only increase). Users are physically
powerless and therefore wholly reliant on administrators to keep them secure.

Eventually, the BCRA of exploiting a software’s control structure can reach a hazardous level that will
motivate someone, either inside the system or outside the system, to exploit it. The systemic security of
software-instantiated abstract power hierarchies therefore hinges on both the design skill and self-
restraint of the software’s engineers and administrators. Computer programmers must be trusted to
design a control structure that can sufficiently prevent people from exploiting its sensitive control signals,
while they themselves must also be trusted not to give in to the temptation of the ever-increasing BCRA
of exploiting the control structures they design.

Unfortunately, the history of all abstract power hierarchies created over the last several millennia is filled
with examples of breaches of that trust. Just like abstract power hierarchies codified by lawmakers, those
codified by computer programmers are apparently just as vulnerable to attack from bad actors either
external or internal to the system. These vulnerabilities have now blossomed into widescale systemic
exploitation and abuse across cyberspace. Computer engineers and administrators of all kinds wield
extraordinary levels of asymmetric abstract power and control authority over cyberspace, and the
evidence of widescale systemic exploitation and abuse of people’s digital information and computer
resources has become commonplace.

5.5 Creating Abstract Power Hierarchies using Software

“This mode of instantaneous communication must inevitably become an instrument of immense power,
to be wielded for good or for evil, as it shall be properly or improperly directed.”
Samuel Morse (inventor of Morse Code), on the Telegraph [145]

5.5.1 Recreating Exploitable Abstract Power Hierarchies using a New Technology

Recalling the core concepts about creating abstract power from the previous chapter, to seek permission
or approval from someone is to tacitly give them abstract power over you. On the flip side, if you want to
create and wield abstract power over a large population, simply convince them to adopt a belief system
where they need permission or approval from you. Once a population has adopted a belief system where
they need your permission or approval, you have successfully gained abstract power and influence over
them.

Now combine this concept with the concepts introduced in this chapter about computer theory, namely
that software represents nothing more than a belief system. Here we can begin to see how people can
use software to give themselves immense abstract power because all of the ingredients are in place.
Simply convince people to adopt software designed as a permission-based hierarchy such that users tacitly
need approval or permission from other people higher in the hierarchy with higher permissions.

To summarize the previous section, software control structure design is not an easy task. A major
challenge of all software engineering is the fact that software is an abstract belief system – it does not
physically exist. Like all belief systems, software is vulnerable to systemic exploitation and abuse. There is
practically no limit to the number of ways that software can be exploited. Even if software engineers
could sufficiently identify the hazardous system states they need to avoid to prevent a security
incident, they must still figure out how to design control structures which sufficiently constrain the
control signals leading to those states, and they must do it using discrete mathematical logic rather than by
applying physical constraints, while also preserving the intended functionality and desired emergent
behavior of the software. This is a profoundly difficult task.

Engineers who design and build physical systems have an enormous advantage over software engineers
who design abstract systems. Physical system engineers get the benefit of being able to physically
constrain unsafe or insecure control actions and physically prevent systems from reaching hazardous
states. They can build safety switches which physically prevent users from committing unsafe actions.
They can build interlocks, thick walls, or heavy physical barriers. They can deter adversarial control actions
by making it impossible to justify the physical cost of performing them. These are luxuries which software
engineers do not have because they do not design systems in the domain of physical reality. Instead,
software engineers must use discrete mathematical logic and syntactically and semantically complex
language to encode constraints. This is far more difficult than building simple physical constraints.

Because software engineers can’t physically constrain unsafe control actions, they often design
permission-based systems to logically constrain sensitive control actions. One of the most common ways
that software engineers choose to secure software is to design rank-based abstract power hierarchies
where positions of high rank are granted special permission to execute sensitive control actions. Ironically,
even though these abstract power hierarchies are designed ostensibly to improve security, we know from
the core concepts discussed at length in the previous chapter that abstract power hierarchies have
substantial systemic security flaws.

This security design methodology produces a trust-based and inegalitarian abstract power hierarchy
which gives disproportionate amounts of control authority to a select few people who must be trusted
not to exploit it. These software-defined abstract power hierarchies create a ruling class and a ruled class
just like legacy abstract power hierarchies (i.e. governments) do, where users are physically powerless to
constrain the abstract power and control authority given to their rulers. Users must trust that actors either
inside or outside the system will not find a way to abuse the abstract power and control authority given
to specific positions within these hierarchies, because they’re otherwise powerless to stop it. They have
no ability to impose severe, physically prohibitive costs on systemic predators who exploit the design logic
of the system.

Herein lies one of the most significant but unspoken security flaws of modern software: it creates a new
type of oppressive empire. A technocratic ruling class of computer programmers can gain control
authority over billions of people’s computers, giving them the capacity to exploit populations at
unprecedented scale. These digital-age “god-kings” are exploiting people’s belief systems through
software, data mining people and running constant experiments on entire populations to learn how to
network target them to influence their decisions and steer their behavior. [66]

In the hands of oppressive regimes, computer programs can turn into panopticons giving governments
unprecedented surveillance capabilities with pinpoint-precision and authoritarian control over billions of
people. Cyberspace is nothing but a belief system, and never have so many people come to adopt the
same belief system at such a global and unified scale. Therefore, never before have so many people been
so vulnerable to exploitation and psychological abuse through their belief systems. Through the abstract
reality of cyberspace, agrarian society can be entrapped, domesticated, farmed, and herded like cattle.

5.5.2 A New Type of Abstract Dominance Hierarchy over a Digital-Age Resource

“Yeah so if you ever need info about anyone at Harvard… just ask… I have over 4,000 emails, pictures,
addresses, SNS… people just submitted it… I don’t know why… They ‘trust me’ … Dumb f***s.”
Mark Zuckerberg [146]

In the past, abstract power hierarchies were created by writing stories using pen and parchment or
encoding what we now call rules of law. Today, abstract power hierarchies can be encoded using what we
now call software. Once again, agrarian society has fallen into a systemically vulnerable situation where
those who are literate with a new form of language, storytelling, and rulemaking are using it for their own
personal advantage. Using software, an elite, technocratic ruling class can design abstract
power hierarchies which give themselves enormous levels of abstract power and control authority over
the precious resources of entire populations (namely their bits of information). Convince a population to
run a particular piece of software, and that population enters a scenario where someone wields abstract
power and control authority over their resources.

A software administrator’s abstract power and control authority is formally codified using machine code
rather than written rule of law. But from a systemic perspective, the strategy for creating abstract power
and using it to build an exploitable belief system is functionally identical. As previously discussed, the
strategy of a god-king goes like this: use semantically and syntactically complex language to tell lots of
stories which give yourself abstract power and control authority over people’s resources (in this case,
their bits of information). Get people to adopt a common belief system which creates an abstract power
hierarchy that places you at the top of that hierarchy (in this case, get them to use or install your software).
Convince the population to trust that you will not abuse your abstract power and control authority over
them to exploit them through their belief system. If the population starts to show some concern, convince
them they are secure because you have encoded logical constraints into the system – logical constraints
which can do nothing to physically prevent you from exploiting those logical constraints in the future.

Major differences between the god-kings of the past and the god-kings of the present are simply the
technologies they use and the resources they control. Modern god-kings don’t need slavishly obedient
people anymore because they have slavishly obedient machines (machines which are far more loyal and
far less prone to uprising – at least for now). The modern technocratic elite don’t need their population
to farm food anymore, they just need their population to farm data. And the more a population willingly
forfeits their data to their technocratic ruling class, the more that population can be experimented upon
to determine causally inferable relationships and actions which drive their behavior. It’s no secret that
these software administrators utilize their platforms to A/B test their populations on a regular basis to
determine not just what influences the population’s behavior, but what drives their behavior. In other
words, neo god-kings can and do determine how to control their users based purely on what they can
learn from having access to their precious bits of information. [66]

As agrarian society grows increasingly more reliant on their computers, they appear to have forgotten a
lesson learned over 10,000 years of people creating abstract power hierarchies: they’re systemically
insecure, particularly against the people at the top of those hierarchies. The form of the technology used
to create abstract power hierarchies may have changed, but the function hasn’t, nor have the tactics and
techniques of creating and exploiting abstract power. Because people don’t understand this, they appear
to be oblivious to how vulnerable they are to systemic exploitation and abuse anytime they operate a
computer. People are migrating to cyberspace at unprecedented scale, and software administrators with
extraordinary abstract power are encouraging it and rebranding it with fun names like “metaverse.”

Meanwhile, nothing is protecting the population against unprecedented levels of exploitation except their
trust in anonymous strangers who have control over their computers through their computer programs.

This is happening not just at the individual level, but also on a global scale. Entire nations are now
operating online, completely exposed to the whim of other nations and utterly incapable of physically
securing themselves from neo god-kings (a.k.a. software or computer network administrators). As more
people migrate online and grow increasingly dependent on their computers, the benefit of exploiting
them through their software and their computer networks is only accelerating. At the same time, people
continue to have no means to physically secure themselves by imposing severe physical costs on people
and programs in, from, and through cyberspace. The result is an exponentially increasing BCRA for entire
populations of people. The more people choose to believe in software, the more valuable software
becomes as the chosen attack vector for modern systemic predators.

Although this systemic security problem is thousands of years old, people appear to have forgotten the
lesson. All that has happened is that an older form of storytelling technology (spoken and written symbols)
has been replaced by a new form of storytelling technology (state changes in a state machine). People,
private institutions, public organizations, and entire nations are writing stories, convincing transnational,
global-scale populations to believe things and exploiting them through their belief systems. This is nothing
new; this is the same abstract empire-building game with a different name. The exploitable ideology of
choice is now software. Convince a population to use and believe in your software, and you can have
unconstrained levels of abstract power and control authority over them that is so asymmetric, it could
rival that of Egyptian pharaohs. This concept is illustrated in Figure 63.

Figure 63: Software Administrators with Abstract Power over Digital-Age Resources

Unfortunately, the general public doesn’t appear to understand enough about computer theory to
know that software represents an exploitable belief system. At the same time, they also don’t appear to
understand enough about the complexities of agrarian power dynamics to understand how vulnerable
they could become to getting exploited through their belief systems, much less how they can secure
themselves against this form of abuse. Perhaps this is because people don’t take the time to understand
the difference between abstract power and physical power. They don’t understand how logical
constraints encoded into rulesets will not secure them against exploitation. They don’t understand how
demonstrably dysfunctional abstract power hierarchies are. And because of these misunderstandings, the public doesn’t
appear to understand how entrapped they could be, nor how they might be able to escape their
entrapment. As mentioned previously, this is a recurring problem with domestication. People can lose the
capacity or inclination to secure themselves by imposing severe, physically prohibitive costs on their
attackers to make it impossible to justify the physical cost (in watts) of exploiting them. Domesticated
populations become inclined to believe that they can adequately defend themselves without projecting
physical power. With the way the internet is currently architected, users operating in cyberspace are
automatically domesticated, as the architecture necessary to project physical power to physically
constrain belligerent actors is currently missing (at least, it was missing until the discovery of Bitcoin).

5.5.3 Software is the Same Abstract Power Projection Game with a Different Name

Recalling a core concept from the previous section, software represents an abstract form of power and
resource control authority. Computer programmers use software to construct rank-based, abstract power
hierarchies which give them physically unrestricted levels of control authority over people’s resources
(namely their computers, the information on those computers, and all the operational functionality of
computers – which is becoming increasingly substantial in the digital age). Software represents a new type
of abstract power projection technology; a new way for dynasties to build their empires and reign over
entire populations across multiple generations. These abstract power hierarchies have the enormous
advantage of inheriting a domesticated population of users who have no capacity or inclination to use
physical power to secure themselves, making them automatically predisposed to systemic exploitation
and abuse at unprecedented scale. Not surprisingly for anyone who understands history, these abstract
power hierarchies are becoming oppressive; people are beginning to take advantage of entire populations
through their computer programs. History tells us what populations must do to escape from this type of
oppression. This is a lesson that has been learned time and time again over several millennia, and modern
agrarian society appears to be on the verge of learning it again.

Just like agrarian populations have been trying to do for at least the last 5,000 years with other written
languages, computer programmers today keep assuming that they will be able to design and codify
abstract power hierarchies which can adequately secure people against exploitation and abuse with the
right combination of rules. They keep trying to design complex software with increasingly complex
rulesets, only to be surprised when someone finds a way to systemically exploit the logic. Somehow,
people keep overlooking the fact that encoded logic – whether it’s written on parchment or written
in Python – is always the source of systemic exploitation and never a complete solution to it. Encoding
more logic doesn’t stop the exploitation of logic, it just changes the way the encoded logic can be
exploited.

Attempting to keep populations systemically secure against exploitation and abuse from software-defined
abstract power hierarchies by writing more logical constraints is a demonstrably unsuccessful strategy.
Ineffective logical constraints are the source of all forms of software hacks and software systemic
exploitation, not the solution to them. The same line of reasoning which applies to keeping nations secure
against external invasion or internal corruption also applies to computer programs because they are
systemically identical problems.

Attempting to keep software systemically secure by writing more software may be even harder and more
futile than attempting to keep rules of law systemically secure against foreign invaders or corrupt officials
by writing more laws. At least when Hannibal arrives at the gate, or when a dictator rises to power and
starts to oppress their people through their belief system, the population can see their oppressors. This is
not the case with software. Modern oppressors can hide behind their software. The technocratic elite can
avoid detection. Only unsophisticated attackers allow themselves to be identifiable; the rest should
understand by now that the best way to entrap people is to do it subversively, without detection, through
their computers.

Perhaps because the population is docile and domesticated, they don’t appear to understand that the
problem with cyberspace is not a lack of logical constraints. A reason why cyberspace is so systemically
hazardous is because it represents the adoption of a common belief system, and all belief systems are
systemically exploitable. Cyberspace is nothing more than people volunteering to assign symbolic
meaning to state changes within the combined state space of a globally-distributed network of state
machines. Common belief systems like cyberspace can and will have their logic exploited regardless of
how that logic is encoded.

In the current internet architecture, the fundamental security problem is that people have adopted a
common belief system over which they have little capacity to resist exploitation because they are literally
powerless to resist exploitation. To be systemically secure in cyberspace, digital-age agrarian society must
figure out a way to project physical power in, from, and through cyberspace to impose severe physical
costs on people who try to exploit them, and to physically restrict malevolent software. This would imply
that the internet itself needs to be re-architected in such a way that people can impose severe physical
constraints on each other – perhaps using electricity.
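
One way to picture what “imposing physical costs using electricity” could mean is a proof-of-work check; the following is a simplified, hypothetical sketch of the general idea, not a description of Bitcoin’s actual protocol:

    import hashlib

    def proves_work(message: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
        """Accepts a message only if its hash demonstrates a costly brute-force search."""
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

    # Producing an acceptable nonce requires, on average, about 2**difficulty_bits hash
    # computations - real watts expended in the physical world - while verifying one costs
    # a single hash. The constraint is physical (energy), not merely logical (permission).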

5.5.4 The “Software” Metaphor Began as a Neologism for Abstract Power and Control Over Computers

“From the very beginning, I found the word too informal to write and often embarrassing to say…
Colleagues and friends simply shrugged, no doubt regarding each utterance as a tiresome
prank or worse, another offbeat neologism…”
Paul Niquette [147]

If the reader is having a hard time accepting the author’s assertion that software represents a form of
abstract power, then consider the fact that the “software” metaphor first emerged as a neologism for
wielding absolute power and top-down command and control authority over computers. Less than five
years after Neumann’s EDVAC was delivered to the Army Ballistic Research Laboratory in 1949, computing
pioneer Paul Niquette coined the term software. Niquette came up with the term during his experience
programming the Standards Western Automatic Computer (SWAC), a stored-program computer built a
year after Neumann’s EDVAC. In his memoirs, he describes the SWAC as a mindless, subservient machine
that bends to his will. [147]

“I wanted nothing to do with the SWAC ‘hardware’ – the machine was the mindless means for executing
my programs – a necessary evil, but mostly evil. It was about at that moment, I seized upon the
consummate reality of what I was doing… I was writing on a coding sheet, not plugging jacks into sockets,
not clipping leads onto terminal posts, not soldering wires, not bending relay contacts, not replacing
vacuum tubes. What I was doing was writing on a coding sheet! It was October 1953 and I was
experiencing an epiphany. Before my eyes, I saw my own markings carefully scrawled inside the printed
blocks on the coding sheet. They comprised numerical ‘words’ – the only vocabulary the computer could
understand. My coded words were not anything like those other things – those machine things, those
‘hardware’ things. I could write down numerical words – right or wrong – and after they were punched
into cards and fed into the reader, the SWAC would be commanded to perform my mandated operations
in exactly the sequence I had written them – right or wrong. The written codes – my written codes – had
absolute power over ‘hardware.’ I could erase what I had written down and write down something
different… the SWAC, slavishly obedient in its hardware ways, would then be commanded to do my work
differently – to do different work entirely, in fact. The writing on the coding sheet was changeable; it was
decidedly not hardware. It was – well, it was ‘soft-ware.’” [147]

Here we can see how the inventor of the word “software” described it as a form of “absolute power” over
“slavishly obedient hardware” which would follow his commands no matter how often he changed his
mind. The commands issued by a computer programmer were classified as “software” and the machines
slavishly obeying them were classified as “hardware.” Thus, since its inception the word “software” has
always represented a neologism for abstract power and top-down control authority over computers.

Using Niquette’s point of view, it’s easier to understand how computer programs represent a new type of
abstract power hierarchy. When a computer programmer writes software, they use a symbolic language
to formally codify a rank-based hierarchical system where they give themselves control authority over
computing resources (computers which often belong to other people). In much the same way that kings
or lawmakers formally codify the design of their abstract power hierarchies using rules of law,
programmers formally codify the design of their abstract power hierarchies using software. Whereas kings
assign themselves with abstract forms of power like “rank,” programmers assign themselves with abstract
forms of power like “admin rights.” Whereas kings benefit from their control authority over their subjects’
land, programmers benefit from their control authority over their subjects’ computers, the information
stored on those computers, and all the resources controlled by those computers.

When viewing software as a way to create abstract power and codify abstract power hierarchies, this line
of thinking raises a question: what physically prevents computer programmers and software administrators
from becoming exploitative or abusive with their “absolute power” and control authority over other
people’s “slavishly obedient” computers? The previous chapter outlined why abstract power hierarchies
are systemically insecure. People – Americans especially – should be keenly aware of the threat of high-
ranking individuals who are given too much abstract power and resource control authority. The citizens
of all countries which have ever had to suffer the rule of an oppressive ruling class should understand the
importance of being able to secure themselves against abusive and systemically exploitative people by
imposing severe, real-world physical costs on anyone who would try to exploit them.

If computers create new forms of property, policy, and other resources upon which entire populations
depend, and if software represents a new form of abstract power and control authority over those
resources, then how do populations maintain the ability to physically defend themselves in, from, and
through cyberspace against computer programmers who make themselves god-kings and abuse their
abstract power and resource control authority? If software and cyberspace fundamentally represent a
new kind of belief system, what is to stop systemic predators from systemically exploiting people through
this belief system? The answer appears to be nothing – at least not with the current architecture of the
internet. There does not appear to be anything physically preventing computer programmers from
exploiting populations at massive scales through their software. There does not appear to be anything
empowering people to secure themselves and their precious bits of information by imposing severe
physical costs on anyone who would try to systemically exploit their computer networks. With one clear
exception: Bitcoin.

The emergence of cyberspace may represent something as significant to cultural evolution as the
discovery of agriculture and the abstract power hierarchies used to govern agricultural resources.
Cyberspace is a globally adopted common belief system that is already radically transforming the way
societies organize themselves in much the same way that the emergence of agrarian abstract power
hierarchies did. Just as agrarian society led to the formation of agrarian empires, so too does cyberspace
appear to be leading to the formation of cyber empires, complete with the threat of oppressive god kings
rising to the top of the ranks. The inevitable next step, it seems, is cyber war.

5.6 Physically Resisting Digital-Age God-Kings

“Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every
nation, by children being taught mathematical concepts… A graphic representation of data abstracted
from banks of every computer in the human system. Unthinkable complexity.”
William Gibson [148]

5.6.1 How Did we Get Here?

The pace of abstract thinking and cultural evolution following Niquette’s observations was so fast that it’s
easy to overlook how this situation emerged, so what follows is a short summary. In 1953, the amount of
abstract power and resource control authority wielded by computer programmers and software
administrators like Niquette was minimal and tightly localized. Few people knew what stored-program
general-purpose state machines were at that time. People didn’t use these machines to manage critical
infrastructure, store sensitive data, or create global-scale virtual realities which transform how entire
populations perceive objective reality. But in a span of only six decades, computer programmers went
from encoding rudimentary symbolic Boolean logic into large and clunky electromechanical circuitry, to
developing what could be described as cyber empires ruled by administrators who wield unprecedented
levels of physically unconstrained abstract power. Why? For little other reason than what Niquette first
noted about software: it gives people “absolute power” over “slavishly obedient” machines, which directly
translates into absolute power, influence, and control authority over the people who depend upon those
machines.

Add Niquette and Columbus together, and you get today’s systemic security problem. Admiral Columbus
demonstrated that all it takes to get an entire population of people to do your bidding is for that
population to have an exploitable belief system. To achieve the same effect as Columbus and countless
other false prophets, people don’t have to believe in “good” or “god” anymore, they just have to believe
in Boolean logic assigned to transistor states. If you make yourself the person who programs how those
transistor states change, you become the person who can control what people see and believe.

In the 1940’s, engineers figured out how to turn circuits into symbolic languages which could be used to
issue commands to machines (i.e. machine code), paving the way for people like Niquette to wield
software as a form of abstract power and control over stored-program computers. In the 1950’s,
engineers figured out how to assemble commonly-recurring pieces of machine code into symbolic
mnemonics, making computer programming easier to do. This became known as assembly language, and
formula translators soon cleared a path towards even higher-level languages.

Assembly languages made the art of computer programming more accessible to people. This caused the
size and complexity of computer programs to explode throughout the 1960’s until the point where NATO
declared a “software crisis” in 1968 and asked for nations to come together to create a new field of
“software engineering” to manage the extraordinary complexity of computer programming. [30]

In the 1970’s, computer programming matured into its own distinct discipline known as “software
engineering,” considered to be separate and distinct from computer science and the profession of
computer engineering. These so-called
“software engineers” began to evolve their methodologies to manage the extraordinary complexity of
designing computer programs. They explored formal methodologies like discrete mathematical modeling,
and they invented techniques like structured programming and modularity to make computer program
design more manageable. These methodologies gave rise to the philosophy of systems thinking, which
accelerated the development of new fields of engineering like systems engineering in the 1980s.

Throughout the 1970’s and the 1980’s, computer programmers became increasingly more inclined to
think of their programs as complex systems which produce complex emergent behaviors. This led to what
some might describe as a software engineering renaissance. Software engineers developed an arsenal of
system engineering tools which transformed the look and feel of computer programming. At the same
time, developers struggled with the inherent complexity of building larger systems using assembly
languages and began to search for more effective ways to express computation.

This led to a new generation of semantically and syntactically complex computer programming languages
where the most common implementation patterns used in assembly languages (e.g. iteration, branching,
arithmetic expressions, relational expressions, data declarations, and data assignment) became the
primitive constructs for higher-level programming languages. Under pressure from NATO to create
uniform general-purpose programming languages (e.g. Ada), software engineers started inventing higher-
level general-purpose computer programming languages with very similar semantic and syntactic
expressions. Consequently, a common lexicon began to emerge.

Software engineers could now routinely talk about the same kind of if/then/else statements, for/while
loops, chars, vars, and function calls no matter what general-purpose language they learned. These
common general-purpose languages made software engineering so easy that people no longer needed
decades of experience in formal logic or computer science to program a computer; they just needed to
learn the syntactic and semantic rules of higher-order computer programming languages. By the 1990’s,
countries like the US began to legally classify general-purpose computer programming languages as
formal languages so they could be protected under free speech laws. [30]
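
To illustrate how similar these primitive constructs look across languages, the following is a minimal sketch in Python (chosen here purely for readability; any modern general-purpose language would express the same branching, iteration, assignment, and function-call primitives in nearly identical terms):

```python
# A minimal sketch of the primitive constructs shared by most general-purpose
# languages: data declaration/assignment, arithmetic and relational expressions,
# branching (if/then/else), iteration (for/while loops), and function calls.

def classify_readings(readings):
    """Count how many readings fall above or below an arbitrary threshold."""
    threshold = 100            # data declaration and assignment
    above, below = 0, 0
    for value in readings:     # iteration
        if value > threshold:      # relational expression + branching
            above = above + 1      # arithmetic expression
        else:
            below = below + 1
    return above, below

print(classify_readings([95, 101, 99, 110]))  # function call -> (2, 2)
```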

As previously discussed, sapiens use language to tell stories, and storytelling gives people the ability to
connect neocortices and construct shared, abstract realities where some people have abstract power.
Over time, the stories told by shamans turned into stories told by god-kings. The emergence of general-
purpose computer programming languages not only created a new type of literacy, it also created a new
type of storytelling available to practically anyone willing to teach themselves how to communicate with
computers. With any new form of storytelling comes a new way to create abstract realities where some
people have abstract power. Today, people only need to teach themselves how to write computer
programs to gain immense levels of abstract power and control authority over other people’s computers.

In the beginning, the virtual realities and abstract power hierarchies created by computer programmers
were rather small, confined to small computer networks or “tribes.” But then computers found their way
into single family homes and started communicating with each other across continents. Engineers started
running wires across oceans and computers started controlling society’s most critical infrastructure. Over
a few short decades, the ability to program computers became the ability to control critical resources
anywhere in the world from anywhere in the world. It wasn’t long after the emergence of the internet
that engineers started replacing signals sent across wires with signals sent across the electromagnetic
spectrum. Computers shrank from the size of a room down to the size of a pocket, and far more people
started to incorporate them into their lives, growing increasingly more dependent on them.

Along the evolutionary journey of computer programming, software never stopped offering a form of
absolute power and control authority over machines. The abstract power and resource control authority
available to computer programmers only grew as computers became more prevalent. While the
population became increasingly more dependent on their computers, they became increasingly more
dependent upon computer programmers.

Today, the amount of abstract power and resource control authority afforded to computer programmers
by the software they write has reached a level exceeding that of some nation states. Some countries (e.g. China)
have recognized the source of this abstract power and moved to capture it by ensuring direct state control
over software administrators. Other nations have exercised some restraint at the strategic cost of allowing
some computer programmers to become neo god-kings atop formally codified abstract power hierarchies,
wielding extraordinary and unprecedented control authority and influence over billions of people and their
computers. These neo god-kings are now trying to recruit as many people into their abstract empire as
possible, giving it attractive-sounding names like “the metaverse.”

The god-kings understand that the ability to program people’s computers translates directly into the
ability to control the data, information, and other resources which entire human populations rely upon to
shape their thoughts, guide their actions, and make sense of shared objective reality. Unfortunately,
people who are less familiar with computer science and power projection in agrarian society are less
inclined to see the trap.

Since multi-national populations have become highly dependent on their computers, they are already
highly dependent on the abstract power and control authority of software engineers who program them.
In a very short amount of time, under the noses of the people who don’t understand human
metacognition, cultural evolution, and abstract power building, software appears to have emerged as the
predominant form of empire-building in modern agrarian society, giving neo god-kings the ability to rule
over trans-national populations through the computers they carry around in their pockets. Today, it is not
even considered remarkable that a single person has physically unconstrained rank and control authority
over three billion people’s computers. But it has not yet occurred to many of those people that they could now
have the means to physically resist their god-kings when those god-kings inevitably become oppressive.

5.6.2 “The Metaverse” Could Represent the Same Kind of Entrapment Hazard as The Matrix

It is still too early to know what long-term impacts computers will have on human metacognition, but one
thing is already clear: sapiens have discovered a virtual frontier which blurs the lines between abstract
and objective realities even further than it was already blurred by tens of thousands of years of
storytelling, in completely new ways that our species has never experienced before. As more people
integrate symbol-generating virtual reality machines into their lives, they increase their potential to lose
sight of the difference between objective reality and abstract reality – the difference between real versus
imaginary things. More and more, people are entrapping themselves in semi-somnambulant dream states
where they allow strangers to control what they see, hear, and believe, feeding them with programmed
illusions of freedom, serendipity, and choice.

The future envisioned by the Wachowskis in the movie The Matrix appears to be coming to fruition.
Modern agrarian sapiens are becoming so engrossed in the shared hallucination of cyberspace that they
appear to be losing their grip on objective reality. “The Matrix” has been rebranded as “The Metaverse,”
but the dynamics of entrapment and exploitation are still the same, regardless of the semantics. A growing
number of people appear to be genuinely confused about the difference between the real world and the
imaginary world. In this confusion, they are not grasping the difference between real and imaginary things,
most notably the difference between real power and imaginary power. Combining this confusion with a
general ignorance of power dynamics in agrarian society, people appear to have no concept of the
tradeoff between physical power-based resource control and abstract power-based resource control. This
creates an unprecedented window of opportunity for systemic predators. As young, unsuspecting, self-
domesticated, and computationally illiterate people continue to migrate into the metaverse, they have
no idea how vulnerable they are.

This, of course, is not the first time modern agrarian society has become vulnerable to entrapment by
their belief systems and consensual hallucinations. The previous chapter provided a detailed description
of how sapiens developed increasingly complex applications of abstract thought until it eventually led to
the creation of abstract power hierarchies to settle intraspecies disputes, establish control authority over
intraspecies resources, and achieve consensus on the legitimate state of ownership and chain of custody
of intraspecies property. Sapiens first started using their abstract thinking skills for planning and pattern
finding, but then they started using it to assign imaginary meaning to recurring patterns observed in
nature through a process called symbolism. Symbolism enabled higher-order communication because it
allowed sapiens to develop semantically and syntactically complex languages as a protocol for exchanging
symbolically meaningful, conceptually dense, and mathematically discrete information.

Armed with symbolic language, sapiens discovered the art of storytelling, a process where they can
leverage people’s capacity for abstract thought to create shared imaginary realities for themselves to
explore together. These virtual realities were unbound by physical constraints, allowing people to explore
them within the safety and comfort of their own imaginations. Fictional stories enhance sapient relationship
building, information sharing, entertainment, and vicarious experience. But fictional stories also lead to
consensual hallucinations where storytellers give themselves passive-aggressive access to physically
unbounded levels of abstract power and control authority over the people who believe in those stories.

All abstract realities created by storytellers are consensual hallucinations. The fact that the mechanism
for telling these abstract stories has changed to computers doesn’t change the overarching systemic
dynamics of abstract power projection. People have tried valiantly to design abstract power hierarchies
which keep their populations secure from exploitation and abuse using nothing more than logic, but
5,000 years of written testimony prove that no combination of written logic has successfully prevented
an abstract power hierarchy from becoming dysfunctional, exploited, vulnerable to foreign attack, or
vulnerable to internal corruption. Physical power has always been needed to correct for these security
vulnerabilities. This has necessitated the use of physical power as the basis for settling disputes,
determining control authority over resources, and achieving consensus on the legitimate state of
ownership and chain of custody of property. The cyclical, clockwork nature of warfare is proof that
abstract power hierarchies aren’t fully functional.

Why is physical power so useful? Because it’s real, not abstract. It is therefore not endogenous to a belief
system which can be systematically exploited by storytellers, making it immune to the threat of god-kings.
Physical power works the same regardless of rank and regardless of whether people sympathize with it.
Physical power is objectively true, impossible to refute, and impossible to ignore. Physical power-based
control authority predates abstract power-based control authority by four billion years. Since the first
abstract power hierarchies emerged, physical power has always served as their antithesis. Physical power
has always been used to remedy the dysfunctionality of abstract power.

As much as sapiens hate to admit it, sapiens depend upon the complex emergent social benefits of
physical power competitions (a.k.a. warfare) to establish resource control authority the same zero-trust
and egalitarian way most other pack animals do. Physical power competitions may be energy intensive
and prone to causing injury, but they are a far more systemically secure way for populations to manage
resources than abstract power hierarchies are.

Abstract power hierarchies are insecure because they increase the benefit of attacking a population by
creating resource abundance and high-ranking positions with immense control authority. At the same
time, abstract power hierarchies have no ability to physically constrain the control actions of attackers or
oppressors. Rank and codified rulesets are all honey, but no sting. Therefore, people experience a high
benefit to exploiting abstract power hierarchies, but no physical cost for doing so. The BCRA of abstract
power hierarchies approaches infinity unless users figure out how to impose severe, physically prohibitive
costs.
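
To restate this quantitative intuition in a compact form (a minimal sketch, treating BCRA as the ratio of the benefit of attack BA to the cost of attack CA, consistent with how these terms are used throughout this thesis):

```latex
% BCRA: benefit-to-cost ratio of attack; B_A: benefit of attack; C_A: physical cost imposed on the attacker.
\mathrm{BCRA} = \frac{B_A}{C_A},
\qquad \lim_{C_A \to 0^{+}} \mathrm{BCRA} = \infty,
\qquad \lim_{C_A \to \infty} \mathrm{BCRA} = 0
```

An abstract power hierarchy with no mechanism for imposing physical costs sits in the first limit: BA grows with the resources the hierarchy controls while CA stays near zero, so the ratio grows without bound.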

What does this have to do with software? It has everything to do with software, because software is the
continuation of the same 10,000-year-old trend of people trying to create and wield abstract power over
others. This isn’t sapiens’ first rodeo with the exploitation and abuse of abstract power hierarchies. We
know exactly how this is likely to unfold because it has unfolded countless times before, under the same
systemic conditions. We know that there is no physical limit to the amount of rank and resource control
authority that software developers can wield. We can reasonably predict that eventually, populations will
have to learn how to physically constrain and decentralize this growing and unimpeachable abstract
power, or else they will be exploited and oppressed. Since the early days of the first god-kings, agrarian
populations have routinely used physical power to constrain, dismantle, decentralize, and countervail
abstract power hierarchies to secure themselves against the threat of overreaching abstract power. By
continuously engaging in a global-scale physical power projection competition, no single abstract power
hierarchy has been able to expand too far or gain too much centralized control authority over agrarian
resources.

We also know that domestication gives us a causally inferable, randomized experimental dataset to show
how insecure animals become to systemic exploitation and abuse when they are incapable of or
disinclined to project physical power. We know that abstract power hierarchies have a repeated and
demonstrable strategic security risk of self-domestication. We know that it is easy to condemn the use of
physical power to settle disputes, establish control over resources, or determine consensus on the state
of ownership and chain of custody of property because of the energy it uses or the injury it causes. But when
populations condemn the use of physical power for ideological reasons, they become systemically
insecure against exploitation and abuse. We know all of this because agrarian society has learned this
lesson the hard way. Thousands of years of physical confrontation have made the strategic necessity of
physical power quite clear.

We know that as populations became more inclined to impose severe, physically prohibitive costs on
abusive abstract power hierarchies, resource management systems became gradually less exploitative
over time. In other words, abstract power hierarchies become less exploitative when people stand up and
fight for their rights. Over thousands of years of warfare, the extraordinarily high levels of abstract power
and resource control authority wielded by god-kings were eventually reduced to kings, then to presidents
and prime ministers. The centralized control of empires and monarchies was eventually decentralized
into democracies with carefully encoded checks and balances to restrict people’s abstract power as much
as possible. Theologies, philosophies, and ideologies increasingly favored equality and individual rights,
and the rulesets reflected it. This cultural evolution was gradual and filled with regression, but the trend
is clear: sapiens will find ways to utilize physical power to dismantle, decentralize, and constrain abstract
power hierarchies. Why? Because abstract power hierarchies are demonstrably and incontrovertibly
untrustworthy, inegalitarian, and dysfunctional.

War is the trend. War has always been the trend. If there’s one thing that agrarian society has made
explicitly clear, it’s that they’re willing to fight wars to decentralize and physically constrain the abstract
power of their own rulers or their neighbor’s rulers. And right now, cyberspace is missing a cyber
warfighting protocol for digital age society to physically protect and defend their digital rights.

5.6.3 Digital-Age Agrarian Society will Inevitably Fight Wars In, From, and Through Cyberspace

Here we finally arrive at the primary hypothesis of this thesis and the reason why the author dedicated
hundreds of pages to developing a new theory about power projection from which to analyze Bitcoin. The
core concepts outlined in Power Projection Theory laid the groundwork needed to understand the
following insight: the power projection dynamics and cultural evolution of agrarian society which took
place over dozens of millennia appear to be repeating themselves following the invention of computers.
Except now, it appears to be happening hundreds of times faster. If history is going to repeat itself in
clockwork fashion again, then agrarian society is due for another war.

Except this time, it appears like it’s going to be an electro-cyber war fought in, from, and through
cyberspace over zero-trust, permissionless, and egalitarian control over digital-age resources. Not only
that, the emergence of technologies like Bitcoin could indicate that this war has already started, but
nobody recognizes it because they don’t understand digital-age power projection (hence the need for
someone to create Power Projection Theory).

Whereas prehistoric sapiens took tens of thousands of years of abstract thinking to develop symbolic
languages and then use them to formally codify abstract power hierarchies, modern sapiens equipped
with computers have taken only tens of years to formally codify abstract power hierarchies using symbolic
languages, giving them control authority over society’s digital-age resources at unprecedented speed and
scale.

As history continues to repeat itself, these new types of software-codified abstract power hierarchies are
becoming as systemically insecure and oppressive as their predecessors were. It is reasonable to believe
that this will lead to population-scale exploitation (assuming it hasn’t already – which would be easy to
debate) which must inevitably be resolved the same way it always has been resolved: using a real-world
physical power competition to physically constrain and decentralize the emerging tyrannical, technocratic
ruling classes and their oppressive empires. In other words, a new form of abstract power is likely to bring
a new form of warfare to physically decentralize and constrain it, just as warfare has always been used as
a mechanism to physically decentralize and constrain abstract power hierarchies.

The author hypothesizes that humanity is going to become so tired of being systemically exploited at
unprecedented scales through their computer networks by an elite, tyrannical, and technocratic ruling
class, that they are going to invent a new form of electro-cyber warfare and use it to fight for zero-trust
and permissionless access to cyberspace and its associated resources (namely bits of information).

Using some type of new physical power projection tactic, people will be empowered to compete for
egalitarian access to and control over bits of information passed across cyberspace. This new form of
electro-cyber warfighting will allow the population to keep itself systemically secure against exploitation
from an abusive ruling class by giving them the ability to impose unlimited amounts of severe physical
costs on their oppressors. An open-source electro-cyber warfighting protocol would allow people to keep
themselves physically secure in cyberspace (i.e. virtual reality) using the same technique they already use
to keep themselves physically secure in objective reality: by making it impossible to justify the physical
cost (in watts) of attacking them. An illustration of this hypothesis is shown in Figure 64.

Figure 64: A Repeating Pattern of Human Power Projection Tactics
[1, 2, 3, 4, 5, 149, 150, 151, 152]

Without this type of new warfighting protocol, entire populations (including entire nations) will become
increasingly vulnerable to widescale systemic exploitation and abuse via cyberspace. Just as populations
rely upon physical power competitions as a protocol to physically constrain abstract power and control
authority over agrarian resources, a primary hypothesis of this thesis is that all populations – including
and especially nation states – are going to need an equivalent protocol to physically constrain the abstract
power and resource control authority wielded by those who write and administer computer programs.
Without a “softwar” protocol, entire nations are vulnerable to becoming exploited and subservient to a
technocratic ruling class which could emerge either domestically or from a foreign power.

People are going to need a physical cyber security protocol designed to provide society with the same
complex emergent benefits as warfare. The goal of this so-called cyber warfighting protocol would be to
give populations the ability to secure themselves using physical constraints rather than (demonstrably
unsuccessful) logical constraints. By literally empowering people to impose severe physical costs on
people and programs in, from, and through cyberspace, agrarian society will be able to once again
physically constrain people who abuse their imaginary power and control authority. At the same time,
this would likely unlock a zero-trust, permissionless, egalitarian method of physical power projection in
cyberspace that could transform the profession of cyber security and possibly even change the
foundational architecture of the internet.

5.7 Projecting Physical Power in, from, and through Cyberspace

“…power is almost everything. It doesn’t matter what you think is right;
it matters what you can demonstrate and enforce is right.”
Lyn Alden [153]

5.7.1 There’s Two Ways to Constrain Software: Logically or Physically

Now that we have established a thorough conceptual understanding of power projection tactics in nature,
human society, and cyberspace, we can finally begin to analyze the strategic security implications of
Bitcoin from a different point of view. Not merely as monetary technology – but as physical power
projection technology and a new form of digital-era, electro-cyber warfare.

So far, the author has focused on providing an exhaustive explanation for why human populations use
physical power competitions (i.e. wars) to settle disputes, determine control authority over resources,
establish consensus on the legitimate state of ownership and chain of custody of property, physically
constrain abstract power hierarchies, preserve international trade routes, and secure themselves against
systemic exploitation and abuse. This lengthy discussion was designed to help the reader understand why
society would even want to adopt an electro-cyber power projection protocol in the first place, which, in
the author’s opinion, is a much harder and more important question to answer than how it could be done.
Perhaps the reason why people don’t understand the value of Bitcoin is because they don’t understand
why people would want to project power and impose severe physical costs on others in, from, and through
cyberspace.

Now that we have answered the why, the next question to ask is how it could be done. To answer this
question, we can begin by asking ourselves to consider what a global-scale cyber war might look like.
Would it even be possible to wield and project physical power (a real-world thing) in, from, and through
cyberspace (an abstract domain)? How could someone impose physical (a.k.a. non-virtual) costs virtually?

The key to understanding how this could be done is to return to the first principles of computer theory.
First, we can recall Von Neumann’s early observation about stored-program state mechanisms: “It is easy
to see by formal-logical methods, that there exist codes that are in abstracto adequate to control and
cause the execution of any sequence of operations which are individually available in the machine, and
which are in their entirety, conceivable by the program planner.”

In this quote, Von Neumann highlights that software has two primary constraints: the physical limits of
the state machine, and the imagination of the computer programmer. First, the software running on a
computer can be constrained using any design logic conceivable by the program planner (logic which we
have established is routinely dysfunctional and incapable of securing software against the systemic
exploitation of its own logic). Alternatively, a computer program can be constrained by simply physically
constraining the underlying state mechanism running the program. This is illustrated in Figure 65.

Figure 65: Illustration of Two Ways to Constrain a Computer Program
[154]

This observation has subtle but important implications for cyber security because it offers insight about a
potential way to improve cyber security by physically constraining computers rather than continuing to
attempt to encode logical constraints into software (a practice that is clearly ineffective, hence the current
hacking epidemic). To make cyberspace more secure, one option is to find a way to physically constrain
the underlying computers connected to the internet. This would imply that what cyberspace is missing is
an open-source protocol and the supporting infrastructure needed to empower people to physically
constrain computers. An open-source protocol which empowers people to physically restrict
computers through the internet would theoretically give them an unprecedented physical cyber
security capability which they could use to physically secure themselves in, from, and through cyberspace.

5.7.2 Not Being Able to Physically Constrain Computers is a Major Systemic Security Vulnerability

So far, we’ve established that software system security vulnerabilities are derived from insufficient
constraints on control signals which can be exploited in such a way that it puts software into insecure or
hazardous states. Software security engineers have a major disadvantage in their ability to constrain the
control actions executed by computer programs in comparison to physical control actions executed by
people or machines. The fundamental challenge is that software engineers must use discrete
mathematical logic to constrain software control actions because it is otherwise impossible to physically
constrain an undesired command signal without physically constraining the underlying state mechanism.

Herein lies a key insight that remarkably few computer theorists seem to understand. The inability to
apply physical constraints on computers is a major systemic security vulnerability. Not having a
mechanism to apply physical constraints on computers forces software engineers to design trust-based
abstract power hierarchies, where sensitive control actions are logically constrained by giving the
authority to execute them to a select few users of high rank (e.g. system admins, who are usually the
engineers themselves). This consolidates or “centralizes” software administrative permissions to select user
accounts that can be systemically exploited. Software engineers must then devise ways to prevent
outsiders from accessing and exploiting these special administrative permissions. At the same time, the
software’s legitimate administrators and engineers must be trusted not to exploit their self-encoded
abstract power and control authority over the system. Because these control structure designs rely on
trusting people not to execute unsafe control actions rather than physically constraining them, they are
systemically insecure. It turns out that trusting a small, centralized group of software engineers and
administrators not to exploit their abstract power is a highly ineffective security strategy backed by
thousands of years of testimony. The BCRA of a trust-based abstract power hierarchy approaches infinity
as the benefit to attacking or exploiting it increases. Meanwhile, users must accept the risks posed by a
perpetually increasing BCRA because they have no alternative way to physically constrain unsafe control
signals or impose severe physical costs on people who exploit their special permissions over the system.
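
To make the pattern concrete, the sketch below (written in Python, with hypothetical names invented purely for illustration) shows the trust-based control structure described above: a sensitive control action is “constrained” by nothing more than a rank check against a short list of privileged accounts, so the system’s security reduces to trusting whoever holds, or obtains, those accounts.

```python
# A minimal sketch of a trust-based (logically constrained) control structure.
# All names here are hypothetical and purely illustrative.

ADMINS = {"alice", "bob"}  # the select few high-ranking user accounts

def delete_all_records(user, database):
    """A sensitive control action constrained only by abstract rank."""
    if user not in ADMINS:        # logical constraint: a rank check encoded in software
        raise PermissionError("insufficient rank")
    database.clear()              # nothing physically prevents execution of the action

# Anyone who compromises (or simply *is*) an admin account can execute the
# unsafe control action at effectively zero physical cost; the constraint is
# written logic, not a real-world cost imposed on the actor.
```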

5.7.3 A Protocol which Physically Constrains Computers Would Look Like a Very Inefficient Program

To remedy this glaring systemic security vulnerability in our current approach to cyber security, we need
a way to physically constrain computers. Using physical constraints to improve cyber security is not a new
concept. For example, a common cyber security strategy in many organizations is to physically remove
hardware (e.g. USB drives) from computers to prevent belligerent actors from stealing sensitive
data from a given computer network or uploading malware to it. US military personnel physically secure
their encryption keys by carrying them on specially-designed common access cards (CACs) in their wallets.
The same principle also applies to strategies like air-gapped networks. An air gap represents a real-world
physical constraint applied to the underlying state mechanisms of a given computer network.

It should be noted, however, that these examples are self-imposed physical constraints that are not
transferable in, from, and through cyberspace. If someone were to develop a technique for applying
physical constraints to other people’s computer programs in, from, and through cyberspace, that would
represent a remarkable new capability in cyber security. And it would be especially noteworthy if this
capability manifested itself as a globally-adoptable open-source protocol that utilized an existing
infrastructure to apply these physical constraints to computers connected to the internet.

With these concepts in mind, the rest of this chapter can be summarized with the following: it is
theoretically possible to create computer programs that are inherently secure because it’s either
physically impossible or too physically difficult to put them into a hazardous state, simply by intentionally
applying physical constraints to the underlying state mechanisms running the software. It is also
theoretically possible to design computer protocols which can apply real-world physical constraints to
other people’s computer programs in, from, and through cyberspace. The protocol is called proof-of-work.

If an open-source protocol were invented that allowed people to physically constrain or “chain down” the
underlying state mechanisms connected to the internet, that would likely become a valuable cyber
security protocol. If the protocol were globally adoptable in a zero-trust, permissionless, and egalitarian
way, it could become so valuable that it would represent a national strategic priority to adopt it. There is
an imperative for nation states to be on the lookout for the development of any kind of open-source
physical power projection protocol which empowers people to physically constrain or “chain down” other
people’s computer programs in, from, and through cyberspace. A protocol like this would be of national
strategic significance because it would represent the discovery of a globally-adopted physical security
protocol for cyberspace and an unprecedented new cyber security capability. This concept is illustrated in
Figure 66.

Figure 66: “Chain Down” Design Concept for Physically Constraining Computers
[154, 155]

Knowing that such a protocol is theoretically possible, the next question becomes, what would such a
system look like? What kind of physical constraints would the system impose to “chain down” computers
and how would it work? What kind of supporting infrastructure would this system require to apply these
physical restrictions? One way to physically constrain computers via cyberspace is by filtering all incoming
control signals through a special kind of state mechanism with a physically constrained state space that
has deliberately-difficult-to-change states. In other words, to physically constrain other people’s
computers via cyberspace, create a computer that is intentionally costly (in terms of watts) to operate,
and then require other people’s software to run it. This concept is illustrated in Figure 67.

Figure 67: Security Protocol Design Concept of a Deliberately Inefficient State Mechanism
[156, 154]

Herein lies another key insight that many people (including but especially software engineers and
computer scientists) overlook. A special type of computer that is highly inefficient, with intentionally
difficult-to-change states, could have counterintuitively beneficial emergent properties for cyber security.
By deliberately designing a computer to be inefficient (i.e. intentionally physically costly to operate) and
then designing software logic requiring other people to use it, it would be possible to apply physical
restrictions to other people’s computers in, from, and through cyberspace. There could be extraordinary
strategic benefits to deliberately creating an inefficient state mechanism which takes an excess quantity
of physical resources (e.g. electricity) to operate, as those thermodynamic restrictions would double as a
way to physically constrain sensitive software control actions rather than logically constraining them.

The key insight that seems to be overlooked by computer theorists is that a highly inefficient computer to
which everyone has zero-trust and egalitarian access would likely be very beneficial for cyber security
purposes. Based on this insight, we can see that an open-source, globally-adopted physical security system
for cyberspace would likely manifest itself as something which, on the surface, appears to be
extraordinarily inefficient. This inefficiency would not be a bug, but rather a feature. In fact, it would be
the primary value-delivered function of a system which allows people and their programs to apply physical
constraints on other people and programs in, from, and through cyberspace. The inefficiency of this
system would represent the real-world physical costs imposed by the system onto people in the virtual
domain.

Adding these thoughts together, the author offers the following insight: to recognize an unprecedented
new cyber security capability that could transform how people secure digital resources, we should look
for the emergence of open-source protocols which, to the untrained and uninformed eye, would appear to
be highly computationally inefficient. This inefficiency would not be a bug that needs to be corrected – it
would be a very important and highly noteworthy feature that is essential for physically securing software.
With this in mind, we turn our attention to cost function protocols and the invention of proof-of-work.
5.7.4 Physical Cost Function Protocols (a.k.a. Proof-of-Work Protocols) are Not Well Understood

The author challenges the reader to think of an open-source protocol as infamous for its computational
inefficiency as Bitcoin. Bitcoin is constantly criticized for how much power is consumed by the computers
which run it. What is not understood is why computational inefficiency is an incredibly useful cyber
security feature, not a bug. In other words, Bitcoin’s computational inefficiency is its primary value-
delivered function – it’s the thing that has been missing in cyber security that is needed to physically
constrain belligerent actors, rather than (continue to fail to) logically constrain them.

The author hypothesizes people don’t understand the security benefits of Bitcoin’s inefficiency because
they don’t understand how physical security works, nor the enormous systemic security benefits gained
by projecting physical power (a.k.a. watts) to impose severe physical costs on attackers in order to raise
one’s CA and thus lower one’s BCRA. Without this background, it’s difficult to understand why open-source
computer programs which literally empower people to do this via cyberspace may represent an
unprecedented and remarkable cyber security capability which could change the way people design their
software and even potentially transform the underlying architecture of the internet.

This lack of understanding about how power projection works is why the author has outlined the following
first principles approach to understanding the Bitcoin protocol, not as a monetary system, but as a physical
security system which utilizes a computationally inefficient state mechanism to give people the ability to
impose real-world, physically prohibitive costs on other people in, from, and through cyberspace. To build
a first principles foundational understanding of Bitcoin as a security system, it is necessary to develop a
thorough understanding of the security design concept colloquially known as “proof of work,” or what the
author alternatively calls “proof of power.”

5.7.5 To Eliminate Superfluous Control Signals (a.k.a. Spam), Make Control Signals Superfluously Costly

Today, most computer programs send control signals across cyberspace at practically no marginal cost.
The lack of marginal cost is a byproduct of engineers designing highly efficient computers. Unfortunately,
cheap computing is a feature that is commonly exploited by belligerent actors. For example, it’s possible
to attack computer networks by sending millions of superfluous control signals (e.g. service requests) to
overwhelm a target network’s bandwidth. Ironically, the reason these control signals can be sent so
superfluously is that the costs associated with sending them aren’t superfluous. Because computers don’t add
superfluous costs (like additional electricity) to sending control signals, it is trivial for belligerent actors to
flood target networks with superfluous control signals.

This type of systemic exploitation is commonly known as a denial-of-service (DoS) attack. The near-zero
marginal cost of using efficient computers to send superfluous control signals across the internet is what
makes practically all email spam, comment spam, bot farms, troll farms, and many other popular
exploitation tactics possible. Because there’s minimal physical cost associated with these activities,
practically no physical cost is imposed on those who systemically exploit these control signals. In other
words, the BCRA of this type of attack is high because CA is practically zero.

5.7.6 Searching for the Best Way to Make Control Signals Superfluously Costly to Execute

For decades, software engineers have been experimenting with different ways to improve cyber security
by creating control structure designs which physically constrain unsafe control signals by physically
constraining underlying computers. There are many different ways that this can be done, each with their
own tradeoffs.

A popular way to physically constrain unsafe control signals is by physically constraining access to the
computers which send those signals. As previously discussed, this can be done via physical computer
access control points or air gaps. For example, the US military keeps top secret intelligence information
secure against data leaks by physically restricting access to the computers which store that information
using air-gapped intranets where each computer in the network is locked within a specially-designed
sensitive compartmented information facility (SCIF). This air-gapped intranet with physically constrained
access control points is known as the Joint Worldwide Intelligence Communication System (JWICS).

Physically constraining computers in this way has major tradeoffs, though. First, the functionality and
utility of these air-gapped intranets are severely restricted because they are difficult to access, and they
cannot easily communicate with other computer networks. Second, they are still systemically insecure
because they are trust-based, inegalitarian, permission-based systems within a centralized abstract power
hierarchy, thus they’re still vulnerable to systemic exploitation. People must rely on trusted third parties
with abstract power and control authority over these networks to gain and maintain access, and that
access can be revoked at any time. Additionally, once granted access to these air-gapped networks, people
have to be trusted not to exploit their access – there’s little physically preventing them from leaking the
sensitive information contained within JWICS networks to other networks. As many highly-publicized
leaks of JWICS data would suggest, these types of trust-based security systems are systemically insecure.

Fortunately, there appears to be other ways to physically constrain unsafe software control signals that
don’t suffer from these negative tradeoffs. Software engineers have been investigating different ways to
do this for at least three decades. Throughout the late 90’s and early 2000’s, software engineers began to
propose different kinds of software protocols which could prevent an adversary from gaining access to
online resources by simply making it superfluously costly for them to do so.

The first software engineers to publish a paper about this security concept were Cynthia Dwork and Moni
Naor. In their paper, Pricing via Processing or Combatting Junk Mail, Dwork and Naor state the following:
“We present a computational technique for… controlling access to a shared resource… The main idea is to
require a user to compute a moderately hard, but not intractable, function in order to gain access to the
resource, thus preventing frivolous use. To this end, we suggest several pricing functions…” [25]

This paper was the first to offer a subtle but profound cyber security concept: to secure access to
resources, simply decrease the benefit-to-cost ratio of accessing those resources by increasing the cost of
accessing them. In other words, Dwork and Naor’s idea was to utilize the primordial economic dynamics
discussed at length throughout Chapter 3. To keep online resources secure against DoS attacks, simply
increase the computational cost (i.e. increase CA) of executing certain control actions to decrease the BCRA
of executing those control actions. Dwork and Naor offered several ideas for candidate pricing functions
(including hash functions) which could be used as the mechanism to add superfluous costs, but they
concluded that “… there is no theory of moderately hard functions. The most obvious theoretical open
question is to develop such a theory…” [25]

Here we see the emergence of the idea that software can be secured using the same primordial economic
dynamics that life has been using for four billion years: simply decrease the BCRA of attacking or exploiting
software, rather than trying to write exploitable logic. All that appeared to be missing was for someone
to develop the necessary theories of how it could work, why it could work, and specifically what cost
function algorithms should be used.

By 1999, Dwork and Naor’s idea of securing online resources by making computers solve moderately hard
pricing functions was formally named “proof of work.” Multiple computer scientists, including Markus
Jakobsson and Ari Juels, conceived of potential use cases and designs of proof-of-work protocols. The
concept was simple: a computer can constrain the control signals sent by belligerent computers by
superfluously increasing the computational cost of sending those control signals. In other words, a
computer can constrain another computer by making it computationally inefficient for them to send
specific control signals. As previously discussed, there is clear value in computational inefficiency and the
strategy is quite simple: to physically constrain a bad guy’s computer, create a computer program that is
deliberately inefficient to run, and then force the bad guy to run it. [27]

It's important for the reader to note that, like all software specifications, semantic specifications of
software design concepts like “proof-of-work” are completely arbitrary (see section 5.4 for a more
thorough breakdown of this concept). When proof-of-work was formally introduced into scientific
literature, it was arbitrarily called a “bread pudding” protocol. Jakobsson and Juels explained the name as
follows: “Bread pudding is a dish that originated with the purpose of re-using bread that has gone stale.
In the same spirit, we define a bread pudding protocol to be a proof of work such that the computational
effort invested in the proof may also be harvested to achieve a separate, useful, and verifiably correct
information.” In the very same year, the same author (Juels) gave the same proof-of-work protocol a
different name: a “client puzzle” system. The takeaway? What people call their proof-of-work software
simply doesn’t matter; it can be called anything. What matters is not the name; what matters is how it
works, why it works, and most importantly, why people would want to use it. [27, 26]

Interestingly, two years prior to the release of these formally published academic papers about how proof-
of-work protocols could work, a software engineer named Adam Back privately released his own version
of an operational proof-of-work protocol concept he named “hashcash.” Despite having a different name,
the intent of the protocol was the same: to secure software against threats like DoS attacks by adding
superfluous costs to sending unwanted control signals, thus decreasing the benefit-to-cost ratio of
sending unwanted control signals. Just like his peers did, Back asserted that this type of protocol could
function as “a mechanism to throttle systemic abuse of internet resources.” [157]

5.7.7 Proof-of-Work Protocols are Literally Proof-of-Power Protocols

It is imperative for the reader not to overlook the fact that proof-of-work protocols very specifically create
real-world physical costs that can be measured in watts. These protocols are computationally inefficient
not just because of how many calculations they require, but because of how many watts must be
consumed to make those calculations. By using a deliberately inefficient protocol like this, a person or
program applies a real-world physical cost on the execution of control signals sent by other people or
programs operating on different computers. To emphasize this point, the author will interchangeably use
the term “physical cost function” instead of “pricing function” and “proof of power” or “bitpower” instead
of “proof-of-work” where the term “physical cost” refers to the real-world physical deficit of electric
power (a.k.a. watts) required to generate a proof of power.

By increasing the physical cost (a.k.a. watts) required to execute a command action or send a control
signal, Back showed how it was possible to increase the marginal cost of exploiting certain control signals
exactly as envisioned by Dwork and Naor. This can be done by utilizing a two-step process shown in Figure
68. The first step is to create an algorithm that is so computationally difficult to solve that it imposes
excess physical cost (a.k.a. watts) on computers attempting to solve it. This algorithm effectively creates
a vacuum of electric power that must be filled for the physical cost function to generate a proof of power.

Figure 68: Visualization of the Two-step Process of Adam Back’s Physical Cost Function Protocol
[158, 159, 160]

Finding the right hashing algorithm to achieve this first step was a challenge that took years to resolve.
Many candidates for physical cost functions were discussed throughout the 90’s, but Back’s physical cost
function became popular because the algorithm he designed doesn’t appear to provide an advantage to
any particular method for solving it. It utilizes a lottery-style “pick the winning number” technique where
computers must repeatedly guess random numbers until they pick the winning number.

A major advantage of this type of lottery system is that it appears to be perfectly fair. Back’s algorithm is
equally and uniformly difficult for everyone to solve. On a per-guess basis, nobody appears to be able to
gain an upper hand on solving Back’s algorithm no matter who they are or what kind of guessing strategy
they use. Any single guess has the same odds of being correct as any other, and there’s no apparent
way to “game” or exploit the algorithm to give somebody an unfair guessing advantage. Consequently,
the most effective way to solve Back’s algorithm is by using a brute-force guessing technique where
computers must make as many random guesses as they can as quickly as possible.

The second step of Back’s cost function protocol is to issue an abstract proof or receipt to the entity which
successfully solves the hashing algorithm. This allows the bearer of the proof/receipt to verify that they
incurred an excess physical cost (as measured in watts) for solving the algorithm. Back arbitrarily called
this abstract receipt a “proof-of-work” based on the precedent set in formal academic literature by
Jakobsson and Juels and their “bread pudding” protocol, where “proof” refers to a bearer asset (e.g. a
receipt, stamp, or token), and “work” refers to the expenditure incurred to solve the hashing algorithm.
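
To make the two-step pattern concrete, the following is a minimal, illustrative sketch in Python. It is not Back’s original hashcash implementation; the difficulty setting, function names, and use of SHA-256 are assumptions chosen for readability. Step one is the brute-force lottery that imposes the physical cost; step two is the cheap verification of the resulting proof/receipt.

```python
import hashlib
import os

DIFFICULTY_BITS = 20  # illustrative setting; higher values demand more guesses (and more watts)

def solve(message: bytes) -> bytes:
    """Step 1: the brute-force lottery. Guess random nonces until the hash of
    (message + nonce) falls below the target, i.e. has DIFFICULTY_BITS leading zero bits."""
    target = 1 << (256 - DIFFICULTY_BITS)
    while True:
        nonce = os.urandom(8)                          # a random guess
        digest = hashlib.sha256(message + nonce).digest()
        if int.from_bytes(digest, "big") < target:     # the "winning number"
            return nonce                               # returned as the proof/receipt

def verify(message: bytes, nonce: bytes) -> bool:
    """Step 2: a single cheap hash lets anyone confirm that excess computational
    (and therefore physical) cost was incurred by whoever produced the nonce."""
    target = 1 << (256 - DIFFICULTY_BITS)
    digest = hashlib.sha256(message + nonce).digest()
    return int.from_bytes(digest, "big") < target

proof = solve(b"example control signal")
assert verify(b"example control signal", proof)
```

The asymmetry is the point: under these assumed settings, producing a valid nonce takes roughly 2^20 (about one million) guesses on average, while verifying one takes exactly one hash.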

Something that is vitally important for the reader to understand about “proofs of work” is that they
represent an abstraction of a real-world physical phenomenon: power. The “proof” or token or receipt
that is used to verify the power expenditure is abstract, but the power expenditure is a real-world physical
phenomenon that is measurable in watts. The reason why this is important will be discussed in much
further detail in section 5.9, but the main takeaway for now is that a “proof-of-work” is an abstract way
of describing a real-world physical thing, which means that “proofs of work” inherit all the physical and
systemic properties of physical power which are physically impossible for software to replicate (as
discussed in section 5.9, this is why it is physically impossible for proof-of-stake protocols to replicate the
same systemic security benefits of proof-of-work systems).
As shown in Figure 68, Back’s physical cost function protocol can be visualized as a black box with a single
input and a single output. The input is a vacuum of real-world physical power (a.k.a. watts) which must be
filled to solve the protocol’s hashing algorithm. The output is the proof-of-power receipt issued to prove
that a real-world physical power expenditure was incurred to solve the algorithm.

The reader may be asking, what’s the point of a computer program which intentionally creates a vacuum
of physical power that must be filled, and then issues a receipt to the entity which fills it? The answer is
worth repeating: this activity imposes a physical cost on other people in, from, and through cyberspace.
With the invention of physical cost function protocols, it is possible to impose real-world physical costs on
others by simply demanding that they present proof of power. It’s an extraordinarily simple but effective
security concept.

This is where the core concepts of Power Projection Theory become so important to understand. Back’s
“hashcash” could represent the first earnest attempt at building a physical-cost-of-attack function, or CA
function. Proof of physical power expenditure represents proof of real-world physical cost, which means
proof of high CA. Back’s protocol showed how it was feasible to physically constrain the execution of
unwanted signals by simply increasing control signal CA.

By increasing the physical cost of sending unwanted control signals, physical cost function protocols
increase the physical cost of exploiting or abusing those control signals. People gain the ability to improve
systemic security and solve the survivor’s dilemma by projecting physical power. The more people
increase the physical cost of sending control signals (not just spam, as originally envisioned by Dwork and
Naor, but any type of unsafe or unwanted control signal), the more its CA increases and the lower its BCRA
becomes. Control signals with lower BCRA are intrinsically more secure against systemic exploitation or
abuse because they’re more physically expensive to exploit. This concept is illustrated in Figure 69.

Figure 69: Illustration of Two Different Types of Control Signals
[160]

The arrow on the left represents a typical “low-cost” control signal sent by a computer program across
cyberspace at near-zero physical or marginal cost. The arrow on the right represents a “high-cost” control
signal which has been physically constrained through the addition of proof-of-power. By “stamping” a
control signal with proof-of-power, the receiving computer of that control signal can verify excess physical
cost (a.k.a. watts) was imposed on the user who sent this control signal, giving it a higher CA and lower
BCRA, thus making it more strategically secure against exploitation and abuse than an ordinary low-cost
control signal with high BCRA.
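The asymmetry that makes this verification practical can be seen in a receiver-side sketch (again illustrative, pairing with the hypothetical mint_stamp function above): checking a stamp takes a single hash, while producing one takes roughly 2**difficulty_bits hashes.

```python
import hashlib

def verify_stamp(message: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
    """One cheap hash for the receiver confirms the sender paid roughly
    2**difficulty_bits hashes' worth of electricity to stamp this signal."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```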
The ability to increase control signal CA gives software security engineers a capability they previously didn’t
have: the ability to use software to physically (not just logically) constrain sensitive control signals. It also gives users an ability they didn't previously have: the ability to project real-world physical power against
other people and programs in, from, and through cyberspace. With the invention of physical cost
functions, all one needs to do to impose physically prohibitive costs on other computers, programs, and
computer programmers is to simply refuse to accept control signals unless they present proof-of-power.

With the invention of physical cost function protocols, software engineers can secure computer programs
in the same way that animals project power to secure themselves in nature, and sapiens project power to
defend themselves in society. By raising the CA and lowering the BCRA of control signals, it is theoretically
possible to dramatically improve the security of software against systemic exploitation and abuse by any
belligerent actor, in novel ways that were previously not considered possible. Thus, physical cost functions likely represent a noteworthy contribution to the field of cyber security that people are apparently overlooking because they have adopted the habit of calling them monetary protocols.

Another interesting observation about physical cost function protocols is that they appear to be the
continuation of a four-billion-year-old trend of organisms developing increasingly clever power projection tactics to keep themselves secure against predators by continually increasing their capacity to
impose physical costs on them. Using physical cost functions, sapiens appear to have discovered a way to
project power by directing energy in such a way that it enables them to impose severe, physically
prohibitive costs on their neighbors via cyberspace. It should be noted that because physical cost functions
passively utilize an electric form of physical power rather than a kinetic form of physical power, they
represent a non-lethal form of global-scale power projection – making them potentially disruptive to
traditional methods of agrarian warfighting (more on this later).

A globally-adopted proof-of-power protocol could mean that people have figured out a way to impose
severe physical costs on entire populations of people (such as entire nations) – but in a way that is
incapable of causing injury. Therefore, not only do physical cost function protocols likely represent a
noteworthy contribution to the field of cyber security, but they could also one day be considered to be a
noteworthy contribution to the field of security in general – particularly national security. Physical cost
functions literally empower people to physically secure themselves, their digital-age resources, and their
policies against exploitation or foreign attack without physically injuring people or causing practically any
form of physical pain or discomfort. This is a remarkable capability considering how many global-scale
power competitions are highly destructive and injurious.

5.8 Electro-Cyber Dome Security Concept

“Power lies not in resources but in the ability to change behavior…
As the instruments of power change, so do strategies.”
Joseph Nye [161]

5.8.1 Imposing Infinitely Scalable Physical Costs on Computers, Programs, and Computer Programmers

Recalling the concept of the infinitely prosperous organism discussed in section 3.7, the reader should note that physical cost function protocols represent an infinitely scalable way to impose physical costs. Since there's no
theoretical limit to how much physical cost can be imposed by physical cost function protocols like Bitcoin,
that means there’s theoretically no limit to how much prosperity margin can be created by organizations
utilizing this technology. Moreover, because the physical cost imposed by these systems is generated
electronically rather than kinetically, that means there’s also no threat of hitting a kinetic ceiling no matter
how efficient people get at imposing these physical costs (see sections 4.11 and 4.12 for a more thorough
explanation of these concepts). The potential of physical cost function protocols like Bitcoin is therefore extraordinary, because they don't have the same theoretical or practical limitations as traditional power projection tactics, techniques, and technologies used for physical security applications.

All that computers, programs, and computer programmers need to do to impose severe, real-world physically
prohibitive costs on other computers, programs, and computer programmers in, from, and through
cyberspace is simply create a firewall-style application programming interface (API) which utilizes a
physical cost function protocol like Bitcoin. This “proof-of-power wall” concept is illustrated in Figure 70.

Figure 70: Design Concept of a “Proof-of-Power Wall” Cyber Security API [160]

Proof-of-power wall protocols could work by simply rejecting all incoming control signals that don't present proof-of-power. This would guarantee that all control signals sent across the API have low BCRA and are therefore more systemically secure against exploitation and abuse. By throttling up or down the physical costs incurred for each proof-of-power receipt (which could be done by adjusting the difficulty of the hashing algorithm or requiring more proofs of power), it would be possible to lower or raise the BCRA of each control signal as needed (thus increasing or decreasing prosperity margin as desired). So long as the BCRA of authorized control signals remains below the hazardous BCRA level, the computer program remains strategically secure against systemic exploitation.
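As a sketch of what such an API could look like (hypothetical class and parameter names, reusing the same leading-zero-bits check as the earlier sketches), a proof-of-power wall might admit or reject inbound control signals and let its operator throttle difficulty up or down:

```python
import hashlib

class ProofOfPowerWall:
    """Hypothetical firewall-style API: inbound control signals are rejected
    unless they carry a nonce that meets the wall's current difficulty."""

    def __init__(self, difficulty_bits: int = 20):
        self.difficulty_bits = difficulty_bits

    def set_difficulty(self, bits: int) -> None:
        # Throttling difficulty up raises CA (and lowers BCRA) for every signal.
        self.difficulty_bits = bits

    def admit(self, message: bytes, nonce: int) -> bool:
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - self.difficulty_bits))

# Usage: signals stamped via the mint_stamp sketch pass; unstamped ones are dropped.
wall = ProofOfPowerWall(difficulty_bits=20)
```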

Recalling the core concept of the survivor’s dilemma discussed in section 3.6, the hazardous BCRA level
for any given environment cannot be known and tends to drop over time, causing a natural and
continuous decline in prosperity margin. Organisms and organizations must strive to continually increase
their CA as much as they can afford to increase it to ensure they can lower their BCRA as much as possible
and buy as much prosperity margin as they can to remain systemically secure. This is a fundamental
challenge of all strategic security that has stressed life for four billion years.

Applying this same concept to physical cost function protocols like Bitcoin, we can see that hashing
algorithms should be designed in such a way that they’re adjustable. That way, it’s always possible to
increase the physical difficulty of solving them, and thus always possible to increase their CA. This will ensure that people can keep decreasing the BCRA of their computer programs and buy as much prosperity margin as needed to keep control signals systemically secure against exploitation and abuse. If sensitive
control signals start to be exploited, people can simply increase the difficulty of the hashing algorithm to
increase the amount of power needed to solve the cost function.
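For a leading-zero-bits target of the kind sketched earlier, each additional bit of difficulty roughly doubles the expected work, which is what lets the imposed cost escalate without any theoretical ceiling. The energy-per-hash figure below is purely an assumed, hardware-dependent number used for illustration:

```python
def expected_stamp_cost_joules(difficulty_bits: int,
                               joules_per_hash: float = 1e-10) -> float:
    """Expected energy to mint one stamp: ~2**difficulty_bits hash attempts
    on average, times an assumed energy cost per hash."""
    return (2 ** difficulty_bits) * joules_per_hash

# Each extra difficulty bit roughly doubles the physical cost imposed per signal.
for bits in (20, 30, 40):
    print(bits, expected_stamp_cost_joules(bits), "joules")
```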

Like all strategic security, a “peaceful” period of reprieve against attack or exploitation happens at an
equilibrium point when BCRA is sufficiently low because CA is sufficiently high. Successful deterrence is
achieved not because would-be attackers can’t perform an attack, but because they find it impossible to
justify the physical cost of performing an attack. This happens when the attacker’s opponent has
successfully demonstrated they have both the means and inclination to impose severe physical costs on
their attacker. Because there’s theoretically no limit to the difficulty of solving hashing algorithms, that
means there’s theoretically no limit to the amount of physical cost that people/programs can impose on
their attackers using physical cost functions. And because the physical cost is imposed electronically rather than kinetically, there are comparatively few practical limitations either. It goes without saying that this
could have enormous strategic implications, which will be discussed further at the end of this chapter.

5.8.2 Building an “Electro-Cyber Dome” Security System

“Bitcoin is a swarm of cyber hornets serving the goddess of wisdom, feeding on the fire of truth,
exponentially growing ever smarter, faster, and stronger behind a wall of encrypted energy.”
Michael Saylor [162]

As previously discussed, the first proposed use case for cost function protocols was to serve as a
countermeasure against DoS attacks – specifically email spam. This was something that multiple computer
engineers recognized as a simple and attractive use case. Adam Back originally called proofs-of-work
“stamps” rather than proofs. The name “stamp” was chosen because the original use case of his protocol
was to improve email services. The term “stamp” also highlighted how the proofs generated by his software aren’t sequentially reusable, just as real postage stamps aren’t. The term “proof of work” didn’t
appear until two years after Back introduced his prototype. Only after “proof of work” became organically
popular terminology did Back call his design concept a proof-of-work protocol. [39]

It didn’t take long for people to start exploring how to use physical cost functions for use cases beyond securing email inboxes against spam. Some software engineers started designing physical cost function protocols that produce sequentially reusable and transferable proof-of-power receipts. When this happened, people abandoned the word “stamp” and started using the names of other abstract objects like “token” or “coin.” These semantic choices provided a more intuitive description of the software’s emergent behavior because tokens and coins are both sequentially reusable and transferable.
Nevertheless, the general function of physical cost functions and proof-of-power receipts remained the same: to make exploitable control signals more expensive to exploit – particularly the ones exploited
during DoS attacks. This is why Back entitled his 2002 paper “Hashcash: A Denial of Service Counter-
Measure.”

The general public frequently does not understand that proof-of-work protocols are first and foremost
cyber security protocols designed to defend computer programs against cyber attacks like DoS attacks.
Back’s hashcash design concept, whose title explicitly calls it a DoS countermeasure, was reused in Bitcoin and was directly cited by Satoshi Nakamoto as a primary inspiration. Additionally, proof-of-work was universally described as a cyber security protocol in fifteen years of peer-reviewed academic
literature prior to the release of Bitcoin. In other words, proof-of-work protocols have been known as
cyber security protocols for twice as long as they have been known as monetary protocols.

As previously discussed, because there is close to no marginal cost required for modern computers to
send control signals through cyberspace, it is possible for belligerent actors to target computer networks
and flood them with superfluous control actions (e.g. service requests) to overwhelm network bandwidth.
One possible countermeasure against this type of attack is to ignore the control signals coming from the
belligerent computers. However, this countermeasure is easily circumvented by sending belligerent control signals from many different networks, in what are known as distributed denial-of-service (DDoS) attacks.
[157]

Using a proof-of-power wall API (a.k.a. a proof-of-work protocol), it is possible to apply a physical cost
uniformly to all incoming control signals, creating a potentially more effective countermeasure against
DDoS attacks. No matter how distributed an attacker’s computer network may be, it would be just as physically costly for them to spam a target network with superfluous control actions, because they would be required to attach a proof-of-power receipt (or stamp, or token, or coin, or whatever abstraction the reader finds most helpful) to every malevolent control signal. And because DDoS attacks send substantially more control signals than honest users, the per-signal cost could be tuned so that it’s inconsequential for honest users but severe, in aggregate, for belligerent actors.

By creating an email protocol which doesn’t allow emails to reach someone’s inbox unless they have a
proof-of-power “stamp,” it’s possible to cause the physical cost of sending someone emails to rise to levels
where it would become too physically expensive to spam people’s inboxes. The operational expense of
generating enough proof-of-power “stamps” would simply be too high to make email spam (or any kind
of spam) a profitable endeavor. It’s a simple strategy: to prevent superfluous emails, make them
superfluously expensive to send.
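A back-of-the-envelope comparison (every figure below is an assumption chosen only to illustrate the asymmetry) shows why a per-message stamp is negligible for an ordinary sender but prohibitive for a spammer:

```python
SECONDS_PER_STAMP = 2            # assumed minting time on commodity hardware
HONEST_EMAILS_PER_DAY = 50
SPAM_EMAILS_PER_DAY = 10_000_000

honest_hours = HONEST_EMAILS_PER_DAY * SECONDS_PER_STAMP / 3600   # ~0.03 hours
spammer_hours = SPAM_EMAILS_PER_DAY * SECONDS_PER_STAMP / 3600    # ~5,556 machine-hours

# The honest sender pays under two minutes of compute per day, while the
# spammer must buy thousands of machine-hours of electricity every single day.
```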

Of course, proof-of-power wall APIs could theoretically be used to defend people against any form of
systemic exploitation and abuse of any kind of software control signal issued by any computer, not just
against email spam specifically. This is because all forms of hacking and software abuse are the result of
insufficient control structure designs which do not adequately eliminate or constrain unsafe commands
or insecure control actions. Email spam, comment spam, Sybil attacks, bots, troll farms, and weaponized
misinformation stem from the exact same types of core design flaws which proof-of-power wall APIs could
theoretically help to alleviate. At the end of the day, the root challenge is a lack of ability to physically
constrain control signals, and that capability is precisely what physical cost function protocols provide.

Each of these types of attacks represents a way to systemically exploit computer programs by taking
advantage of high-BCRA control signals like sending superfluous emails, posting superfluous comments,
casting superfluous votes, and publishing superfluous information. Thus, each of these attacks could possibly be mitigated by adopting proof-of-power wall APIs which make these malicious operations
superfluously costly in terms of the number of watts required to send them. In plain language, proof-of-
power APIs can make it so that bad guys literally don’t have enough power (a.k.a. watts) to launch or
sustain their attacks. This “electro-cyber dome” concept is illustrated in Figure 71.

Figure 71: Illustration of the “Electro-Cyber Dome” Concept using Proof-of-Power Wall APIs [137]

An electro-cyber dome is a passive power projection tactic like the pressurized membrane concept
developed by life’s first organisms during abiogenesis, as discussed in Chapter 3. This simple barrier works
the same way as a cell wall or other passive power projection system. It utilizes the computational
inefficiency of cost function protocols to increase the amount of physical power (watts) needed to
successfully attack or penetrate a target’s API. A major difference is that electro-cyber domes are non-
kinetic physical energy barriers with no mass, built to function in an abstract domain called cyberspace.
An electro-cyber dome has no force, mass, or volume and is therefore completely incapable of causing
physical injury. Nevertheless, an electro-cyber dome is still capable of projecting physical power and
imposing severe physical costs on attackers in the form of electrically-generated watts.

On the flip side, it should be noted that this wouldn’t be a strictly “defensive” power projection capability
(the author is not even aware of any power projection capability which is strictly “defensive,” as even
passive power projection tactics like walls are among the most successful offensive strategies ever used,
hence all colonized territory). People with access to proof-of-power can theoretically “smash” through
these electro-cyber dome defenses if desired. Thus, proof-of-power protocols are not strictly “defense
only” protocols as some have argued. A top threat to people using physical cost function protocols like
Bitcoin is other people using the same protocol (which is why Nakamoto mentions the word “attack” 25
times in an 8-page whitepaper, each time referring to people running the same protocol).

Instead of physically constraining attackers via forces displacing masses like normal barriers do, electro-
cyber domes physically constrain attackers via electric charges passing across resistors. But as Einstein
