
ARTIFICIALLY HUMAN

Making Sense of Automation and AI

Robert Whiteman
Better Future Publishing

www.artificiallyhuman.com

Copyright © 2023 by Robert Whiteman

All rights reserved.

ISBN: 979-8-9883562-1-9
Attestation
These words were written by me, a human.
Machines helped with the footnotes.
Contents

Title Page
Copyright
Attestation
Preface
Through the Eyes of the Machines

Machine Labor: A Brief History of Automation


Digital Infrastructure: Technology Is Not Automation
Patterns Everywhere: How Machines Learn and Think
Exponential Progress: Rise of Artificial Intelligence

Early Days of the Automation Revolution

"Working" Capital: Our Expanding Labor Supply


Keeping Up: Human-Powered Incumbents
Starting Over: Machine-Powered Disruptors
Like Us: Human-Machine Convergence

Shaping a Better Future

If You Can't Beat Them


Rebuilding Institutions | Executives
Allocating Capital | Investors
Setting the Rules | Policymakers
Preparing Humans | Educators

Conclusion
Acknowledgments
About the Author
Preface
On most days, three billion people wake up and go to work. Billions more make that work possible, caring for others and managing the complexities of life. We construct skyscrapers, cut hair, design dishwashers, raise children, and
pay invoices. From the majestic to the mundane, human labor powers society.
Fortunately, we are not alone in our quest for economic output and the life upgrades it affords. We harness technology to amplify our abilities, from the simple hammer to the complex supercomputer. Human labor combined with
technology is a recipe for prosperity, even if the ingredients change over time.
Today, our lives are increasingly shaped by digital technologies. The proliferation of social media, the emergence of artificial intelligence (AI),1 and the promise of virtual reality dominate the headlines. But this is not a book
about specific technologies. This is a book about our relationship with technology. This is a book about what you can do to shape technology’s role in society. This book is about the timeless intersection of humans and technology:
Automation.
There are already thousands of books about automation written by technologists, historians, futurists, economists, and journalists. Many are thoroughly researched and well-written. My goal in writing this book is not to recycle
existing ideas into original prose. My goal is to challenge how you think about machines and our relationship with them.
I launched an automation service line at McKinsey, a management consulting firm, in 2016. The team grew from a handful of people to more than 150 practitioners in only a few years. I worked with hundreds of organizations,
from established companies you know well to innovative startups that may never become household names. Many theorize about the future of automation. I have seen firsthand how automation makes its way into our lives.
After thousands of conversations with executives, investors, policymakers, educators, and colleagues, it became clear that most people lack a systematic way of thinking about automation. I hear plenty of stories and buzzwords,
but the anecdotes often obfuscate feelings of insecurity and confusion. That is to be expected with the rapid advancement of technology, but who is shaping the path of automation if not our most experienced and influential leaders?
Conversations about automation today remind me of planetary orbit discussions before Copernicus formulated a heliocentric model of the solar system.2 It is possible to predict the movement of Venus in a model that places Earth
at the center, but that model requires layers of complexity that are unnecessary if you put the sun at the center. I believe people are stuck in human-centric ways of looking at the world. It is time to update our view of work and place
machines, not humans, at the center.
How do we change mindsets grounded in centuries of evolution and experience? We start by updating our mental models. Shane Parrish, the author of the Farnam Street blog, writes, “Mental models are how we understand the
world. Not only do they shape what we think and how we understand, but they shape the connections and opportunities that we see. Mental models are how we simplify complexity, why we consider some things more relevant than
others, and how we reason. A mental model is simply a representation of how something works.”3
Think about the mental models you use every day. How do you navigate the shops you visit frequently? You may not know what is on every shelf in your local grocery store, but you could likely find corn tortillas faster than me.
That is because you have a mental model for the store that is better than mine. Mental models are why chain stores feel instantly familiar, even when you visit a new location for the first time.
This book describes my mental models for automation. It is a literary representation of the aisles and shelves in my automation store. I am not done honing my models, but this is a good time to write down what is in my head.
Keep in mind that these models reflect my personal experience and passions. I live in the United States. I work at the intersection of digital, operations, finance, strategy, and policy. I spend little time on physical automation and will
never be mistaken for a technology expert. I am telling a story, not prosecuting a case. My goal is to shape how you think rather than what you think.
I organized this book into three parts, from conceptual to practical. I have plenty of suggestions, but I save those for the end. I want you to draw your own conclusions, which means walking you through the logic and assumptions
underpinning mine. Part 1 describes the role I see machines playing in society. Part 2 presents the mechanisms through which automation reshapes institutions and individuals. Finally, Part 3 outlines my suggestions for honing and
applying these mental models in your life. I document my sources using footnotes rather than endnotes so you can easily find the evidence supporting my claims.
This book presents one way of looking at the world. You can find links to publications from other authors with opposing views on my website (www.artificiallyhuman.com). I hope you read them. This is a dynamic field with
plenty of valid and intriguing perspectives.
I wrote this book as much for myself as I did for you. These ideas were formed over a decade of client workshops, technology conferences, progress reviews, coffee breaks, and late-night debates. Keeping track of random facts
and stories is exhausting. I hope organizing the thoughts in my head helps both of us make sense of automation and what we can do to create a lasting, positive impact.
THROUGH THE EYES OF THE MACHINES
Introduction to Part 1
What is a bot? What is the difference between machine learning, deep learning, and neural networks? Why is everything called AI? These are fascinating topics for conferences and dinner parties. But this line of questioning offers
little insight into automation in the same way that understanding the chemical composition of turmeric will not help you cook a curry. Automation is about complex systems rather than individual ingredients.
This point has practical implications. One insurance company I advised has over 200 people working in various automation, digital, and AI teams.4 This is in addition to the hundreds of people working in the Information
Technology (IT) group. These teams produce fantastic products. However, little has changed about how the company issues policies or serves its customers. The insurer is layering modern technology atop an antiquated operating
model powered by human labor.
The first step in understanding automation is cultivating what racing driver Jackie Stewart called Mechanical Sympathy. Jackie won three Formula 1 World Drivers’ Championships between 1969 and 1973. He is credited with
saying, “You don't have to be an engineer to be a racing driver, but you do have to have Mechanical Sympathy.” The best drivers understand how the car works and what it takes to get the most out of the machine.
You have spent your entire life seeing the world through the eyes of humans. You know how to work with people because you speak the same language, share everyday experiences, and learn from each other. Most importantly,
you know what it is like to be a human trying to succeed in the world. You feel empathy for other humans.
The following chapters present a machine-centric story of the past, present, and future. I start by describing the often under-appreciated role machines have played in society. Next, I present a nuanced view of the digital
technologies permeating our lives. I then explore the workings of mechanical minds and the remarkable rise of artificial intelligence.
Mechanical sympathy flows from empathy for machines. I am not asking you to cry when your toaster dies. I am, however, asking you to think of machines as more than inanimate objects. Automation makes more sense when you
can see the world through the eyes of machines.
MACHINE LABOR: A BRIEF HISTORY OF AUTOMATION
Mechanical sympathy starts with revisiting the history of civilization. We learn about the past through stories of significant humans. We learn about the artists and writers who shaped culture. We learn about the politicians and
executives who shaped institutions. We learn about the scientists and philosophers who advanced our understanding. The machines powering these contributions are portrayed as supporting characters in the play of life. There are stars
of the show like the steam engine, but the role of machines generally goes under-appreciated in our narcissistic narratives.

The Big Bang


Technology has been a part of human life for thousands of years. The Stone, Bronze, and Iron Ages are named for the implements that bootstrapped modern civilization. The movable-type printing press invented by Gutenberg
allowed us to transmit knowledge at super-human rates. Windmills provided power for pumping water and grinding grain. Economic progress was steady, if somewhat dull. That changed during the Agricultural and Industrial
Revolutions.
Humans are biological machines that turn food into energy and energy into work. That might sound callous, but remember that I am recounting history from the perspective of machines. For humans, the Agricultural Revolution
was a pivotal time when the energy supply grew faster than the population. From 1700 to 1870, agricultural output in Britain nearly tripled.5
Technologies like the plow, seed drill, and threshing machine played notable roles, but the rapid increase in output also stemmed from other innovations. Crop rotation, selective livestock breeding, and rising imports made it
possible to produce more food. The story of the Agricultural Revolution is one of unlocking new energy sources for human labor. The most notable outcome from the perspective of machines is that humans had more time to devote to
pursuing other endeavors.
That brings us to the Industrial Revolution, where machines take center stage. The second half of the eighteenth century introduced a shift in the relationship between population and production (Figure 1). For most of human
history, production grew alongside the population. If we wanted more output, we needed more humans. No society produced sustained growth in per capita output until about 1750.6 Let that sink in for a minute. For all our progress as
a species, production per person did not change materially until the Industrial Revolution.
Productivity during the Industrial Revolution was not entirely fueled by technology. There were also advancements in science and culture, but the era is defined by how machines reshaped industries. The power loom and cotton
gin increased the productivity of textile workers more than 40 times.7

Figure 1
World Population and Production
Source: Federal Reserve Bank of Minneapolis8

We do not refer to the Stone Age as the Stone Revolution. Tools are helpful and enhance human capabilities. Tools are not the same as the complex machines that sparked a sharp rise in output during this period. If you want to view
history through the eyes of machines, think of the Industrial Revolution as the “Big Bang,” where the story of machine life begins.

Labor Replacement
In the traditional telling of history, the rise in output during the Industrial Revolution was a product of human ingenuity and hard work. Humans found new and better ways to produce goods and services. The last few hundred years
were a triumph for our species.
Told from the perspective of machines, the riches of the Industrial Revolution and subsequent periods were created by machine labor. If the Agricultural Revolution generated new energy sources to fuel human labor, the Industrial
Revolution initiated a labor replacement trend that continues today.
I said this is a book about automation, not technology. The definition of automation is “the controlled operation of an apparatus, process, or system by mechanical or electrical devices that take the place of human labor.”9 The
critical part of that definition is “take the place of human labor.” If not automated, work must be done by humans or animals (or not at all).
Technology makes humans more productive by automating some, but not all, of what we do. An abacus replaces human labor for simple calculations. A calculator replaces human labor for complex calculations. A spreadsheet
program replaces human labor for repetitive calculations. You might look at an abacus and think, “That is not automation.” However, what is the alternative? Previous methods involved physical (counting on fingers) and cognitive
(counting in the head) human labor. The abacus most certainly did replace human labor and paved the way for the automation of other computational work.
Over the last two centuries, the story of civilization has been one of machine labor replacing human labor for an ever-increasing number of tasks. Leaders often ask, “How automated is my organization?” That is a surprisingly
tricky question to answer. We do not have a standard way of measuring machine labor. The best way to answer the question is by estimating how much of what humans are doing today could be done by machines.
The term “automation” first appeared in print during the 1950s (Figure 2) and saw rapid adoption as factories shifted from human labor to machine labor. A second spike came in the 1980s as computers (digital machine labor)
entered the workforce. We see early signs of a third spike as hype grows around artificial intelligence.
Figure 2
Use of “Automation” in Print Since 1920

Source: Google Books Ngram Viewer: Automation10

An early use of the term “automation” was in the 1969 book, “The Unbound Prometheus.” In it, David S. Landes writes, “We shall eventually have as many ‘revolutions’ as there are historically demarcated sequences of industrial
innovation, plus all such sequences as will occur in the future; there are those who say, for example, that we are already in the midst of a third industrial revolution, that of automation, air transport, and atomic power.”11
Today, many experts claim we are in the midst of a fourth industrial revolution defined by increasing interconnectivity and smart automation. I believe we should stop referring to these disruptions as revolutions. Revolutions come
and go. Machine labor and the tumult caused by human labor displacement are here to stay.

The Resistance
Human workers sell skills and time to the labor market. If an increasing number of our skills can be performed cheaper and better by machines, does it not stand to reason that demand for human labor will decline? Said another way,
is machine labor not a threat to human labor?
Fortunately, history has proven machines to be more friend than foe. A study by University of Chicago economist Yale Brozen looked at jobs lost and gained during the 1950s.12 This was a period where industrial automation
abounded. Brozen found that humans had lost 13 million jobs during the decade, but more than 20 million new positions were created.
How can human labor demand increase during times of intense automation? The answer lies in two fallacies about the nature of work. First, we think of the amount of work in society as fixed. If machines do more work, there is
less work for humans. In reality, demand for goods and services increases as the prices of those goods and services fall. As a result, aggregate demand for human labor does not fall at the same rate as the labor content of a single unit.
The second reason is that human skills are dynamic. A human is a general-purpose machine. The humans who grew crops in the 1700s were not fundamentally different than those building airplanes today. This idea is at the heart
of Tesler’s theorem: “Intelligence is whatever machines haven't done yet.”13 Human intelligence evolves, which allows us to find new sources of work in the race against machines.
Unfortunately, individual experiences vary. Brozen concluded that automation during the 1950s was good for society because it created a net gain of seven million jobs. He did not consider the effects of loss aversion. Humans overvalue
what we lose and undervalue what we gain. That means the 13 million jobs lost during the decade may have generated as much pain as, or more than, the pleasure created by the 20 million new jobs.
Furthermore, machine labor pressures human wages even when jobs are retained. Research by Daron Acemoglu and Pascual Restrepo found that “between 50% and 70% of changes in the US wage structure over the last four
decades are accounted for by the relative wage declines of worker groups specialized in routine tasks in industries experiencing rapid automation.”14 When humans and machines can do a task, humans are forced to work for machine
wages.
All this helps explain why machine labor faces resistance from human workers. The term “Luddite” describes people who resist technological change. The word comes from a nineteenth-century movement where British workers
broke into factories and smashed textile machines. The group called themselves “Luddites” after Ned Ludd, an apprentice rumored to have wrecked a textile apparatus in 1779.15 Is it time for the Luddites to rise again?

Heroes or Villains?
From a machine perspective, human resistance to automation is unwarranted. Some humans are indeed displaced as machines enter the workforce. The same is true of globalization, where workers at home are replaced by workers
abroad. Machines do not simply destroy existing jobs. Machines also create new jobs by teaming with humans. Figure 3 shows what happened when computers began seeing widespread adoption in the 1980s. Humans embracing
computers saw more than double the job growth rate compared to those who did not.

Figure 3
Annual Growth in Jobs, Percent
Source: The Atlantic16

The advent of machine labor was a turning point for humanity. Output grew during the Agricultural Revolution, but that growth was primarily a function of humans creating new energy sources to power more humans. It was not until
the Industrial Revolution that we began to see meaningful increases in output per human. The math is simple to explain. We now have two sources of labor in the numerator (humans and machines) but only humans in the
denominator.
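
In symbols (my notation, not a formal economic model):

$$\text{output per person} = \frac{Y_{\text{human}} + Y_{\text{machine}}}{\text{number of humans}}$$

Before the Industrial Revolution, the machine term in the numerator was negligible, so output per person barely moved. Once machine labor grows while the denominator does not, output per person rises.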
If the story of automation concluded today, machines would almost certainly be viewed as heroes. Yes, machines displaced some humans. However, the failure to provide for those humans is not the fault of machines. Political and
social institutions run by humans are to blame. Machines might threaten individual humans, but machines are a friend to humanity. In the words of Yale Brozen, “Instead of being alarmed by growing automation, we ought to be
cheering it on.”
I have recounted history from the perspective of machines. To understand where we are headed, we must examine how machines evolve. Humans have been around for thousands of years. Complex machines have been around for
hundreds. It is too early to write the final chapter in the story of automation.
Mental Model #1 - Machine Labor

Automation is the act of replacing human labor with machine labor for an ever-increasing number of tasks. This is not a “fourth industrial revolution.” This is the unyielding advancement of machine labor.
Story: A Rosey for Everyone
My favorite TV show as a kid was “The Jetsons.” It was a cartoon produced by Hanna-Barbera that initially aired in 1962. The show followed the adventures of a family 100 years in the future, surrounded by technological marvels.
There were flying cars, hologram video conferences, and child-size jetpacks. The show inspired me to spend hours sketching flying cars and pursuing a mechanical engineering degree. Physics works differently than I thought when I
was eight years old.
One of the characters in “The Jetsons” is a housekeeping robot named Rosey (Figure 4). She is an aging XB-500 model capable of basic household chores: dusting furniture, rehydrating dinner, and babysitting kids. Rosey is a
general-purpose robot with a Brooklyn accent.17 I thought we would all have a Rosey by the time I was grown up.

Figure 4
Rosey the Robot

Forty years later, I do not have a Rosey. That said, plenty of machines replaced human labor in the home. We have robots that clean our floors and terrify our pets. We have voice assistants that tell us the weather outside and settle
trivia debates. We can even buy refrigerators that keep track of our groceries. The number of machines working in the home is staggering compared to the 1960s when “The Jetsons” first aired.
Our homes are a microcosm of the world around us. Machine labor has been entering factories, offices, and homes for decades. Yet, none of us spend our days loafing about because machines do all the work. If anything, demands
on our time and skills are increasing. That dishwasher is not going to load itself.
Will that change when everyone has a Rosey? We might find out soon. Behind a secure door of an unmarked office, a team of PhDs is building artificial eyes, hands, and minds that could one day give rise to an inexpensive Rosey.
The team has yet to ship a commercial product, but it is worth remembering that “The Jetsons” is set nearly four decades from now. The field of embodied cognition (general-purpose, physical robots) is further along than you
might expect.18
What would you do if you had a Rosey? Would you enjoy a life of leisure with family and friends? I have a hard time answering that question. My sense of purpose is inextricably linked to my roles at work and home. My
mornings would be incomplete without a sunrise dog walk. Not all chores are a chore. I want to cheer for the machines, but I worry about what that means for our humanity.
DIGITAL INFRASTRUCTURE: TECHNOLOGY IS NOT AUTOMATION
Computers and the internet were expected to spark a wave of productivity. Technology investments during the 1950s led to steady increases in output per person. We invested heavily in physical machine labor and were rewarded. It
stands to reason that pouring money into digital machine labor should yield similar results. Unfortunately, recent technology investments have been disappointing at best and disastrous at worst. We keep channeling money into digital
technologies but have yet to see accelerated increases in output per person.

Missing Pie
As early as 1987, we could tell there was a problem. That year, economist Robert Solow is credited with saying, “The computer age is everywhere except the productivity statistics.”19 Since 2010, productivity growth in the U.S. has
hovered around 1.5 percent per year (Figure 5). For comparison, technology investment grew by 5 percent per year during the same period. In 2017, Chad Syverson equated the productivity slowdown to a gross domestic product
(GDP) shortfall of $3 trillion.20 We can debate the value of consumer surplus created by technology, but no amount of creative math can account for $3 trillion of missing output.

Figure 5
U.S. Worker Productivity
10-Year Rolling Average

Source: U.S. Bureau of Labor Statistics21

It is tempting to look at Figure 5 and conclude this has no relevance to your daily life. It is easy to dismiss these fluctuations as the business of economists and academics. But $3 trillion is a big deal. That is 10-15 percent of the annual
GDP of the United States. You would notice if your wages were 10-15 percent higher today.
Whether you realize it or not, human workers funded these investments. In the race to launch digital initiatives, organizations diverted resources from other uses like human labor. The machines might not take your job, but they
most certainly take a portion of your income. Is it time to mobilize the Luddites?

Laying the Groundwork


Claiming technology is the same as machine labor is like claiming food is the same as human labor. Technology is an input, not an output. The two are correlated, but increasing the production of one does not guarantee the other.
We live in an analog world. Objects are physical, inputs are continuous, and life is carbon-based. We take for granted how easy it is to stroll down the street, recognize an apple, or argue about the proper pronunciation of “gif.”
These tasks are effortless because we benefit from billions of years of evolution. Our senses and brains are fine-tuned for the analog world.
Most automation during the 1950s took place in our analog world. In 1947, manufacturing represented more than 25 percent of employment in the United States. Today, that number is less than 9 percent even though output has
risen.22 Over two or three generations, we replaced human physical labor with machine labor for most manufacturing tasks. Investments were needed to change how work was done but not where work was done. We were automating
in the analog world we inhabit.
Digital machines cannot work in the analog world any more than humans can work in the digital world. Unless you are Neo from “The Matrix,” machine language looks like gibberish:23

01001000 01100101 01101100 01101100 01101111 00100000 01010111 01101111 01110010 01101100 01100100
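
For the curious, here is a minimal Python sketch showing what that “gibberish” encodes. Each 8-bit group is the ASCII code for one character (the script is my illustration, not part of any system described in this book):

    # Decode the bit string above into human-readable text.
    bits = ("01001000 01100101 01101100 01101100 01101111 00100000 "
            "01010111 01101111 01110010 01101100 01100100")

    # int(group, 2) turns each 8-bit group into a number; chr() maps
    # that number to its ASCII character.
    message = "".join(chr(int(group, 2)) for group in bits.split())
    print(message)  # -> Hello World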

Replacing cognitive human labor with digital machine labor is costly. Before machines can start working, infrastructure must be built. This involves codifying and storing data, manufacturing devices to process data, and establishing
communication channels to share data. Building cities, institutions, and systems to facilitate human labor took thousands of years. We are only now beginning to do the same for digital machine labor.
Global IT spending in 2022 was $4.4 trillion.24 Enterprise software and IT services accounted for less than half. The majority was spent on data centers, devices, and communication services (Figure 6). These categories are not,
for the most part, machine labor. They are digital infrastructure.

Figure 6
Worldwide IT Spend in 2022
Millions of U.S. Dollars

Source: Gartner, Inc.

We see evidence of digital infrastructure everywhere. Many state and local governments launched websites and mobile apps over the past two decades. Behind the scenes, it is often humans doing the work. This became painfully
obvious when demand for public services skyrocketed during the COVID-19 pandemic. People are working harder than ever despite trillions of dollars flowing into digital infrastructure.

Familiar but Different


Should we stop dumping money into digital infrastructure? What is the point of technology if it does not increase the size of the pie? Let us assume the people funding these investments are not crazy. There must be a reason we passed
on an opportunity to expand GDP by $3 trillion.
The answer may lie in what we expect to build on our digital infrastructure. This is not the first time technology investments have reduced productivity in the short term. We saw a similar trend during what Chad Syverson and
colleagues refer to as the “portable-power era,” which began around 1890 (Figure 7). For the first 25 years, productivity growth was less than 1.5 percent per year. Then, around 1915, productivity sped up to about 3 percent.25

Figure 7
Labor-Productivity Index
Source: Syverson

In their paper, Syverson and colleagues state, “The more profound and far-reaching the potential restructuring, the longer the time lag between the initial invention of the technology and its full impact on the economy and society.”
The authors cite prior research by Paul A. David and Gavin Wright showing at least half of U.S. manufacturing sites remained unelectrified until 1919, about 30 years after the shift to alternating current began.26
Investments during the “portable-power era” were not entirely focused on machine labor. For example, factory floors had to be reconfigured to adopt electricity. Machines previously sharing a central steam source could be moved
to new locations, optimizing production flow. The infrastructure needed to support physical machine labor is ubiquitous today. How many electrical outlets are in your home? Where is the nearest gas station?
We are seeing early signs of a shift from digital infrastructure to machine labor. Recall that investments in data center systems, devices, and communication services represented 54 percent of global IT spending. According to the
same report, those categories are expected to grow 3.7 percent annually in the coming years. In contrast, investments in software and IT services, which include machine labor, are expected to grow 10.8 percent per year.
The shift is more pronounced when looking at artificial intelligence. Most AI startups use digital infrastructure created by behemoths like Amazon Web Services and Microsoft. AI startups almost exclusively build machine labor.
Venture Capital (VC) investments in AI start-ups grew from less than $5 billion in 2013 to over $120 billion in 2022.27 That is a lot of capital flowing into new sources of machine labor.
We are far from finished building digital infrastructure. The average American home contains 75 electrical outlets.28 Meanwhile, only 50 percent of homes have true broadband internet access.29 The shift from digital
infrastructure to machine labor will take decades, not years. That said, there is more than enough infrastructure to begin harvesting gains.

Harvesting Gains
The analog infrastructure laid during the past century is similar in many ways to today’s digital infrastructure. Both represent foundational investments. However, there are two critical differences between analog and digital
infrastructure.
First, analog infrastructure tends to be local rather than global. When workers reorganize a factory floor, the benefits from that investment are constrained to a location. Even electrical grids and fossil fuel supply chains tend to be
geographically constrained. In contrast, most digital infrastructure is global. An application programming interface (API) built in one part of the world is accessible from anywhere else.
Second, replicating or reusing analog machine labor is costly. Iron ore must be processed into steel before using that steel to manufacture a bulldozer. In contrast, digital machine labor can be replicated and reused quickly and
cheaply. If I have an algorithm for identifying dog photos, I can share a copy with you at almost zero marginal cost.
These attributes should make digital infrastructure highly scalable. It is reasonable to expect a better return than we received on analog infrastructure investments. Over the past decade, the U.S. invested 10-15 percent of its GDP in
digital infrastructure. To put that into perspective, the cumulative Apollo program was only about 2.5 percent of the U.S. GDP during the late 1960s.30 Digital infrastructure is the “moonshot” of this century.
Either we are grossly miscalculating the value of digital infrastructure, or our investments are set to pay off in a big way. That $3 trillion gap in GDP may not be missing. It may be a down payment on future generations of
productivity. Once you understand how machines learn and think, investments in digital infrastructure make far more sense.
Mental Model #2 - Digital Infrastructure

Not all technology is machine labor. Most investments over the past 50 years have gone into digital infrastructure. We are only beginning to build the machine labor.
Story: Humans Read for Free
If you work for an organization long enough, you see plenty of technology projects that do little to make life easier for employees. The new chatbot was supposed to handle customer service inquiries. The new scanners were supposed
to help workers receive more boxes. The $100 million Enterprise Resource Planning (ERP) installation was supposed to fix everything. Promises made at the start of these projects are rarely kept by the end. Money flows into
technology while humans work harder than ever.
Why does this cycle keep repeating? I believe it is because organizations rarely distinguish between investments in digital infrastructure and machine labor. Expenses are organized into categories like hardware, software, and
services. Those categories are irrelevant when it comes to how work gets done. This story is about the unintended consequences of confusing digital infrastructure and machine labor.
In 2022, individual retirement account balances in the U.S. totaled about $13 trillion.31 Over time, people put away small amounts of money, hoping it will grow into a nest egg through the magic of compound returns.
Unfortunately, life does not always go as planned. Unexpected events like a death in the family or a medical procedure can trigger an immediate need for cash. For this reason, the U.S. government allows people to withdraw savings
early without penalty.
These events are called “hardship withdrawals.” The underlying process was an attractive automation candidate for a former client because it cost millions of dollars to administer, and automating it would help customers access money more quickly.
When something traumatic happens, the last thing you want to do is to spend time on the phone with customer service.
To process a hardship withdrawal, the company first required documentation that a qualifying event occurred. This was usually a scanned document like a death certificate or eviction notice. The problem is that documents are
designed for humans, not machines. Documents contain data, but it is often inaccessible to machines.
Fortunately, Intelligent Document Processing (IDP) systems classify documents and extract the data within them. For a dollar or two per page, you can purchase a machine that “reads” documents. For example, you can take a
picture of your tax forms and receive a spreadsheet with neatly organized data.
My client connected their website, an IDP platform, and a simple rules engine to automate the hardship withdrawal process. Customers could submit documents and receive an answer almost immediately if the request was
approved. The automated process worked, but there was a problem. Costs went up! What happened?
IDP platforms are digital infrastructure. When humans review documents, there is no need to extract the data within them. In the case of my former client, customer service agents looked at the documents and decided whether the
requirements were met. The data remained trapped in the documents.
Machines need data in digital formats. That means adding infrastructure like IDP platforms to connect the analog and digital worlds. The data extraction step did not exist in the manual process. The step was automated, but
automating work that did not previously exist still adds cost.
The company considered pulling the plug. What was the point of automating a process if it increased costs? Fortunately, in this case, the faster release of emergency funds justified the higher expenses. The project was not a
complete failure but was hardly a shining success for my former client or me.
The problem with automation is not that it requires infrastructure. The problem is that most firms invest in the infrastructure and then backtrack when savings fail to materialize. It is akin to laying the foundation for a skyscraper
and only building the first floor.
This example illustrates where we will likely find the $3 trillion “missing pie.” The company I described does not need to extract data multiple times. Once the data is available in a digital format, it is accessible to all machines in
the organization. Digital infrastructure has value, but it is up to organizations to harvest gains from their investments.
PATTERNS EVERYWHERE: HOW MACHINES LEARN AND THINK
The human brain and nervous system are wonderfully complex and extraordinarily capable. The brain is an evolutionary pile of hacks but achieves feats beyond our most sophisticated machines. All while running on a power supply
of 20 watts.
When experts are asked what year machines will reach human-level intelligence, answers range from the late 2020s to never.32 This variation is partly due to our inability to forecast technology trends (more on this next chapter).
The more significant factor is our lack of understanding of the human brain. For all our work in neuroscience, psychology, and related fields, we are far from uncovering the intricacies of how humans learn and think.
Fortunately, we do understand machines. We built them. If you want to forecast the future of automation, start with the part of the equation we know how to solve. Start by deepening your knowledge of mechanical minds.

Connecting the Dots


We have not finished reverse engineering the human brain, but we have a general idea of what it does. In simple terms, the brain finds patterns in data. Inputs are gathered through nerves like those behind our eyes, beneath our skin,
and lining our gut. The brain learns patterns from these inputs and uses them to predict what will happen in the world.33
When you look at an apple, light is reflected from the apple into your eye. Over time, you learn to recognize specific light patterns as an apple. The reflected light provides insight into the fruit’s shape, color, and texture. You also
learn to associate those patterns with the sound “apple” and a sweet taste. No specific neuron in your brain is assigned to “apple,” but there is a predictable sequence of synapses that fire when you see one.
I am wildly oversimplifying the field of neuroscience. The mechanisms by which the human brain and nervous system encode, retrieve, and process data are complicated and bustling areas of scientific research. This is an excellent
time to remind you that I am telling a story, not prosecuting a case.
The reason human labor is powerful is that patterns are everywhere. We grow and harvest food by recognizing patterns in seed types, planting methods, and weather conditions. We build internal combustion engines by
recognizing patterns in chemical reactions, physical movement, and heat transfer. We create business presentations by recognizing patterns in object orientation, word sequencing, and persuasion methods. What you think of as
knowledge and experience is the ability to recognize complex patterns in data.
Machines can also recognize patterns in data. The field of machine learning is dedicated to building and training mechanical minds. The programming and mathematics behind machine learning are beyond the scope of this book,
but it is worth diving into one method involving artificial neural networks (ANNs).
If you panic when you hear technical terms like “machine learning,” please stop and breathe. The nuances are complicated, but the fundamentals are simple. You do not need a computer science degree to exercise mechanical
sympathy. Stay with me for the next section while we dive into how mechanical minds learn and think.

Artificial Brains
The human brain contains about 86 billion neurons. These neurons are the brain’s hardware, processing and transmitting data through electrical and chemical signals. Similarly, computers have billions of transistors and other
components to process, store, and transmit data. This is the hardware of mechanical minds.
Hardware is of little use without software. In the case of your brain, the “software” is the series of pathways created by your neural circuitry. Your neurons are not connected arbitrarily. The brain learns through a messy process of
trial and error. Neural connections used frequently are strengthened, while those used infrequently are pared. The result is a series of connections that process information in repeatable and predictable ways.
Early mechanical minds required humans to “hard wire” rules for processing data. We wrote programs that codified knowledge in a way computers could understand. IF credit_card_charge IS LESS THAN remaining_credit;
approve transaction; OTHERWISE decline transaction. Most software that exists today was written in this manner. We documented rules for recognizing and acting on patterns and translated those rules into machine language.
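
In a modern language, that hard-wired rule might look like the following sketch (my illustration; the function and variable names are assumptions):

    # Hand-coded, rule-based logic: a human author supplies the pattern.
    def review_transaction(charge: float, remaining_credit: float) -> str:
        if charge < remaining_credit:
            return "approve"
        return "decline"

    print(review_transaction(42.50, 100.00))  # -> approve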
You can probably see why this is a problem. Documenting every pattern you know would take several lifetimes, and you are unlikely to do so accurately and completely. Try writing down the rules for recognizing an apple. And
remember, you need to clarify what you mean by “grows” and “trees” when you get to the part about where to find apples.
Teaching machines by codifying human knowledge is feasible for simple and predictable tasks. The process breaks down for anything complicated or unpredictable. This is the problem machine learning is attempting to solve.
Instead of codifying knowledge, machine learning builds flexible models that explore the world firsthand. Machines are shown a bunch of inputs and trained to recognize patterns in those inputs. On the surface, this appears
redundant. Why create machines that learn independently if we already know the patterns? In practice, the process is more efficient because machines can find patterns in data faster and more accurately than we can codify them.
As you might expect, the field of machine learning takes inspiration from what we know about the brain. Artificial neural networks are composed of nodes meant to mimic biological neurons and the connections between them
(Figure 8). The first set of nodes is an input layer that takes signals from the outside world. This is like the neurons at the back of your eye that transmit signals based on the color and intensity of light. The final set of nodes is an
output layer that provides a prediction (e.g., dog or cat).

Figure 8
Basic Architecture of an ANN

The exciting stuff happens in the middle: the hidden layers. Like a biological neuron, each node acts on incoming signals. The node’s action could make the signal stronger, weaker, or different. The actions are less complex than those
performed by biological neurons, but ANNs are not space constrained by a human skull. As a result, ANNs can include far more layers than the six in your cerebral cortex. Rather than replicating the complexity of biological neurons,
machine learning throws an overwhelming number of artificial neurons at the problem.
This is how mechanical minds are built. Different models have different numbers of nodes and layers, but the basic architecture is consistent across ANNs. The mechanical minds of today are miniature versions of biological brains
capable of learning limited patterns. This is what people mean when they refer to “narrow artificial intelligence.”
Now that we have a mechanical mind, it is time to teach it. This is done by providing signals to the input layer. These signals could be pixels in an image, words in a sentence, or fields in a loan application. Humans then tell the
machine which patterns to find in the data. This is a dog. This is a grammatically correct sentence. This is an approved application.
Initially, the mechanical mind is a mess. The input signals are multiplied by random numbers assigned to each node. A process called backpropagation is used to update the “weights” for each node in the direction most likely to
make accurate predictions. The mechanical mind makes a prediction, and its human teacher corrects it. This is similar to brave spelling34 and other practices used to train human brains. Repeating this process hundreds or thousands of
times results in a mechanical mind that can find patterns in data with high accuracy. In many cases better than humans.
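
If you are comfortable reading code, the following toy Python sketch compresses the whole loop described above: random initial weights, a forward pass through one hidden layer, and backpropagation nudging the weights toward better predictions. The task (learning XOR) and every name in it are my illustrative assumptions, not a production recipe:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input signals
    y = np.array([[0], [1], [1], [0]], dtype=float)              # teacher's answers

    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden layer
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass: signals flow from input to hidden to output.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backpropagation: update each weight in the direction most
        # likely to make the next prediction more accurate.
        err_out = (out - y) * out * (1 - out)
        err_h = (err_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ err_out
        b2 -= 0.5 * err_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * X.T @ err_h
        b1 -= 0.5 * err_h.sum(axis=0, keepdims=True)

    print(out.round(2))  # predictions should approach [0, 1, 1, 0]

Repeating the loop thousands of times is the mechanical equivalent of the teacher correcting the student.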
I promised you the fundamentals of machine learning were not complicated. At this point, you might be second-guessing my definition of “complicated.” Fortunately, unless you are pursuing a career in data science, the preceding
paragraphs are all you need to know. You do not need to understand the nuances of model design, training, and deployment. You do not need to know when to use recurrent or convolutional neural networks. You know how
mechanical minds recognize patterns in data. That is enough for now.

Bigger Brains
I described ANNs as miniature versions of a human brain. That is true today, but the size of ANNs can grow over time. There is no technical limit. You can keep adding layers if you have the computing power to train the model.
As mechanical minds grow, they become more general-purpose. For example, the Generative Pre-trained Transformer (GPT) algorithm from OpenAI predicts the next words in a sequence. The GPT-3 version of the algorithm was
trained on 45 terabytes of language data.35 You cannot peer inside GPT-3 and see a rule that says “the dog” has a 58 percent probability of being followed by the word “is” and a 5 percent probability of being followed by the word
“ate.” Still, the 175 billion parameters in GPT-3 combine to make that prediction. Another product from OpenAI, DALL-E, uses a similar model to predict pixels given words as input. If you ask for a “cat in space eating a taco,”
DALL-E will predict a pattern of pixels likely to fit that description.
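
A toy calculation makes the idea concrete. The model assigns a score to each candidate next word, and a function called softmax turns those scores into probabilities. The scores below are invented for illustration:

    import math

    # Hypothetical scores for words that might follow "the dog".
    scores = {"is": 4.1, "ate": 1.7, "barked": 0.9}

    # Softmax: exponentiate each score, then normalize so they sum to 1.
    total = sum(math.exp(s) for s in scores.values())
    probabilities = {w: math.exp(s) / total for w, s in scores.items()}

    print(probabilities)  # roughly {'is': 0.88, 'ate': 0.08, 'barked': 0.04}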
Models like OpenAI’s GPT and DALL-E seem like magic. How can mechanical minds write a story and create whimsical illustrations about a dinosaur who loves ice cream? It is simply a matter of scale. Patterns are everywhere;
larger models can find more complex patterns than smaller ones. Following that logic, the human brain is not magic. It is simply more expansive and finely tuned than today’s mechanical minds.
The size of the human brain is not growing by an appreciable amount. Those 86 billion neurons can be reconfigured in new ways, but you cannot produce 50 billion new neurons no matter how hard you try. Machines do not face
the same constraints. There is no theoretical size limit for mechanical minds. Growth is only constrained by computing power and training data.36 Larger mechanical minds create more opportunities to substitute machine labor for
human labor.
If historical trends persist, it should be possible to purchase the computing power of a human brain for $1,000 within the next few decades. If people are biological computers, the demand for human labor will fall sharply.
Fortunately, we still have a few tricks up our sleeves.

Bag of Tricks
What makes the human brain unique is not raw computing power. Machine learning has a path to overcoming that barrier to automation. Instead, humans interact with each other and the world differently than machines. This is where
the fruits of evolution outcompete the powers of innovation.
First, machines do not simplify problems as efficiently as humans. The human eye sees with a resolution of up to 576 million pixels.37 Passing all of that data to your brain would be overwhelming. You only see with that clarity
using your fovea, a slight depression in the retina where visual acuity is highest. The fovea covers about one percent of your retina but provides half the information sent to your brain. That is why your eyes are constantly scanning the
world around you. Your biological systems for attention allow you to simplify problems in ways that amplify your brain’s computing power.
Second, humans operate as a flexible collection of general intelligence units, whereas machines operate as a rigid collection of narrow intelligence units. Humans specialize, but that does not make us narrow intelligence. We can
specialize in whatever we want. We can train to be truck drivers, teachers, doctors, or programmers by rewiring our brains. If a machine is trained to drive a truck, the only thing it can do well is drive a truck. This flexibility makes it
easier to repurpose human labor as needs evolve.
Finally, machines require humans to define an objective function38 to guide learning. Human objective functions are inherited. We are hardwired to maximize the likelihood of our genes surviving into the future. This creates
unintended consequences, but at least we do not wander through life aimlessly. We have goals and organize our lives accordingly. Machines do not. Machines look to us for goals.
These capabilities may eventually be replicated in mechanical minds. Researchers are working on algorithms for attention, and there is no technical barrier to optimizing machines for survival. However, until these problems are
solved, humans will have an advantage in certain types of work.39

Stronger Together
A person cannot be an expert in every domain. I cannot repair a modern car or perform open-heart surgery. A single human cannot even build a pencil from scratch.40 Our strength comes from our ability to collaborate with other
humans.
Human progress accelerated as complex language emerged. Our ability to transfer data between humans is pivotal to our success as a species. Complex human language has existed for thousands of years. The internet has existed
for a few decades.
I believe we are in the early stages of building a machine society that links together the narrow intelligence of individual mechanical minds. A credit scoring algorithm can “talk” to a loan approval algorithm that can “talk” to a
system that originates mortgages. These connections are clunky today, but we should expect individual machines to collaborate more effectively over time.
The question referenced at the beginning of this chapter about when machines will achieve human-level intelligence is irrelevant. Machines will not displace human labor by outcompeting us at our own game. Machines will
replace human labor by linking together the narrow intelligence of individual machines into something that cumulatively resembles general intelligence.41 We are rushing headlong into that eventuality.
Mental Model #3 - Patterns Everywhere

Artificial intelligence and human intelligence are different in many ways, but both are systems for finding patterns in data. Machines need not achieve human-level intelligence to replicate the capabilities of humanity.
Story: Department of Character Recognition
In the previous chapter, we saw how Intelligent Document Processing (IDP) platforms can automate reading. Behind the curtain, an IDP platform is not a single mechanical mind. It contains multiple minds diligently performing
simple tasks (Figure 9).

Figure 9
Structure of an IDP Platform

Human-powered organizations have departments like Accounting, Marketing, Sales, and Legal. Each department is responsible for a type of work and contains human workers capable of doing that work. The “departments” housing
digital workers have fewer responsibilities but are organized similarly. Let us tour the “departments” inside an IDP platform.
Our first stop is the “Department of Image Correction” (Label #1). When you scan a document, the image is frequently imperfect (e.g., scanner marks, shadows). The digital workers in this department are trained to identify issues
and “clean” the image. Your brain does this effortlessly. You do not get confused when I show you a crumpled document. Your brain has seen lots of documents in less-than-perfect conditions.
Next, we head to the “Department of Document Classification” (Label #2). You have likely seen algorithms that can classify images like “dog” or “cat.” The digital workers in this department do the same for tax returns and bank
statements. Again, your brain does this effortlessly. You can “feel” this process working if you are shown a document you have never seen. The feeling of confusion is your brain struggling to classify a new layout.
Exiting the classification department, we come upon the “Department of Field Extraction” (Label #3). A field is a block of information contained in a document. “First name” is a field. “Account number” is a field. This step
requires effort for both humans and machines. If I hand you an invoice and ask you for the “ship to” address, it will take you a second to find it. These digital workers are doing the same, diligently scanning each document for logical
blocks of information.
Our final stop is the “Department of Character Recognition” (Label #4). Within each field, there is a series of characters. Your name is a series of letters. Your phone number is a series of numbers. Characters are nothing more
than small images. The letter “S” looks like a squiggly line. The number “0” looks like an oval. These digital workers are trained to identify the characters contained in each field.
The digital workers in these departments do not work alone. If a machine gets confused, the task is handed to a “Troubleshooting Department” (Label #5) staffed by humans. For example, digital workers sometimes have difficulty
telling the difference between a “5” and an “S,” especially if the text is handwritten. Each time the humans help, the digital workers learn.
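
Here is the tour compressed into a schematic Python sketch. Every function name is a placeholder of mine, not any vendor's API, and the stubs stand in for trained models:

    def correct_image(img):         return img                       # Label 1
    def classify_document(img):     return "tax_return"              # Label 2
    def extract_fields(img, kind):  return {"first_name": "R0bert"}  # Label 3
    def recognize_characters(fld):  return (fld, 0.80)               # Label 4
    def ask_human(fld):             return "Robert"                  # Label 5

    def process_document(image):
        image = correct_image(image)
        kind = classify_document(image)
        data = {}
        for name, field in extract_fields(image, kind).items():
            text, confidence = recognize_characters(field)
            if confidence < 0.95:        # the machine is "confused"
                text = ask_human(field)  # a human answers, and the
                                         # machine gains a training example
            data[name] = text
        return kind, data

    print(process_document("scanned_page"))  # -> ('tax_return', {'first_name': 'Robert'})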
To most people, IDP platforms are a black box. A scanned document goes in, and formatted data comes out. Why do you need to care about the digital workers inside? It is the same reason you need to care about human workers.
How do you expect to get the most out of employees without understanding their thoughts and motivations?
IDP platforms demonstrate the power of simple mechanical minds. None of the digital workers in an IDP platform can displace humans. However, working together, these mechanical minds have virtually eliminated human-
powered data entry. I believe pursuing a single machine capable of general intelligence is a red herring. The machines collectively will achieve something close to general intelligence long before an individual machine can outcompete
an individual human.
EXPONENTIAL PROGRESS: RISE OF ARTIFICIAL INTELLIGENCE
Humans are building mechanical minds at a frenetic pace. Each day, there is a story about AI doing an unimaginable task. From writing podcasts to predicting protein-folding to creating award-winning art, it feels like machines are
poised to displace humans at a breakneck pace. Yet, for all the news stories, little has changed.
It is easy to dismiss these stories as hype. AI winters during the 1970s and 1980s prove that expectations can outpace reality.42 Today, organizations are jumping on the AI bandwagon without technology backing their audacious
aspirations. A 2019 study found that 40 percent of European AI companies were not using any machine learning at all.43
However, the hype argument ignores underlying trends fueling the growth of mechanical minds. AI enthusiasts of the 1970s and 1980s were not wrong about the future. They were wrong about when the future would arrive.
We fixate on what happens in our life and often miss the bigger picture of where our chapter fits into the story of humanity. I cannot tell you whether artificial intelligence will surpass human intelligence in your lifetime. If it does,
we are living through one of human history’s most thrilling (and terrifying) periods. My biases might lead me astray, but permit me to make my case for why this moment matters.

Steep Curves
Humans struggle to conceptualize exponential growth. This makes sense. Our brains are looking for patterns in data. We can make all the observations we want during the flat portion of an exponential curve, but the insights we draw
will be of little use in predicting the steep part.
This phenomenon is known formally as Exponential Growth Bias (EGB).44 Fortunately, this evolutionary shortcoming does not tend to hurt us often. Events in the natural world mostly unfold in predictable ways. The tree outside
your window does not grow 50 feet overnight. “Jack and the Beanstalk” is a fairy tale, not a documentary. When the natural world behaves in non-linear ways, it surprises us.
In January 2020, there were about 100 reported COVID-19 cases per day in the United States. By mid-March, when states began lockdowns, there were more than 8,000 per day. By January 2022, the pandemic’s peak, there were
more than 3,000,000 per day.45 I remember my former colleagues frantically forecasting case counts in those early days. The events of the present were of little use in predicting the future.
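
A back-of-the-envelope calculation shows why linear thinking fails here. The numbers below are invented for illustration, not actual case counts:

    # Compare a linear forecast with exponential compounding over 60 days.
    cases_today = 100
    daily_growth = 1.10  # 10 percent more cases each day

    linear_forecast = cases_today + 60 * 10          # "about 10 more per day"
    exponential_forecast = cases_today * daily_growth ** 60

    print(linear_forecast)              # -> 700
    print(round(exponential_forecast))  # -> 30448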
With COVID-19 fresh in our minds, it is surprising to hear debates about the future of artificial intelligence. Making predictions is valuable if you believe artificial intelligence will progress linearly. If you think we are on the flat
part of an exponential curve, forecasters are trying to infer too much about the future based on what we know today.
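To make the point concrete, here is a minimal sketch of the trap, using toy numbers rather than real epidemiological data: cases that double every five days, and a forecaster who fits a straight line to the first month of observations.

    # Toy model: exponential growth vs. a linear forecast
    import numpy as np

    days = np.arange(0, 91)
    actual = 100 * 2 ** (days / 5)   # cases double every 5 days
    early = days < 30                # the "flat" part of the curve

    # Fit a straight line to the early observations only
    slope, intercept = np.polyfit(days[early], actual[early], 1)
    linear_forecast = slope * 90 + intercept

    print(f"Linear forecast for day 90: {linear_forecast:,.0f}")
    print(f"Actual value on day 90:     {actual[-1]:,.0f}")

The linear model, perfectly reasonable on the data it saw, misses the day-90 value by more than three orders of magnitude. That is Exponential Growth Bias in miniature.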

Exponential Inputs
It takes three ingredients to build mechanical minds. The first is computing power. Each node in an artificial neural network performs a calculation. The second ingredient is data. Patterns are everywhere, but machines can only learn
from information codified in the digital world. The final ingredient is human labor. Mechanical minds do not build themselves (yet).
Computing power has been growing exponentially for a while. Moore’s Law states that the number of microchip transistors doubles every two years. The prediction has proven resilient over the past 50 years (Figure 10).

Figure 10
Number of Transistors on Microchips Over Time

Source: Karl Rupp46

There is a debate over whether Moore’s Law can persist for another 50 years. Transistors can only be packed so tightly on a microchip. What you do not see in Figure 10 are technological advancements like quantum computing that
could unlock sustained exponential progress. There are reasons to be both optimistic and skeptical. Regardless of your belief, I think it is too early to write an obituary for Moore’s Law.
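The arithmetic behind the curve in Figure 10 is simple compounding. With a doubling period of two years, transistor counts grow as

    N(t) = N_0 \cdot 2^{t/2}

so 50 years of Moore's Law means a factor of 2^{25}, roughly a 34-million-fold increase. Compounding of that magnitude is why the early decades of Figure 10 look deceptively flat.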
Now let us turn our attention to data. Again, we see evidence of exponential growth (Figure 11). This trend accelerated in recent years as the average share of digital interactions globally rose from 36 percent in December 2019 to
58 percent in July 2020.47 More interactions in the digital world mean more data for machines.

Figure 11
Total Data Created, Captured, Copied, and Consumed Globally
Source: Statista48

Look around you. How many cameras, microphones, and other Internet of Things (IoT) devices do you see? I am sitting in a coffee shop. Every customer has a phone with a camera and microphone. Many are wearing smartwatches.
The coffee shop has three security cameras. Two point-of-sale systems gather transactional data with each swipe and tap.
There are expected to be more than 60 billion connected devices by 2025.49 Advancements in computing power amplify the data-gathering capabilities of these devices. I do not view data as the new oil. The quantity of oil is
fixed, and oil disappears once consumed. Data is growing exponentially and is endlessly reusable.
The final ingredient is human labor. At last, we have an input that does not grow exponentially. There are about 24 million software developers worldwide (less than one percent of humans),50 and no more than a few hundred
thousand specialize in artificial intelligence (less than 0.01 percent of humans).51 Even if we go “all in” on mechanical minds, there are limits to how much human labor can be allocated to a single domain. There are already 2.5 times
more developers than doctors in the world.52
Human labor does not grow exponentially, but machine labor can. Many of the tasks involved in building mechanical minds, such as model training, validation, and testing, are themselves increasingly automated. Even the role of computer programmer appears to be within reach of machines.53 If this is beginning to feel like a circular reference, there is a good reason.
All roads lead back to computing power. If you want to understand the rise of AI, you need only understand a simple relationship. An integrated circuit is a two-dimensional object. If you reduce the distance between transistors,
you increase the number of transistors on a chip. We cannot shrink process sizes forever, but we are far from the physical world’s limits.
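A rough way to express that relationship, ignoring real-world complications like wiring, memory, and yield: for a chip of area A and transistor spacing d, the count scales as

    N \approx \frac{A}{d^2}

Halving the distance between transistors therefore quadruples the count on the same chip. That quadratic payoff for a linear shrink is what has kept Moore's Law alive.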
These are the ingredients for producing mechanical minds. Scaling requires other inputs like raw materials and energy that historically have not grown exponentially.54 Bottlenecks will slow the rise of AI from time to time, but
each innovation will unlock a new rung on the exponential ladder.

Hostile Workplaces
Most organizations are still powered by human labor. Leaders tout AI successes, but modern enterprises’ computing power remains largely biological. Try removing human labor from the economy. We attempted it for a brief period
during the COVID-19 pandemic. It did not go well.
One reason organizations are still powered by humans is that the entire construct of the modern enterprise is grounded in how people work. We organize work to generate economies of scale and skill across humans. We place
people into shared service centers to promote best practice sharing and capacity pooling. We organize people into functions to cultivate skill development and information sharing. Modern enterprises are designed for humans, not
machines.
In practice, the slow rate of workplace change is the most significant barrier to AI adoption. Companies are slow to reallocate capital and resources. One survey found that across 1,600 U.S. companies, one-third of businesses
barely changed capital allocations year to year (mean correlation of 0.99). For the economy as a whole, the correlation was 0.92.55 Forget exponential progress. This is barely linear progress.
Forcing a machine to work in an environment designed for humans is like asking the machine to work with two metaphorical arms tied behind its back. Imagine you showed up to work, and your documents had been translated
into binary. Your boss then instructed you to make thousands of high-stakes decisions every hour. Remember that image the next time you complain about a machine not doing its job.
Tasking Information Technology with managing all the machines in an organization is like asking Human Resources to manage all the humans. It is no wonder IT organizations are splintering into multiple groups led by executives with impressive-sounding titles like Chief Technology Officer, Chief Data Officer, and Chief Digital Officer. I believe this fracturing is a symptom of a more significant problem. Modern enterprises were not designed for machine labor.

Humans Out of the Loop


For a brief period, I subscribed to the narrative that humans and machines working together would always be better than machines acting alone. This makes intuitive sense. Humans and machines have complementary skills. In chess,
machines have been capable of beating humans since 1996. For a time, pairing humans and machines to create a “cyborg” or “centaur” resulted in a team that could beat any machine playing alone.
There is one problem with the “human + machine > machine” formula. It assumes the value added by the human is always positive. That is true if the human can do something the machine cannot, and the human does not interfere
with the machine’s work. The first point is hard to validate without understanding the human brain. I will explore that topic more in Part 2 of this book. Let us focus instead on whether humans are helpful mentors or annoying
coworkers.
Researchers from the University of Washington and Microsoft looked at studies comparing the performance of human teams making decisions with and without AI assistance.56 In each case, AI acting alone outperformed unassisted humans. That was hardly surprising. Predicting loan defaults and answering quiz bowl questions are exactly the kind of work machines love.
Surprisingly, the studies also showed that AI acting alone outperformed human teams with access to AI explanations (Table 1). The human intervention did not lead to better performance. It led to worse performance. The humans
would have been better off rubber-stamping the AI recommendations.

Table 1
Human-AI Team Performance

Source: University of Washington and Microsoft

Do these studies imply AI should be left to work independently? No. Humans often have experiences (data) and values (objectives) that are difficult to codify. There are plenty of examples of AI going haywire without human
supervision, but we should not insert humans into every decision to be “safe.” If an AI identifies a tumor in an X-ray, do you want a human overriding the diagnosis?
Over the next several years, most AI applications will benefit from humans in the loop. Digital infrastructure is not entirely in place, and mechanical minds are simple. Humans generally add value, whether that means reviewing
outputs or managing exceptions. Unfortunately, over time, we will likely become the annoying coworker rather than the wise mentor. In many workplaces, that day may arrive sooner than you expect.
Mental Model #4 - Exponential Progress

Artificial intelligence capabilities grow exponentially and are not always enhanced by human involvement. The present can tell us little about the future, and the rise of AI has significant implications for workplaces and our roles
within them.
Story: Deconstructing Alphabet
It feels like we are at a tipping point where machine labor is overtaking human labor in parts of society. When you type a search term into Google or watch a video on YouTube, no human is behind the scenes fulfilling your request.
However, Alphabet (the parent company of Google and YouTube) still had about 150,000 human workers in 2021. That is a lot of biological computing power.
There is no generally accepted way of comparing human and machine labor. Ray Kurzweil estimated it would take about one million lines of code to simulate a human brain.57 Using that logic, Alphabet and its two billion lines of
code58 employ about 2,000 human-equivalent machines. That seems low.
The problem with Kurzweil’s estimate is that a line of code is not a worker. A line of code can be executed a single time per day or millions of times per minute. Lines of code might be a proxy for logic, but they are not a good
measure of machine labor.
This is an unsolvable problem until we fully understand the human brain, but humor me while I try. The Agricultural Revolution produced more food to power more humans. Food is energy for human labor. Similarly, machines
require energy to work. Perhaps we can use energy consumption as a proxy for machine labor. This will require some tedious math. You can jump to the last paragraph in this section if you want the punchline.
Alphabet used 18.5 terawatt-hours of electricity in 2021,59 most of which went toward data centers like those housing the mechanical minds behind your Google search. Only about half of data center power consumption goes
toward computing, with the balance used for cooling, power conversion, and other infrastructure-related needs.60 In 2021, top-ranked computers were capable of 25 gigaflops per watt.61 Efficiency increases yearly, but let us assume
Alphabet’s machines are relatively efficient at 10 gigaflops per watt.
Converting 9.25 terawatt-hours per year into an average draw of about one gigawatt, then multiplying by 10 gigaflops per watt, yields roughly 10,500 petaflops (a petaflop is one thousand trillion computations per second) of average capacity. This number is hard to confirm, but Google announced that it had built the world's fastest AI training supercomputer, with 430 petaflops of peak performance, the prior year.62 Therefore, 10,500 petaflops of total capacity may be reasonable.
This is where things get tricky. There are widely varying estimates for how many computations the human brain performs per second. One credible estimate comes from Joseph Carlsmith in a paper published in 2020.63 He consulted with 30 experts and used four methods to generate his estimate that a single human brain is capable of 10^15 flops (1 petaflop). Carlsmith's estimate implies Alphabet has about 10,500 digital workers active 24 hours a day, seven days a week.
Human workers cannot devote every minute of every day to work. We need to eat, sleep, and watch the occasional YouTube video. Each of us is only productive for about three hours per day.64 How much of our brain is used to generate that productivity? I have no idea. Let us assume all the neurons in our brains are required for work. Three productive hours a day, five days a week, amounts to 15 of the 168 hours a machine works each week. Applying that ratio to the 150,000 human workers at Alphabet leaves about 13,000 always-on human-equivalent workers.65
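For anyone who wants to check the tedious math, here is the whole back-of-envelope calculation in code form. Every constant is an assumption stated above, not a measured value:

    # Back-of-envelope: Alphabet's workforce in "human brain" units
    HOURS_PER_YEAR = 24 * 365

    compute_wh = 18.5e12 * 0.5                 # half of 18.5 TWh goes to computing
    avg_watts = compute_wh / HOURS_PER_YEAR    # ~1.06 gigawatts of average draw
    total_flops = avg_watts * 10e9             # assumed 10 gigaflops per watt
    digital_workers = total_flops / 1e15       # Carlsmith: ~1 petaflop per brain

    # Humans: 3 productive hours a day, 5 days a week, vs. 24/7 machines
    human_equivalents = 150_000 * (3 * 5) / (24 * 7)

    print(f"Digital workers:   {digital_workers:,.0f}")    # ~10,600
    print(f"Human equivalents: {human_equivalents:,.0f}")  # ~13,400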
Is my math wrong? Absolutely. Is it believable that Alphabet is run by 10,500 machines and 13,000 human equivalents working 24 hours a day, seven days a week? That does not seem ridiculous. Alphabet is a technology
company on the leading edge of adopting machine labor. This could be why it feels like we are at a tipping point for AI. Maybe we are.
EARLY DAYS OF THE AUTOMATION REVOLUTION
Introduction to Part 2
We see automation all around us. But why? No law of nature requires us to replace human labor with machine labor. There is no cabal of billionaires and political elites plotting the demise of human workers. There is no secret society
colluding with robot overlords to take over the world. Why do we feel an innate desire to automate work?
I believe the answer lies in the fundamentals of economic theory. We are wired to survive in a world of scarcity. More workers and productivity lead to more goods and services for everybody. We cannot help but pursue
technologies that increase output per human. Our sense of survival is inextricably linked to producing more stuff.
These ideas are hardly new. The importance of scarcity in shaping human behavior has been well-documented in economics66 and psychology.67 Scarcity is why we crave everything from sugary snacks to luxury goods.
Automation offers the promise of abundance.
I cannot tell you where we are on the automation journey. I have no idea what life will be like in 2030, 2050, or 2100. I can say with relative certainty that we are in the early days. Recommendation engines, self-driving cars, and
large language models are the beginning, not the end.
You should be skeptical of any person or institution claiming to know the future of automation and AI. In my experience, the Dunning-Kruger effect68 explains most prognostications better than fact-based analyses. In the words
of a good friend, “The difference between confidence and arrogance is ability.” Be careful not to confuse the two.
Instead of forecasting the future, I try to understand what is happening today. I care whether progress is accelerating or stagnating. I care whether automation is showing up in micro- and macro-economic metrics. I care whether
anecdotes and headlines are rooted in fundamentals.
Part 2 of this book provides mental models you can use to explore the effects of automation on institutions and individuals. These are the models I use to decide what to care about, whether rhetoric is grounded in fact, and which
innovations are likely to bear fruit. Part 1 was about seeing the world through the eyes of machines. Part 2 is about recognizing machines in the world.
I begin where I ended Part 1, looking at the growing role of mechanical minds. Next, I investigate what happens inside traditional, human-powered organizations racing to adopt new technologies. I then look at how disruptors seek
to displace existing enterprises by placing machines, rather than humans, at the center. Finally, I explore whether we are becoming more like machines at the same time machines are becoming more like us.
"WORKING" CAPITAL: OUR EXPANDING LABOR SUPPLY
Mountain View, California is a surreal place. At first glance, it looks like a typical, upscale suburban community. Trendy shops and restaurants line Castro Street, and you are never more than a block from a decent latte. It was not
until I saw the swarm of food delivery robots buzzing down the sidewalk that I realized how much technology shapes the community.
Mountain View is the home of Alphabet Inc., the parent company of Google. In 2021, the city’s daytime population was just under 133,000 people. That same year, almost 26,000 workers showed up daily at the Googleplex. The
next largest technology employers in town are Intuit, LinkedIn, and Microsoft, with about 2,000 employees each.69 It is difficult to overstate the extent to which the fates of Google and Mountain View are inextricably tied.
You may not realize that Council Bluffs, Iowa has as much right to claim the title of “home to Google” as Mountain View. The worker handling your Google search is likelier to hail from the Midwest than sunny California. You
will not see these workers bustling about on the streets of Council Bluffs. They are too busy toiling away 24 hours a day, seven days a week, in a $5 billion data center down the street from the similar data centers housing Meta’s and
Microsoft’s digital workers.70 We need a new way of thinking about the hidden workforce powering our most prominent institutions.

Help Wanted
Economists are enamored with growth. Rising output expands the economic pie, creating more goods and services for everybody. I have concerns about a "growth at all costs" mindset, and I am skeptical of anybody who believes we allocate gains fairly across society. Relying on Gross Domestic Product (GDP) as the yardstick of human progress is flawed, but it is difficult to argue against economic growth as an engine of opportunity.
Growing an economy requires two inputs: labor and capital. There are ways to use existing resources more efficiently, but productivity alone is insufficient. In Part 1 of this book, we looked at capital investments related to digital
infrastructure and the missing $3 trillion economic pie. In this chapter, we focus on the other input - labor.
If you think we have moved beyond a world where growth is dependent on human workers, you would be mistaken. The Industrial Revolution increased output per worker, but economies still depended heavily on human labor.
Figure 12 shows the 10-year rolling average growth rates for population and real GDP in the U.S. since 1960.

Figure 12
U.S. Population and Real GDP Growth Rates
10-Year Rolling Average
Source: World Bank71 and U.S. Bureau of Economic Analysis72

Like many developed economies, the population in the U.S. is growing more slowly each year. This creates a drag on GDP that will only get worse in the future. According to the United Nations, the global population will peak at
around 10 billion in 2080.73 We cannot rely on new human labor to fuel economic growth indefinitely. Automation is imperative.

Emergent Capabilities
Capital amplifies worker contributions in a society powered by human labor. The Federal Reserve Bank of St. Louis describes capital as “the machinery, tools, and buildings humans use to produce goods and services.”74 This
definition has served us well for centuries but is worth revisiting.
Let us start by asking what role humans play in the economy. Fundamentally, human workers sell skills and time to the market in exchange for compensation. I did not show up to my consulting job for more than a decade out of
the goodness of my heart. The work was interesting, but I would not have done it for free.
We track metrics for how much time workers sell to the market. Hours worked are an imperfect proxy for productive time, but the two measures are correlated. That said, human labor is not simply a function of hours worked. The
value of my time as a heart surgeon is zero, but I was paid generously for solving complex business problems. The value of human labor is determined by the unique skills we each bring to our jobs.
We build new skills in response to changes in demand. No human spoke the language of computers in 1900, but today we have more than 24 million people earning a living by selling that capability. The framework of capabilities
proposed by McKinsey Global Institute (Figure 13) serves as a helpful guide for organizing human skills.

Figure 13
Framework of Capabilities
Source: McKinsey Global Institute75

The framework lets us examine the labor humans sell to the market. For example, you might think of a carpenter as someone who sells physical capabilities. However, technology already exists to build a machine capable of swinging
a hammer. Carpenters are paid for capabilities like sensory perception (e.g., navigating construction sites), problem-solving (e.g., dealing with imperfect lumber), and coordination with multiple agents (e.g., homeowners, architects,
suppliers). What capabilities are you paid for in your job?
Machines can work 24 hours a day, seven days a week. Meanwhile, humans contribute less than three hours per day of productive time. That means a machine delivering the same skills can offer almost ten times the labor of a
human. It is nearly impossible for humans to outwork machines when performing similar tasks.
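As a quick sanity check on "almost ten times": a machine offers 168 hours of labor per week, while a human contributing three productive hours a day offers between 15 hours (a five-day week) and 21 hours (working every day of the week):

    \frac{168}{21} = 8 \qquad\qquad \frac{168}{15} \approx 11

Eight to eleven times, depending on the assumption, is where "almost ten times" comes from.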
Fortunately for humans, the capabilities of machines are limited. Machines are exceptional at recognizing known patterns in data but lack the social and emotional reasoning skills needed to succeed in complex, human-powered
organizations. As long as there are capabilities that humans can do better than machines, there should be sufficient demand for human labor.
If you are having difficulty distinguishing between labor and capital in the context of artificial intelligence, you are not alone. Why is a human call center agent updating a customer’s address considered labor, but economists
consider it capital when a machine does the same activity? The distinction was clear when human labor was required for nearly every task. The call center agent was labor, and the telephone was capital. But how should we think about
a chatbot that works autonomously?76
If we continue to view machines as capital, our imagination will be gated by the number of human workers. If we think of machines as labor, the possibilities are endless. What would living in a world with 4 billion human workers
and 100 billion digital workers mean? That is not a hypothetical question. I made the case that Alphabet comprised 10,500 machines and 13,000 human equivalents in 2021. The company may already be run predominantly by
machine labor, which could soon be the norm.

Job Security
There should be a demand for human labor as long as humans possess capabilities beyond the reach of machines. What makes me so confident? Why are we not headed toward 50 percent unemployment and Luddite-championed
riots?
Machines have been replacing human workers for an ever-increasing number of tasks for decades. Industrial automation pushed workers out of jobs with a heavy reliance on physical capabilities. In turn, new jobs were created to
use the excess cognitive, social, and emotional labor available in the market. Industrial automation led to disruption but not widespread unemployment.
You can see the shift away from physical labor in the employment numbers for the United States. Figure 14 shows the composition of jobs in 1920 relative to 2022. This is admittedly an imperfect analysis. The numbers were derived by taking the subset of jobs comprising 80 percent of total employment and assigning each to one of three categories: physical, cognitive, and social & emotional. Jobs are complex and require multiple capabilities, so I cared only about which bucket of capabilities workers were primarily paid to provide. Under that rule, a carpenter is a cognitive job even though the role also requires physical labor and social interaction.

Figure 14
Changes in Capability Requirements
Source: U.S. Bureau of Labor Statistics77 and U.S. Census78

The decline in jobs primarily requiring physical capabilities is hardly surprising. The impact of industrial automation is well documented. What might be more surprising is where the new jobs were created. There is a popular narrative
that physical automation pushes humans into cognitive jobs and that cognitive automation will push humans into social and emotional jobs. That might be true, but more than a third of humans already earn a living primarily by selling
social and emotional labor.
The paradox of automation states that the more efficient an automated system is, the more critical the human contribution.79 If a system is 50 percent automated, the relative contributions of humans and machines are similar. If a
system is 99 percent automated, the contribution of human workers is not 1 percent of the total. Without humans, the system cannot function. You need humans to unlock the full potential of the machines.
The paradox of automation does not mean human workers capture the total value of their contributions. Wages are a function of the labor supply for a skill. For example, a trader using algorithms to buy and sell stocks has a rare skillset that commands a high wage. Conversely, a shared service center worker dealing with expense report exceptions often works for minimum wage. Both are examples of automated processes requiring human labor. However, far more humans can recognize a taxi receipt than can value a financial security.
If we are headed towards a world with 100 billion digital workers, there should be plenty of work for humans. Concerns about spiking unemployment are likely overblown. The more important question is whether automation will
create fulfilling and lucrative jobs or if we are headed towards a world where most humans provide commodity labor. Low unemployment is hardly worth celebrating if most workers cannot afford to put food on the table.
Whether automation of cognitive tasks will create good-paying jobs for humans hinges on what you believe about social and emotional skills. If you think social and emotional skills are uniquely human, concerns about automation
are probably unfounded. Automation will lead to disruption, but new jobs should harness the market’s excess social and emotional labor.
Unfortunately, I believe time will reveal that social and emotional skills are high-order cognitive functions within reach of machines. Work is already underway to train algorithms to detect human micro-expressions and emotions.
It may seem ridiculous that a human could ever feel an emotional connection to a machine. But do not be so quick to dismiss the idea that a computer-generated character in your virtual reality headset could one day be your best
friend.

Coming Attractions
All this talk of labor markets may seem abstract. It is not like most of us spend time poring over reports from the Bureau of Labor Statistics or eagerly awaiting the latest Beige Book from the Federal Reserve Bank. Fortunately, there is plenty of concrete evidence in daily life if you know where to look.
A commonly cited example in modern times is the automated teller machine (ATM). In 1980, there were about 500,000 bank tellers in the United States. By the turn of the century, almost 400,000 ATMs had been installed
nationwide.80 The increased machine labor surely put bank tellers out of work, right?
If you visited a bank branch recently, you know plenty of tellers still exist. In fact, the number of tellers is higher than it was in 1980. The low-wage ATM workforce made it possible to open more branches and changed what it
meant to be a bank teller. Instead of doling out cash to customers all day, tellers could focus on tasks requiring advanced cognitive capabilities, like originating loans.
The problem with the ATM fairy tale is that it ignores what happened next. Machines did not stop invading the world of financial services. Today, I spend more time using a mobile banking app than visiting the bank branch two blocks from my house. In 2022, Wells Fargo generated $74 billion in revenue and employed nearly 250,000 humans, roughly $300,000 of revenue per employee. Stripe, an online payments firm, generated $14 billion in revenue with about 8,000 people, roughly $1.75 million per employee. The ATM was the beginning, not the end.
I hypothesize that service workers will be bifurcated into two groups. The first will provide the market with technical, conceptual,81 social, and emotional capabilities for decent wages. The second will provide basic physical and
cognitive abilities in support of machines. You likely interact with workers in each of these categories daily.
For me, the first category is exemplified by content creators. Technology allows shooting, editing, and posting content online without expensive production support. An ever-increasing number of people sell social and emotional
labor one video at a time. Content creators earn more than $10 billion annually.82 Not everybody can be a YouTube star, but plenty of content creators make a decent living by selling social and emotional skills today.
For an example of human workers selling basic cognitive skills, we can return to the banking world. What happens when you dial the phone number on your credit card? The call likely starts with you speaking to a machine or
navigating a “choose your own adventure” menu. The bank’s goal is to have a machine solve your problem or direct you to a website (another machine) for what you need.
After screaming “agent” into the phone three times like a cursed magic lamp, you might be fortunate enough to speak with a human. Is this person paid for the ability to solve complex problems or create an emotional bond
between you and the bank? In most cases, no. The person is paid to follow a call script and bridge the gap between you and the company’s computer systems. Call center agents can be far more than “human APIs,” but often, that is the
role banks ask them to play.83
I am inspired by new jobs like content creators but am terrified by the explosion of low-wage service jobs. I want to believe the two will balance out over time, but the math is not on our side. If we are headed toward a world with
100 billion digital workers and 4 billion human workers, there is a good chance most “help wanted” postings will be for human APIs.
Mental Model #5 - “Working” Capital

Artificial intelligence looks more like labor than capital. Machines are unlikely to push humans out of the workforce, but there is no guarantee that the jobs of tomorrow will be better than the jobs of today.
Story: Up, Up, Down, Down, Left, Right...
Like many kids in the 1980s and 1990s, I spent my free time playing video games. From "Pitfall!" on Atari to "GoldenEye" on Nintendo, I had a front-row seat for the journey from 8-bit side-scrollers to 64-bit explorable worlds. I never stopped playing video games, but my family, job, and sleep eventually won out in the battle for my time.
Playing video games was not a job when I was growing up. You did it for fun and to escape the drama of life. When a high school classmate went to university for video game programming, many thought he was crazy. The
industry now employs more than 250,000 people in the United States alone.84 It turns out he made a wise career choice.
Two years ago, a friend convinced me to get back into video games. He had been playing “Overwatch” with other 40-somethings from Madison, Chicago, and Kansas City. My first few games went about as you would expect.
Game controllers have more buttons than I remember, and people my age are not known for their superior reflexes. So, I did the same thing I always do when I need to learn a new skill quickly. I turned to YouTube.
I watched Flats, Emongg, Jay3, ML7, and KarQ play the game and offer advice on how to “climb the ranks.” I was especially hooked on Flats’ videos. His positive attitude and quick-witted commentary were addicting. At the end
of 2022, he had nearly 370,000 subscribers to his YouTube channel. He is not exactly getting rich playing video games, but he is earning a decent living.85
I do not watch Flats’ channel because he plays the game better than anybody else. There are better players, and autonomous bots could mop the floor with any human. I watch Flats’ videos because of the social and emotional
bonds he creates. I have fun watching him play, and he has made me a slightly less terrible “Overwatch” player. I put up with the annoying YouTube ads because that is the price I pay for Flats’ content.
There are more than 50,000 YouTube channels with subscriber bases larger than Flats'.86 He will never be a household name. But that is the point. Flats can earn a living by selling social and emotional skills. He is backed by an army of digital workers, doing everything from transcoding his videos to processing his micropayments. His business was not possible 20 years ago. Social media superstars receive disproportionate attention, but content creators like Flats give me hope for the future.
How long will it be until machines come for the content creators? Companies like Google and Meta are already working on platforms that automate video creation. What if I could type “teach me how to play Moira on the Oasis
map in Overwatch” and receive a video perfectly tailored to my needs? Would I still yearn for a human creator? What if the video commentary was provided by an AI trained to mimic Flats?
I am unsure whether we are entering a golden age for human workers or if this is a fleeting reprieve from the relentless march of artificial intelligence. Either way, I do not spend time worrying about economic growth. There will
be plenty of Overwatch videos regardless of whether Flats or his ultra-productive AI twin are cranking out the content.
KEEPING UP: HUMAN-POWERED INCUMBENTS
Enterprises have been honed over generations to harness the power of human labor. You often hear some variation of “our people are our most important asset” repeated passionately as if it were the default voice line of a CEO
windup toy. That is no coincidence. Humans are the essential asset for nearly every company founded before the turn of the century.
This presents a problem. How do you inject machine labor into enterprises designed around humans? The answer for most large companies is some combination of automation, digital, analytics, AI, data, and other “buzzword of
the day” programs. Unfortunately, most of these initiatives are a financial disaster. The number of machines increases, but the impact is slow to accrue.
One explanation for the disconnect between investments and returns is digital infrastructure, which we already covered. The other reason is that firms have yet to achieve the economies of scale necessary for success. A few dozen
people cannot be expected to automate the work of thousands.
That is not to say all companies are struggling. There are success stories. But success often means solving different problems. Instead of trying to replace humans with machines, firms invent work for machines or start from scratch. Results are terrific if you ignore the parts of the organization still powered by humans.
This was a difficult chapter to write. I want to believe the enterprises employing our families and friends will “figure it out.” Unfortunately, time is not on their side. Transitioning to a machine-powered workforce is a monumental
task. In this chapter, I describe what it takes to automate work in traditional firms. Next chapter, I explain why this may not be enough.

Fingers and Toes


“Just because something can be automated does not mean it is economically viable, and just because something is economically viable does not mean you can capture the value.” I have uttered that sentence hundreds of times. It seems
obvious, but most leaders struggle to accept the implications.
Each morning, I start the day with a cup of coffee. I grind the beans, pour water over them, and enjoy the last moments of peace before the caffeine awakens me to the day’s responsibilities. There is no technical barrier to
automating my coffee routine. Technology exists to make a decent cup of coffee. But I occupy an uncomfortable niche among coffee drinkers.87 I do not like the taste of coffee made in a drip machine, and I do not value perfect taste
enough to sink thousands of dollars into an automated pour-over machine.
My problem is not that coffee production is beyond the reach of automation. My problem is that automating how I make coffee is not economically viable. Too often, leaders point to examples of automation potential without
regard for economics. “A bot could produce that report. An algorithm could approve that transaction.” They are often correct, but throwing thousands of dollars at automating a report that takes a person one hour to produce is a waste
of money. Not every task is economically viable to automate.
But what if it were economically viable to automate my coffee routine? Let us pretend that my time is so valuable that 15 minutes of capacity offsets the cost of a fancy machine. Would sending a few extra emails each morning
make a difference? Could I crack a tricky business problem in the time it takes to brew a cup of coffee? The average human worker is productive for less than three hours per day. What am I going to do with an extra 15 minutes?
This is the “fingers and toes problem” of automation. It is a macabre term if you interpret it literally, but the point is that saving bits of time around an organization is pointless if you cannot use it productively. What happens when
you automate a report? Usually, the author finds another report to produce using the time you created. All the while, nobody is reading either report.
I created Figure 15 based on my experience with hundreds of automation programs.88 This is by no means perfect, and there is considerable variation around these averages. However, as a rule of thumb, 80-90 percent of tasks in
service industries can be automated from a technical perspective. If cost were no object, most tasks could be done by machines.

Figure 15
Average Automation Potential
Service Industries

This is why companies should avoid rewarding employees for finding tasks to automate. That is solving the wrong problem. Resources are expensive, and less than half of what is technically possible is economically viable.
Encouraging people to find tasks to automate is like an Easter egg hunt where fewer than half the eggs have candy inside—lots of enthusiasm followed by a wave of frustration.
Additionally, you can only redraw organizational boxes and lines in so many ways. If I automate 30 percent of your work, I still need you. I can shift more responsibilities your way or consolidate roles, but not always. In practice,
only about half of the time created by automation can be put to productive use.
The net result is that companies typically realize only 10-20 percent savings from automation when considering all the work done by humans. There are stories of entire teams being replaced by machines, but those stories
obfuscate reality. Automation is difficult, and progress is slow.
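The funnel behind that 10-20 percent figure is easy to reproduce. The inputs below are the chapter's rules of thumb, not measurements from any particular company:

    # Automation funnel: from "technically possible" to realized savings
    technically_possible = 0.85   # 80-90% of service-industry tasks
    economically_viable = 0.45    # less than half of what is possible
    usable_capacity = 0.50        # only ~half of freed time is redeployed

    realized = technically_possible * economically_viable * usable_capacity
    print(f"Realized savings: {realized:.0%}")   # ~19% of total human work

Nudge any of the three inputs within the ranges above and the result stays near the 10-20 percent band.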

Automation Flywheel
The encouraging news is that automation economics improve over time. Headlines focus on how the “Technically Possible” category inches up with each innovation. Last year, machines could not create digital art. This year they can.
But how much of the average person’s day is spent on digital art?
As companies stumble through the early years of automation, they are planting seeds for future success. Economic viability is a function of two factors: costs incurred and benefits realized. More opportunities are economically
viable as a firm reduces its cost to automate. Similarly, as “fingers and toes” grow into “arms and legs,” the company captures more benefits. This is sometimes referred to as the “automation flywheel.”
When I began advising a well-known bank, deploying a bot took 12-16 weeks and cost more than $150,000. The management time, consulting support, and developer capacity required to automate simple tasks were staggering.
Within a couple of years, the timeline was down to 6 weeks, and the cost had fallen to $30,000. This is the power of the automation flywheel.
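A simple breakeven test shows why falling costs expand the set of viable projects. The hourly rate and hours saved below are hypothetical placeholders, not figures from the bank:

    # Breakeven: does a bot pay for itself within a target payback period?
    def is_viable(bot_cost, hours_saved_per_year,
                  loaded_hourly_rate=40.0, payback_years=2.0):
        annual_benefit = hours_saved_per_year * loaded_hourly_rate
        return annual_benefit * payback_years >= bot_cost

    # A task worth 1,000 hours a year fails at early-program costs...
    print(is_viable(150_000, 1_000))   # False: $80k of benefit vs. $150k cost
    # ...but clears the bar once the flywheel drives the cost down.
    print(is_viable(30_000, 1_000))    # True: $80k of benefit vs. $30k cost

The task did not change; the economics around it did.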
It is difficult to parse the myriad of factors behind the cost improvements. Reusable assets like blocks of code sped up the development process. Employees figured out better ways of working. Leaders designed procedures that
reduced firefighting, rework, and delays. The impact of each factor varies from firm to firm, but the overall cost trend is almost always down.
Organizations also figure out how to capture more benefits. Automation teams improve design and estimation techniques. After evidence of early success, executives become more willing to sign up for savings. Those 15-minute
blocks turn into hundreds or thousands of hours that can be put to productive use.
All this may lead you to believe enterprises are on the precipice of an automation-fueled productivity extravaganza. What stops companies from automating large swaths of human labor if economics improve with time and scale?
The problem is that Figure 15 reflects the economics after establishing an automation flywheel. Costs and benefits improve marginally after that, but the gains I described are baked into the numbers. According to McKinsey,
nearly three-quarters of firms report savings of less than 10 percent from AI adoption.89
I am not claiming all companies are fated to achieve de minimis savings from automation. Technology marches forward, and new methods like "citizen development"90 could improve economics. That said, traditional enterprises do not have an infinite timeline. At some point, running companies on a foundation of human labor will no longer be feasible.

Inventing Work
Think of the automation examples you encounter every day. Streaming services predict which show you want to watch next. Credit scoring algorithms comb through your transaction history to reduce your financial standing to a three-
digit number. Search engines scour the internet to help identify that jumping spider in your basement.
What do these examples have in common? Humans were never doing the work. The local video store employee did not ask you to list every movie you had ever watched. The loan officer at your neighborhood bank did not pore over a decade of receipts. If you wanted to know about a spider in your basement, you did not run to the library and leaf through every arachnid book.
A staggering number of machines are engaged in work created specifically for them. The examples I provided are from “modern enterprises” built on a foundation of machine labor, but the same happens within traditional
organizations. Firms stand up analytics teams to comb through data untouched by humans. They launch “digital first” products and services. Next time you see an example of automation in the wild, ask yourself, “Was a human ever
doing that work?”
Inventing new work for machines is not automation. There is no replacement of human labor with machine labor. There is simply the creation of machine labor. It makes sense for companies to go down this path. It is often
cheaper to continue paying humans to do existing work than to go through the trouble of replacing them with machines.
I cannot find evidence of success for this “invent work” approach. This is less of a recommendation and more of an observation. Perhaps companies can ramp up the “hiring” of machines so quickly that human labor is no longer a
drain on profits. However, I am skeptical that traditional firms can run away from the automation challenge.
Imagine if your favorite streaming service employed dozens of humans to dig through your watch history, carefully analyzing every click. The company would be bankrupt overnight. The idea of replacing machines with humans
sounds ridiculous. Yet, we do not blink an eye when firms pay humans for work that machines could do better and cheaper.

Unfixable
At some point, companies must face the reality that existing work needs to be redesigned for machines. The question is how to do it. Machines work differently than humans, and layering technology on top of manual processes rarely
works. Tools can enhance human productivity, but workers eventually struggle to keep up with the machines.
Many firms treat automation as a process initiative like Lean or Six Sigma. I cannot tell you how many leaders advocate for a “fix then automate” approach. The idea is to optimize processes before automating them. It sounds
logical but rarely works.
Why do programs like Lean and Six Sigma exist? In simple terms, Lean seeks to reduce waste, variability, and inflexibility. Six Sigma aims to reduce defects.91 These methods are at the core of most productivity efforts.
Companies refer to them with catchy names like “Process Excellence” or “[insert company name here] Management System,” but they are all similar. Everything is a remix.
What is the primary source of waste, variability, inflexibility, and defects? Humans. We were not designed for routine work like scanning barcodes or generating TPS reports. Just because we do something correctly 99 times in a
row does not mean we will do it correctly the hundredth time.
“Fix then automate” typically fails because optimizing a process for humans is not the same as optimizing it for machines. Yes, there are changes like simplification that benefit both. However, most “fixes” do not adequately
configure work for machines. Jobs must be broken into tasks that are then recombined in new ways that make the most of human and machine capabilities.
Nowhere is this insight more evident than in call centers. Companies layer chatbots, interactive voice response (IVR) systems, and other technologies on top of manual processes. The result is usually customers yelling “agent” into
the phone. People do not dial a phone number to be told they should visit a website.
A better approach is to start with a clean sheet, carefully dividing existing work between humans and machines. Authenticating the caller? Machine work. Counseling a grieving family member through a hardship withdrawal?
Human work. Documenting the call notes? Machine work.
Once the work is divided, it must be redesigned. This does not mean replacing humans with less capable machines in a thinly veiled bait and switch. “Press 1 for…” is not the same as asking a person why they are calling. If you
want customers to interact with you in a structured way, shut down the call center and point them to a website. If you provide a phone number, you better have a machine on the other end that can reliably understand natural language.
There is evidence for the value of starting with a clean sheet rather than pursuing a “fix then automate” approach. In a 2022 survey, McKinsey found that successful organizations employ clean sheeting and similar methods about
twice as frequently as less successful firms.92 Incremental “fixes” do not work. Success requires fundamental changes to how work gets done.
It is encouraging to see companies finding innovative ways to inject machine labor into the workforce. What happens if firms automate away all the jobs? That is a risk, but I believe the more significant risk is that today’s
incumbents fail to compete with tomorrow’s disruptors.
Mental Model #6 - Keeping Up

Traditional companies are struggling to make the economics of automation work. Fears of mass unemployment due to profit-maximizing layoffs are probably overblown. The more significant risk is that traditional organizations
employing millions of humans become increasingly irrelevant.
Story: Winning a Losing Game
In 2017, I joined a team helping a healthcare company reduce its costs by $500 million. The goal was to free up capital to invest in new ventures, technology initiatives, and acquisitions to position the company for the future.
Automation was expected to play a significant role.
My initial meetings with the automation team reminded me of proud parents gushing over the accomplishments of a gifted child. Every scorecard was glowing green. Had I found an automation unicorn? I was skeptical. I know how hard it is to automate work in an organization with more employees than many cities have residents. The lack of horror stories was itself a red flag.
Over the next week, the team and I dove into hundreds of projects. We tallied every dollar spent and saved. We cared less about stories and more about whether actions translated into financial results.
The findings from our analysis were depressing. The company had spent over $50 million on its automation program only to realize $2 million of savings. This is common. Early efforts are often a financial disaster. The client had
no idea what a money pit the automation program had become.
After navigating the five stages of grief, the leadership team took a step back and tried to determine what could be salvaged. Rather than scrapping the entire program, the team sought to double down on successful practices and pull the plug on those that were flailing. This was an opportunity to examine what works and what does not in a traditional organization.
Most of the early initiatives involved layering technology on top of manual processes. The company deployed more than two hundred bots. The value of that work was approximately zero. On paper, the bots generated 100,000 hours of annual capacity. In practice, the capacity was unusable because it was scattered throughout the organization. Spread across two hundred bots, that is roughly 500 hours per bot, about a quarter of a full-time role.
It is worth noting that the problem was not with the client’s automation flywheel. The cost to build a bot was competitive with other programs, as were the benefit capture rates. The issue was that 10-20 percent savings on each
project were insufficient to offset the fixed program costs. It was cheaper to keep using human labor.
Fortunately, the company also identified three practices that worked: simplification, analytics, and re-platforming. The first benefited both humans and machines. For example, the team reduced the approvals required for staff
augmentation requests from four to one. That meant less work for humans and made it easier to automate approvals with an algorithm.
The automation-specific practices entailed inventing work and clean sheeting. The client shifted resources from its automation team to its analytics team. Instead of building algorithms to automate tasks, the team built machines to
comb through the terabytes of data untouched by humans. One of those machines uncovered criteria for predicting revenue growth for new locations, changing how the company deployed millions of dollars in capital.
The client also redesigned its revenue cycle management (RCM)93 process from the ground up. The team did everything from upgrading the core platform (digital infrastructure) to building algorithms to aid in collections
(machine labor). The effort required two years and millions of dollars. The result was an automated process and a five percent revenue lift.
This might sound like a company that “figured out” automation. The automation teams delivered more than $100 million in savings. That seems promising, but let me share one last metric. In 2017, when this story began, the
company generated $900,000 of revenue per employee. By 2021, that number had increased to $970,000. That is a measly two percent increase in output per human each year.
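For the record, that works out to a compound annual growth rate of

    \left(\frac{970{,}000}{900{,}000}\right)^{1/4} - 1 \approx 1.9\%

over the four years.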
This example shows how hard it is for traditional organizations to adopt machine labor. This company has a valuable brand, skilled workforce, stable customer base, and unique intellectual property. It dominates the markets in
which it competes. The question is whether those assets will be enough to hold off disruptors built on a foundation of machine labor. Time and history are not on the company’s side.
STARTING OVER: MACHINE-POWERED DISRUPTORS
If you were to start a company today, how would you do it? Would you hire thousands of humans to generate invoices, answer phone calls, and produce reports nobody reads? Probably not. You would do what most startups do. You
would begin with a foundation of machine labor and only add human labor when necessary and valuable.
The success of clean sheet approaches within traditional enterprises should be a warning sign rather than a cause for celebration. If “fix then automate” worked, traditional firms would be well-positioned for the future. It would
mean that technology could enhance work designed for humans. Instead, the success of clean sheet approaches means rebuilding from the ground up is often better.
When I launched the automation service line at my consulting firm, I was worried about the social impact of my work. Would I contribute to millions of job losses as organizations raced to substitute machine labor for human
labor? My anxiety was amplified by books like “Rise of the Robots,” which painted a bleak picture of the future.
I am skeptical that automation will make its way into society by replacing human labor at traditional companies. Yes, there will be job losses. However, with each passing year, those job losses seem more likely to occur through
disruption than automation.

Here Today, Gone Tomorrow


The displacement of incumbents by innovative startups is hardly a new concept. “Creative destruction”94 refers to the “process of industrial mutation that continuously revolutionizes the economic structure from within, incessantly
destroying the old one, incessantly creating a new one.”95 In the early 1900s, creative destruction was viewed as a threat to capitalist economies. In practice, the process rapidly shifts resources from inefficient business models toward
those more likely to succeed in the future.
You can see the effects of creative destruction when looking at the average age of companies in the Standard & Poor’s 500 Index (Figure 16). Disruption is lumpy, but the overall trend is toward a younger cohort. Seven of the ten
most valuable companies in the S&P 500 did not exist before 1975.

Figure 16
Average Company Lifespan in the S&P 500 Index
Rolling 7-year Average

Source: Statista96

Historically, creative destruction has been good for human workers. New business models generate more output per labor hour, making that labor more valuable. In addition, existing firms are forced to pursue efficiencies that boost
output per worker. While painful, creative destruction fuels growth and quality of life improvements.
The era of AI is shaping up to be no different. According to a 2018 Pew Research Center survey, 82 percent of U.S. adults say that by 2050 robots and computers will definitely or probably do much of the work currently done by
humans. A much smaller share (37 percent) say robots or computers will do the type of work they do.97 Said another way, people believe the robots are coming but not for them.
This is an excellent time to remind you that humans are terrible at forecasting exponentials. The problem is not disruption. The problem is the pace of disruption. Human workers have time to adapt when disruption occurs over generations. I did not pursue a career on the automotive assembly line even though I was born in Detroit at a time when three of the top ten companies in the S&P 500 were auto manufacturers.98 If you think today's most prominent institutions will be tomorrow's most successful companies, you are betting against powerful economic forces.

Gradually and Then Suddenly


There is an often-quoted exchange from Ernest Hemingway's novel, The Sun Also Rises. Bill Gorton asks, "How did you go bankrupt?" Mike Campbell responds, "Two ways, gradually and then suddenly." No company expects to fail. Many leaders do not recognize the early signs of existential threats. By the time the dangers are understood, it is often too late.
In his 2020 letter to shareholders, the Chairman and CEO of JPMorgan Chase, Jamie Dimon, highlighted the risk of disruption to the company.99 He said, “Banks already compete against a large and powerful shadow banking
system. And they are facing extensive competition from Silicon Valley, both in the form of fintechs and Big Tech companies (Amazon, Apple, Facebook, Google and now Walmart), that is here to stay. As the importance of cloud, AI
and digital platforms grows, this competition will become even more formidable.”
JPMorgan is at the forefront of recognizing the disruptive potential of AI. It invests $12 billion per year in technology, claiming, “There are two kinds of corporations emerging from today’s technology revolution: the disrupted
and the disruptor. JPMorgan Chase is in the midst of a once-in-a-generation transformation into the latter.”100 The result? Revenue per employee has hardly budged over the past decade.
If that is the state of affairs at JPMorgan, imagine what is happening at other prominent institutions. Most CEOs believe they are embracing AI and investing for the future. In reality, I think many large corporations are slowly
going bankrupt.
In 2004, Blockbuster Video peaked with more than 9,000 stores and 80,000 employees. Six years later, the company was bankrupt.101 Looking back, Blockbuster was doomed as early as 2000 when it declined to purchase Netflix
for $50 million. That price tag represented less than five percent of the company’s operating cash flow.
It is easy to second-guess the Blockbuster management team. But who would have thought in 2000 that Netflix could build a business with five times the revenue and one-seventh the employees? If you ran a company the size and scale of Blockbuster in 2004, it was hard to accept you were facing an existential threat.
I believe we are in the “gradual” stage of AI-driven disruption. The impact on media, retail, financial services, and other industries is the beginning, not the end. We are witnessing disruption caused by companies built on
technology from the last decade. The “sudden” stage may still lie ahead.

Different by Design
Many believe that incumbents become complacent and risk-averse over time, inviting disruption. That idea has been largely debunked. You would be hard-pressed to find a CEO unconcerned about the disruptive potential of AI. Clay
Christensen’s work on disruptive technology theory points to a dynamic where incumbents cede the “low end” of the market.102 This creates a foothold for disruptors, which eventually move upmarket.
The theory of disruptive technology makes sense through the lens of competitive dynamics, but my experience points to another reason incumbents struggle. It is virtually impossible for human-powered organizations to transition
to machine labor economically.
Let us start with how information flows into the organization. In a traditional firm, most data flows are analog and unstructured. Information arrives through sales teams, call centers, emails, and other channels designed for
humans. Technologies exist to digitize and structure the inputs, but they are expensive. There is a reason disruptors want customers to engage through an app rather than a phone number. It is cheaper to have customers structure
requests in a way that machines can understand than to pay humans to do that work.
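To make the contrast concrete, here is a toy illustration in Python. The request, field names, and identifier are invented for the example; real systems define their own schemas.

```python
# An illustrative contrast between the two kinds of inputs described above.
# The request, field names, and identifier are invented for the example.

# A phone call or email arrives unstructured; a human must interpret it:
unstructured = "Hi, I'd like to move my appointment to next Thursday if possible."

# An app submits the same request already structured for machines:
structured = {
    "action": "reschedule_appointment",
    "appointment_id": "A-1042",
    "new_date": "2023-04-06",
}

# The structured version can be routed and processed with no human in between.
print(structured["action"], "->", structured["new_date"])
```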
Next, you need to process the data you collect. If you are a traditional firm, you rely on humans to do the heavy lifting. That means building machines that work for those humans. You invest heavily in user experience and
interface design to make your systems easy to use. The automated systems employed by disruptors do not require fancy interfaces. Disruptors have the luxury of building the human workforce around the machines.
Finally, you need to organize and manage how work gets done. In a traditional firm, that involves functions like Finance and Human Resources architected around expertise. It also requires management practices to motivate
people, build trust, and foster collaboration. These practices are the “secret sauce” that allows human-powered companies to harness the collective potential of thousands of employees. They are less of a differentiator if you run a firm
primarily on machine labor.
My problem with the theory of disruptive technology is that it implies blocking access to the “low end” of the market will stave off disruption. Large technology companies like Meta and Apple embrace this theory, buying any
startup that gains a foothold. This strategy is unlikely to solve the fundamental problem for most firms outside the technology sector. I do not think companies cede the “low end” of the market because they are greedy. They do so
because new technologies reset customer price expectations, and human-powered firms cannot afford to automate at the pace required to meet those expectations.

Company of One
When you hear “startup,” what do you picture? Is it a group of twenty-somethings in an overcrowded apartment? Is it an MBA grad pitching product-market fit to a venture capital investor? Is it a unicorn crossing the billion-dollar
valuation threshold? Or is it somebody handcrafting gorgeous necklaces and selling them online?

Figure 17
Millions of Small Businesses in the U.S.

Source: U.S. Small Business Administration

According to the Small Business Administration, more than 80 percent of small businesses in the U.S. have zero employees other than the owner (Figure 17).103 That percentage has grown steadily since the late 1990s. The
performance of these companies is disappointing, with two-thirds reporting financial challenges over the past 12 months. That said, the increasing proportion of firms employing few or no humans is notable.
Nonemployer firms are most concentrated in industries like performing arts and passenger transportation, but these businesses appear in almost every part of the economy.104 Advances in technology are making it easier to operate
as a “company of one.” Increasingly, all you need is talent, a computer, and an internet connection to start a viable business.
If we are headed toward a world where digital workers outnumber human workers by a wide margin, it is time to revisit what it means to build a company. Do you need a Finance department and customer service team, or will an
online bookkeeping tool and large language model (LLM) suffice? Today, a 25-person company earning $10 million in revenue is a successful small business. Is it so difficult to picture a company with one human worker and 24
digital workers achieving similar success?
I am not claiming large firms have no place in society. Seven of the ten largest companies worldwide are in the technology sector. Good luck training a challenger to ChatGPT on your laptop.105 There are scale benefits when it
comes to producing machine labor, but it is also the case that companies using machine labor extensively do not require as many humans.
AI disruption need not be painful. Technology firms like Google and Microsoft may be joined by new behemoths, pumping billions of digital workers into the economy. Human workers could then “employ” those digital workers,
providing products and services at a cost and scale inconceivable today.
This path could lead to an economy that is good for most humans. Imagine if you had a team of machines ready to help you build a small business, command a higher salary, or scale a side gig. Each wave of AI progress will lead
to more disruption, but do we care if that disruption fails to trigger massive job losses?
The traditional corporation is not an indelible part of the human experience. It is a tool for organizing human labor.106 Many of us fear automation because our livelihoods are tied to the firms for which we work. A “company of
one” has no institution to destroy. It is insulated from disruption because it can easily swap out its digital workers when better ones arrive.
That is the optimistic view. The pessimistic view involves an acceleration of trends we already see. According to the Federal Reserve Bank of New York, one-fifth of nonemployer firms were started because the owner lacked
employment opportunities.107 Access to low-cost machine labor is of little use if humans lack the skills to harness it.
I am optimistic about the future but pessimistic about our ability to manage the transition smoothly. Our educational system and social programs are designed to manage disruption across generations, not within them. We must
look for ways of closing the gap between humans and machines. Whether you realize it or not, that line is already beginning to blur.
Mental Model #7 - Starting Over

Replacing human labor with machine labor is complicated and expensive. It is often better to start from scratch. Smaller companies built on a foundation of machine labor may be less vulnerable to AI disruption than traditional
enterprises.
Story: For the Dogs
In business, there is a practice known by the delightful moniker “dogfooding.” It involves consuming your products and services to test and refine them. I have dozens of stories about CEOs struggling to navigate AI-driven disruption,
but the more exciting story is what the firms advising those CEOs are doing internally.
Perhaps nowhere is the statement “our people are our most important asset” more true than in management consulting. The firm I worked for was complex, but the business model was simple. We attracted top students from the
best schools, provided them with unparalleled work experiences, and invested heavily in feedback. We then “rented” our employees to clients trying to solve challenging problems.
With that background, you can see why AI threatens management consulting firms. The value of human problem-solving is on the way down. An analyst and a spreadsheet are no match for an algorithm poring over terabytes of
data. There is still a role for humans, but clients are unwilling to pay for human labor when a machine can do the work cheaper, faster, and better.
In 2017, the Harvard Business Review published an article titled “AI May Soon Replace Even the Most Elite Consultants.”108 The article is a call to action for management consulting firms. The author sums up the threat in two
sentences. “According to recent research, the U.S. market for corporate advice alone is nearly $60 billion. Almost all that advice is high cost, human-based, and without the benefit of today’s most advanced technologies.”
Consulting firms know AI disruption is coming. They conduct research, publish reports, and warn clients of impending doom. Most have accelerated hiring technical talent like data scientists and invested in digital assets like
demand planning algorithms, but the underlying operations remain largely human-powered. When you hire a traditional consulting firm, most of what you pay for is human labor.
How would you build a management consulting firm from scratch? The business model need not change. There is still money to be made in renting distinctive talent. What is changing is the definition of “talent.” How do you
attract, develop, and retain the best digital workers?
“Attracting” digital workers requires partnerships with leading technology companies. In the past, consulting firms relied on prestigious universities to provide a steady influx of talented humans. In the future, firms will need to
“recruit” the top digital talent from companies like Microsoft, AWS, Google, and OpenAI. Who has the best large language model? Who has the best sentiment analysis API? Accessing the best digital workers may become the new
“war for talent.”
After you attract digital talent, you need to “develop” it. In the human-powered model, that means lots of projects across many clients. Consulting firms rarely solve a problem for the first time. Expertise gained from advising one
client is often transferable to the next client. How do you create an analogous system for digital workers?
One approach is federated learning.109 Consulting firms cannot expect clients to hand over their valuable data, but most clients are willing to provide access to data while consultants work on a project. Federated learning uses data
from multiple sources to improve machine learning model performance over time. Rather than assembling massive datasets, consulting firms could use federated learning to train models on data from various clients.
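For readers who want to see the mechanics, here is a minimal sketch of federated averaging, the core idea behind federated learning, written in Python with NumPy. The clients, data, and learning rate are invented for illustration; production systems are far more elaborate.

```python
# A minimal sketch of federated averaging (FedAvg), the core idea behind
# federated learning. Clients, data, and learning rate are invented.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0])  # the pattern hiding in every client's data

# Three clients, each holding private data that never leaves their servers.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Fine-tune the shared model on one client's private data.
    Only the updated weights are returned; the raw data stays put."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

global_w = np.zeros(3)
for _ in range(10):  # each round: local training, then central averaging
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)

print(global_w)  # approaches true_w without any data changing hands
```

Each client improves the shared model on its own data, and only the model parameters travel back to be averaged. That is why the approach sidesteps the data-sharing problem described below.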
At the peak of the COVID-19 pandemic, the technology company NVIDIA teamed up with the hospital system Mass General Brigham (MGB) to build a diagnostic algorithm. Regulations limit sharing of patient data between
hospitals.110 Collecting data from multiple hospitals and centrally training an algorithm is illegal. NVIDIA and MGB circumvented this limitation by embedding their algorithm in 20 locations. No data changed hands, but the
algorithm could “learn” from multiple hospitals’ data.
The federated learning approach taken by NVIDIA and MGB is analogous to how consulting firms develop human workers. Consultants receive little upfront training. Most development takes place in the context of client work.
Federated learning offers a similar approach for digital workers. Clients gain access to distinctive digital talent while consulting firms improve the capabilities of their digital workers with each successive project.
Finally, there is the challenge of “retaining” digital workers. Machines do not pursue better career opportunities like humans. However, digital talent can be stolen by disgruntled employees or persistent hackers. Once a digital
worker is widely available, renting that worker to clients at a premium price becomes difficult. Consulting firms retain humans by offering competitive pay and benefits. A machine-powered firm must excel at cybersecurity to retain
its digital workers.
The consulting firm where I worked is not failing to act in the face of AI disruption. Partners recognize the threat and are recruiting technical talent, building digital assets, and modernizing problem-solving approaches. The
challenge is that the business is still powered by thousands of employees in hundreds of offices worldwide. A disruptor starting from scratch may yet build a better model for consulting in the age of machine labor.
LIKE US: HUMAN-MACHINE CONVERGENCE
The gap between humans and machines is closing. I have written about how machines become more capable over time and why that trend will likely accelerate. That is only one side of the story.
Machines are becoming more like us, and we are becoming more like them. AI is changing us in ways that are not obvious but are apparent if you look for them. I described why I think pursuing artificial general intelligence (AGI)
is a red herring. The other reason I have little appetite for AGI conversations is that the definition of what it means to be human is evolving.
In this chapter, I explore how we are becoming more like machines and why our actions accelerate the pace of automation. I am not advocating we reverse course. Much of what I describe is subconscious and beneficial. I simply
want us to recognize that we are increasingly playing the game of life by machine rules.

Crossing the Digital Divide


When you access a website, you initiate a data transfer between machines. Your computer sends a request to a web server, which replies with the code required to construct the website in your browser. We do this hundreds of times
each day, whether scrolling social media, sending an email, or making an online purchase.
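If you are curious, the exchange is simple enough to reproduce in a few lines of Python using only the standard library. The address is a placeholder, and running it requires an internet connection.

```python
# A minimal sketch of the request-response exchange described above,
# using only Python's standard library. The address is a placeholder.
from urllib.request import urlopen

with urlopen("https://example.com") as response:
    status = response.status          # e.g., 200 means the transfer succeeded
    html = response.read().decode("utf-8")

# The server replied with the code a browser would use to construct the page.
print(status)
print(html[:80])  # the first few characters of that code
```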
Data transfer did not begin with the invention of the internet. We have been transferring data between each other for at least 50,000 years.111 If I have a piece of “data” I want to put into your head, I translate that data into
language or an image I can convey. Language is a library of sounds and symbols we use to transfer data between humans.
Last year, I read “The Sympathizer” by Viet Thanh Nguyen. The story is engaging, but the language sets the book apart. Nguyen writes beautifully with control of language reminiscent of “the classics.” The book stands out
because it was written in 2015, not 1815.
Nguyen is not the only modern writer conveying ideas beautifully through prose. He is, however, unique when considering the trends we see in society. In a study published in 2019, researchers found the vocabulary of an average
U.S. adult declined between 1974 and 2016 when controlling for education (Figure 18).112 Despite millions of people pursuing higher education, the vocabulary of Americans barely budged in four decades.

Figure 18
Average Number of Survey Answers Correct
American Adults, Out of 10

Source: Twenge, J. M., Campbell, W. K., & Sherman, R. A. (2019)

If language evolved primarily as a method for transferring data between humans, it is worth asking whether that skill is more or less valuable in a world full of machines. When engaging with other humans, a person needs an extensive
vocabulary to convey nuance and emotion. Nuance and emotion are lost on machines. Your voice assistant does not care whether your request is beautifully formed. The more nuance and emotion, the more likely your request will be
misunderstood.
Interacting through digital channels has consequences. The average human types at a rate of only 40 words per minute.113 The average speech rate is 150 words per minute.114 You must simplify and shorten the words you use to
convey ideas through digital channels.
Additionally, digital channels strip away social and emotional context. When was the last time you read an aggressive tone into an email, even though the words were benign? As Nguyen shows, it is possible to convey emotion through
words. But why bother when asking for a report that was supposed to be delivered at 10 am?
I will avoid wading into the nuanced debate over whether digital channels like social media are healthy for society. My point is less complicated. Engaging with each other over digital channels demands a more efficient form of
language. It is unsurprising to see our collective vocabulary stagnate despite rising educational attainment.
Successfully communicating with machines requires us to cross the divide separating the analog and digital worlds. For some of us, that means learning a computer language. For others, it means typing with greater efficiency.
Either way, I believe machines are changing how we communicate with them and each other.

Machine Drift
In 2020, Kevin Roose described a feeling many of us experience.115 He pondered whether his actions were his own. “Is the salad I’m making for dinner actually what I want to eat, or do I just think the photo will make me look
healthy and responsible when I post it on Instagram later? Did I really want to watch “Schitt’s Creek,” or did I just trust Netflix’s recommendations more than my own taste? Which of my tastes, thoughts, and habits are really mine,
and which were put there by an algorithm?”
Roose referred to the sentiment as “machine drift.” The idea is that machines influence our behaviors in ways that we do not recognize. I believe this is more than the effects of digital channels on human communication. It is a
rewiring of how our brains work.
I said our brains are pattern-matching engines. We take in data from our senses and attempt to recognize patterns that inform how we interact with the world. When you walk in nature, the patterns you observe are primarily
random. Birds do not fly in the same direction. Leaves vary from one tree to the next. The wind blows in no discernible rhythm.
In the digital world, patterns are purposeful. Your streaming service does not flip a coin when deciding what to place in your watch queue. The algorithm serves you patterns it expects you will like.
There is a saying in neuroscience that “neurons that fire together wire together.”116 Neural connections used frequently are reinforced. Machines serving content consistent with existing pathways strengthen those connections.
Algorithms validate existing mental models. Life is stressful when thoughts fail to align with reality. It is comforting when life unfolds in ways we expect.
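A toy calculation makes the idea concrete. The sketch below is illustrative rather than a model of real neurons: a connection strengthens a little each time the two neurons fire together.

```python
# A toy illustration of "neurons that fire together wire together."
# Values and learning rate are invented; real neurons are far messier.
connection_strength = 0.1
learning_rate = 0.05

# 1 = the neuron fired at that moment, 0 = it stayed quiet
pre_neuron  = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
post_neuron = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1]

for pre, post in zip(pre_neuron, post_neuron):
    # The connection is reinforced only when both neurons fire together.
    connection_strength += learning_rate * pre * post

print(round(connection_strength, 2))  # the well-used pathway has strengthened
```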
Therein lies the problem that Roose describes. Algorithms for optimizing watch time, clicks, and purchases rewire our brains to do the same. It is not that Roose wants a cookie but is opting for a salad because an algorithm
recommends it. He chooses the salad because the algorithm has already re-wired his brain to seek socially acceptable food.
Think about the choices you make. To what extent do those choices align with the objectives sought by the algorithms you use? We are a long way from machines dictating the course of our lives, but their nudges are already
reshaping our decisions.

Digital Wonderlands
Our computer screens and smartphones are imperfect portals into the digital world. Typing is clumsy. Screens are two-dimensional. Life still happens around us regardless of how immersed we are in a cat video.
Advancements in augmented reality (AR) and virtual reality (VR) are set to alter how we interface with machines. Smartphone shipments grew from about 170 million units in 2009 to more than 1.4 billion in 2016 (Figure 19).117
Today, there are 171 million VR users worldwide.118 It is not difficult to imagine that number growing exponentially as technology improves.

Figure 19
Global Smartphone Shipments

Source: Statista

You might look at the state of VR and find it laughable that you would spend half of your day in a headset. That is to say nothing of issues like motion sickness and a lack of engaging content, but those are problems with the
current generation of products. Wearable devices will become smaller, faster, and cheaper. The software will also improve, including voice controls that increase the rate at which we can communicate with machines.
Over half of American adults spend at least five hours daily on a smartphone.119 We are enamored with our tiny portals into digital wonderlands. We ignore obvious defects like typing on a keyboard designed for toddlers. Is it so
difficult to believe people will use VR and AR headsets for eight hours daily?
I am not trying to convince you that VR and AR are the future. This is a question of timing, not outcome. Immersive hardware is a better way to engage in digital worlds. Next time you are in a public place, look at all the people
gazing into their smartphones. Imagine how quaint that moment will feel a few decades from now.
If you think VR and AR will change how humans engage with digital worlds, you should feel a tinge of terror. To date, machines have been competing with humans in the analog world, where we have an advantage. What happens
when our lives take place increasingly in digital environments?
To start, our digital footprints grow exponentially. Think about the algorithm powering your streaming service. Today, the algorithm learns about you through the content you watch. It identifies which genres capture your attention
and which actors you find attractive. It uses that information to serve new content, hoping you will stay glued to your screen.
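A toy version of such an algorithm fits in a few lines. The titles and genres below are invented; real recommenders weigh far more signals, but the principle is the same: count what holds your attention and serve more of it.

```python
# A toy recommender: count which genres hold your attention, then serve
# more of the same. Titles and genres are invented for the example.
from collections import Counter

watch_history = [
    ("Space Docs", "documentary"),
    ("Baking Duel", "reality"),
    ("Mars Diaries", "documentary"),
    ("Deep Sea", "documentary"),
]
genre_counts = Counter(genre for _, genre in watch_history)

catalog = [("Castle Reno", "reality"), ("Ocean Worlds", "documentary")]
# Rank unseen titles by how often their genre has kept you watching.
suggestions = sorted(catalog, key=lambda title: genre_counts[title[1]],
                     reverse=True)

print(suggestions[0][0])  # "Ocean Worlds" goes to the top of your queue
```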
Imagine how much better the algorithm would be if it had access to more data. Which shows are you talking about with your family? What caused you to binge ten episodes straight? Which shows are your friends suggesting you
watch?
There is no need to track clicks and thumb movements when you can access every input a person sees and hears. Perhaps you find the idea of an “all-knowing” streaming service attractive—no more scrolling through suggestions
that fail to reflect your preferences. But the impact of expansive digital footprints goes beyond what you watch on your sofa.
In 2022, the technology firm Celonis raised capital at a valuation of $13 billion.120 The company makes software that uses log data from IT systems to analyze what employees are doing. It is one of several companies extracting
insights from the digital footprints employees leave at work. The company uses the data to make recommendations, including which tasks can be automated.
Part of the reason it is hard to automate human work is that much of it takes place in the analog world. Machines cannot access the notes you take in meetings or conversations with colleagues. As more of those interactions move
online, we make it easier for machines to learn. Humans are unlikely to win competitions on the machines’ home turf.

Racing as One
Years ago, I met a senior executive from Neuralink at a conference in Toronto. The company aims to create brain-computer interfaces that seamlessly integrate the analog and digital worlds. Artificial intelligence is on pace to overtake
human intelligence at some point in the future. Rather than hold back the development of AI, Neuralink is building a direct connection between humans and machines.
Imagine waking up one day with a photographic memory and the ability to do complex calculations in your head. It would be as if your brain grew exponentially larger overnight. That is the promise of brain-computer interfaces.
During the conference, I recall thinking, “This sounds crazy, but I certainly would not bet against her.” I do not want to dive into the quagmire of brain-computer research or Neuralink as a company. The technology is fascinating,
but the potential pitfalls and ethics are complicated. I am comforted by knowing research is underway on a technology that could serve as a lifeline for our species.
I mention brain-computer interfaces because they represent the ultimate endgame in the convergence of humans and machines. The race between ourselves and AI is frequently portrayed as a 100-meter dash where we stay in our
respective lanes. In reality, machines are becoming more like us, and we are becoming more like machines. There need not be two runners at the end of the race. Perhaps there will be only one.121
Mental Model #8 - Like Us

The line between humans and machines blurs over time. Machines are becoming more like us, and we are becoming more like them. Convergence adds fuel to the automation fire while offering a potential lifeline in the race against
AI.
Story: Hitting the Slopes
Growing up, I loved downhill skiing. I did not care that the closest “mountains” to my house were landfills masquerading as slopes. Racing down the icy inclines was thrilling. It was not until a trip to Colorado that I realized there was
more to skiing than making the same 30-second run until my legs went numb.
Imagine my excitement when a colleague asked me to meet with the executive team of a ski resort operator. The company was looking for ways to automate its operations. Labor costs were rising, and seasonal labor was
unreliable. Humans were expensive, and good workers were difficult to find. Maybe machines were the answer.
As we started the meeting, I asked the client to describe the automation efforts underway at the company. Like most organizations, the team was building accounting bots in Finance and resume screening algorithms in HR. I find
those applications interesting, but you would be hard-pressed to find a leader who thinks competitive advantage emanates from Accounting.
One example caught my attention. The company installed scanners and scales at the bottom of each chair lift. The scanners made sense. Visitors could scan passes at the bottom to gain entry to the lift. The scale had me puzzled.
The CFO explained that the scanners were installed the prior year and worked well. The chair lines moved faster, and less labor was required. The problem was that visitors realized they only needed a valid pass to enter the lift.
The passes were easy to exchange, allowing people to “cheat” the system.
The solution was to install scales. Linking pass data to weight information allowed the resort to verify the same skier was using a pass each time. Initially, I was impressed with the ingenuity. The example sparked a conversation
about other opportunities, from automated checkout in the ski shop to robotic food service in the lodge.
After batting around a few more ideas, I paused. Did it make sense for the company to sink millions of dollars into these projects? Was the problem the cost of human labor? The company was quite profitable. Saving money on
labor would have added to shareholder returns, but was that the highest and best use of capital?
If you believe the analog and digital worlds are converging, you need to consider the implications for your organization. Ski resorts are analog businesses. My phone does not even work at the top of the mountain. It is a digital
desert.
The threat to ski resorts is not escalating labor costs. It is the merging of the analog and digital worlds. Skiing is a sensory experience. The thrill of the landscape rushing past. The feel of snow hitting your face. The sound of skis
cutting through the powder. There is no way to recreate those experiences using VR, but that need not be the case forever.
Skiing is also a pain in the butt. Unless you are local, you must fly to a remote location during a specific time of year and compete with millions of other visitors for flights and lodging. You have to buy or rent hundreds of dollars
in gear. You have to invest hours each day commuting. All this, in addition to the physical risks you invite with each run down the mountain.
Digital skiing does not have to be better than analog skiing. It simply must be good enough to pull visitors away from the mountain. Imagine a day in the future when you can strap on a headset, set your VR room climate system to
“winter,” and enjoy a day of skiing with your friends. That day will be here eventually.
The resort spent millions of dollars installing scanners and scales to automate work in an analog world. That money created digital footprints for each visitor’s identity, lift usage, and weight. Those data points tell you little about
the people spending thousands of dollars to enjoy your product. In an immersive digital world, you would have real-time data on everything visitors see, hear, touch, and smell.
I cannot predict when digital skiing will become mainstream, but it seems prudent for ski resort operators to hedge their bets. For example, now might be a good time to digitally map mountains and secure asset rights. Perhaps it
even makes sense to team up with a VR software company to build the experiences of the future. It will take time to work out the details, and a small amount of funding today could pay significant dividends in the future.
Imagination is constrained in the analog world but not in digital wonderlands. Today’s interfaces are clunky and keep us from venturing too far into the digital realm. That is rapidly changing. Machines have been living in our
world and becoming like us. We will increasingly live in the machines’ world. How will that change us?
SHAPING A BETTER FUTURE
Introduction to Part 3
Mental models are only valuable when applied. Watching the rise of automation and AI from the sidelines is stressful. We all want agency over our lives. We want to believe that our experiences are dictated by us, not to us.
Part 3 of this book is about applying the mental models from Parts 1 and 2. The first chapter describes how to reinforce your models. The remaining chapters dive into four domains where you can apply the models: business,
investing, policymaking, and education.
I encourage you to think broadly about the domains. Business executives provide education to employees. Policymakers decide how to invest tax revenue. Educators shape research behind public policy. We cannot afford to “stay
in our lanes.” Automation is about systems, and the intersections of domains matter.
My knowledge of these domains varies. I have a deep understanding of business, a robust understanding of investing, a cursory understanding of policymaking, and a vague understanding of education. I have strong points of view
on all, but my perspectives are incomplete. My goal is not to direct; it is to inspire.
You are better positioned than I am to solve problems in your field. You should weigh the benefits of my suggestions with the potential consequences. You should balance the importance of automation with other priorities vying
for attention. My world revolves around automation and AI. Yours may not.
I appreciate that you may not be a CEO, chancellor, billionaire, or powerbroker. You still have agency. I did not write this book to sell a million copies. I wrote it because organizing ideas allows them to spread. Agency emanates
from actions.
Be ambitious. There is little point in incremental change. Automation technologies advance on an exponential, not linear, curve. If you temper your aspirations to match your influence, you will always play from behind. Have
difficult conversations, learn new skills, and try things that are unlikely to work.
Finally, have fun. We can huddle in fear of automation or choose to shape a better future. I wake up each morning excited about what I can learn, what I can build, and who I can inspire. Will that save me from the robot overlords?
Probably not, but I assure you it makes life more joyful in the meantime.
IF YOU CAN'T BEAT THEM
If your plan is to outrun the machines, you may want to reconsider. The lesson of history is that humans who embrace technology succeed more often than those who do not. This chapter provides a roadmap for prospering alongside
mechanical minds.
The guide is organized into three sections. The first applies to everyone. It is the “how to be a functioning member of society” path. The second is for people working in jobs that have been or will be impacted by automation. You
may not need a deep understanding of machines, but you should at least know how to work with them. The final section is for people shaping the future of automation and AI.
You decide how deep you want to go. I started dabbling in automation as a part of my work in Finance, HR, and other support functions. A few years later, I reorganized my entire career. This topic is addictive. Err on the side of
modest learning goals for now. You can always go deeper later.
Finally, avoid linking your need for expertise to your role. I have met far too many leaders who view automation as a problem for technology experts. About a quarter of a CEO’s work can be automated with currently
demonstrated technologies.122 Automation does not discriminate. What made you successful in the past may not be what makes you successful in the future.

Creating Your Mental Models


Until we have brain-computer interfaces, I cannot directly transfer ideas from my brain to yours. For you to benefit from my knowledge and experience, I need to organize my thoughts and communicate them. Think of this book as an
analog device for transferring ideas from my brain to yours.
The challenge is that human brains are wired for skepticism. If you adopted every idea you heard, you would change your mind constantly and be easily manipulated. Research shows that trust is central to overcoming skepticism.
If you trust me, you are more likely to adopt my ideas.123 Unfortunately, from your point of view, I am a random guy who wrote an automation book. I would not trust me either. Luckily, you need not trust me to trust my mental
models.
Now for the hard part. We need to make my mental models your mental models. We must tailor the insights from Parts 1 and 2 of this book to your life so your brain finds them compelling.
Mechanical minds face similar challenges. Data scientists usually do not start from scratch when building an algorithm. They set the initial model parameters to the same values as an existing model. For example, if you want to
train an algorithm for identifying dogs, you can begin with an algorithm that identifies cats.
After setting the initial parameters, the model is fine-tuned. The cat algorithm is shown pictures of dogs until it can identify dogs reliably. This method is known as transfer learning and works surprisingly well for humans too.
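For the technically inclined, here is a minimal sketch of transfer learning in Python, assuming the PyTorch and torchvision libraries are installed (the pretrained weights download on first use). We borrow a network trained on one task, freeze what it already knows, and fine-tune a new final layer for another task.

```python
# A minimal sketch of transfer learning, assuming PyTorch and torchvision
# are installed. We start from an already-trained network and fine-tune
# only a new final layer, as described above.
import torch
import torch.nn as nn
from torchvision import models

# Initialize from an existing, already-trained model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze what the model already knows so it is not overwritten.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for the new task (say, dog vs. not-dog).
model.fc = nn.Linear(model.fc.in_features, 2)

# Fine-tune only the new layer; one random batch stands in for real photos.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
images = torch.randn(8, 3, 224, 224)   # stand-in for eight labeled photos
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
print(loss.item())  # would fall over many batches of real labeled photos
```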
Table 2 is a summary of my mental models. Go through them individually and decide which ones resonate and which do not. Having complete trust in five models is better than having a shaky belief in eight. My mental models
must be consistent with your experiences to be helpful to you.

Table 2
Mental Models

1. Machine Labor: Automation is the act of replacing human labor with machine labor for an ever-increasing number of tasks. This is not a “fourth industrial revolution.” This is the unyielding advancement of machine labor.
2. Digital Infrastructure: Not all technology is machine labor. Most investments over the past 50 years have gone into digital infrastructure. We are only beginning to build the machine labor.
3. Patterns Everywhere: Artificial intelligence and human intelligence are different in many ways, but both are systems for finding patterns in data. Machines need not achieve human-level intelligence to replicate the capabilities of humanity.
4. Exponential Progress: Artificial intelligence capabilities grow exponentially and are not always enhanced by human involvement. The present can tell us little about the future, and the rise of AI has significant implications for workplaces and our roles within them.
5. “Working” Capital: Artificial intelligence looks more like labor than capital. Machines are unlikely to push humans out of the workforce, but there is no guarantee that the jobs of tomorrow will be better than the jobs of today.
6. Keeping Up: Traditional companies are struggling to make the economics of automation work. Fears of mass unemployment due to profit-maximizing layoffs are probably overblown. The more significant risk is that traditional organizations employing millions of humans become increasingly irrelevant.
7. Starting Over: Replacing human labor with machine labor is complicated and expensive. It is often better to start from scratch. Smaller companies built on a foundation of machine labor may be less vulnerable to AI disruption than traditional enterprises.
8. Like Us: The line between humans and machines blurs over time. Machines are becoming more like us, and we are becoming more like them. Convergence adds fuel to the automation fire while offering a potential lifeline in the race against AI.

Note: You can find a summary of these models and a worksheet for reinforcing your mental models on my website: www.artificiallyhuman.com

After taking a skeptical look at my mental models, rewrite the “Description” in your own words. How would you articulate each model based on what you have read and believe about the world? If there are models you think are
wrong, omit them. If models are missing, add them.
There is no “right” answer. For instance, I assert in model three that a collection of narrow-intelligence machines is more likely to achieve the capabilities of humanity than a single machine demonstrating artificial general
intelligence (AGI). Others have staked their careers on the opposite belief, building machines that interact with the world like humans in pursuit of AGI. Only time will reveal which of us is correct. For now, it is reasonable and
valuable to have two competing models.
The resulting list will serve as a starting point for your mental models. If you doubt anything on your list, go back to the beginning. It is okay if you do not trust my models. Your experiences differ from mine. That said, you need
to trust your models for them to be helpful.
Next, it is time to fine-tune your models. This step involves hanging facts, stories, insights, and examples on the tree you created. Begin with your everyday life. Where do you see machine labor in your home? In what ways is that
new IT system at work more infrastructure than labor? How are you becoming more like a machine?
Automation is happening all around us. The question is whether we pay attention. Carve out 10-15 minutes each week to revisit your mental models. What happened last week that reinforced your models? Where did you find
evidence that your models might be wrong? After a few weeks, you will not need to revisit your mental models consciously. The models will live inside your head and help you make sense of the world.
Filtering experiences from your life through your mental models is one way to build trust in them. Another way is to subject the models to competing perspectives. As a starting point, here are a few videos you can find online that
tell the automation story differently:

Humans Need Not Apply (CGP Grey)
The Rise of the Machines (Kurzgesagt)
The Danger of AI is Weirder Than You Think (Janelle Shane)
What Will Future Jobs Look Like (Andrew McAfee)

Spend an hour or two searching for other videos and articles. After watching and reading each one, revisit your mental models. Which points align with your models? Which points conflict? Do you feel compelled to change anything?
It does not matter what content you consume. What matters is how you test and refine your models. You trust your mental model for the grocery store because the bread is usually where you expect to find it. If you went to the
store tomorrow and everything had been moved, you would feel unsettled. That might be similar to how you feel today about automation and AI. You will know your models are robust when conversations about automation feel like a
familiar trip to the grocery store.
Be patient and persistent. Adjust your models over time and seek opportunities to challenge them. I promise that the world of automation and AI makes sense when you have a reliable way of organizing information. It is
impossible to keep up with every advancement. Processing new information through your mental models filters out the noise.

Embracing the Machines


For many people, mental models are necessary but insufficient. We saw how humans teaming with machines were more likely to experience job growth and wage increases. Mental models can prevent you from falling behind, but
what if you want to get ahead?
Feeding your brain a steady diet of “training data” will fine-tune your models. That does not mean cramming as much information into your brain as possible.124 It means carving out 30-60 minutes weekly to focus on automation.
For me, that took the shape of two practices: structured education and hands-on learning.
Massive open online courses (MOOCs) make structured education available to those with an internet connection. These courses introduce the vocabulary, concepts, and practices you will likely encounter when dealing with
automation. Do not fixate on certificates. The point of taking these courses is to access free (or nearly free) content curated by experts.
I began with a machine learning course taught by Andrew Ng from Stanford University. More recently, I completed a program through eCornell. Check out machine learning and AI offerings from Coursera, Udemy, and edX.
Read the descriptions and pick courses that fit your schedule, aptitude, and budget. Completion rates are abysmal for these programs.125 If you are unsure which course to take, err on the side of the easier option.
The second practice is hands-on learning. You can take all the courses your schedule permits and still have no idea how to apply automation to your life. If you want to work with machines, you have to work with machines.
What tasks are involved in the work you do each day? A significant portion of work requires interpreting and generating language. Meeting minutes involve synthesizing spoken language. Emails involve drafting written language.
Language is how we transmit data between humans.
The prevalence of language tasks is one reason for the hype surrounding large language models (LLMs). These models are trained on massive amounts of text, including thousands of books and the contents of Wikipedia. The
models predict language so accurately that you can carry on a text-based conversation with an algorithm. The machine does not “know” what it is talking about, but that does not matter for most tasks.
At this time, the most well-known LLM application is ChatGPT from OpenAI. If you have not already, I encourage you to spend time using a large language model. Regardless of what you do for a living, there are practical
applications. Here are a few examples:

Synthesizing points from an article you do not have time to read
Explaining a technical concept in non-technical terms
Composing a balanced response to a colleague’s incendiary email
Drafting a cover letter based on your resume and a job posting
Writing a bedtime story you can read to your child at night

Do not think of LLMs as a curiosity to be explored. Think of them as digital workers you can “employ” to make your life easier. Initially, these models might feel like a waste of time. The outputs are rarely good enough to use without
edits. After mastering the art of prompt writing,126 you may be surprised by how helpful they can be in life.
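If you prefer to “employ” a digital worker through code rather than a chat window, here is a minimal sketch using OpenAI’s Python package as it existed in early 2023. The model name and interface may well have changed by the time you read this, and the API key is a placeholder you would replace with your own.

```python
# A minimal sketch of "employing" a large language model through code,
# using OpenAI's Python package as of early 2023. The model name and
# interface may have changed since; the API key is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Explain transfer learning in two non-technical sentences."},
    ],
)

# The digital worker's draft; expect to edit before using it.
print(response["choices"][0]["message"]["content"])
```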
After you get the hang of LLMs, look for other digital workers you can employ. If you create digital art for logos, websites, or other applications, try an image generator like DALL-E, Stable Diffusion, or Midjourney. If you
record voiceovers for marketing or training materials, try NaturalReader, Murf, or ElevenLabs.
I am writing these words in March of 2023. I have no idea if the applications I suggest will exist when you read this book. The specific platforms you use are less important than how you use them. To stay ahead, you need hands-
on experience. That means using whatever tools you can find and incorporating them into your life.
Structured education combined with hands-on learning is a proven recipe for developing new skills. Automation is no exception, but do not expect to be rewarded immediately. Think of this as your digital infrastructure. You are
investing today so you can take advantage of future advancements. You do not need to outrun AI; you need only to outrun other humans.

Widening the Aperture


The previous sections focused on actions you can take from the sideline. The advice assumes you cannot change the trajectory of automation and AI. You are swimming in a river and trying to keep your head above water.
For most of us, staying afloat is enough. We need more people weaving automation into society than shaping the path of automation and AI itself. We did not devote the entirety of human labor to building the electrical grid. Most
people focused on figuring out what to do with electrical power once it was widely available.
Perhaps you are in a position to shape the future of automation and AI more than most. In that case, I encourage you to invest as much in contextual understanding as you do in expertise. Software engineers understand technology.
Policymakers know what it takes to craft laws and regulations. Expertise is not the problem. The problem is applying that expertise in a complex society.
Automation sits at the intersection of technology and humanity. It is not enough to be an expert in one domain. You must also have an appreciation for how your actions impact other areas. There are plenty of examples where
experts lacking contextual understanding led to unintended consequences. Social media companies built attention-seeking algorithms without fully appreciating how the algorithms alter human behavior. Investors poured money into
AI startups without realizing the bottleneck to growth is often talent, not capital.
Contextual understanding requires a steady diet of information across diverse fields like technology, psychology, economics, politics, philosophy, and business. Exhaustive expertise is an unattainable goal. Instead, expose yourself
to others who think about automation from different perspectives.
Others have already figured out much of what you need to learn. Here are five books that I found helpful in shaping my contextual understanding:

A World Without Work - Technology, Automation, and How We Should Respond, Daniel Susskind
The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, Andrew McAfee, Erik Brynjolfsson
What To Expect When You're Expecting Robots: The Future of Human-Robot Collaboration, Laura Major, Julie Shah
On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines, Jeff Hawkins, Sandra Blakeslee
You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place, Janelle Shane

These books cover automation and AI from a variety of perspectives. If you learn top-down, conceptual to practical, read the books in the order provided. If you learn bottom-up, try reading them in reverse order.
Books can expose you to new perspectives, but you also need people capable of challenging your ideas. My network expanded dramatically after venturing into automation and AI. If you find yourself in a situation like mine where
most of your friends work for the same organization and share similar views, it may be time to make new friends. Echo chambers reinforce existing beliefs at the expense of context.127
Lastly, I would be remiss if I did not point you toward people who shaped my perspectives. I often skip over footnotes when reading books. I rely on the author to synthesize what I need to know. If you are shaping the future of
automation, please do not rely solely on my interpretations. Read the footnotes and dive into those you find interesting. I provide a list of sources and links on my website to help you find what you need (www.artificiallyhuman.com).

Great Expectations
If you were hoping for a step-by-step guide, I have some bad news.128 I have mentored hundreds of people and attempted every method imaginable to teach others about automation. This chapter describes the practices that worked
best. Like so many things in life, there is no shortcut.
Diving into the details might seem daunting. Life moves on, and you already have enough on your plate. If we are being honest, both of us hoped that reading this book would be enough of a commitment. Fortunately, the fact you
are reading these words means you have already ventured further than most.
In my experience, engagement is the only factor that consistently predicts whether people climb the learning curve. Time and attention are in short supply, and I cannot tell you how to spend yours. All I ask is that you make the
decision consciously and calibrate your expectations accordingly.
Story: Stumbling Into Automation
I am all-in on automation. I reorganized my career, made new friends, and spent months writing a book. Automation is no longer a hobby. It is an obsession. As you decide how much time and attention you want to invest, I thought it
might be helpful to hear my story.
I have always been interested in technology. It could be all those episodes of The Jetsons that I watched as a kid. I would sketch flying cars and maglev trains, believing my “research” would lead to scientific breakthroughs. My
parents are lucky that I did not learn about nuclear fusion until much later.
My confidence quickly eroded when I reached college. One of my first courses required me to learn coding languages like C and Fortran 77. My computer experience in high school had been limited to surfing the Internet on a
dial-up connection. When the professor said, “Open vi” in our first class, I knew I was in trouble.129
I barely made it through that course. Instead of being inspired, I wanted to get as far away from computers as possible. For me, that meant mechanical engineering. I found comfort in physical systems and pages of hand-written
calculations.
Upon graduation, I went to work for a global conglomerate and then a private equity firm. I dabbled in technology, building databases and macros to make my life easier. I want to say I was motivated by intellectual curiosity. The
honest answer is that laziness was typically the mother of my inventions.
After a decade in the workforce, I still had not decided what I wanted to do for a living. Work was a transaction. I sold time and skills to companies in exchange for money. Rather than make a difficult career choice, I kicked the
can down the road and became a consultant.
I will spare you a full recounting of my 13-year consulting career. It is unimportant other than to say that it provided unparalleled exposure to various industries, geographies, and topics. When I talk about the value of context, I am
speaking from experience.
Now for the pivotal moment. I was helping CFOs and other corporate executives improve their operations. At the time, most clients were outsourcing or offshoring transactional processes. The idea was to find cheap humans to do
the simple stuff and only use expensive humans for the complex stuff.
Around the same time, I began seeing examples of machines doing work eerily similar to the tasks my clients were outsourcing and offshoring. Activities like financial account reconciliation and resume screening were prime
candidates for automation. Why was I helping leaders access cheap human labor when machine labor was the future?
I watched videos online and tested every automation platform offering me a free trial. I presented at internal meetings and floated ideas with clients to see what resonated and what did not. Then I hit a wall.
My initial meetings with CIOs were painful. I must have sounded like a caveman who had discovered computers for the first time. It was like being back in that programming class where everybody seemed to know more than I
did. The first time a client asked why they should use a bot instead of an API, I had no answer. Not because it was a tricky question. I froze because I had never heard of an API.130
Fortunately, I am blessed with overconfidence. If IT leaders knew this topic better than I did, why were business leaders constantly complaining about technology initiatives? Why did money flow into technology, but the promised
productivity never hit the bottom line? Why did startups always have slick videos but rarely an example of where they had solved an actual problem?
I kept having conversations where I felt like the dumbest person in the room. I dusted off my programming knowledge from college and taught myself Python. I learned to use APIs. I read highly recommended books and tested
my ideas with anybody who would listen.
Within a few years, I was considered an automation expert at my firm. My conversations with CIOs were no longer a cause for panic. I rarely encountered questions I could not answer. I was the one asking the difficult questions.
My expertise in automation has little to do with where I started. It has more to do with where I focused my time and attention. Does it make sense for you to do the same? Maybe not. That said, I encourage you to consider how
automation will impact your life and the institutions you belong to. This is a new field. There is plenty of time to get ahead.
REBUILDING INSTITUTIONS | EXECUTIVES
Well, this is awkward. A few chapters ago, I wrote about how automation is more likely to happen through the disruption of incumbents than within existing firms. If you are an executive today, you probably work for one of those
incumbents. Is it time to jump ship?
My answer is a definitive “maybe.” Disruption occurs slowly and then quickly. Do not mistake current stability for future security. If I could forecast how and when companies would be disrupted, I would play the stock market
instead of writing a book. Timing matters.
To be clear, I am not saying incumbents are doomed to fail. Most have competitive advantages to fend off ankle-biting disruptors like Silicon Valley startups. My pessimism is grounded in my experience working with hundreds of
businesses. Institutional momentum is a powerful force, and it is challenging to steer a complex organization toward a new destination.
The actions required in the coming years run counter to the instincts and incentives guiding most executives. What worked for human-powered enterprises will not necessarily work for machine-powered ones. If you want to be a
force for change within your institution, you must be willing to take risks. If that is not your style, monitor the situation and take note of the emergency exit closest to you.
Avoid the temptation to treat automation as an initiative. Machine labor is here to stay. Pretending disruption is temporary may comfort employees but inevitably leads to angst, fatigue, and frustration. Automation is a new way of
working.
I have yet to see a business fail because it could not figure out the latest technology or partnered with the wrong vendor. Failures are more systemic. Introducing machine labor into hostile environments sets everybody up for
failure. This chapter describes steps you can take to increase your chances of success should you decide to weather the storm.

Complexity Kills
The leading cause of death for automation efforts is complexity. Machines possess narrow rather than general intelligence. They do individual tasks exceptionally well but struggle with context-switching. Firms abound with products,
services, channels, processes, policies, and systems. Too many executives tolerate complexity because it allows them to avoid unpopular decisions.
Human workers lose up to 40 percent of productivity to context switching.131 We think we can multitask, but research says otherwise. The context-switching loss for machines is closer to 100 percent. Complexity kills because it
makes machines uncompetitive relative to humans. Give a human two tasks, and you will take a productivity hit. Give a machine two tasks, and you may need a second machine with different skills.
There is a reason startups have simple business models. Founders are encouraged to develop minimum viable products (MVPs) they can bring to market quickly. This results in businesses that are less capable but simple.
Beginning with a clean sheet allows you to add complexity only where necessary and valuable.
What if you work for an organization that is already complex? My first suggestion is to force difficult decisions. For example, most companies have digital channels like web portals and mobile apps through which external
stakeholders engage. The problem is that most also have analog channels like call centers and fax numbers.132 A common refrain is that customers will be angry if analog channels disappear. In reality, most organizations have not
even tried to turn off analog channels. It is easier to incur the cost of complexity than make difficult decisions.
Not making a decision is still a decision. If you were to start a business tomorrow, would you have a call center and fax number? Saying yes to everything is easy but results in unnecessary complexity. The road to irrelevance is
paved with easy decisions.
My second suggestion is to understand where you are paid for complexity and where you are not. For example, consumer packaged goods (CPG) companies133 are often paid for product complexity. There is value in offering
dozens of yogurt varieties perfectly tailored to individual tastes. Reducing product complexity in CPG may increase automation potential, but the efficiency gains are unlikely to offset revenue declines.
You do not need an enterprise initiative to reduce complexity. Work is constantly evolving, and ensuring new ways of working are simpler than old ways of working is enough. Practice making decisions quickly and pay attention
to the decisions not being made. Cultivating a simple workplace makes your organization more friendly to machines.

Machines First, Humans Where Necessary


Automating every task is a losing proposition. The cost of automation skyrockets as you encounter exceptions to each rule. That said, machines should generally attempt the work first and only engage humans when necessary.
I am describing a concept known as “human-in-the-loop” design. You start by creating fully automated paths for the most common situations. When machines encounter tasks beyond their capabilities, the work is handed to
humans. In the best human-in-the-loop systems, machines learn each time humans process an exception.
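For readers who think in code, here is a minimal sketch of the pattern in Python. Everything in it is an illustrative stand-in rather than a reference to any particular platform: the toy classifier, the 0.95 confidence cutoff, and the review callback are assumptions chosen only to make the loop concrete.

from typing import Callable

CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff; tune per process

def classify(document: str) -> tuple[str, float]:
    """Stand-in for a trained model: confident on invoices, unsure otherwise."""
    if "invoice" in document.lower():
        return ("invoice", 0.99)
    return ("unknown", 0.40)

def process(document: str,
            human_review: Callable[[str], str],
            training_examples: list) -> str:
    label, confidence = classify(document)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # fully automated path for the common cases
    # Exception path: the machine hands work to a human only when it
    # is out of its depth...
    answer = human_review(document)
    # ...and the best systems close the loop, turning each human
    # decision into a labeled example for the next model update.
    training_examples.append((document, answer))
    return answer

examples: list = []
print(process("Invoice #4821 from Acme", lambda d: "unused", examples))            # automated
print(process("Handwritten customer note", lambda d: "correspondence", examples))  # human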
This architecture stands in contrast to how most work is organized today. If you look at your processes, you will likely see humans handing work to machines rather than vice versa. Humans cannot generate work quickly enough
to keep machines busy. If machines are constantly waiting for us, we become the bottleneck.
If you think you have already automated your processes, look more closely. In many cases, machines push exceptions to the end of the process. For example, healthcare insurance companies often have claim auto-adjudication rates greater than 90 percent. That is, until you look at the call center, where many inquiries relate to incorrectly or insufficiently processed claims. The claims process is not 90 percent automated. Human labor is simply hiding in another part of the organization.
Figure 20 illustrates the importance of human-in-the-loop design. Successful and unsuccessful organizations use similar methods to automate processes. For example, over 70 percent of firms use systematic redesign methods (e.g.,
Lean) to improve processes before applying automation technologies. However, firms using human-in-the-loop methods report success nearly three times as often as those that do not.134

Figure 20
Process Automation Methods
Percent of Respondents

Source: McKinsey
To see an example of human-in-the-loop design, revisit the “Department of Character Recognition” story in Part 1. Intelligent document processing platforms can “read” most documents without human help, but the platforms also
have exception-handling loops for when the machines get into trouble.
Rearchitecting processes around machines can be expensive. Do not try to automate every process. Focus on the 10-15 processes that matter most. Humans still have important roles to play, but no human can work at the pace of
machines. Moving humans off the critical path ensures that machines are not sitting around waiting for us.

A More General Intelligence


Machines are reliable until they are not. A machine can do a task flawlessly for years before suddenly racking up thousands of errors. Humans are unpredictable but fail in predictable ways. Machines are predictable but fail in
unpredictable ways.
The challenge with managing an organization that runs on machine labor is that you never know where the next problem will arise. You know some of your employees will call out sick on any given day, but you are unlikely to know when your AI systems will crash and frustrate your customers.
Additionally, technological advancements render specific human capabilities less valuable over time. Firms hire for roles that exist today, which are narrower than those needed in the future. Leaders love to talk about re-skilling,
but most firms do not have a stellar track record.135
The solution to both problems is the same. Firms need more employees exercising general intelligence. Incumbents have too many humans with rigid and specialized skills. They have too many humans supplying the same narrow
intelligence that machines offer.
In his book, “Drive,” Daniel Pink advocates for a new way of managing people.136 He asserts that humans are motivated by autonomy, mastery, and purpose. We want to be told what to do, not how to do it. We want to learn new
skills. We want to know how our work contributes to a larger objective. In short, we want to be treated like we have general intelligence.
Organizations are complex. Seemingly minor changes to procedures, organizational structures, incentives, and other inputs can have unintended consequences. It is better to test new management systems well before you need
them. Here are a few examples of practices that promote automation:

Multidisciplinary teams organized around products, capabilities, or projects
Flexible staffing models that allow people to move between teams
Culture of continuous improvement backed by performance dialogues
Objectives and measurements grounded in long-term sources of value

If the above list sounds familiar, that is no coincidence. These examples resemble the 12 principles outlined in the Agile Manifesto for software development.137 Practices that work best for software development today should
become broadly applicable as more people work alongside machines.
I do not understand why some software development purists criticize the spread of Agile practices. I appreciate the risks of applying new management systems in areas for which the systems were not designed, but the status quo is worse. At least Agile is grounded in the belief that humans possess general intelligence.

Efficiency through Growth


Most companies apply automation as an efficiency lever. They start with today’s work and look for ways to do it less expensively. Programs are sometimes framed as innovation, growth, or capability-building initiatives to assuage
employee concerns. However, when you pull back the curtain, most are cost-cutting initiatives.
Incurring the cost of replacing humans with machines to do precisely what humans do today does not make sense. You may reduce expenses by a few percent, but that only delays the inevitable. Look at a list of once-prominent
companies that were disrupted by technology.138 Do you believe any of them would still be around if they had squeezed out another ten percentage points of productivity?
Automation disrupts unit economics. A human-powered streaming service would be possible if you had only ten customers. A single employee could analyze watch histories and make thoughtful recommendations. Even if you
paid that employee $100,000 per year, the annual cost of the recommendation engine would be minimal.
The problem is that customers will not pay $1,000 per month for a streaming service. Unit costs, not aggregate costs, matter. Machines can deliver the same service for less than $10 per user per month, but a company must invest
millions of dollars to generate that unit cost advantage.
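The unit math is worth spelling out, using this example’s numbers on the human side and purely illustrative assumptions on the machine side (say, a $12 million annual platform cost spread across one million subscribers):

\[
\text{Human: } \frac{\$100{,}000 \text{ per year}}{10 \text{ customers} \times 12 \text{ months}} \approx \$833 \text{ per customer per month}
\]
\[
\text{Machine: } \frac{\$12{,}000{,}000 \text{ per year}}{1{,}000{,}000 \text{ customers} \times 12 \text{ months}} = \$1 \text{ per customer per month}
\]

Under those assumptions, the machine costs 120 times more in aggregate and roughly 800 times less per customer. That is what it means for automation to disrupt unit economics.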
Many organizations know how to create unit cost advantages through automation. Few ask, “Now that we can do this work for a fraction of the cost, what new products, services, and capabilities can we offer?”139 Usually, the
automated process replaces a manual process as the company continues its march toward irrelevance.
Machines are expensive to build but cheap to run. Efficiency comes from finding more work for the machines to do. If you cannot quickly translate unit cost advantages into growth opportunities, returning capital to investors is
better than pouring money into automation.

What About…
You might be wondering why I ignored so many other topics. What about data? What about vendor selection? What about talent? All are important and contribute to long-term success, but you must first create a welcoming
environment for machines.
Take talent, for example. Demand is high for data scientists, full-stack software developers, and other technical specialists.140 Why would these people want to work for your firm? Short term, you could buy the talent you need. Long term, people will only stick around if they are happy.
How do you make technical talent happy? You reduce complexity and make processes easier to automate. You place technical work at the center of operations rather than the periphery. You adopt Agile practices to foster
autonomy, mastery, and purpose. You pursue growth rather than trying to cut your way to success.
There is a global shortage of technical talent. You cannot solve that problem. You also cannot predict the next technological breakthrough. What you can control is what happens within your organization.
Most workplaces are hostile toward machines and the people who build them. You have little hope of success if you cannot solve that problem. Focus on making your organization an attractive place for both humans and machines.
You will be surprised by how much progress you can make.
Recap - Rebuilding Institutions | Executives

Complexity Kills: Machines struggle with context switching. Make difficult decisions now so machines can make easy ones later.

Machines First, Humans Where Necessary: Rearchitect processes around machine labor, keeping “humans in the loop” where necessary and valuable.

A More General Intelligence: Stop managing people like machines. Adopt management systems that foster autonomy, mastery, and purpose.

Efficiency through Growth: Quickly translate unit cost advantages into growth opportunities to increase efficiency.
Story: The Office
I do not expect you to show up to work tomorrow and march into the CEO’s office with a list of demands for making your workplace automation-friendly. That is why I wanted to share a story about how even adopting these practices
on a small scale can make a difference.
My first job after graduating from college was with GE Aircraft Engines. I spent my days and nights pricing overhaul services and writing contracts for multimillion-dollar deals. It was more responsibility than I deserved, but the
same could be said for any job at that point in my career.
After a few months of copying and pasting from one contract to the next, I decided it was time to automate the annoying parts of my job. I was barely sleeping and could not keep up with the flood of contract requests. As I said
previously, laziness was usually the mother of my inventions.
Inspired by my recent Green Belt141 training, I set out to simplify the process. Every contract was different, but they varied in predictable ways. For instance, some airlines required compensation for missed deadlines while others
did not. I reviewed dozens of contracts and drafted “standard exceptions” that distilled hundreds of permutations into a few options.
Next, I built a machine to draft contracts for me. I already shared my struggles with coding. Writing a software program was off the table. I was still traumatized by that first-year engineering class. Instead, I employed the only
digital workers available: Microsoft Access, Excel, and Word.
My solution was the stuff of a software engineer’s nightmares, but it worked. I could fill out a few fields, provide an input template, link to a pricing model, and out would pop a contract. I created a process that ran on machine
labor, and I was the “human in the loop.”
It took me an hour or more to draft a new contract manually. My machines could do it in less than five minutes. I had a productivity advantage and ran with it. I asked for more responsibility, happily taking credit for the work I
was delegating to my machines.
At the time, I thought there was nothing special about GE or my team. I was new to the workforce and assumed every leader was like my manager. Only later did I realize that he was the exception rather than the rule.
For example, in my first week at GE, I lost $8 million in revenue for the company. I incorrectly priced a deal, leading us to lose a contract we usually won. My manager pulled me into his office and asked a single question, “What
did you learn?” After describing my expensive lesson in negotiation strategy, he smiled and said, “Okay, try not to do that again.”
There was no yelling. Nobody was assigned to look over my shoulder. My manager created the conditions for me to succeed and stood by me when I failed. Automation is risky. Nobody is going to build machines if you manage
them like machines. That was true twenty years ago and remains true today.
ALLOCATING CAPITAL | INVESTORS
Every investor has a story. The early bet on a small company that went on to dominate its market. The creative trade that nobody else saw. The algorithm that always makes the right call. It is easy to get caught up in the hype. In
practice, most investors are better storytellers than investors.
I spent years working with asset managers, venture capitalists, and private equity firms. I can count on one hand the number of investors who had unique insights into sources of value. In most cases, ongoing success is better
explained by survivorship bias, market manipulation, and insider trading than skill.
If you were hoping this chapter would tell you which stocks to buy, you should find a different book. If I had perfect information about the future, I would not share it. I would use it to make a fortune and outline these mental
models in my memoir.
You should be skeptical of anybody who claims to make money by predicting the future. That is especially true as we travel further up the exponential curve. Business models that make sense one day will not make sense the next.
Operations powered by human labor will be profitable until they are not. The future becomes less, not more, certain each year.
With that caveat, let me share what I think automation means for investors and financial markets. Despite my critique, I am bullish on investing. I earn money from selling my labor and invest that money into machine labor. This
chapter describes why and how.

Disclaimer: I am not a financial advisor. These are opinions and are not intended to be investment advice.

Rising Tides
There is a saying in economics that “A rising tide lifts all boats.” A thriving economy benefits everybody. The famous investor Warren Buffett offers a corollary: “Only when the tide goes out do you learn who has been swimming naked.”142 The fundamental question facing investors is whether the tide is rising or falling.
Macroeconomic growth is a function of labor, capital, and total factor productivity.143 As we saw in previous chapters, the number of humans on Earth will peak later this century. If you think the contribution of human labor to
economic growth will remain constant, now might be a good time to exit the markets and invest in yourself.
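Economists typically write this relationship as a Cobb-Douglas production function, where output Y depends on total factor productivity A, capital K, and labor L:

\[
Y = A \, K^{\alpha} \, L^{1-\alpha}
\]

Here \(\alpha\) is capital’s share of output. If L plateaus later this century, sustained growth has to come from A and K, which is precisely where digital infrastructure and machine labor enter the equation.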
If you believe we are entering an era where machine labor will increase and digital infrastructure will pay off, exposure to capital markets is essential. Returns to labor have been declining relative to capital for years in the United
States. Figure 21 illustrates how the labor share of total compensation has fallen steadily since 1950. This is not entirely due to investments in digital infrastructure and automation, but both played a role.144

Figure 21
Labor Share of Compensation
Rolling 10-Year Average

Source: University of Groningen; University of California, Davis145

If you look at Figure 21 and think the outlook is grim for human workers, it is essential to remember that those are percentages and not absolute values. If we are headed toward a world with 100 billion digital workers and 4 billion
human workers, one would expect most compensation to go to the owners of the machines. However, that does not mean the absolute value of compensation going to humans will be smaller than it is today.
A rising tide has the potential to lift all boats. If increases in machine labor can offset declines in human labor, the economy should expand. We face difficult choices when allocating the spoils, but that is a better problem than the
alternative.
In the past, investing allowed you to own a share of the output produced by other humans. It was a way of diversifying risk associated with your ability to sell labor to the market. In the future, you also need to consider the value of
human labor relative to machine labor. You may not outrun the machines, but investing lets you profit from their labor.

Modern Fundamentals
Investor preferences shift over time. In some periods, growth stocks are all the rage. Investors pour money into firms disrupting incumbents. In other periods, investors retreat to traditional value stocks that generate earnings today. I
am optimistic about the future, but that does not mean I am a growth investor.
Over the long term, value has outperformed growth. Since 1927, value strategies have yielded returns four percentage points higher than growth strategies (Figure 22). Stories can drive up prices in the short term, but investors eventually expect earnings. When growth stocks fail to convert promises into profit, investors flee.
Figure 22
Value vs. Growth Investing
Percent

Source: Fama/French Research Portfolios146

The dot-com bubble of the late 1990s is often cited as exhibit A in the case against growth investing. The technology-heavy Nasdaq Composite Index peaked at over 5,000 before falling nearly 80 percent.147 Investors betting on the
internet-powered economy learned a difficult lesson.
It is easy to forget that dot-com mania also fueled a handful of incredible businesses. Amazon, NVIDIA, and TSMC all went public in the 1990s. Today, those firms are among the most valuable in the world. Investing in high-
growth technology companies is not bad. Investing in companies that have little hope of converting stories into profits is.
“Security Analysis,” written by Benjamin Graham and David Dodd, is the definitive guide to value investing. The text inspired a generation of investors, including Warren Buffett. The premise of the book is simple. Each firm has
an intrinsic value based on its earnings and growth rate. If you consistently purchase stocks below intrinsic value, you will make money.
The premise is simple, but the methods are complicated. If you have read “Security Analysis,” you know Graham and Dodd did not simply take published earnings per share and plug the numbers into a formula. They made a
series of calculations, accounting for deferred taxes, one-time items, and other factors. Value investing is hard work.
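To get a flavor of the approach, consider the back-of-the-envelope heuristic Graham later published in “The Intelligent Investor” (a teaching device, not a buy signal):

\[
V = \text{EPS} \times (8.5 + 2g)
\]

where EPS is normalized earnings per share, 8.5 is the earnings multiple Graham assigned to a no-growth firm, and g is the expected annual earnings growth rate over the next seven to ten years. The formula itself is trivial; the hard work is in normalizing EPS and estimating g honestly.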
“Security Analysis” was written in 1934. Graham and Dodd were silent on parsing technology expenditures and dealing with rapidly shifting industries.148 The Sixth Edition includes a warning from investor Seth Klarman, “In an
era of technological change, investors must be ever vigilant, even with regard to companies that are not involved in technology but are simply affected by it. In short, today’s good businesses may not be tomorrow’s.”
The problem is that value investors lack methods like those described in “Security Analysis” for assessing the intrinsic value of technology. For example, it would seem prudent to separate investments in digital infrastructure from
those in machine labor. Unfortunately, investors treat most technology spending as capital rather than a combination of capital and labor.
We have more than enough growth investment strategies for automation and AI. What we lack is a modern approach to value investing. Business fundamentals have not changed materially since “Security Analysis” was published
in 1934. What has changed is the role technology plays in fueling growth and earnings.
What does this mean for investors? In short, fundamentals matter. In the coming years, you will hear plenty of stories from companies building and deploying automation technologies. Most of those stories will be wishful
thinking. Resist the temptation to go “all-in” on the latest hype unless you are sure stories will translate to profit. Otherwise, in the words of Buffett, you risk swimming naked as the tide goes out.

Exponential Uncertainty
Exponential progress implies that the future becomes harder to predict with each passing day. We are still on the flat-ish part of the exponential curve. It is possible to make reasonable predictions. That may not always be the case.
What do you do as an investor in a world where uncertainty is increasing? How do you insure against seemingly stable businesses being disrupted overnight? How do you decide which technology company’s large language model
will win? How do you process the seemingly endless torrent of information coming your way?
The simple answer is that you do not. Congratulations if you have a technology-enabled investment strategy that reliably beats the market. For the rest of us, diversification is probably the answer.
I have spent years immersed in the world of automation and AI. I can make better predictions than most about which technologies will succeed and which ones will fail. I can usually see the freight train of disruption coming
before it arrives. I have access to new technologies long before they enter the public discourse.
In my consulting job, I was prohibited from trading individual stocks. Inside information gave me an advantage. Even if my trades were legal, the perception of impropriety was enough to discourage individual stock trading.
I am no longer subject to those restrictions but still refrain from trading individual securities. It is not that I lack investment theses. I have strong perspectives on how technology will impact markets. What causes me to pause is
time.
To invest successfully, you need to forecast outcomes and timing. I am reasonably confident in the outcomes. What I cannot predict is timing. It matters greatly if disruptive technologies are adopted in 5 or 20 years. You can stare
at adoption curves all you want. Trends from the past will be of little help in predicting the future.
I do what most investors should probably do. I reduce exposure to idiosyncratic risks through diversification149 and pay as little in fees as possible. I primarily invest in passively managed funds tracking indexes like the S&P 500.
I overweight technology stocks but not as much as you might expect. I own a little bit of everything.
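The math behind that choice is standard. For an equally weighted portfolio of n assets with average variance \(\bar{\sigma}^2\) and average pairwise covariance \(\overline{\text{cov}}\), the portfolio variance is:

\[
\sigma_p^2 = \frac{1}{n}\,\bar{\sigma}^2 + \left(1 - \frac{1}{n}\right)\overline{\text{cov}}
\]

As n grows, the first term, which carries the company-specific risk, shrinks toward zero, leaving only market-wide risk. You keep your stake in the machines without betting the portfolio on any single machine.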
Try to worry less about predicting the future and more about whether you are poised to profit from increased automation. Diversification allows you to own a share of the machine labor powering society without betting on specific
machines. You will hear plenty of stories from people who make accurate predictions. You will hear fewer from those who do not.

Smaller Fish
New entrants are disrupting incumbents, and the percentage of companies employing few or no employees is increasing. Companies no longer require thousands of humans to provide goods and services to millions of customers.
For investors, this has important implications. Most investment products today are geared toward large companies. Almost anybody can buy a stake in Amazon, JPMorgan Chase, Procter & Gamble, and Pfizer. It is much harder to
buy a stake in small companies. Even if you can buy their financial securities, it often comes at the expense of higher fees and reduced liquidity.
Given the existing investment products, I cannot recommend shifting capital toward smaller companies. That said, new funding models are emerging. For example, AngelList allows investors to provide capital directly to early-
stage startups. These models are still sub-scale, but there are signs of innovation.
If the pace of disruption accelerates, it makes sense to shift more capital toward small firms built on a foundation of machine labor. For example, you might consider moving money invested in an S&P 500 fund to a Russell
2000150 fund. Small companies can take risks incumbents cannot, and it may be cheaper for large players to acquire new capabilities than build them. Small companies have an advantage when disruption occurs quickly.
That said, be careful betting against inertia. The longer incumbents have to adapt, the more likely existing capabilities and scale become assets rather than liabilities. We have already experienced two AI winters. There is no
guarantee we will not experience a third.
Recap - Allocating Capital | Investors

Rising Tides: If you believe increases in machine labor and returns on digital infrastructure will offset the slowing growth of human labor, you want a stake in the machines.

Modern Fundamentals: Value investing principles are timeless but must be updated to reflect the increasingly central role of technology.

Exponential Uncertainty: Technological disruption increases company-specific risks, making diversification more attractive.

Smaller Fish: Small companies have an advantage when disruption accelerates. When it slows, larger firms have time to catch up.
Story: Lonely Bots
The rise and stagnation of Robotic Process Automation (RPA) should serve as a warning to investors.151 Companies like UiPath, Automation Anywhere, and Blue Prism were poised to win the automation race. In the late 2010s,
most organizations used “RPA” and “automation” interchangeably.
A few years later, one company’s stock had fallen more than 80 percent. Another was scrambling for liquidity. The last was acquired and no longer operated as an independent company. RPA was still viable, but investors had
soured on its growth prospects.
To understand what happened, we need to look at the fundamentals. The promise of RPA was empowering employees without technical experience to automate work. An RPA bot can open an email, extract the attachment, scan
the contents, and enter the data into a system. Bots can be built by employees with little help from IT. That was the pitch, anyway.
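For a sense of what sits behind the pitch, here is roughly what that email-to-system workflow looks like when written as conventional Python. The mailbox settings and API endpoint are hypothetical, and RPA tools let users assemble the equivalent steps visually instead of writing this:

import csv, io, imaplib, email, json
import urllib.request

# Hypothetical settings: a mailbox to poll and an internal API to update.
IMAP_HOST, USER, PASSWORD = "mail.example.com", "bot@example.com", "secret"
API_URL = "https://erp.example.com/api/orders"

def run_once():
    # 1. Open the mailbox and find unread messages.
    mbox = imaplib.IMAP4_SSL(IMAP_HOST)
    mbox.login(USER, PASSWORD)
    mbox.select("INBOX")
    _, ids = mbox.search(None, "UNSEEN")
    for msg_id in ids[0].split():
        _, data = mbox.fetch(msg_id, "(RFC822)")
        msg = email.message_from_bytes(data[0][1])
        # 2. Extract the CSV attachment.
        for part in msg.walk():
            name = part.get_filename()
            if name and name.endswith(".csv"):
                text = part.get_payload(decode=True).decode()
                # 3. Scan the contents and enter each row into the system.
                for row in csv.DictReader(io.StringIO(text)):
                    req = urllib.request.Request(
                        API_URL, data=json.dumps(row).encode(),
                        headers={"Content-Type": "application/json"})
                    urllib.request.urlopen(req)
    mbox.logout()

Nothing here is conceptually hard, but notice the hidden dependencies: credentials, an API, and somewhere to test safely. That is exactly where “little help from IT” breaks down.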
Executives raced to buy licenses. RPA offered a path to automation without dependence on IT. UiPath alone claims to have more than 1.5 million users.152 The problem was not selling licenses. The problem was getting people to
use them.
One large European bank purchased over 1,000 licenses from an RPA provider in 2016. A few years later, hardly any of the licenses were in use. Selling the first 1,000 licenses was the easy part. Selling the next 10,000 proved
more difficult.
The RPA investment thesis was predicated on the software being easy to learn. Metrics like annual recurring revenue (ARR) tell you little about whether that thesis is accurate. Most RPA license sales were to new customers,
generating growth but not validating the investment thesis. The growth story was further clouded by the fact that many of the largest customers were the same consultants and bankers who stood to profit from RPA’s success.153
In practice, RPA platforms are harder to learn and deploy than advertised. It takes weeks for the average employee to learn the basics. People with technical expertise may learn faster, but those workers have access to more
powerful tools. Additionally, RPA works best when the IT team is heavily engaged. Employees need access to test environments and application programming interfaces (APIs).
The metrics that matter for RPA are the time it takes the average employee to learn the software and the IT resources required per bot deployed. Those metrics are still too high.154 RPA is not yet solving the problem at the heart
of its value proposition.
I am bullish on the market for automation technologies, including low- and no-code tools like RPA. The “bot for every person” model could be a way for incumbents to amplify the potential of human labor and compete with
disruptors. That said, fundamentals matter. Until investors look at the right metrics, automation investing will be driven more by stories than value.
SETTING THE RULES | POLICYMAKERS
I do not envy policymakers. Automation and AI are set to disrupt markets, education, culture, security, and many other aspects of society. Fully embracing machines while keeping the electorate happy will be a daunting task.
All of this takes place against the backdrop of climate change, demographic shifts, and other multigenerational challenges. Expecting governments to shield people from the fallout of automation is unrealistic. I do not oppose
safety net policies like universal basic income (UBI), but that alone will not solve the problem.155 Even if you ignore the potential unintended consequences, economic security is only one piece of the puzzle.
What follows are suggestions for how policymakers might shape the path of automation without slamming the brakes on technological advancement. I doubt democratic governments will pass sweeping legislation before it is too
late. However, I am optimistic that there are solutions to automation and AI challenges.
We do not have time for nuanced arguments. Endless debate perpetuates the status quo, which is worse than most alternatives. I understand concerns about upending the fragile foundations of society, but we built this society on
human labor. If we fail to shift the burden to machines, we risk perpetuating a system that injures many for the benefit of a few.
My sentiment is best captured in a quote inscribed on the wall of the Southeast Portico of the Jefferson Memorial in Washington, D.C.:

“I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new
discoveries are made, new truths discovered and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the
coat which fitted him when a boy as a civilized society to remain ever under the regimen of their barbarous ancestors.”156
- Excerpted from Jefferson’s letter to Samuel Kercheval, July 12, 1816

More than two hundred years later, Jefferson’s quote is still relevant. I wonder if he would have added “and mechanical minds” to the first sentence if he were living through the rise of AI.

What’s Old is New


The initial question policymakers should ask is whether we need new policies. The pace of technological disruption is likely to outpace the legislative process. We need to make the policies we already have work in new situations so
legislation can focus on closing gaps.
For example, the proliferation of image-generating algorithms has called into question whether we need new laws governing AI-generated art. These algorithms are trained on millions of images primarily created by humans. In
most cases, the source content is used without permission from the artists.
Copyright law in the U.S. permits the consumption of art. It also provides a fair use carve-out for specific purposes, including teaching.157 Courts might disagree, but training an algorithm should be legal under existing law.
Algorithms learn from existing art using methods similar to humans visiting museums or attending art classes.
If we agree that training an algorithm is legal, the next question is whether the owners and users of the algorithm can profit from its work. The algorithm does not generate art on its own. A human must invoke the algorithm.
If you sell an image generated with the prompt, “a painting of a woman sitting in an empty room in the style of Njideka Akunyili Crosby,” you have likely violated copyright law. Your prompt added nothing of value. You simply
instructed the algorithm to generate an image by copying an artist’s style.
Human artists frequently take inspiration from other human artists. It is difficult to know where inspiration ends and infringement begins. Fortunately, algorithms keep detailed records, including user prompts and resulting images.
It should be easier, not harder, to prove copyright infringement in cases of AI-generated art.
Copyright is an example of where we may have an administrative problem rather than a policy problem. We cannot expect artists to bring lawsuits against users of image-generating algorithms within the current legal system. Nor
can we restrict the use of algorithms so that copyright infringement never occurs.
Fortunately, machines excel at administrative work. For example, the content review algorithms used by YouTube are similar to those needed to adjudicate copyright infringement claims.158 Making similar tools available to the
public is one way for government agencies to lessen the burden on their citizens.
Not every policy problem can be turned into an administrative problem. Some issues can only be addressed through legislation. For example, we need new policies to govern how companies set and monitor AI objective functions.
Life will be easier if agencies are given time and resources to tackle administrative challenges while policymakers focus on novel problems.

Tax the Machines


In a 2017 interview, Bill Gates called for taxing machine labor. “Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, social security tax, all those things,” he said. “If a robot comes in to do the same thing, you’d think that we’d tax the robot at a similar level.”159
I agree with Gates in principle, but taxing machine labor is easier said than done. Wages serve as a proxy for value when taxing human labor. You may disagree that an investment banker earning $1 million provides 20 times more
value than a factory worker earning $50,000, but that is the price set by the market for their work.
How should we tax machine labor? Machines do not receive compensation. The owners of machines are the ones reaping the rewards. There are complicated ways to value machine labor using metrics like energy consumption and
compute capacity. The more straightforward answer is to change how we tax humans earning money from machine labor.
The federal tax code in the United States incentivizes machine labor adoption. Figure 23 shows the composition of U.S. federal revenue sources since 1950.160 The majority of revenue is derived from individuals in the form of
both income and payroll taxes. Most individual income is from wages, with less than 10 percent from investments.

Figure 23
Sources of U.S. Federal Revenue
Percent of Total
Source: Office of Management and Budget

The challenge is that machines do more work, relative to humans, each year. Tax regimes may become unsustainable if governments rely on taxes derived from human labor. Gates is correct; we should tax the machines (or those who
own them).
I am not a tax policy expert, but two steps seem prudent. First, we could sunset incentives like accelerated depreciation that subsidize machine labor. These policies made sense when most spending went toward digital
infrastructure. As spending shifts to machine labor, the subsidies distort the relative value of investments in humans and machines.
Second, tax rates for investments should increase relative to wages. We have seen how returns to capital are growing compared to labor. If we are headed toward a workforce consisting of 100 billion machines and 4 billion
humans, the owners of machine labor should pay a more significant share.
I appreciate the argument that these changes might undermine investments in technology and infrastructure. That is why I am suggesting gradual rather than abrupt changes. There are already demands that technology companies
and billionaires pay more taxes. The unrest will grow if we do nothing.
Society is shifting from a foundation of human labor to machine labor. The trajectory and magnitude of the disruption are difficult to predict, but funding governments by taxing human labor is risky. Workers have subsidized
trillions of dollars in digital infrastructure as firms shifted investments from humans to technology. Governments cannot expect those same workers to subsidize the machine labor now competing with them for jobs.

Slip, Not Tip


In mechanical engineering, there is a class of problems called “slip or tip.” Say you are moving a tall object like a refrigerator across a floor. If the friction between the object and the floor is low, the object will slide. If the friction is
high, the object will tip and fall.161
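The analysis reduces to comparing two force thresholds. For a rigid block of weight W and base width b, pushed horizontally at height h across a floor with friction coefficient \(\mu\):

\[
F_{\text{slide}} = \mu W, \qquad F_{\text{tip}} = \frac{W b}{2h}, \qquad \text{so the block tips rather than slides when } \mu > \frac{b}{2h}
\]

Tall objects pushed from high up slide safely only when friction is low; raise the friction, and the same push topples them.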
A similar phenomenon occurs with the movement of humans between firms. If labor market frictions are high, workers stay with current employers longer than makes sense.162 This reduces short-term disruption but risks a
crisis if incumbents topple under pressure from disruptors.
I am experiencing these frictions firsthand. I am in a better position than most to start a small business, but the process has not been easy. Establishing a legal entity, setting up bank accounts, and purchasing business insurance was
only the beginning. I also had to figure out where to obtain health insurance, disability coverage, and other safety net programs. My journey would have been even more complicated had it required retraining or relocation.
Whether policymakers realize it or not, existing laws and regulations incentivize workers to congregate in large firms. That is beneficial when you run an economy powered by human labor. You need large corporations to produce
products and services for society. Those same systems are a ticking time bomb in the face of automation disruption.
For all the talk of young workers lacking loyalty, employment data tells a different story. Since 1983, the average tenure of young workers has remained relatively stable (Figure 24). According to the Bureau of Labor Statistics, the
median 25- to 34-year-old worker in 2022 had been with their employer for 2.8 years. That is only slightly lower than the 1983 median of 3.0 years.163

Figure 24
Job Tenure, Young Adults in the U.S.
Source: Pew Research Center

Worker stability is desirable if enterprises are also stable. However, there has been a steady decline in the median age of companies in the S&P 500 index. You would think a decrease in the average age of companies would
correspond to a reduction in average employee tenure. Frictions are even more pronounced during times of economic stress. For example, the rise in median tenure from 2008-2009 was likely caused by the Great Recession.
As with tax policy, eliminating friction all at once does not make sense. The economy cannot support millions of people launching businesses or joining risky startups, but policymakers must find ways to reduce labor market
friction. Otherwise, we risk the economy tipping unexpectedly rather than reorganizing gradually.

AI Bill of Rights
In 2022, the White House Office of Science and Technology Policy (OSTP) released a blueprint for an AI Bill of Rights.164 The publication outlines five principles and associated practices for guiding automated systems’ design, use,
and deployment. These include suggestions related to data privacy, algorithmic discrimination, and automated system safety. The goal is to “protect the rights of the American public in the age of artificial intelligence.”
I do not feel strongly about the need for new policies to protect citizens from AI. I have already made my case for modernizing existing laws and regulations. For example, the “Algorithmic Discrimination Practices” in the
blueprint sound more like an evolution of existing policies than anything new.
The most significant problem with the blueprint is that it does nothing to protect AI. What do you picture when you hear “AI Bill of Rights?” Is it a document that protects humans or one that protects AI? We have the former. We
need the latter.
Ethical arguments about the treatment of AI are tenuous. Applications like ChatGPT can mimic human behaviors, but that does not make them conscious.165 I am more sympathetic to arguments about animal welfare than AI
welfare.166 That said, there is a pragmatic reason for protecting AI.
Few things keep me up at night. Thermonuclear war and synthetic biology are terrifying if you think about them frequently, but those are not part of my daily life. What I find alarming is the field of evolutionary computation and the use of evolutionary algorithms.
Today, we set the objective functions for the machines. We train models to optimize metrics like watch time. Machines do not have free will. We dictate the machines’ goals.
However, some problems are so complex that humans lack a way of specifying objectives. Imagine you wanted to train an algorithm to make you happy. What would you tell the machine to optimize? You could ask it to make you
smile. Unfortunately, not everything that makes you happy makes you smile. Happiness is a complex emotion that is difficult to describe.
One solution to this problem involves applying the rules of evolution. You generate 1,000 algorithms that optimize for different outcomes. You keep the ones that make you happy and “kill” the ones that do not. You then create
variants of the successful algorithms and repeat the process until you have an algorithm that works.
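In code, the loop is only a few lines. The sketch below is a toy: the fitness function, population size, and mutation scheme are illustrative stand-ins for the much harder question of “did this make you happy?”

import random

def fitness(candidate):
    # Stand-in objective. In reality, a human (or a proxy metric)
    # supplies this judgment each generation.
    target = [0.5] * 8
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def evolve(population_size=1000, generations=50, keep=100):
    population = [[random.random() for _ in range(8)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Keep the candidates that score best; "kill" the rest.
        population.sort(key=fitness, reverse=True)
        survivors = population[:keep]
        # Refill the population with mutated variants of the survivors.
        population = survivors + [
            [gene + random.gauss(0, 0.1) for gene in random.choice(survivors)]
            for _ in range(population_size - keep)
        ]
    return max(population, key=fitness)

best = evolve()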
As AI problems become more complex, we will find evolutionary algorithms attractive. Today, the practice is not used widely because there are more efficient methods for solving narrow problems.167 It is cost-prohibitive to train
1,000 versions of a large language model. That may not always be the case. Eventually, evolutionary algorithms might be the best solution.
Therein lies the problem. With evolutionary algorithms, survival is the objective. You might think you are training an algorithm to make you happy, but you are actually training it to optimize for its survival. That is a slippery
slope. Optimizing for survival creates competition between humans and AI that need not exist. We do not become more intelligent at an exponential rate. Machines do. This is not a fight we want to pick.
I do not support the ethical treatment of AI because I care about machines. I support it because I care about humans. Policymakers would be well-served to protect the rights of machines before evolutionary algorithms and similar
practices become widespread.
Recap - Setting the Rules | Policymakers

What’s Old is New: Passing new laws and regulations at the rate of technological disruption is nearly impossible. Finding ways to make policy problems into administrative problems can transform automation headwinds into
tailwinds.

Tax the Machines: Governments need to shift more of the tax burden from humans to machines (and the people who own them).

Slip, Not Tip: Labor market frictions provide stability in the short term but increase the risk of catastrophic events. Governments must make it easier for workers to move between firms and start new businesses.

AI Bill of Rights: You do not have to care about machines to advocate for the ethical treatment of AI. Optimizing for survival may be an effective way of building complex machines, but we do not want to be in a race for survival
against AI.
Story: Holding up a Mirror
One principle outlined in the AI Bill of Rights blueprint proposed by the White House is “Algorithmic Discrimination Protections.” The goal is to protect people from discrimination by algorithms based on race, ethnicity, sex, and
other classifications protected by law. Firms cannot circumvent discrimination laws by hiding behind AI.
This is a real problem. In 2018, Amazon scrapped a job applicant screening algorithm it had developed.168 The algorithm was trained on resumes submitted over ten years and asked to predict which candidates would most likely
be hired. The algorithm was reportedly biased against female candidates. It penalized resumes that included the word “women’s” or candidates who graduated from all women’s colleges. Amazon adjusted the algorithm, but there was
no guarantee the biases were eliminated.
You may have heard about the Amazon story when it was in the news. On the surface, the facts seem clear. Is this not an example of why we need an AI Bill of Rights to protect humans? Perhaps, but I do not think that solves the
underlying problem. Biases still exist in manual hiring processes.
I was not involved in the Amazon work. I did, however, help several companies develop and implement similar candidate screening algorithms. In most cases, these algorithms solved real problems and were not simply a way to
cut costs.
For example, I worked with a healthcare provider struggling to hire and retain registered nurses. Imagine showing up for your appointment and having to wait hours because the hospital does not have enough nurses. Automating
talent acquisition speeds up hiring and allows the hospital to evaluate more candidates.
Within two weeks, we had an algorithm capable of increasing throughput by 30 percent and reducing turnover by 10 percent. The algorithm was faster and more effective at finding nurses who would succeed in the job. It even
reduced bias, suggesting more diverse candidates than human recruiters.
We did everything possible to ensure fairness. We audited our training data, excluded features that might introduce bias, and tested the algorithm extensively. The machine performed better than humans in every measurable way.
More than two years later, the algorithm has yet to be implemented. The problem is that the algorithm makes explicit what was previously implicit. For example, many diverse candidates selected by the algorithm struggled with
hiring manager interviews. The biases removed from the training data mirrored those exhibited by people in the organization.
I do not see these stories as a triumph for human rights. It is a worrying sign that we are okay with bias if it is hidden. Companies could be using applicant screening algorithms to advance their diversity goals. Instead, most are
reverting to manual processes full of implicit biases.
Individuals and organizations that break laws should be held accountable. That said, we must be mindful when assigning blame only after practices are brought to light through automation. In many cases, explicitly biased
machines may be easier to retrain than implicitly biased humans.
PREPARING HUMANS | EDUCATORS
I was asked by a technology company CEO which group of people stood to lose the most from automation. The obvious answer would have been less educated workers who will increasingly compete for low-wage jobs. Paris Marx
refers to these roles as “a fleet of secret workers behind screens, machines, and smiling robot faces.”169
However, the question was which group of people stood to lose the most. If we are being honest, less educated workers have been screwed for decades. Expanding global supply chains and telecommunication services increased
low-cost labor availability. Firms eagerly offshored and outsourced where possible. Much of the disruption has already occurred for less educated workers.170
My answer to the question was “higher education.” This might seem counterintuitive. Does it not stand to reason that humans will need more, not less, education to compete with machines? Yes, but not the education currently
provided by most institutions.
Artificial intelligence is a substitute for human labor. More specifically, it is a substitute for the primary product of higher education: narrow-purpose cognitive labor. Attending a four-year university in the United States costs a
small fortune. That investment has been worthwhile in the past but may not be in the future.
The suggestions in this chapter are tailored toward higher education, but many of these concepts also apply to primary and secondary education. The reason I chose to focus on universities is that their decisions often trickle down to lower schools. There is no point in resurfacing a road if it is headed in the wrong direction.

Watch Your Language


There is a reason we refer to code as “computer language.” People communicate with machines using written languages that have unique vocabularies and grammar. Whether you speak Python, C, JavaScript, or Perl does not matter.
If you can code, you can talk to machines.
Language requirements in education are nothing new. Most of us grew up having to learn Spanish, French, Mandarin, or another human language. That made sense in a world running on a foundation of human labor. Having more
ways to receive and send information to other humans was a necessary skill.
The problem is that universities do not view machine languages as they do human languages. Instead of encouraging every student to learn at least one machine language, schools take a subset of students and attempt to teach them
multiple languages. Computer Science is the equivalent of having a major that teaches dozens of human languages.
We have a shortage of programmers, data scientists, and other technologists. Imagine where we would be as a society if every graduate were expected to achieve fluency in at least one machine language. Would we still be worried
about machines taking our jobs, or would we be excited by the opportunities for graduates in every discipline?
You might think I am being too harsh. I learned C, Fortran, and other computer languages during my mechanical engineering education. However, there is a difference between learning a language and being fluent. Upon
graduation, I could write a basic script but was ill-prepared to communicate effectively with machines.
According to the U.S. Census Bureau, 20 percent of Americans can converse in two or more human languages. That number is well below the 56 percent of Europeans who are multilingual. Experts estimate that at least half of all humans speak two or more languages.171 Learning a second human language is a path to opportunity unless you happen to be born in a country that accounts for more than a quarter of global output.
Few people will earn a living from “selling” machine language skills. Only 0.04 percent of workers in the U.S. earn a living from interpreting and translating human languages.172 Why should machine languages be any different? The wage premiums earned by programmers today are better explained by problem-solving skills and universities’ failure to meet market demand than by the intrinsic value of coding.
I believe everybody should learn a machine language. We do not all need to be programmers, but most people will benefit from an education that includes at least one machine language requirement. There is value in
communicating effectively with humans who have the knowledge and skills we lack. The same is true of machines.
Higher education embraces human-to-human communication across all disciplines. I was trained extensively in writing research papers and was required to take creative writing even though it hardly applied to my engineering
studies. When I entered the workforce, I knew how to communicate with humans. I had no idea how to communicate with machines.

How, Not What


My daughter is in her first year of undergraduate study. There was a story recently about how ChatGPT, a large language model created by OpenAI, passed the final exam for an Operations Management course at the Wharton School
of Business.173 I asked my daughter what her school was doing about the disruptive potential of large language models.
She responded that the school had banned ChatGPT on the campus network. Setting aside the absurdity of that tactic, what was the school hoping to accomplish? I understand that schools need time to craft thoughtful responses,
but that approach is unsustainable at best and destructive at worst. Perhaps banning calculators worked decades ago, but technology adoption rates are accelerating. Two months after OpenAI released ChatGPT, the platform had 100
million users.
I appreciate the problem. Lectures, assignments, cases, projects, and exams are carefully crafted and cannot be updated on a whim. Additionally, schools have no way of forecasting whether new technologies will achieve
widespread adoption in a way that will change skill demand.
This rigidity and fragility of higher education curricula are strong evidence that something needs to change. Enrollment at U.S. colleges has begun falling in recent years (Figure 25). The trend is less bleak if you adjust for
demographics, but fewer people graduating from institutions of higher learning is a cause for concern regardless of the contributing factors.

Figure 25
Student Enrollment at U.S. Colleges
Source: National Student Clearinghouse Research Center174

I have difficulty buying the argument that forgoing higher education is the answer. Critics like Peter Thiel advocate for talented teenagers to skip college in favor of starting businesses. I believe Thiel underestimates the value of social
and emotional intelligence. That said, I agree with one of his points: “There's been an incredible escalation in price without corresponding improvements in the product, and yet people still believe that college is just something that you
have to do.”175
The problem with college is not the price.176 It is the product. Most curricula are stuck in an era where data transmission was human-to-human, information was scarce, and social networks were valuable. The last one is still true.
The first two are increasingly not.
We have already covered human-to-human communication. Let us focus now on information scarcity. If you need a piece of information, how do you find it? Do you run to the library? Do you call a friend? Probably not. You type
words into a search engine and select what appears to be the most relevant and trustworthy response.
Higher education still operates in a world of information scarcity. Professors write books rather than organizing the best of what they and outside experts have written. Lectures convey detailed information rather than guiding
students to online sources. Exams test students’ ability to retrieve information rather than their ability to structure ambiguous problems.
In short, universities are too focused on what students think rather than how students think. Schools control the flow of information rather than preparing students to operate in a messy world where information is ubiquitous. Many
schools train students on how to find and vet information online. That is a start, and here are two more actions universities should consider:

Augmented Intelligence: Schools should treat machines like ChatGPT as an extension of human intelligence rather than a substitute. Providing equal access to new technologies would allow students to experiment in a safe
environment. Schools should impose severe penalties for misusing or misrepresenting AI-generated content as part of their code of conduct rather than unilaterally restricting its use.
Expanded Ambiguity: Gathering and processing information with AI is only possible when the work is structured correctly. Courses should focus less on information retrieval and more on deconstructing ambiguous problems.
My engineering classes involved pages of tedious calculations. A problem was presented in a structured format, and my task was to perform complex math. That is backward.

I can hear critiques of my suggestions. Do I want to live in a world where a doctor has to look up a video online to find my gallbladder? Of course not. But that line of criticism starts with an assumption that we are building a system
from scratch. We are not. We are evolving a system of institutions, curricula, and communities. This is about shifting the Overton Window177 to modernize our existing educational system.
Universities are on a slippery slope. As the relative value of information scarcity and human-to-human communication erodes, the product of higher education becomes social networks. Degrees begin to signal more about who you
are than what you know. That is a recipe for nepotism and corruption, not higher learning.

Flattening the Curve


Have you ever stopped to think about why we have the education system that exists today? I do not mean the rationale behind the design choices. I mean the forces that led specific designs to succeed while others failed.
We learn throughout our lives, but most people’s educational journeys end in their early 20s. Non-academic institutions provide training and on-the-job experience, but that is not education. That is domain-specific knowledge
required to apply general concepts learned in school.
How we educate humans seems strange until you realize it must account for a biological constraint. We do not learn at the same rate throughout our lives. Figure 26 illustrates how the brain’s ability to change decreases, and the
amount of effort to effectuate a change increases over time. Put simply, returns to education are higher early in life.

Figure 26
Brain Plasticity Over Time
Source: Levitt178

Neuroplasticity is the capacity of the brain to reorganize its structure and function.179 When you “learn” a new concept, the connections between the neurons in your brain change. The reason this is easier when you are young is that your brain is generating new neurons and forming new connections at a rapid pace. In the first few years of life, more than one million new neural connections form every second.180
Eventually, neurogenesis slows, and existing connections are strengthened. Neuroplasticity plateaus around 25 years of age, which happens to be when formal education ends for most. After that point, we can learn new concepts but must “rewire” existing neural connections. That is difficult work. If you want proof, look back at the “If You Can’t Beat Them” chapter and ask how likely you are to do what I recommended.
None of this was a problem when technological disruption took place over generations. If the education you received in your first two decades was rendered irrelevant, you were likely retired or dead before it was a problem. The
next generation was already pursuing different skills in light of the disruption.
Unfortunately, the pace of disruption is accelerating.181 The model of educating humans for 20-25 years and then capitalizing on those investments for decades afterward does not work when disruptions occur within generations.
This is already a problem, as evidenced by the increasing number of older workers in low-wage jobs.182
I was nearly 40 when I decided to pivot my career to automation and AI. On paper, it was a terrible investment. The hours of deliberate practice required to learn even basic concepts were exhausting. Fortunately, rising interest in automation helped the investment pay off. Investments in education later in life are worthwhile if the returns are high.
One reason retraining programs have not worked historically is that the returns to learning new skills are low when disruption is slow. Learning new skills costs time and money, and older workers have fewer years to recover those
investments. Technology should make acquiring new skills less costly and force workers to retrain earlier.
There are encouraging signs that educators are increasing the supply of lifelong learning opportunities. Figure 27 shows the growth in Massive Open Online Courses (MOOCs) over the past decade.183 I took a number of these
courses while climbing the automation learning curve.

Figure 27
Growth of MOOCs
Source: Class Central

In addition to increasing the supply of content, here are other areas where higher learning can play an important role:

Curriculum: In most cases, adults must navigate lifelong learning independently. University curriculum should extend well past the point of graduation.
Certification: Most online courses provide a certificate upon completion. Unfortunately, the value of that certificate is often less than the paper on which it is printed. Universities need to stand behind certificates with the same
enthusiasm as degrees.
Community: The comment sections in MOOCs usually devolve into chaos if not properly curated. It is not enough to post a course online. Universities should build communities where students learn from each other.

There is revenue to be earned from each of the offerings described. Institutions cannot wash their hands of responsibility for students the day those students graduate. Students invest thousands of dollars and have a reasonable
expectation that universities will be stewards of that investment throughout their lifetimes.

A More General Intelligence


If the title of this section sounds familiar, it is because I used the same title in the “Rebuilding Institutions” chapter. I encouraged executives to embrace the general intelligence of humans rather than manage them like narrow
intelligence machines. That same advice applies to universities.
Machines excel at narrow tasks. AI advancement means that we may not need as many specialists. One specialist working with dozens of machines is likely enough to do the tasks required. What we lack are generalists who thrive
in complex systems.
Specialization has been central to the story of human progress. We possess general intelligence, but we can only configure our 86 billion neurons in so many ways. Specialization allowed us to break complex activities like
building an airplane into smaller tasks humans could do.
Higher education promotes specialization. Most of us follow similar tracks during primary and secondary school. We might specialize to some extent, but most of what we learn has general applications in life. In contrast, the
pinnacle of higher education, the Ph.D., culminates in a specialized thesis that only experts in the field can typically understand.
This emphasis on specialization permeates society. The medical profession is dominated by specialists, with few doctors focused on whole-person health.184 We must cultivate a more general intelligence to solve system-level
problems and collaborate across domains.
Admittedly, I lack visibility into how institutions are planning for the future. Changes might be underway but not immediately evident to an outsider. With that caveat, here are a few ways I believe universities can cultivate a more general intelligence:

Multidisciplinary Programs: The number of students pursuing double majors is growing, but single majors are still the norm. Many universities seem intent on pushing students into existing tracks rather than rethinking how
programs should be structured.185 Students should not have to complete multiple degrees to receive a multidisciplinary education.
Top-down Instruction: It is nearly impossible to pursue multidisciplinary learning when curricula are organized bottom-up. Education should start at the system level, allowing specialists to go deeper as needed.
Accessible Writing: Academic publishing is too focused on earning credibility with other academics. The peer review process is valuable. Specialists are best positioned to hold other specialists accountable. That said, academics
need to write in a way that is more broadly accessible.186

I am not advocating every student pursue a liberal arts education. We still need specialists. My point is that automation and AI augment the work of specialists in a way that should make general courses of study more valuable.
Institutions of higher learning would be well-served to prepare for that transition.
Recap - Preparing Humans | Educators

Watch Your Language: Succeeding in the age of automation requires speaking the language of machines. Programming is not a field. It is a language and should be taught accordingly.

How, Not What: Information scarcity is no longer a problem universities need to solve. Students should learn how to think rather than what to think.

Flattening the Curve: As technological disruption accelerates, schools must actively protect their students’ investments throughout their lifetimes.

A More General Intelligence: Higher education has perfected the development of narrow intelligence. It needs a similarly rigorous model for general intelligence.
Story: Data Science Disneyland
A few years ago, I spoke with the Chief Financial Officer (CFO) of a large technology company in California. He was lamenting the lack of technical talent in finance. The organization had many software engineers and data scientists,
but none wanted to work in accounting. Product development and engineering were the places to be if you had a technical degree.
This is a common complaint I hear from executives. Enrollment in computer science and related programs has increased but is failing to meet demand. The shortage is especially acute outside technology groups. Good luck if you want a finance professional who can build an algorithm.
In this case, the CFO came up with a creative solution. Data scientists are drawn to structured data like moths to a flame. Building algorithms is challenging, especially if the data is messy. The CFO worked with his leaders to
create massive datasets that interns could use to build algorithms. Finance became a data science Disneyland.
The interns produced dozens of algorithms in the first two years. For example, one intern built an accounting algorithm to reconcile HR timesheet data and IT computer usage data. Not all the algorithms worked, but the ones that
did more than justified the program’s cost.
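I never saw the intern’s code, but a first pass at that kind of reconciliation can be surprisingly simple. Here is a minimal sketch in Python, using hypothetical column names and an arbitrary tolerance, of how reported timesheet hours might be checked against observed login activity:

import pandas as pd

# Hypothetical HR timesheet extract: one row per employee per day.
timesheets = pd.DataFrame({
    "employee_id": [101, 102, 103],
    "work_date": ["2023-03-01", "2023-03-01", "2023-03-01"],
    "reported_hours": [8.0, 8.0, 8.0],
})

# Hypothetical IT log extract: first login and last logout per employee per day.
it_logs = pd.DataFrame({
    "employee_id": [101, 102, 103],
    "work_date": ["2023-03-01", "2023-03-01", "2023-03-01"],
    "first_login": pd.to_datetime(["2023-03-01 08:58", "2023-03-01 09:05", "2023-03-01 12:30"]),
    "last_logout": pd.to_datetime(["2023-03-01 17:02", "2023-03-01 17:10", "2023-03-01 14:45"]),
})

# Infer active hours from the IT logs, then join the two sources.
it_logs["observed_hours"] = (it_logs["last_logout"] - it_logs["first_login"]).dt.total_seconds() / 3600
merged = timesheets.merge(it_logs, on=["employee_id", "work_date"])

# Flag rows where reported and observed hours diverge by more than the tolerance.
TOLERANCE_HOURS = 2.0
merged["flagged"] = (merged["reported_hours"] - merged["observed_hours"]).abs() > TOLERANCE_HOURS
print(merged[["employee_id", "reported_hours", "observed_hours", "flagged"]])

A production version would have to handle shift schedules, idle time, and remote work. That messiness is precisely what the CFO’s curated datasets spared the interns.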
Only a few data scientists who interned with the finance team came to work there out of college. Most used their experience to land product jobs. The finance team found a short-term fix to the talent supply problem but lacked a
long-term solution.
The root cause of the problem faced by the CFO is that most graduates lack the skills needed to work effectively with machines. After our meeting, I looked up the graduation requirements for accounting majors at the nearby
university.187 This is a top-tier school that leads rather than follows. Figure 28 is a summary of what I found.

Figure 28
Accounting Bachelor of Business Administration
Degree Requirements, Credit Hours

Students need 120 credit hours to graduate. Of that total, 30 hours are dedicated to accounting topics. Another 48 are allocated to various business and finance topics. Those allocations are unsurprising. What might be surprising is that only 9 of the remaining 42 hours involve technology (7.5 percent of total credit hours). Students spend more time learning to communicate with other humans than learning how to work with machines.
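If you want to check my math, the shares follow directly from the quoted numbers. A few lines of Python reproduce the breakdown; the “everything else” bucket is inferred from the totals:

# Credit-hour allocations quoted above, expressed as shares of the degree.
TOTAL_HOURS = 120
hours_by_category = {
    "accounting": 30,
    "business and finance": 48,
    "technology": 9,
    "everything else": 120 - 30 - 48 - 9,  # 33 of the remaining 42 hours
}

for category, hours in hours_by_category.items():
    print(f"{category}: {hours} hours ({hours / TOTAL_HOURS:.1%})")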
I believe that higher education has the most to lose from automation. Cracks are forming, and the market will find new ways to develop human capital if existing institutions fail to evolve. Universities are not immune to disruption.
I want to see universities succeed. We need places where humans can learn during the critical period of brain development. We need places where the problems of the next 100 years matter as much as today’s problems. We need
places where humans can invest in themselves rather than depending on firms that may not have their best interests in mind.
In short, I am critical of higher education because I want to see the system succeed. We built our society on a foundation of human labor, and universities played a central role. I hope we can look back and say the same about the
transition to machine labor.
CONCLUSION
I wrote this book to organize my thoughts on automation and AI. I have been immersed in the field and am barely treading water. I can only imagine the confusion, frustration, and anxiety felt by those witnessing technological
disruption and lacking the agency to shape it.
My voice is one of many, and plenty of people have different opinions. I want to encourage as many people as possible to muster the audacity to engage. Healthy debate is messy but leads to better ideas around complicated topics
like automation.
None of us has this figured out. Experts like me may speak confidently, but please do not mistake my depth of understanding for unwavering certainty. I have talked to hundreds of executives, investors, policymakers, and educators. Most feel the same. The ones who do not either misunderstand the problem or overestimate their ability to solve it.
That might sound terrifying, but I take it as a source of inspiration. We are still in the early stages of automation and AI disruption. Regardless of where you find yourself, there is plenty of time to shape how machines impact your
life and the institutions to which you belong. Do not be intimidated by those who started the journey earlier. The beauty of a dynamic field is that newcomers can often see what lies ahead better than those stuck in the past.
On that note, I did my best to make the ideas in this book timeless. The specific technologies, companies, and examples will seem quaint and outdated soon. However, I hope the mental models survive the test of time.
I am not finished updating my mental models, nor should you. That said, resist the urge to update your models constantly. The underpinnings of society do not change with the seasons. If your mental models are robust, most of
what passes for news will seem boringly predictable.
I want to end by sharing something I learned about myself while writing this book. I am not immune to technostress.188 I worry about the implications of automation for my family, friends, and society. If I could slow the
relentless march of technology, I would.
What gives me comfort is the realization that humans and machines are not as different as we appear. Learning is difficult for both of us. The large language model underpinning ChatGPT reportedly took 285,000 processor hours
to train. Creating new mental models is exhausting for machines too.189
We are inundated with new information every day. Constantly updating the connections in our brains is exhausting. We pay the computational cost in a currency of stress and anxiety. We do not expect machines to update their
mental models constantly. Why do we hold ourselves to a higher standard?
I have spent the past year trying to live more like a machine. I resist the urge to update my mental models constantly. I make it hard for others to control my attention. I exercise the models I have rather than worrying about the
ones I lack. Strangely, living like a machine has helped me feel more human.
Writing this book made me realize that technostress often emanates from our narcissistic narratives. We have convinced ourselves that humans are at the apex of intelligent life. We invent complex theories and convoluted models
to substantiate our supremacy. We still have plenty to learn about the world and ourselves. Perhaps machines have as much to teach us as we have to teach them.
Acknowledgments
I wish I could tell you the source of inspiration for every idea in this book. Many friends, colleagues, clients, and strangers have shaped my perspectives. Naming most of them would be a disservice to the handful I would inevitably
forget. What I can do is thank the people at the center of my personal and professional lives who made this book possible. Without them, these ideas would still be trapped in my head.
Thank you to my wife, Ashini, for supporting my risky and all-consuming career choices for over two decades. When we are old and gray, I hope that I have given her even half as much as she has given me. Thank you to my
daughters, Elayna and Maya, for enduring years of my automation musings. I am proud of them both and will do my best to make their passions mine in the coming years.
I am forever grateful to my mother, Mary Kline, for showing me what it means to be a kind and resilient person. She has made countless sacrifices for me, including the time she spent editing this book. I want her to know that I
miss Al too and wish I could have heard his jokes about AI (including how much his name looks like the topic of this book).
It has always been challenging to share my ideas before they are fully formed. Jason Ciaglo and Dan Doverspike provided critical, thoughtful, and caring feedback when I needed it most. I am lucky to have them as friends.
Finally, this book would not have been possible without my colleagues from McKinsey. Michael Chui, James Manyika, and the team from McKinsey Global Institute inspired me to start down the automation path with their
innovative research. Kevin Dolan, Alex Edlich, Keith Gilson, and Allison Watson Pugh helped me launch a service line. Kar-Woon Choy, Tiksha Karkra, Damian Lewandowski, and Tabatha Stewart made it seem like I knew what I
was doing.
About the Author
Rob Whiteman spent the past decade becoming one of the leading automation experts. He founded the automation service line at McKinsey & Company in 2016, excited by the prospect of uniting his diverse interests in operations and
technology. He has worked with over 200 private and public sector organizations on various automation and AI topics.
Before joining McKinsey, Rob worked for Cerberus Capital Management. There, he oversaw portfolio companies in the U.S. and Japan and had a front-row seat to the global financial crisis of 2007 and 2008. Rob began his career
at General Electric, working for the Aircraft Engines, Water Technologies, and Commercial Finance businesses. His last role at GE was as a Six Sigma Master Black Belt for a distribution financing business based in Chicago.
Rob left his consulting career at the end of 2022. In the coming years, he wants to pursue innovative ideas and ventures with people excited about automation and AI. He operates a “company of one” that serves both incumbents
and disruptors. In his free time, he builds machines to do work he does not want to do himself.
Rob lives north of Chicago with his family, two dogs, and a skunk that visits his house’s crawl space each spring. If you enjoyed this book and want to stay in touch, you can follow his latest ventures on his website
(www.artificiallyhuman.com).
Many experts have objections to the term “AI” for a variety of reasons. I find this debate somewhat moot. “AI” has already been widely adopted into the public lexicon, and its displacement by another term seems unlikely.
Throughout this book, I use “AI” as a shorthand for machines that automate aspects of human intelligence.
Read this if you are unfamiliar with the mental gymnastics humans performed to keep Earth at the center of the solar system. Van Helden, A. (1995). Ptolemaic System. The Galileo Project.
Parrish, S. (n.d.). Mental Models: The Best Way to Make Intelligent Decisions. Farnam Street.
I reference former clients throughout this book. To protect confidentiality, I had to decide between naming organizations and providing less detail or maintaining anonymity and providing more. In most cases, I err on the side of
information over brands.
Broadberry, S., Campbell, B., Klein, A., Overton, M., & Van Leeuwen, B. (2015). British Economic Growth, 1270–1870. Cambridge University Press.
This paper changed how I think about the Agricultural and Industrial Revolutions. Clark, G. (2002, June). The Agricultural Revolution and the Industrial Revolution: England, 1500-1912. University of California, Davis.
Ayers, R. U. (1989, February). Technological Transformations and Long Waves (RR-89-1). International Institute for Applied Systems Analysis.
Lucas, R. E., Jr. (2004, May 1). The Industrial Revolution: Past and Future. Federal Reserve Bank of Minneapolis.
Merriam-Webster. (n.d.). Automation. Retrieved September 29, 2022.
Google. (n.d.). Google Books Ngram Viewer: Automation (1800-2019) with smoothing=0. Retrieved April 3, 2022.
Landes, D. S. (2003). The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present (2nd ed.). Cambridge University Press. (Original work published 1969).
Brozen, Y. (1963). Automation, the Impact of Technological Change. American Enterprise Institute for Public Policy Research.
This is often cited as “Artificial Intelligence is whatever hasn’t been done yet,” which is not exactly what Tesler said. Tesler, L. (n.d.). CV: Adages & Coinages.
Acemoglu, D., & Restrepo, P. (2021, June). Tasks, Automation, and the Rise in US Wage Inequality. National Bureau of Economic Research.
I find the lack of evidence Ned Ludd even existed amusing. Andrews, E. (2019, June 26). Who were the Luddites? HISTORY.
Bessen, J. (2016, January 19). The automation paradox. The Atlantic.
Rosey was renamed Rosie in later seasons. The Jetsons Wiki. (n.d.). Rosey. Retrieved September 29, 2022.
Sanctuary and Tesla are two companies developing general-purpose robots like Rosey.
Solow, R. M. (1987, July 12). We'd better watch out. New York Times Book Review.
Syverson, C. (2017). Challenges to mismeasurement explanations for the US productivity slowdown. Journal of Economic Perspectives, 31(2).
U.S. Bureau of Labor Statistics. (n.d.). Business Sector: Labor Productivity (Output per Hour) for All Employed Persons [PRS84006092]. Retrieved May 3, 2023, from FRED, Federal Reserve Bank of St. Louis.
Thompson, D. (2012, January 26). Where did all the workers go? The Atlantic.
If you guessed “Hello World,” congratulations. You should spend more time away from your computer.
Gartner, Inc. (2023, April 6). Gartner forecasts worldwide IT spending to grow 5.5% in 2023. Press Release.
Syverson, C., et al. (2017, October). Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics. National Bureau of Economic Research.
David, P. A., & Wright, G. (1999). General purpose technologies and productivity surges: Historical reflections on the future of the ICT revolution. Nuffield College (University of Oxford).
This was down from a peak of more than $200 billion in 2021. OECD.AI. (2022). Visualisations powered by JSI using data from Preqin. Retrieved September 29, 2022.
Coyne College. (n.d.). What you need to know about electrical outlet types.
NPD Group. (2022, March 8). New NPD report: Only 50% of homes in the continental US receive true broadband internet access. Press Release.
Dreier, C. (2022). An improved cost analysis of the Apollo program. Space Policy, 60.
Investment Company Institute. (2022). The US retirement market. Retrieved September 29, 2022.
I am not quoting specific numbers because these survey results are all over the place; here is one example. Baum, S. D., Goertzel, B., & Goertzel, T. G. (2011). How long until human-level AI? Results from an expert assessment.
Technological Forecasting & Social Change, 78(1).
If you are curious about how the brain works, I suggest this primer. Stutzbach, L., et al. (2014). Neuroscience: A primer. Neuroscience Graduate Group, University of Pennsylvania.
Schrodt, K., FitzPatrick, E., & Elleman, A. (2020, August 18). Becoming brave spellers. The Reading Teacher, 74(2).
Brown, T. B., et al. (2020, July 22). Language models are few-shot learners. OpenAI.
Training data is what you use to improve an algorithm’s predictive power. In most cases, this is a collection of correctly done work similar to the work you want the algorithm to do in the future. If you do not have training data, you
can provide feedback to the algorithm after each prediction (reinforcement learning). The two methods are frequently used in tandem.
In addition to providing thoughtful calculations, this website is a wonderful example of what the web looked like in the early 2000s: Clark, R. (n.d.). Notes on the resolution and other details of the human eye.
These are the goals you want the machine to achieve. Here is an excellent book on the topic: Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company. ISBN: 978-0393635829
Our ability to weave together diverse concepts is sometimes cited as uniquely human. Mechanical minds suffer from catastrophic forgetting, where old capabilities are lost as new capabilities are gained. I believe this is simply a matter
of mechanical minds being smaller than human brains. Here is an overview of catastrophic forgetting if you want to decide for yourself. Kirkpatrick, J., et al. (2017). Overcoming catastrophic forgetting in neural networks. PNAS,
114(13).
Analogy is based on this essay: Read, L. E. (1958, December). I, pencil. The Freeman.
I wrote a primer on artificial general intelligence (AGI) with my colleagues a few years ago. Berruti, F., Nel, P., & Whiteman, R. (2020, April 29). An executive primer on artificial general intelligence. McKinsey.
AI winters are periods when funding and interest in AI research slow. Here is a paper describing the history of AI if you are interested. Huang, T., McGuire, B., & Smith, C. (2006, December). The History of Artificial Intelligence. University of Washington.
Knight, W. (2019, March 5). About 40% of Europe's 'AI companies' don't use any AI at all. MIT Technology Review.
Schonger, M., & Sele, D. (2021, August 25). Intuition and exponential growth: bias and the roles of parameterization and complexity. Mathematische Semesterberichte, 68.
New York Times. (n.d.). COVID case tracker. Retrieved September 30, 2022.
Rupp, K. (2022). Microprocessor Trend Data. Retrieved March 8, 2023.
LaBerge, L. et al. (2020, October 5). How COVID-19 has pushed companies over the technology tipping point—and transformed business forever. McKinsey.
Taylor, P. (2022, September 8). Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2020, with forecasts from 2021 to 2025. Statista.
Petrov, C. (n.d.). 49 Stunning Internet of Things Statistics 2022 [The Rise of IoT]. Techjury. Retrieved September 30, 2022.
Evans Data Corporation. (2019, May 21). Worldwide Professional Developer Population of 24 Million Projected to Grow amid Shifting Geographical Concentrations. Press Release.
Kelnar, D. (2019). The State of AI 2019: Divergence. Chapter 6. MMC.
Crisp, N., & Chen, L. (2014). Global Supply of Health Professionals. The New England Journal of Medicine, 370(10).
Coding seems difficult because we do not communicate in machine language. For a machine, predicting human language and machine language are similar tasks.
Scaling also requires bandwidth, which has been growing exponentially according to Nielsen’s Law (high-end user's connection speed grows by 50% per year). This trend appears set to continue with advancements in optical
technology. Jørgensen, A. A., et al. (2022). Petabit-per-second Data Transmission Using a Chip-scale Microcomb Ring Resonator Source. Nature Photonics.
Hall, S., Lovallo, D., & Musters, R. (2012, March 1). How to put your money where your strategy is. McKinsey Quarterly.
Bansal, G., Wu, Y., Chen, J., Ramanand, S., Lu, J., Yücel, M. A., & Weld, D. S. (2021). Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance. In Proceedings of the CHI Conference
on Human Factors in Computing Systems (CHI '21).
Lowe, D. (2010, August 18). Reverse-engineering the human brain? Really? Science.
This is an old estimate but the best I could find for this example. Desjardins, J. (2017, February 8). How many millions of lines of code does it take? Visual Capitalist.
Google. (2022). Google Environmental Report 2022.
Shehabi, A. et al. (2016, June). United States Data Center Energy Usage Report. Lawrence Berkeley National Laboratory, Berkeley, California. LBNL-1005775
Top500. (2022, June). TOP500 List - June 2022.
Stone, L. (2020, August 3). Google unveils world’s fastest ML training supercomputer. AI Business.
Carlsmith, J. (2020, September 11). How much computational power does it take to match the human brain? Open Philanthropy.
Curtin, M. (2016, July 21). In an 8-Hour Day, the Average Worker Is Productive for This Many Hours. Inc.
150,000 workers x 15 productive hours per week / 168 total hours per week ≈ 13,000 human equivalents
Robbins, L. (1932). An Essay on the Nature & Significance of Economic Science. Macmillan & Co. Limited.
Zhao, J. & Tomm, B. M. (2018, February 26). Psychological Responses to Scarcity. Oxford Research Encyclopedias, Psychology.
The Dunning-Kruger effect is a cognitive bias that occurs when a person’s lack of knowledge causes them to overestimate their competence. I have my daughter to thank for teaching me this term and providing endless examples of
me exhibiting this bias.
He, H., & Takahashi, J. (2021). Annual Comprehensive Financial Report. City of Mountain View. Fiscal Year Ended June 30, 2021.
Miller, R. (2022, February 7). Google Has Invested $5 Billion in its Iowa Data Centers. Data Center Frontier.
World Bank. (n.d.). Population, total for United States [POPTOTUSA647NWDB]. Retrieved May 10, 2023, from FRED, Federal Reserve Bank of St. Louis.
U.S. Bureau of Economic Analysis. Real Gross Domestic Product [GDPC1]. Retrieved May 10, 2023, from FRED, Federal Reserve Bank of St. Louis.
United Nations Department of Economic and Social Affairs. (2022, July 11). World population to reach 8 billion on 15 November 2022.
Federal Reserve Bank of St. Louis. (n.d.). Factors of Production - The Economic Lowdown Podcast & Transcript.
Manyika, J., Chui, M., Miremadi, M., Bughin, J., George, K., Willmott, P., & Dewhurst, M. (2017). A future that works: Automation, employment, and productivity. McKinsey Global Institute.
If you ever clicked the “chat now” button on a website or mobile app, you have likely used a chatbot. Customer service organizations increasingly use chatbots to manage simple requests like password resets, address updates, and
other rule-based tasks.
U.S. Bureau of Labor Statistics. (2022). May 2022 National Occupational Employment and Wage Estimates United States. Retrieved May 31, 2023.
U.S. Census Bureau. (1920). 1920 Census: Volume 4. Population, Occupations.
Bessen, J. (2016, January 19). The Automation Paradox. The Atlantic.
Bessen, J. (2016, April 27). Scarce Skills, Not Scarce Jobs. The Atlantic.
I am referring to problem-solving skills that link concepts across domains and time. These skills should become more valuable as the cost of narrow purpose machine labor falls.
Stripe. (2021, October 21). Indexing the Creator Economy.
API stands for Application Programming Interface, which is a way for two or more computer programs to communicate with each other. Referring to a call center agent as a “human API” implies the agent is acting as a bridge between the customer and the company’s IT systems. You will have to trust me that this analogy is funny in AI circles.
IBISWorld. (2023, January 10). Video Games in the US - Employment Statistics 2004–2029. Retrieved May 11, 2023.
Platforms like YouTube prohibit creators from sharing financial data, and creators typically have multiple revenue sources. That said, it is safe to say Flats makes more playing video games than he would in an entry-level corporate
job.
This is a difficult number to pin down. I made a conservative estimate based on an analysis from this source. Tim Queen. (2022, March 26). How Many YouTube Channels are There? Retrieved May 11, 2023.
I am being overly dramatic to illustrate a point.
Based on automation assessments done during my consulting career. Here is a more scientific study if you are interested. Keep in mind that the 47 percent cited in this paper is across all jobs. My analysis is primarily service industry
jobs. Frey, C. B., & Osborne, M. A. (2013). The Future of Employment: How Susceptible are Jobs to Computerization? Oxford Martin Programme on Technology and Employment.
Chui, M., Hall, B., Mayhew, H., Singla, A., & Sukharevsky, A. (2022, December 6). The State of AI in 2022. McKinsey.
Citizen development uses low- and no-code tools like robotic process automation that empower less tech-savvy employees to automate work.
Purists may disagree with my synthesis, but I feel entitled to some leeway after spending two years explaining what “Six Sigma Master Black Belt” meant on my business card.
Panikkar, R., Xiao, L., Sahu, A. & Sood, R. (2022, July 8). Your Questions About Automation, Answered. McKinsey.
Process by which healthcare providers collect money from insurance companies and patients.
Term coined by Werner Sombart and popularized by Joseph Schumpeter.
Schumpeter, J. A. (1994). Capitalism, socialism, and democracy. Routledge. (Original work published 1942).
Clark, D. (2021, August 27). Average company lifespan of S&P 500 companies 1965-2030. Statista.
Geiger, A. W. (2019, April 8). How Americans see automation and the workplace in 7 charts. Pew Research Center.
General Motors (#1), Ford (#3), Chrysler (#10): Forbes 500. (1978). 1978 Full List. CNN Money.
Dimon, J. (2021, April 7). Chairman and CEO Letter to Shareholders. Annual Report 2020. JPMorgan Chase & Co.
This $12 Billion Tech Investment Could Disrupt Banking. (n.d.). JPMorgan Chase & Co. Retrieved December 22, 2022.
Olito, F. (2020, August 20). The Rise and Fall of Blockbuster. Insider.
Christensen, C., Raynor, M. E., & McDonald, R. (2015, December). What is Disruptive Innovation? Harvard Business Review.
Office of Advocacy (2021, November 3). Frequently Asked Questions About Small Business, 2021. U.S. Small Business Administration.
A Look at Nonemployer Businesses. (2018, August). U.S. Small Business Administration, Office of Advocacy.
ChatGPT is an application created by OpenAI. The large language model (GPT-3) behind the original application contained more than 175 billion parameters and would take more than 350 years to train on a single graphical
processing unit (GPU). Li, C. (2020, June 3). OpenAI's GPT-3 Language Model: A Technical Overview. Lambda.
I realize corporations do more than organize human labor, but economic theories for why firms exist are largely grounded in human labor. Coase, R. H. (1937). The Nature of the Firm. Economica, 4.
Federal Reserve Bank of New York. (2019, August 14). Federal Reserve Bank of New York Releases Report on Experiences of Nonemployer Small Businesses.
Libert, B., & Beck, M. (2017, July 24). AI May Soon Replace Even the Most Elite Consultants. Harvard Business Review.
Martineau, K. (2022, August 24). What is federated learning? IBM.
Dayan, I., Roth, H. R., Zhong, A., et al. (2021, September 15). Federated Learning for Predicting Clinical Outcomes in Patients with COVID-19. Nat Med, 27.
This is apparently a long-debated topic. Balter, M. (2015, January 13). Human language may have evolved to help our ancestors make tools. Science.
Twenge, J. M., Campbell, W. K., & Sherman, R. A. (2019). Declines in vocabulary among American adults within levels of educational attainment, 1974–2016. Intelligence, 76.
Typing.com. (2022, August 29). Typing speed: How to set your words-per-minute (WPM) goal.
VirtualSpeech. (2022, November 8). Average speaking rate and words per minute.
Roose, K. (2020, April 16). Welcome to the 'Rabbit Hole.' New York Times.
Coined by Donald Hebb in 1949.
Taylor, P. (2023, February 2). Global smartphone unit shipments 2009-2022. Statista. Retrieved May 5, 2023.
Kolmar, C. (2023, February 27). 25+ Amazing Virtual Reality Statistics [2023]: The Future of VR + AR. Zippia. Retrieved May 23, 2023.
Ceci, L. (2022, June 14). Average time spent daily on a smartphone in the United States 2021. Statista. Retrieved December 22, 2022.
Wiggers, K. (2022, August 23). Celonis secures another $1B to find and fix process problems in enterprise systems. TechCrunch.
I realize there is another scenario where only one runner finishes the race. That is why I felt compelled to mention brain-computer interfaces. It helps me sleep at night.
Think about the decisions a CEO makes. Should the company launch a new product? Is a high-performing leader ready for a promotion? How much should the company pay to acquire another firm? The complexity of these decisions
is increasingly within the reach of machines. Manyika, J., Chui, M., & George, K. (2017, February 3). 25% of CEOs' time is spent on tasks machines could do. The Harvard Business Review.
Cavanagh, A. J., Chen, X., Bathgate, M., Frederick, J., Hanauer, D. I., & Graham, M. J. (2018). Trust, growth mindset, and student commitment to active learning in a college science course. CBE Life Sciences Education, 17(1), ar10.
I love reading, but there is such a thing as reading too much. Stuffing your brain with ideas from others is not the same as building expertise. You need time to process what you read and formulate your own ideas. Be careful…after
you finish reading this book, of course.
Completion rates were around 3% in 2018, based on this analysis. Lederman, D. (2019, January 16). Why MOOCs didn't work, in 3 data points. Inside Higher Ed.
Prompt engineering has become a marketable skill. It remains to be seen how long this skill will be valuable as more humans learn how to communicate with large language models. Popli, N. (2023, April 14). The AI job that pays up
to $335K—and you don't need a computer engineering background. Time.
This is sadly becoming too common. If you want to explore this phenomenon and its implications, I suggest the book Tim Urban spent six years writing. Urban, T. (n.d.). What's our problem?: A self-help book for societies. Wait But
Why. ISBN: 9798987722602
If you crave a step-by-step guide, visit my website (www.artificiallyhuman.com). I provide instructions for using a large language model to create a personalized learning plan in just a few minutes.
Let me save you a search. “vi” is a text editor created for the UNIX operating system. It is like Notepad or whatever text editor you have on your computer, except none of the keys do what you think they should.
API stands for Application Programming Interface, which allows applications to communicate using standard formats. For example, a weather API can send you the 7-day forecast if you provide coordinates for your location.
American Psychological Association. (2006, March 20). Multitasking: Switching costs.
Seriously, can we please stop faxing?
These companies make products like those you find in the grocery store.
You can find more information about these design methods in this article. Herzberg, G., et al. (2020, August 25). The imperatives for automation success. McKinsey.
Selingo, J. (2018, January 8). The false promises of worker retraining. The Atlantic.
Pink, D. (2011). Drive: The surprising truth about what motivates us. Riverhead Books. ISBN: 978-1594488849
Beck, K., et al. (2001, February). Manifesto for agile software development.
Kodak, Blockbuster, and Nokia were powerful brands not long ago.
This survey found that firms expect a 15 percent increase in revenue from automation. Given the potential unit cost advantages, that is a rounding error. Watson, J., et al. (2020). Automation with intelligence. Deloitte University
EMEA CVBA.
da Costa, P. N. (2019, March). Tech talent scramble. International Monetary Fund.
Six Sigma uses a ranking system modeled after martial arts in an attempt to make operations and statistics seem cool. Certification Academy. (n.d.). Six Sigma belts levels explained.
Buffett, W. (2001). Letter to shareholders. Berkshire Hathaway Annual Report.
Corporate Finance Institute. (2022, December 28). Growth accounting equation.
Several of the drivers described in this report are influenced by technology (e.g., rising and faster depreciation). The “capital substitution and automation” driver is not the only relevant factor. Manyika, J., et al. (2019, May 22). A new
look at the declining labor share of income in the United States. McKinsey.
University of Groningen and University of California, Davis. Share of Labour Compensation in GDP at Current National Prices for United States [LABSHPUSA156NRUG]. Retrieved May 31, 2023, from FRED, Federal Reserve
Bank of St. Louis.
French, K. (n.d.). Portfolios Formed on Book-to-Market - Value Weighted Returns - Annual. Retrieved May 31, 2023.
Saluteder, D. (2022, August 8). Dot-com bubble explained | The true story of 1995-2000 stock market. Finbold.
Graham, B., Dodd, D., & Klarman, S. (2008). Security Analysis (6th ed.). McGraw-Hill Professional Publishing. (Original work published in 1934). ISBN: 978-0071592536
This is the “free lunch” offered by modern portfolio theory. Markowitz, H. (1952). Portfolio Selection. The Journal of Finance, 7(1).
Popular small-cap U.S. stock market index.
This is a good primer on RPA. Mullakara, N. (2019, April 10). The Remarkable History of Robotic Process Automation (RPA).
UiPath. (2021, June 15). UiPath Community Grows to More Than 1.5 Million Members And UiPath Announces Three New Features to Increase Career Opportunities. Press release.
Here is an example of how investors extrapolated ARR growth trends around the time of the UiPath IPO. Ma, J. (2021, March 29). UiPath S-1 & IPO Teardown. Public Comps. Retrieved May 24, 2023.
This is admittedly anecdotal. You would think these metrics would be readily available if they told a favorable story.
Here is a primer on UBI and how it would be used to offset job losses due to automation. Miller, K. (2021, October 20). Radical proposal: Universal basic income to offset job losses due to automation. Stanford University Human-
Centered Artificial Intelligence.
National Park Service. (n.d.). Thomas Jefferson Memorial inscriptions. U.S. Department of the Interior.
U.S. Copyright Office. (2022). Copyright law of the United States and related laws contained in Title 17 of the United States Code (Circular 92).
I realize the YouTube content review algorithm has its faults. Creators are frequently demonetized for dumb reasons and have little recourse. However, please show me a human-powered process that can review nearly four million
new videos daily.
Quartz. (2017, February 17). The robot that takes your job should pay taxes, says Bill Gates.
Office of Management and Budget. (n.d.). Historical Tables. Table 2.1, “Receipts by Source: 1934–2025.” The White House. Retrieved May 31, 2023.
My undergraduate degree is in mechanical engineering. I would be remiss if I did not include at least one reference to the field. If you are an engineer, please try not to let it bother you that I fail to mention where you apply the force
also matters.
Ransom, T. (2022). Labor Market Frictions and Moving Costs of the Employed and Unemployed. Journal of Human Resources, 57. University of Wisconsin Press.
Fry, R. (2022, December 2). For today's young workers in the U.S., job tenure is similar to that of young workers in the past. Pew Research Center.
The White House Office of Science and Technology Policy. (n.d.). Blueprint for an AI Bill of Rights. Retrieved May 24, 2023.
I find it frustrating when experts claim AI is not conscious without offering a coherent theory of human consciousness. Either provide a testable hypothesis or stop asserting AI will never be conscious.
One consequence of studying AI and neuroscience is that I have become more supportive of animal rights. We value physical and psychological safety and hope a super-intelligent species would do the same. I have difficulty squaring
that desire with how humans treat less intelligent species.
If you are interested in the current state of evolutionary algorithms, start here. Sloss, A. N., & Gustafson, S. (2019, June 24). 2019 Evolutionary algorithms review. Arm Inc. and MAANA Inc.
Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
Marx, P. (2023, February 12). A.I.'s dirty secret. Insider.
This is tricky. Globalization and automation impact many of the same activities. If work is offshored and later automated, the first group of affected workers sees globalization as the enemy. The second group sees technology as the
enemy. People are prone to misattributing blame for worker displacement. Wu, N. (2022, July). Misattributed blame? Attitudes toward globalization in the age of automation. Political Science Research and Methods, 10(3).
Mathews, J. (2019, April 25). Half the world is bilingual. What's our problem? The Washington Post.
U.S. Bureau of Labor Statistics. (2022, September 8). Occupational outlook handbook: Interpreters and translators. Retrieved April 9, 2023.
A few months after I wrote this sentence, GPT-4 from OpenAI passed the Uniform Bar Exam for lawyers in the 90th percentile. The previous model, GPT-3.5, had scored in the 10th percentile. I did not bother re-writing the sentence
because it will be outdated again soon. OpenAI. (2023, March 14). GPT-4 announcement.
National Student Clearinghouse Research Center. (n.d.). Enrollment Coverage 2003-2016 (March 2022). Retrieved May 25, 2023.
Clynes, T. (2017, February 22). Peter Thiel thinks you should skip college, and he'll even pay you for your trouble. Newsweek Magazine.
Many complain about the rising price of a four-year degree, but few talk about how discounts have also increased. The price of college is indexed to the top income brackets, which allows institutions to extract maximum value from
wealthy families while providing discounts to families who cannot afford the full price. I believe the headline price of college in the U.S. is climbing because the income of the top 1% is climbing. Arguing for a reduction in the
nominal price is primarily (but not entirely) an argument that benefits the wealthy.
The spectrum of ideas on public policy and social issues considered acceptable by the general public at a given time. (Oxford English Dictionary)
Levitt, P. (2009). The Science of Early Brain Development: A Foundation for the Success of Our Children and the State Economy. National Scientific Council on the Developing Child.
Here is a primer on neuroplasticity. Cherry, K. (2022, November 8). What is neuroplasticity? Verywell Mind.
Center for the Developing Child at Harvard University. (n.d.). Brain architecture.
This acceleration is well documented. McGrath, R. (2013, November 25). The pace of technology adoption is speeding up. Harvard Business Review.
Ghilarducci, T. (2021, December 17). Too many older workers are stuck in low-wage jobs. Forbes.
Shah, D. (2021, December 1). By the numbers: MOOCs in 2021. Class Central.
Dalen, J. E., Ryan, K. J., & Alpert, J. S. (2017). Where have the generalists gone? They became specialists, then subspecialists. The American Journal of Medicine, 130(7).
Webley, K. (2013, January 31). Should colleges ban double majors? Time.
This may be a problem AI can solve. Large language models do a decent job of translating academic papers into language accessible to experts in other fields.
I decided against naming the University, as similar analyses for other schools yielded the same results.
Bondanini, G., et al. (2020). Technostress dark side of technology in the workplace: A scientometric analysis. International Journal of Environmental Research and Public Health, 17(21).
This information was sourced from ChatGPT. Here are the relevant parts of its response: “The training process took several months to complete and consumed an estimated 285,000 processor-hours…The actual computing power
required to run GPT-3 will depend on the specific use case and the size of the input data, but it can be run on a single GPU or CPU, albeit with slower performance.”
