
Data Centers of the Future

Reaching Sustainability

Jon Lin and Gary Aitkenhead

Beijing Boston Farnham Sebastopol Tokyo


Data Centers of the Future
by Jon Lin and Gary Aitkenhead
Copyright © 2023 O’Reilly Media. All rights reserved.
Printed in the United States of America.
Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA
95472.
O’Reilly books may be purchased for educational, business, or sales promotional
use. Online editions are also available for most titles (http://oreilly.com). For more
information, contact our corporate/institutional sales department: 800-998-9938 or
corporate@oreilly.com.

Acquisitions Editor: Jennifer Pollock
Development Editor: Angela Rufino
Production Editor: Katherine Tozer
Copyeditor: Paula Fleming
Interior Designer: David Futato
Cover Designer: Randy Comer
Illustrator: Kate Dullea

April 2023: First Edition

Revision History for the First Edition


2023-04-17: First Release

The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Data Centers
of the Future, the cover image, and related trade dress are trademarks of O’Reilly
Media, Inc.
The views expressed in this work are those of the authors and do not represent the
publisher’s views. While the publisher and the authors have used good faith efforts
to ensure that the information and instructions contained in this work are accurate,
the publisher and the authors disclaim all responsibility for errors or omissions,
including without limitation responsibility for damages resulting from the use of
or reliance on this work. Use of the information and instructions contained in this
work is at your own risk. If any code samples or other technology this work contains
or describes is subject to open source licenses or the intellectual property rights of
others, it is your responsibility to ensure that your use thereof complies with such
licenses and/or rights.
This work is part of a collaboration between O’Reilly and Equinix. See our statement
of editorial independence.

978-1-492-09898-0
[LSI]
Table of Contents

Data Centers of the Future
  Data Centers and the Digital Economy
  The Shape of the Problem
  Innovating with Energy
  Innovating with Cooling
  Innovation with IT Equipment
  Buildings for the Future
  The Future Begins Now
Data Centers of the Future

Humanity is suffering from a climate crisis. You don’t have to look
far to see the evidence: prolonged droughts, temperatures up to
105°F (40°C) in places like London where the temperature rarely
breaks 85°F (30°C), increasingly destructive wildfires, and the grow‐
ing loss of natural habitats impacting a variety of species and ecosys‐
tems. Building an inclusive, equitable, and safe world goes hand in
hand with sustainable growth. Every business has to look at itself
and think about its role in solving these problems, and the data
center industry is no exception.
Three big trends are driving growth in every business and, in turn,
driving the need for sustainable data centers. First, every business
is a digital business, and every business that wants to succeed in
the 21st century needs to negotiate its own transition to the digi‐
tal economy. The technology industry has buzzed about big data,
data science, and now machine learning and artificial intelligence,
requiring larger and richer datasets at each stage. Any executive
knows that, whatever else the company is about, it’s about data. As
we move further into a digital economy, we see data at a scale that
was unimaginable in the recent past. Petabytes are common, some
companies have exabytes, and the industry is trending toward yotta‐
bytes and beyond. Wherever you have data at scale, you inevitably
have data centers, and the data centers will grow in number as well
as size and density.
The second trend has to do with the way businesses now use data.
It’s been years since data was primarily used for batch jobs that
ran overnight. Businesses now need data for real-time, or quasi
real-time, applications. While there are many different kinds of
data, all customers want their data as soon as possible. After all,
you don’t want to wait a few seconds for that Netflix movie to
start. Automated stock-trading software needs to place bids in milli‐
seconds, before competitors can bid. Minimizing latency must be
factored into the design and location of data centers, leading to the
deployment of thousands of new, “far edge” data centers.
A third trend: the world is getting warmer. Greenhouse gases have a
real effect on the climate. The goal of the Paris Agreement is to limit
global warming to no more than 2°C and ideally less than 1.5°C.
As temperatures climb and natural disasters become more common,
public pressure to “do something” about greenhouse gas emissions
increases. That public pressure has become customer and regulatory
pressure. And it should. Whatever the industry, companies need to
consider themselves stewards of the world we share. Data centers
are no exception. Environmental stewardship will be key to our
survival.
Data centers sit at the intersection of all these trend vectors. They
use a lot of power (currently estimated at over 1% of the world’s
electricity), and increased demand for data guarantees that power
usage will only increase, even as data centers become more efficient.
The US Energy Information Administration estimates that world‐
wide energy use will increase 50% by 2050, driven by economic
growth worldwide rather than any specific industry. Latency consid‐
erations mean that data centers need to be located near population
centers where renewable energy may be unavailable; locating them
in areas with rich hydroelectric, wind, and solar power is costly,
and it’s rarely an option if latency is a consideration. Data center
customers—the businesses whose servers are located within the data
center—are applying pressure on data center operators to reduce
greenhouse gas emissions. Legislative pressure on data center opera‐
tors is also increasing: Singapore enacted a moratorium on new data
center construction (recently lifted), and the European Union has
put in place regulations about server efficiency, codes of conduct for
data center operators, and more. The US Securities and Exchange
Commission (SEC) has proposed a rule requiring corporations to
disclose climate-related data, including greenhouse gas emissions, to
investors. The news cycle also applies pressure. You don’t have to
look far to read about people’s negative perceptions of data centers,
whether they’re big internet server farms, massive cryptocurrency
mining camps, or large colocation facilities with global reach.



The current situation isn’t pretty. Data centers are viewed as big, hot,
power-hungry installations. But that’s not the reality of this story.
Data centers have come a long way since the 1970s and 1980s. We
now know a lot more about how to conserve and reuse energy, we
know a lot more about renewable energy, and we know a lot more
about building energy-efficient buildings. Is it possible, in 2023, to
build a data center that’s truly sustainable? Not yet—but we can get a
lot closer to that goal than we could a few years ago.
This report investigates what it takes to build a sustainable data
center: what can be done now, what we will be able to do in the
future, and how to build data centers that are future ready. The Data
Centers of the Future will be clean, efficient, and upgradable to more
environmentally sound technologies as they become available. These
are the Data Centers of the Future (DCoF). We will have to live
with them for 30 years or more, so if we are to reach our climate
commitments, we have to start now.

Data Centers and the Digital Economy


We are living in a digital economy, sometimes called the “Fourth
Industrial Revolution”. Our personal lives are mediated by data, and
that data is served by data centers. We don’t watch (analog) TV or
go to theaters; we stream video. We don’t send (postal) mail; we
communicate with friends and family via email, texts, and social
media. We don’t read (print) books; we download them to a tablet.
This isn’t news to anyone, but it’s important to note that this shift
has taken place in only 20 or 30 years and has accelerated in the
last 5.
Businesses have been through the same digital transformation.
Many are still going through that change, and those that have
already made the change are continuing to evolve because it’s a
constantly moving target. It’s difficult to find a business that can’t
take orders online; even small, local businesses can offer nationwide
service and delivery. The digital transformation even touches the
physical plant: smart thermostats and smart lighting produce data
that can be used to optimize heating, air-conditioning, and light‐
ing, thereby reducing energy costs. However, many businesses still
haven’t integrated data into the core of their processes, from manu‐
facturing to finance to customer support. The businesses that thrive
will be those that are completely engaged in the data economy.



Infrastructure for the economy of the 20th century was visible:
roads, bridges, airports. This infrastructure is still relevant and
can teach us some important lessons. The digital economy’s infra‐
structure is just as real as the 20th century’s, but less visible: net‐
works, massive data centers, interfaces, terminals where people work
(phones, tablets, laptops, desktops). It’s just as real, but far less tangi‐
ble. Both kinds of infrastructure are fundamentally about transpor‐
tation—bits in one case, physical goods in the other—and, while
they seem disconnected, we can learn a lot by comparing them. But
because the infrastructure of the data economy is far less tangible—
it’s become part of the plumbing, moved into the walls as it were—
it’s easy for consumers (including businesses) to ignore what it takes
to make it work: the very real energy and cooling requirements,
along with the difficult task of optimizing performance and reliabil‐
ity while minimizing emissions. The “cloud” sounds so clean and
magical, but it doesn’t exist without some of the largest data centers
in the world. It’s ironic that data centers are often perceived as
“dirty,” while “the cloud” is clean. In reality, the “cloud” metaphor
hides the cost of doing real work, all of which takes place in data
centers.
Businesses in the data economy thrive on interconnection: direct
engagement with customers, suppliers, and business partners. In
the last millennium, business was one-to-one; think phone calls, fax
machines, trains, trucks, and delivery vans. In the digital economy,
it’s many-to-many. Data sharing, negotiations, and deals can take
place among many partners. That sharing can only be facilitated
by digital networks. It requires fast, high-volume, low-latency com‐
munication that ideally takes place within a data center (assuming
that both companies are using the same colocation facility) or uses
very high-speed trunks between data centers (ideally, between data
centers in close proximity), rather than the public internet.



Figure 1. Evolution of mobile communications (https://link.springer.com/article/10.1007/s12652-020-02521-x)

Latency, along with last-mile connectivity, is also an issue for the
customers whom the data center’s clients are serving. When some‐
one streams video, they don’t want to wait for the video to begin;
they want it to start immediately. That problem can be solved by
caching the beginnings of videos in smaller, shared data centers
that are close to large metropolitan areas while serving the bulk of
the videos, after the startup, from a larger corporate data center.
Many kinds of businesses can benefit from similar patterns; research
presented at one of O’Reilly’s Velocity conferences showed that users
are sensitive to surprisingly small amounts of latency.
But the benefits come with a cost, since there are constraints on
where data centers can be located and their locations affect strategies
for buying electrical power. In some areas, buying clean power is
difficult. At an extreme, in some places the electrical grid can’t pro‐
vide sufficient power, and the data center has to power itself. Real
estate and construction prices are also likely to constrain building
options. In the physical economy, repairing a bridge could cause
traffic to back up for hours. In the data economy, when a data center
goes offline, businesses and other critical infrastructure come to a
halt. That’s not acceptable; losses can run into the millions of dollars
per minute.
Therefore, data centers, along with the networks that interconnect
them, have to be reliable. “Five nines” availability (99.999%), which
originated in the public telephone network, is equivalent to about
5 minutes of downtime per year. Most of a data center’s customers
want at least five nines of uptime, if not six nines (99.9999%, or 30
seconds per year), and some data centers are already reaching seven
nines. In turn, this means that data centers must have carefully
planned backup systems for everything: network connections and
power, certainly, and even the CPUs and disk drives themselves. You
cannot design a system with five or six nines availability unless you
eliminate single points of failure. Most of the work that goes into
designing a modern data center goes to meeting these extremely
high reliability standards.
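
To make the arithmetic concrete, here is a minimal Python sketch, not taken from any vendor tool, that converts an availability figure into the downtime it allows per year; the five- and six-nines numbers quoted above fall out directly.

    def annual_downtime_minutes(availability_percent: float) -> float:
        """Maximum downtime per year, in minutes, for a given availability."""
        minutes_per_year = 365.25 * 24 * 60
        return minutes_per_year * (1 - availability_percent / 100)

    for nines in ("99.999", "99.9999"):
        print(nines, round(annual_downtime_minutes(float(nines)), 2), "minutes/year")
    # 99.999  -> about 5.3 minutes/year ("five nines")
    # 99.9999 -> about 0.5 minutes/year, roughly 30 seconds ("six nines")
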
The infrastructure of the 20th century was expensive to build, main‐
tain, and replace, but sustainability was rarely an issue. Data centers
are also expensive to build, maintain, and replace. But unlike with
our older infrastructure, the reality of global warming requires us
to design carefully for environmental stewardship. Fortunately, data
centers can evolve in ways that bridges and highways can’t; the data
economy is infinitely more flexible than the physical economy. We’re
at an inflection point in which new hardware, new software, alterna‐
tive fuels, onsite power generation, new cooling technologies, better
management tools, and other innovations are becoming available.
This is a time to embrace such innovation—both in building new
data centers and in upgrading existing data centers for increased
capacity. Innovation is enabling us to build the Data Centers of the
Future.

The Shape of the Problem


It’s easy to make great predictions about how data centers might
operate, but all too often those predictions fall apart when they meet
reality. There’s no question that the reality of data centers is com‐
plex. Although we think immediately of the mammoth data centers
built by large enterprise technology companies, data centers exist on
a multidimensional continuum, each with its own requirements and
capabilities.
Size
The largest data centers occupy multiple acres, and a data cen‐
ter company often builds many data centers on one site. The
smallest micro data centers are scarcely larger than a shipping
container. Although many forces are driving data centers to be
larger and denser, there is also a growing market for local data
centers built for a specific customer’s needs. Prefabricated micro
data centers that can be deployed quickly will certainly have
their applications.
Location
In the best of all possible worlds, data centers would be located
in areas with abundant wind power, solar power, and water.
In real life, data centers often need to be located near major
population concentrations, where power options are limited.
Real estate is also limited, which may constrain options for
alternative power; for example, it’s impossible to find real estate
for a significant wind farm installation in a major metropoli‐
tan area. In some densely populated areas, sufficient electrical
power for a data center may not be available at all; Singapore
is one place where this is a problem. Water availability can be
an issue, particularly in the southwestern US. Data centers are
also subject to local zoning and construction codes, which affect
what can be built and how. On the other hand, locating a data
center near a city or industrial center can provide opportunities
to reuse waste heat in a district heating network.
Tenancy
Companies that operate data centers, like Equinix, are building
colocation facilities and providing a variety of “as a service”
(AAS) solutions from digital to bare metal. Their customers
buy servers and disk storage, which the company operates and
maintains. While colocation has big advantages for the data
center’s customers, such as shared infrastructure and extremely
low latency networking to partners in the same facility, the
colocation business model puts constraints on the options avail‐
able for cooling and power. The data center must be able to
accept whatever equipment the customer wants to install. At the
other end of the spectrum, the massive data centers built by
Fortune 50 technology enterprises and their peers have a single
tenant. Those companies are in complete control of what goes
into the data centers and therefore have more flexibility over
issues like cooling and power. Colocation presents significant
challenges for designing and operating a data center efficiently
and needs the engagement and support of customers to achieve
broader sustainability goals.
Specialization
It’s possible to build data centers where most (if not all) of the
tenants are in related businesses—for example, a financial data
center—on the assumption the tenants will need to transact
business at high speed with each other. In a specialized data
center, communications between tenants stay within the facility,
without incurring the latencies of an external network. It is also
easier to build high-speed connections to other data centers that
are used by the same industry.
Age
A data center is an investment with a lifetime of 20 to 30 years.
Any company’s fleet of data centers ranges from older centers
to new construction. New data centers can, of course, be built
according to best current practices. Older data centers and the
servers they house need to be upgraded or renewed to maintain
their value. Servers that are 20 years old are close to worthless,
except perhaps to a recycler; servers have gotten much more
efficient and smaller over time—but also denser, increasing
their cooling and power requirements. Renewing and upgrading
older data sites is a constant process.
Now that we can locate any data center in a five-dimensional space,
what can we conclude? There’s one conclusion, and it is trivially
obvious: no two data centers are alike. It’s easy to think that data
center companies stamp out identical data centers using the same
blueprints, but that’s not likely. Even companies that have complete
control of the contents of their data centers have to contend with
equipment that gradually becomes outdated, the constraints of
locale-appropriate cooling and power (you don’t want to use water
cooling where the water supply is limited), and more. Each data
center is sui generis, its own thing, and needs to be optimized on its
own terms.
Optimizing a physical data center still isn’t the entire problem. Other
issues need to be considered if we’re going to operate sustainably. A
data center operator needs to be transparent about its energy use.
The company can’t just make statements about the importance of
climate issues without backing up those statements with real action.
Statements about environmental responsibility should reflect clear
goals and show how the company is working to achieve those goals.
The statements also need to be honest—it’s unfortunate that we
have to say that, but we do. Sustainability reports have to show real
progress, not just greenwashing.
A data center is also part of a community and needs to function
well within the community. It can’t be an eyesore or a source of
noise pollution that angers local residents but always gets its way.
A data center operator should model good behavior—both in fol‐
lowing building codes and ordinances and in promoting practices
like diversity and inclusion. Being an active part of communities
through internship programs, partnerships with local universities
and other educational institutions, and other community relation‐
ships is another way for a data center operator to give back to the
communities in which they’re located.
Data centers can innovate in many ways to become sustainable;
this is only a start. The steps taken for any specific data center will
depend on that data center’s characteristics. As we said, every data
center is unique.

Innovating with Energy


Any discussion of building a sustainable data center has to start
with power. Power isn’t the only issue, but it’s certainly the most
obvious. Regardless of the data center’s efficiency, it will require a lot
of electrical power; by today’s standards, a 1-megawatt data center
is small. The power available from the grid ranges from moderately
clean (with a lot of renewable energy sources in the mix) to dirty
(primarily coal-fired generators). In the US, the Environmental Pro‐
tection Agency (EPA) provides a tool that shows the carbon emis‐
sions per megawatt by region, state, or generating plant. However,
a data center can easily be a utility’s largest single customer and as
such can put pressure on utilities and power generators to adopt
more climate-friendly energy sources. In regulated areas that have
many data centers, and where the dominant (monopoly) electrical
utilities have little or no renewable power, data center operators
have worked together and successfully pressured the utilities into
adding green power to their mix. This strategy has worked success‐
fully in Virginia and North Carolina, both of which have a large
concentration of major data centers. In North Carolina, Apple, Face‐
book/Meta, and Google persuaded Duke Energy to offer a new tier
of renewable energy service.1
Data center operators use physical power purchase agreements
(PPAs) and virtual power purchase agreements (VPPAs) to encour‐
age project developers to build renewable energy capacity. PPAs
are long-term contracts for the purchase of renewable energy (typi‐
cally wind or solar). They guarantee the developer a fixed price for
renewable energy, thus making it easier for the developer to finance
a renewable energy project. VPPAs are similar in concept, except
that they don’t require the energy generated to be physically deliv‐
ered to the data center. VPPAs are typically executed between data
center operators and project developers without regard to the load
zone where the data center is located. They are a way to compensate
for carbon-intensive energy in areas where renewable energy isn’t
available or economically viable. Both PPAs and VPPAs give the
data center operators Energy Attribute Certificates (EACs), which
certify the generation and delivery of renewable energy to the grid,
and include an audit trail that allows green energy to be accounted
for and tracked.
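
As a rough illustration of how certificates are reconciled against consumption, the following Python sketch (our own simplification, not a formal accounting method) matches EACs, each representing 1 MWh of renewable generation, against a facility's annual energy use; the load and certificate volumes are hypothetical.

    def renewable_coverage(consumption_mwh: float, eacs_mwh: float) -> float:
        """Percentage of annual consumption matched by Energy Attribute Certificates."""
        if consumption_mwh <= 0:
            return 0.0
        return min(100.0, 100.0 * eacs_mwh / consumption_mwh)

    # Hypothetical facility: 20 MW average load running all year.
    annual_consumption = 20 * 8760          # about 175,200 MWh
    eacs_retired = 150_000                  # MWh of certificates from PPAs and VPPAs
    print(f"{renewable_coverage(annual_consumption, eacs_retired):.1f}% of consumption matched")
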
eBay solved the problem of dirty power by turning the tables on
a local utility that was generating almost all of its power with coal-
fired generators.2 For its Topaz data center in Utah, eBay built its
own generating facility, powered primarily by natural gas fuel cells,
in combination with other technologies such as solar and wind.
That facility powers the data center, and the local power grid has
been relegated to backup power. While natural gas fuel cells are not
carbon neutral, they emit much less CO2 than coal-fired generators.
eBay has financed other projects to offset the fuel cells’ carbon
emissions. Equinix is using a similar approach at some of its data
centers.

1 Greenpeace, “Clicking Clean”, April 2014, 31.


2 Greenpeace, “Clicking Clean”, 29.



Power in the Building
Electrical distribution is a complex arrangement of voltage and
phase combinations, known to the domestic consumer only as
120V or 220V AC (alternating current) single-phase. In North
America, power arrives at a data center through transmission or
distribution services ranging from 115kV to 13kV AC three-phase.3
For the power to be useful, a number of additional “step-down
transformers” bring the 13kV AC three-phase power down to usable
120/220/230V AC single-phase.
In a typical North American system, utility power delivered from
the grid is stepped down from 13 kV to 480V three-phase with an
on-site transformer for delivery into the building, where it serves
uninterruptible power supply (UPS) systems and mechanical loads.
The 480V three-phase power from the UPS is distributed to multi‐
ple transformers in the data hall, where power is stepped down to
415/230V or 208/120V three/single-phase. Redundancy is built in at
every stage so that a failure anywhere in the chain won’t take the
data center down. When power fails, large battery-backed UPSs take
over immediately, while diesel generators start. A UPS with a fully
charged battery can supply power for 5–10 minutes, and in most
designs, the generators should be running in under 15 seconds.
When the generators are running, the transfer switch connects load
to them.
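
The sequence above can be summarized in a short sketch. This is an illustrative model only, using the 15-second generator start and the battery runtime quoted in the text rather than any controller specification.

    def backup_power_source(seconds_since_utility_loss: float,
                            generator_start_seconds: float = 15.0) -> str:
        """Which source carries the load after a utility outage (simplified model)."""
        if seconds_since_utility_loss < generator_start_seconds:
            return "UPS battery"                  # carries the load immediately
        return "generators via transfer switch"   # well within the 5-10 minute battery window

    for t in (1, 10, 20, 600):
        print(t, "seconds after outage:", backup_power_source(t))
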
Standardizing on 48V DC (direct current) power distribution
throughout the data center, rather than 12V DC or 120V AC, will
make the power distribution system easier to optimize. The higher voltage
reduces the current carried by the power buses, in turn reducing
electrical losses and the amount of copper required. The individual
servers are responsible for converting 48V to the voltages used by
the CPUs, disk drives, and memory chips.
A conversion stage appears anytime a voltage changes or the power
changes from AC to DC. Extremely efficient AC transformers can
operate at 97.0%–99.7% efficiency, while some low-performing
DC-DC converters may be only 80% efficient.

3 Power transmission varies from country to country and from grid to grid. This discus‐
sion describes North American practices; other countries will be similar.



The fewer power conversions required before a load consumes the
power, the higher the data center efficiency overall. In a typical data
center today, for a CPU to consume power, four conversions are
required at a minimum:
480V 3 Phase AC → 415V/230V 3 Phase/1 Phase AC → 12V DC →
1V DC → CPU

Assuming an overall power conversion efficiency of 97% (realistically
it is much lower), a 20-megawatt data center loses 600 kilowatts of
power to conversion. It is possible to drop several of the
conversion steps by changing the power architecture. An implemen‐
tation being used by some leading-edge companies is this:
480V 3 Phase AC → +48V DC → 1V DC → CPU

With fewer conversion steps, end-to-end conversion efficiency can
begin to approach 99.7%. One of the reasons for 13-kilovolt AC
distribution is to overcome distribution loss; the lower the voltage,
the higher the energy loss to heat during transmission. In addition,
+12V DC systems are being transitioned to +48V DC to overcome
increasing loss in IT equipment.
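
To see how conversion steps compound, here is a small Python sketch that applies an assumed per-conversion efficiency to the two chains shown above; the 97% figure is illustrative, and real stages vary widely.

    it_feed_mw = 20.0
    per_stage = 0.97                # illustrative per-conversion efficiency
    legacy_eff = per_stage ** 4     # four conversions in the traditional chain above
    dc48_eff = per_stage ** 2       # two conversions in the 48V DC architecture
    for name, eff in (("legacy AC chain", legacy_eff), ("48V DC chain", dc48_eff)):
        lost_kw = it_feed_mw * 1000 * (1 - eff)
        print(f"{name}: {eff:.1%} end-to-end, ~{lost_kw:.0f} kW lost on a 20 MW feed")
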

Figure 2. From the utility to the servers: how power gets to the racks
reliably



Backup Power Sources
More than anything else, customers want their data centers to be
reliable. Reliability can’t be compromised on the path to decarbon‐
ization. Therefore, data center designers work hard to eliminate
single points of failure (SPOF). One of the most likely points of
failure is commercial power. Every organization that needs continu‐
ous uptime has to have backup power, most often from a generator
that runs on diesel fuel. Diesel is often the only fuel that’s readily
available, especially under emergency circumstances.
However, there are good alternatives. Hydrotreated vegetable oils
(HVO) aren’t carbon neutral but are an improvement, reducing
greenhouse gas emissions by up to 90% compared to diesel. There
are multi-fuel generators that can run on either diesel or HVOs. To
build the Data Centers of the Future, we start with multi-fuel gener‐
ators and upgrade generators at older data centers to be compatible
with HVO and other alternative low-carbon fuel sources.
As we’ve mentioned, fuel cells can be used as a primary power
source. They have high power densities (they can generate a lot
of power in a relatively small space), and they’re quiet, so they’re
suitable for urban sites; residents in the vicinity of a diesel generator
are going to notice the noise, in addition to smoke from the exhaust.
Fuel cells also operate at efficiencies of up to 60%, significantly
better than almost any diesel generator system.
Fuel cells running on hydrogen produce no greenhouse gases; their
only waste outputs are water and heat. Fuel cells that rely on natural
gas as a feedstock that is converted to hydrogen can use wastewater
in the steam reformation process. Wastewater from a fuel cell can
also be recycled as part of the building’s “gray” (nonpotable) water
supply, making a facility water neutral or even water positive, gener‐
ating more water than it consumes. Being water positive is particu‐
larly important for data centers located in regions with limited water
supplies.
While all hydrogen is chemically the same, to build a climate-
neutral backup power supply, you have to consider the source. Most
commercial hydrogen is produced through high-temperature meth‐
ane reforming or electrolysis. Both techniques require significant
energy, which is often “dirty” energy from coal-fired power plants.
Clean hydrogen, hydrogen that is produced using wind or solar
power, is expensive and isn’t widely available. We expect production
of clean hydrogen to increase, in part because clean hydrogen is
likely to become an important technology for storing power from
solar arrays for use at night and other times when solar isn’t avail‐
able. As the clean hydrogen economy develops, green hydrogen
costs will decrease, and the infrastructure for storing and transport‐
ing hydrogen will improve. Data center operators who are commit‐
ted to reaching climate neutrality will be important in creating the
demand for clean hydrogen infrastructure.
Methane, liquefied natural gas (LNG), and biogas (gas collected
from agricultural waste, sewage treatment, and other sources) can
also be used in fuel cells, essentially moving the high-temperature
reforming process on-site. These fuel cells are not carbon neutral,
but their greenhouse gas emissions are much lower than diesel or
HVO-fueled generators. Emissions are also much lower than if these
waste gases were just vented into the atmosphere, as they currently
are. Carbon capture technology could be used to eliminate the cell’s
CO2 emissions. As with generators, multi-fuel cells allow companies
like Equinix to build data centers that can easily be upgraded to
cleaner fuels in the future: operators can use fuels that are widely
available immediately and plan to convert to clean hydrogen later.
That flexibility, from designing in the ability to convert to better
fuels and energy sources as they become available, is the key to
building the Data Centers of the Future.

Alternate Power Sources


Wind and solar are renewable power sources. They aren’t appropri‐
ate for backup power, since their availability depends on weather
and other factors. For wind and solar to be effective backup solu‐
tions, they must themselves be backed by some kind of long-term
energy storage. Long-term energy storage is still an unsolved prob‐
lem, even though some very large battery arrays have already been
built. China has built an 800 megawatt-hour array of vanadium flow
batteries; a 450 megawatt-hour array in Australia is currently the
world’s largest lithium installation. Vanadium flow batteries don’t
have the energy density of lithium batteries, but they have much
longer lifetimes. Flow batteries based on iron are in development.
Using solar or wind energy to produce green hydrogen or to reform
methane into hydrogen is another potential way to store power for
use in a backup system.



While wind turbines have maximum generating capacities up to 15
megawatts, it’s important to realize that the power rating assumes
maximum wind speed, which is almost never the case; most of the
time, the turbine will be generating much less than its maximum
capacity. Furthermore, the largest turbines are designed for offshore
use; on land, the largest turbines generate in the 1- to 5-megawatt
range. Most wind farms will be built in locations on land and at sea
where the wind is relatively constant and reliable; in the US, much
of the western Great Plains has average wind speeds over 15 miles
per hour.
For data center operators, the primary use of wind power will be
in obtaining renewable energy certificates (RECs) to match brown
energy used in other locations. The more renewable energy can be
added to the grid, the better, and the places where wind farms can be
effective tend to be far from the population centers where most data
centers are located. Increased wind generation in the Midwest will
encourage utilities to build the transmission lines needed to sell that
energy in more densely populated areas.
Solar energy has some of the same problems as wind. Solar energy
isn’t very dense; at a typical data center, on-site solar energy can
provide at best a small percentage of the total electrical budget.
While it’s possible to build megawatt-scale solar plants, that requires
a lot of land, and land is scarce and expensive in the urban areas
where most data centers are located. Solar energy is also unavailable
at night and doesn’t supply as much power when it is cloudy. Still, it
is an important part of a sustainable strategy, and it is most econom‐
ically practical for deserts and other arid land. Energy generated
from solar cells can be sold back to the local grid, and it can be used
to obtain RECs to compensate for brown energy used elsewhere.
Data center rooftops, parking lots, and other open areas are good
opportunities for solar power.
Although geothermal energy isn’t yet widely used for data centers,
any discussion of alternate energy sources for the future must take
this energy source into account. We’ll see later how it is used
for cooling. But geothermal energy can also be used to generate
power—as is the case in Iceland, where data centers rely on plentiful
geothermal power. Whether geothermal can be effective elsewhere
is still a topic for research, but increased use of geothermal seems
inevitable. Cornell University is embarking on a plan to replace its
natural gas–fueled heating plant with geothermal energy from a pair
of two-mile-deep wells—and if you can heat buildings, you can also
generate electricity. What’s most impressive about this project is that
no one would pick upstate New York as a geothermal testbed. If it
can work there, it can work anywhere. Is the Earth itself the ideal
energy source for the Data Center of the Future? We will see.

Innovating with Cooling


All of the electricity that powers servers, powers cooling systems,
transmits data over a network, and even lights offices is eventually
converted to heat. Getting rid of excess heat is one of the biggest
concerns of any data center design. Given the power levels involved,
it shouldn’t be a surprise that the cooling system itself is the sec‐
ond largest contributor to the overall power budget. A data center
must keep its equipment at appropriate temperatures; otherwise
the servers will shut down or fail, and the center won’t meet its
reliability guarantees. But at the same time, minimizing power use
by the cooling system is a crucial step on the way toward meeting
climate targets. The power used by servers will always increase, as
servers become faster and denser. Power used by the cooling system
is under our control.
The commonly used metric for a data center’s cooling system is
power usage effectiveness (PUE), which is simply the total power
used by the data center divided by the power required to run the
IT load (servers and other hardware). By definition, PUE is always
greater than 1. In the early days of data centers, PUE was probably
as high as 4 or 5, but in the 1970s and 1980s, issues like climate
change and global warming were considered hypothetical rather
than real. Currently, many data centers have a PUE of around 1.57,
while a few have a PUE of 1.1 or slightly below. Data centers with
a very low PUE tend to be owned by companies that have complete
control of the hardware in the data center (for example, Google
claims a trailing 12-month PUE of 1.1 for its fleet of data centers).
It’s very difficult for a colocation provider to achieve that kind
of efficiency, because it has to adapt its data centers’ cooling and
electrical systems to work with servers that are purchased by its
customers.
PUE is at best a crude measure of a data center’s efficiency. It
has a number of shortcomings. Amazon Web Services’ (AWS)
James Hamilton provides a good critique: PUE doesn’t allow for
differences in where the power intake happens at the facility and the
consequent losses associated with it (e.g., at the 100kV feed arriving
from the power utility, at the 10kV output of an on-site substation,
or at the 480V going into the buildings). While power transformers
have efficiencies on the order of 97%, with three stages of trans‐
formers, plus the overhead of the UPSs, the losses add up, and
measuring the power input to the data center at the 480V entrance
to the building ignores these losses. PUE also doesn’t account for
cooling equipment on the servers themselves; air-cooled servers
have onboard fans, which can use as much as 10% of the server’s
total power (and liquid-cooled servers sometimes have their own
pumps). To get a more accurate measure of efficiency, power spent
on onboard computing shouldn’t be counted as part of the useful
computing load. Finally, PUE can’t account for using waste heat
elsewhere, which we’ll discuss momentarily. Total usage efficiency
(TUE) has been proposed as a metric that takes these factors into
account, but adoption is slow.
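
A minimal sketch of the two metrics, using illustrative numbers: PUE divides total facility power by IT power, while a TUE-style view also discounts the power the IT equipment spends on fans and other non-compute overhead. (This is our simplified reading of TUE; see the TUE literature for the formal definition.)

    def pue(total_facility_kw: float, it_kw: float) -> float:
        """Power usage effectiveness: total facility power over IT equipment power."""
        return total_facility_kw / it_kw

    def tue_style(total_facility_kw: float, it_kw: float, it_overhead_kw: float) -> float:
        """TUE-like ratio: total facility power over useful compute power
        (IT power minus onboard fans, pumps, and other server-level overhead)."""
        return total_facility_kw / (it_kw - it_overhead_kw)

    it = 1000.0            # kW drawn by servers, storage, and network gear
    overhead = 0.10 * it   # onboard fans can use ~10% of server power (per the text)
    facility = 1570.0      # kW entering the facility, giving a PUE of 1.57
    print(f"PUE = {pue(facility, it):.2f}, TUE-style = {tue_style(facility, it, overhead):.2f}")
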
No single number can take into account the engineering needed
to achieve efficiency, which requires understanding the temperature
flows within the facility. A better approach to measuring efficiency is
to look at the incoming temperature of the coolant and the increase
in temperature (ΔT) when the coolant leaves the system. Allowing
a higher incoming temperature reduces the energy needed to bring
the incoming coolant to an acceptable level. In an air-cooled system, the com‐
pressors require less energy, and they don’t need to run at all if the
outdoor air is cooler than the maximum temperature for incoming
air. Higher ΔT means that heat is being removed more effectively,
though the maximum allowable temperature is constrained by the
electronics that are being cooled. Servers that run too hot will fail
more frequently, so the cooling system must be designed with the
data center’s uptime guarantees and service-level agreements (SLAs)
in mind. As more IT equipment manufacturers design for higher
temperatures, SLAs will adjust accordingly.
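
The relationship between coolant flow, ΔT, and heat removed is the ordinary sensible-heat equation. The sketch below uses the specific heat of water; the flow rate and temperatures are made-up illustrative values.

    def heat_removed_kw(flow_kg_per_s: float, delta_t_c: float,
                        specific_heat_kj_per_kg_c: float = 4.186) -> float:
        """Sensible heat carried away by a single-phase coolant: Q = m_dot * c_p * dT (kW)."""
        return flow_kg_per_s * specific_heat_kj_per_kg_c * delta_t_c

    supply_c, return_c = 30.0, 42.0   # illustrative supply and return temperatures
    flow = 10.0                       # kg/s of water through the loop
    delta_t = return_c - supply_c
    print(f"dT = {delta_t:.0f} C removes ~{heat_removed_kw(flow, delta_t):.0f} kW")
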
One exciting possibility is the ability to recycle waste heat, some‐
thing Thomas Edison pioneered in 1882; 140 years later, Europe
is significantly ahead of the US in recycling heat. Equinix has col‐
laborated with a local utility to use waste heat for home heating
in Helsinki, Finland, and waste heat from one of Equinix’s Amster‐
dam data centers is used to heat office buildings and parts of the
University of Amsterdam. Another recent project uses waste heat
from a new Paris data center to heat a community swimming pool.
Recycling heat doesn’t improve any metric for data center efficiency,
but creating a circular economy in which waste heat is reused is the
best of all possible worlds.

Air Cooling
Currently, data centers are predominantly air cooled. The data cen‐
ter industry is trending toward various forms of liquid cooling, but
we certainly wouldn’t say that a climate-neutral data center can’t
be air cooled. Air cooling means, more or less, what you think:
many fans circulating lots of air over the equipment that needs to
be cooled. Air-conditioning units guarantee that the incoming air
is at an appropriate temperature. A data center also needs to filter
and (in many locations) dehumidify the incoming air. After it has
passed through the data center, hot air may be exhausted back to the
environment, or it may go through the air-conditioning system to be
reused.
The hotter the air supply can be, the more efficient the cooling
system. Google’s air-cooled data centers can operate with the incom‐
ing air as high as 80°F (26.6°C), which means that they can often
use outside air directly, without any additional cooling. Outside air
must be chilled during a hot spell, but allowing incoming air to
be at a higher temperature minimizes the demand for cooling. The
ASHRAE A1 Allowable recommendations for incoming air are a
temperature of 15° to 32°C (59° to 89.6°F), with a relative humidity
of 8% to 80% and a dew point of −12° to 17°C (10.4° to 62.6°F).
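
As a quick operational check, the A1 Allowable limits quoted above can be encoded directly. This sketch simply flags intake air that falls outside that envelope; the real ASHRAE guidance also includes recommended ranges and rate-of-change limits that we ignore here.

    def within_ashrae_a1_allowable(temp_c: float, rel_humidity_pct: float,
                                   dew_point_c: float) -> bool:
        """Check intake air against the ASHRAE A1 Allowable envelope quoted in the text."""
        return (15.0 <= temp_c <= 32.0
                and 8.0 <= rel_humidity_pct <= 80.0
                and -12.0 <= dew_point_c <= 17.0)

    print(within_ashrae_a1_allowable(26.6, 45.0, 14.0))   # True: roughly an 80 F intake
    print(within_ashrae_a1_allowable(35.0, 45.0, 14.0))   # False: too hot, chilling required
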
The widespread practice for organizing server racks in an air-cooled
data center is to separate them into hot and cold aisles. Cool air
is supplied to the cold aisles. Hot air is exhausted to the hot aisles
from the other side of the racks. In practice, this means that the air
intakes from two adjacent rows of racks face each other; likewise,
the exhaust sides of two adjacent racks face each other. Preventing
the incoming cold air from mixing with the hot exhaust makes
the cooling system more efficient—as anyone who has ever stood
behind a rack can appreciate.



Figure 3. Hot/cold aisle cooling. Hot air may be exhausted rather than
fed back to the air-conditioning system. Outside air might not need
air-conditioning.

Air cooling has the advantage that it can be used with almost any
hardware; remember that data centers offering colocation have limi‐
ted control over their customers’ servers. Liquid cooling requires
significantly more standardization. Another advantage of air cooling
is that, if the cooling fails, you have a few minutes to turn the servers
off before they overheat. With liquid cooling, thermal runaway hap‐
pens much sooner; if the cooling system fails, servers need to be
shut down in seconds before they’re damaged. Finally, air cooling
can be used in almost any building, including older data centers
and older buildings being repurposed as data centers; liquid cooling
requires significantly more changes to the building’s infrastructure.

Liquid Cooling
Water cooling is one of the oldest technologies for cooling electron‐
ics. Water cooling was used for high-power radio transmitters as
early as the 1920s, significantly before air cooling was an option.
Water cooling was also common for large mainframes in the 1970s;
in the late ’70s, Bell Labs’ Holmdel site had a reflecting pond with
a fountain that provided evaporative cooling for its mainframe com‐
puter systems; in the winter, the hot water was used to heat the
building.



As mainframes became less common and data centers evolved into
large collections of smaller servers rather than a small number of
mainframes, cooling technology shifted to air, which is a better
solution for cooling many machines of different types. However,
technology tends to move in cycles. In the past few years, more
and more cores have been packed onto a single chip, increasing
power requirements and density. Servers are hotter than ever before,
and in the near future, we believe that a new generation of servers
will require liquid cooling. Liquid cooling has some tremendous
advantages: it allows higher densities, and it eliminates large and
power-hungry air-conditioning and dehumidification systems, duct‐
work, and onboard fans and other components. Liquid cooling even
takes up less space in the data center: the pipes used to carry coolant
are much smaller than the ductwork (typically installed under a
raised floor) for distributing air.
Liquid cooling systems can be either single phase or two phase
(sometimes called vapor phase or multiphase). The difference
between the two is simple. In single-phase cooling, the coolant
enters as liquid and leaves as liquid. In two-phase cooling, the
coolant is allowed to boil; the additional energy required for phase
transformation allows two-phase cooling to remove heat more effec‐
tively. How the data center industry will trend in coming years is not
yet clear.
Immersion cooling is highly efficient and allows the highest power
densities, but it has several disadvantages for data centers that offer
colocation services. Immersion requires hardware and racks that
are designed specifically to support it, so it can only be used when
customers select that hardware. It also places constraints on the
data center’s physical design. Immersion cooling racks are often
(though not always) horizontal, rather than vertical, so they don’t
mix well with air-cooled vertical racks. Because of the coolant’s
weight, immersion cooling systems are also heavier than other cool‐
ing systems, limiting them to the first floor of most data centers. A
data center designed for colocation would have to devote a portion
of the first floor to immersion cooling, fragmenting its space by
cooling technology and making it harder to use efficiently.
Loop cooling brings the coolant directly to the heat sinks on the
components that need to be cooled; in the future, power-hungry
GPUs and CPUs will bring coolant directly on-chip. This kind of
cooling is more flexible; the data center only needs to bring coolant
to the inlet and outlet ports on each server. Once the coolant has
been brought to a server, the server manufacturer can use whatever
technologies it chooses to bring the coolant to components. It’s
impossible to move to liquid cooling of any form without designing
appropriate systems for monitoring and controlling servers. The
ability to detect failure and shut down power as soon as possible is
an important problem in hard real-time engineering; time guaran‐
tees must be met, or equipment will be destroyed.
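
To illustrate why this is a hard real-time problem, here is a deliberately simplified sketch of the control logic. The sensor names, thresholds, and shutdown call are hypothetical placeholders; a production system would run this in firmware or a dedicated controller with guaranteed response times, not in a best-effort loop.

    MAX_INLET_C = 45.0    # hypothetical coolant inlet temperature limit
    MIN_FLOW_LPM = 2.0    # hypothetical minimum coolant flow per server, liters/minute

    def check_and_protect(read_inlet_temp_c, read_flow_lpm, shut_down_server) -> bool:
        """Shut the server down if coolant flow or temperature leaves its safe band."""
        if read_flow_lpm() < MIN_FLOW_LPM or read_inlet_temp_c() > MAX_INLET_C:
            shut_down_server()   # must complete within seconds to avoid damage
            return False
        return True

    # Example wiring with stubbed sensors and actuator (placeholders for real telemetry).
    healthy = check_and_protect(lambda: 41.0, lambda: 3.5, lambda: print("POWER OFF"))
    print("healthy" if healthy else "shut down")
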
Water availability is another issue. With climate change, regions
that are chronically short of water are now experiencing droughts
that are longer and more severe. Even areas that used to have
adequate water supplies are coming under stress. Using water as
a coolant may not be an option in these areas. Evaporative cooling
systems, which are common and require relatively little energy, lose
significant amounts of water to evaporation. They also lose water
to cleaning, which is part of regular maintenance. Other cooling
technologies use more energy, and coolant systems still need to be
flushed periodically to avoid buildup of contaminants. The Data
Centers of the Future must be different. In water-stressed areas, they
must be careful not to use water needlessly, even returning water to
the environment if possible.
The Data Centers of the Future will have to accommodate both
air- and loop-based liquid cooling systems in anticipation of the
inevitable shift toward liquid cooling to support higher densities
and improved efficiency. Unfortunately, this means that data centers
will have to provide both kinds of infrastructure: fans, ducts, and
air chillers for air cooling, along with pumps, pipes, and chillers
for liquids. Moving from a colocation business to a cloud-based
business model (such as Equinix Metal) will increase data center
operators’ ability to standardize, giving them more flexibility with
cooling solutions.

Alternate Cooling Technologies


A small number of data centers use geothermal cooling: pumping
coolant through a closed loop that runs underground at a relatively
shallow depth. Temperatures at depths up to a few hundred feet
are extremely stable. In Amsterdam, Equinix uses cold groundwater
(ATES) to chill air for its cooling system; in Toronto, Equinix uses
cold water pumped from the depths of Lake Ontario.



Figure 4. Equinix Toronto data center, using lake cooling

Innovation with IT Equipment


In the Data Center of the Future, tying together innovations in
power and cooling is critical to moving the industry forward. Typ‐
ical colocation customers procure equipment from large original
equipment manufacturers (OEMs) and are unable to wield the influ‐
ence required to change a number of variables simultaneously to
change the industry. Equinix is leading the way with Open19, an
open source standard for data center hardware. Open19 introduces
several changes to the traditional 19″ rack-and-server design, for the
overall operational efficiencies needed in the next evolution of data
centers.
Standards are good—they provide the consistency needed to allow
innovation to thrive. In the data center, the only true standard is
EIA-310-D, which defines the width and height of a rack unit and
locations of mounting hardware. Rack depth, power strip location,
and AC input location relative to a server/switch are all undefined.
They shift and change as each vendor builds a piece of equipment to
be housed in a rack.
By modifying the architecture of a rack and decoupling traditional
expectations, operators are free to standardize power and cooling
while allowing customers the ability to rapidly deploy infrastructure.



With the participation of OEMs, the data center can provide its
colocation customers with hardware options that will operate more
efficiently. This is one way to build the Data Center of the Future:
provide racks of standardized shapes for servers with consistent
power and cooling connections, rather than support a multitude
of different voltages, connectors, and cooling types. Open19 makes
efficient power and cooling easier, even as densities increase. How‐
ever, it will be some time before colocation customers can take
advantage of the standardization of Open19.
Aggregation can provide striking benefits, especially in power sup‐
plies used in information technology equipment (ITE). Servers and
switches in a data center almost always have dual supplies with
automatic fail-over, usually in a two-to-one configuration. For every
server/switch, two power supplies are required, but they typically
operate at only half the load. Lightly loaded power supplies can
be hugely inefficient. Open19 pulls traditional power supplies from
each server and aggregates them into a resilient power shelf. By
aggregating power supplies at a higher ratio than two to one, you
can manage peak power demand and resiliency requirements and
operate power supplies at peak efficiency. You would also gain the
benefit of two or more generations of useful lifespan by decoupling
the power supply from the ITE.
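
The gain from aggregation comes from load factor. In the sketch below, the efficiency curve is a made-up illustration of a typical switch-mode supply, and the ratings are hypothetical: two supplies splitting one server's load sit near a third of their rating, while a shared power shelf runs its supplies much closer to their efficient operating point.

    def psu_efficiency(load_fraction: float) -> float:
        """Illustrative efficiency curve: poor when lightly loaded, best near 80-90% load."""
        if load_fraction < 0.1:
            return 0.70
        if load_fraction < 0.4:
            return 0.85
        return 0.94

    server_load_kw = 0.8
    # Per-server 2N supplies: two 1.2 kW units each carry half of one 0.8 kW server.
    per_server_util = (server_load_kw / 2) / 1.2      # roughly 33% load
    # Shared shelf: six 2.5 kW supplies (N+1) carrying sixteen such servers.
    shelf_util = (16 * server_load_kw) / (6 * 2.5)    # roughly 85% load
    print(f"per-server PSUs at {per_server_util:.0%} load -> {psu_efficiency(per_server_util):.0%} efficient")
    print(f"power shelf at {shelf_util:.0%} load -> {psu_efficiency(shelf_util):.0%} efficient")
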
Cooling innovation, especially using liquid cooling in the data
center, is particularly important as liquid cooling moves from a
bespoke need in high-performance computing (HPC) to a ubiqui‐
tous requirement. However, standardizing the interface between
the server and the rack, the connectors used to bring coolant on
board, remains a problem. While connectors for liquid coolant
are not standardized, rack-level standards like Open19 specify the
connectors and their locations on the server chassis. As Open19
gains traction, loop-based liquid cooling will become easier for data
centers to implement.
An open hardware ecosystem also allows the open software ecosystem
to take advantage of open APIs, expanding software innovation.
By providing an understanding of the relationship of
the power and cooling environment to the workload, you support
the development of creative solutions. Non-real-time workloads
can be shifted to times when renewable energy is available, or
they can be shifted to manage cooling and heat output during
demand-response events. Container orchestration systems such as
Kubernetes are gaining the ability to intelligently manage resources
in these environments.
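
A carbon-aware scheduling policy can be simple in outline. The sketch below is a hypothetical policy function, not Kubernetes API code: it defers non-real-time work until grid carbon intensity drops below a threshold, and the threshold and values are arbitrary examples.

    def should_run_batch_job(carbon_intensity_g_per_kwh: float,
                             threshold_g_per_kwh: float = 200.0,
                             deadline_hours_away: float = 24.0) -> bool:
        """Run deferrable work only when grid power is relatively clean,
        unless the job's deadline forces it to run anyway."""
        if deadline_hours_away <= 1.0:
            return True                  # can't wait any longer
        return carbon_intensity_g_per_kwh <= threshold_g_per_kwh

    print(should_run_batch_job(120.0))   # True: plenty of renewables on the grid
    print(should_run_batch_job(450.0))   # False: defer to a cleaner window
    print(should_run_batch_job(450.0, deadline_hours_away=0.5))   # True: deadline wins
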

Buildings for the Future


When considering the climate impact of a data center, many people
just think about power consumption. However, the building itself
has an important impact on the climate. The LEED standard, or
similar standards depending on the region, should be the basis for
any construction, including both new construction and renovation
of old buildings. LEED addresses issues like energy and water effi‐
ciency, lighting, waste reduction, construction practices, and opera‐
tions. The newest version of the LEED standard addresses embodied
carbon, the CO2 emissions generated by manufacturing raw materi‐
als (concrete, bricks, steel) and transporting them to the building
site, in addition to the process of construction itself. BCA-IMDA
and BCA-IDA Green Mark (from Singapore) and the NABERS
ratings (from Australia) provide supplemental standards that are
specific to data center design.
These standards cover the construction and (to a lesser extent)
operation of a data center. We can do even better: we can assess
the environmental consequences of the data center’s entire lifecycle.
Whole Building Lifecycle Assessment (WBLCA) is a methodology
for assessing a building’s embodied carbon and other environmental
impacts through all the phases of its lifecycle: production of raw
materials, construction, use, and eventual demolition.
Any major supplier should be selected based on its ability to meet all
of a project’s needs, inclusive of sustainability—meaning both social
and environmental aspects. It’s important for a supplier and the data
center operator to share values and goals on the path to sustainable
infrastructure. Suppliers must have targets for reducing their own
climate impact and environmentally sound plans for avoiding waste
and disposing of it, especially when it comes to hazardous materials,
electronic waste, and construction materials. In addition to environ‐
mental aspects, the social aspects of sustainability, such as human
rights and diversity and inclusion, should be considered across
stakeholders, from employees to suppliers to community partners.
This helps to build a data center that neighbors will embrace, rather
than resent.



Data center operators need to consider new building materials,
including low-carbon steel and concrete, that are beginning to
appear on the market. Steelmaking and cement production are
among the most carbon-intense industries in the world, but that
is beginning to change. In 2021 HYBRIT, a Swedish joint venture,
announced the first delivery of fossil fuel-free steel; they used
carbon-free electricity and hydrogen to replace coke. Startups like
Biomason and CarbiCrete are developing methods for producing
carbon-negative cement. If a company that makes steel or reinforced
concrete finds out that carbon-neutral products have eager custom‐
ers and can be more profitable than its traditional products, that’s a
gain that extends far beyond the data center industry.
Another option is to consider alternatives to building a new facility.
A simple way to reduce the carbon emissions inherent in construc‐
tion is to reduce the amount of construction. One way to do so is
to upgrade existing data centers to add capacity. Increasing density
is likely to require additional power (both primary and backup) and
more effective cooling; this is a good time to renegotiate deals with
utilities, replace diesel generators with multi-fuel generators or fuel
cells, and upgrade to liquid cooling. However, this decision has to
be approached carefully. Is it possible to renovate the data center,
replacing older equipment with newer servers at a higher density?
Can the power and cooling systems be made adequate for a newer,
denser data center? Can the electrical utilities supply the additional
power that will be needed? Is there enough room for additional
backup generators, fuel cells, and chillers? Is there sufficient network
capacity to serve existing and new customers? If the answer to any
of these questions is “no,” then upgrading the data center might be
a poor option. The cost, both financial and environmental, will be too high to justify a data center that remains suboptimal and, even after renovation, will have a limited lifespan.
Repurposing an older industrial building can also be an attractive
option. In many parts of the world, older industrial areas and facto‐
ries can provide the shell of a data center, sometimes even in the
heart of an urban area (for example, Digital Equipment Corpora‐
tion’s former headquarters was built in an abandoned textile mill).
An additional advantage of repurposing existing buildings is that
they are more likely to fit in with their environment architecturally;
a data center that reuses a well-selected building won’t be perceived
as an eyesore by its neighbors. It’s also an opportunity to bring an
old and inefficient building up to LEED standards. However, repur‐
posing an older building has its own embodied carbon costs, and
they may be higher than those of new construction. Walls and floors
may need to be upgraded with new reinforced concrete, the existing electrical service will likely be inadequate, and cooling may be all but nonexistent,
suitable for office air-conditioning at most. Like renovation, repur‐
posing is a great idea—the best way to avoid embodied carbon is not
to create it—but it is important to be aware of the tradeoffs. Without
careful analysis, the result could have a higher environmental cost
than new construction.
Data center operators are in a unique position to dramatically
change the construction industry. Rather than just building new
data centers by doing “business as usual,” they can negotiate with
general contractors and other suppliers for better materials and
practices. Equinix has estimated that the embodied carbon of a
data center is equivalent to three to four years of operational car‐
bon. Minimizing the environmental cost of a new building is an
important, though often unappreciated, part of building the climate-
neutral Data Center of the Future.
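That estimate is easy to put in perspective with a back-of-the-envelope calculation. In the sketch below, only the three-to-four-year relationship comes from the estimate above; the annual operational figure and the 30-year lifetime are assumptions chosen for illustration.

```python
# Back-of-the-envelope view of embodied vs. operational carbon.
# Only the ~3.5x ratio mirrors the estimate above; other figures are assumed.

annual_operational_tco2e = 10_000                  # assumed annual operational footprint
embodied_tco2e = 3.5 * annual_operational_tco2e    # "three to four years of operational carbon"
lifetime_years = 30

lifetime_operational = annual_operational_tco2e * lifetime_years
embodied_share = embodied_tco2e / (embodied_tco2e + lifetime_operational)

print(f"Embodied carbon: {embodied_tco2e:,.0f} tCO2e")
print(f"Operational carbon over {lifetime_years} years: {lifetime_operational:,.0f} tCO2e")
print(f"Embodied share of the lifetime footprint: {embodied_share:.0%}")
# Note: as grid power gets cleaner, the operational term shrinks and the
# embodied share grows, which is why low-carbon construction matters more
# every year.
```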

Recycling and a Circular Economy


The final stage in a WBLCA is demolition or replacement—if not
of the building itself then of the servers and other systems that
occupy it. It’s impossible to build and operate a data center without
using carbon and other natural resources. But carbon use isn’t the
entire story. We need to ensure that, at the end of their lifecycles,
servers and other resources embodied in the building are reused
effectively. Data center operators need to think about creating a
circular economy, in which their waste material becomes someone
else’s raw material.
When it comes to gear at the end of its usefulness to us, we first con‐
sider remarketing. Recycling electronics from data centers isn’t easy.
To ensure that the data on them can’t be recovered, old disk drives
are often destroyed, which greatly reduces any value they might
have. While modern electronics takes advantage of many expensive
metals (not just rare earths), recycling CPUs and GPUs is difficult.
There is, however, a market for used data center equipment: Google states that 23% of the hardware components in its server upgrades are refurbished and that it resells roughly 8.2 million components per year.
Recycling batteries from UPS systems is obviously important. Recy‐
cling lithium-ion batteries is problematic, although some companies
with recycling technologies have appeared. While lead-acid batteries
don’t provide the same energy density or lifespan, recycling them is
much easier, and almost all are recycled. At the same time, there’s
another trade-off: lead-acid UPS batteries are much heavier than
lithium batteries, requiring a six-foot-thick concrete pad to support
them. Although we’ve noted that alternatives are appearing, it’s
important to remember that CO2 emissions from making concrete
are significant. Lithium-ion batteries also have a significantly longer
life: 2,000 to 3,000 charging cycles, as opposed to 1,000 for lead-acid.
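Those cycle-life figures lend themselves to a rough comparison of how many battery replacements each chemistry implies over a facility's life. In the sketch below, only the cycle counts come from the discussion above; the discharge frequency and lifetime are assumptions, and calendar aging (which often dominates for lead-acid batteries in practice) is ignored.

```python
# Rough comparison of UPS battery replacements over a data center's lifetime.
# Cycle-life figures come from the discussion above; discharge frequency and
# lifetime are assumed, and calendar aging is ignored for simplicity.

import math

def replacements_needed(cycle_life: int, cycles_per_year: int, lifetime_years: int = 30) -> int:
    """Number of times a battery string must be replaced over the facility's life."""
    total_cycles = cycles_per_year * lifetime_years
    strings_consumed = math.ceil(total_cycles / cycle_life)
    return max(0, strings_consumed - 1)   # the original installation isn't a replacement

cycles_per_year = 50   # assumed: routine tests plus occasional utility events
for chemistry, cycle_life in [("lead-acid", 1_000), ("lithium-ion", 2_500)]:
    n = replacements_needed(cycle_life, cycles_per_year)
    print(f"{chemistry}: about {n} replacement(s) over 30 years")
```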
Data centers that want to extend their climate commitments to their
value chain need to actively engage with their suppliers on sustaina‐
bility goals. Again, it’s important for the operator and the suppliers
to have the same goals. Does the supplier track its scope 1, 2, and
3 greenhouse gases? Does it have credible, science-aligned targets
for reductions? What can the supplier do about recycling and waste
processing? Despite recent advances in battery chemistry, lithium-
ion batteries will be in use for a long time, and recycling will inevitably be an important part of the lithium economy.
Two other forms of waste can contribute to a circular economy if
handled resourcefully. We’ve already mentioned waste heat, which in
some locations can be repurposed in centralized municipal heating
systems. Similarly, wastewater produced by fuel cells, or recaptured
from a water-cooling system, can be an important resource for
drought-prone communities where it can be used for irrigation,
flushing toilets, and other applications that don’t require clean water.
Supporting the local community is another part of building a cir‐
cular economy: the Data Center of the Future must give back to
the community in addition to taking from it. Data center operators
need to take their responsibilities to their communities seriously.
Too many data centers are poor neighbors, facing constant com‐
plaints about noise and emissions. Some of these problems can be
resolved through better technology; fuel cells don’t make noise or
emit smoke, and liquid cooling tends to be quieter than air cooling.
Data center operators should already be prioritizing the use of
diverse, local labor for construction. Local suppliers for services
like trash removal, security, and maintenance can be very important
for organic economic development, since after they are operational,
data centers typically employ very few people relative to their size.
Finally, it’s very important for the operators and the general contrac‐
tors to have visible commitments to diversity, human rights, and
environmental sustainability.
There are many other ways that data center operators can engage
with their communities. In the latter half of the 20th century, a
digital divide between affluent and less affluent communities devel‐
oped. As digital technology becomes an increasingly important part
of everyday infrastructure, bridging that gap becomes a crucial
issue for society. Data centers, as key elements of the digital infra‐
structure, have a responsibility here too. The Equinix Foundation, which partners with organizations to advance digital inclusion, from providing access to technology and connectivity to developing the skills needed for technology careers, is one example of how to engage. The foundation aims to support nonprofits working to prepare people of all ages and backgrounds to succeed in today's digital world. Efforts like this show a data center's neighbors that the facility is more than a source of noise and smoke.
A data center has other impacts on the local environment. It’s
important to consider the site’s impact on local plants and animals.
Site planners should account for endangered species (even if only
locally endangered) that need to be preserved; they should strive
to design sites that preserve biodiversity. That can mean leaving
greenways that allow animals to move from one forested area to
another, adding plantings that provide food and cover for other
inhabitants, or restoring local species that have been displaced by
invasives. It may be important to take bird strikes into account when
designing a wind farm or to mitigate habitat loss when installing a
large solar array. A green facade or a green roof (like the spectacular
seven-acre green roof on New York’s Javits Center) can provide
habitat for birds, bats, and insects—particularly honeybees, which
are undergoing drastic population declines. Green facades and roofs
can even support small-scale farming.
Figure 5. A “green roof” can be anything from a lawn to a farm

Figure 6. A data center with a green facade

The Data Centers of the Future must be part of a circular economy in which they're fully integrated into their community.

Reliability and Efficiency Through Automation
Reliability is the paramount concern for every data center operator
and customer. Metrics for reliability and uptime are embedded in
SLAs with customers; when a data center is down, the customer
and the data center are both losing money. However, a modern data
center isn’t something that can be run “by hand,” like a 1970s main‐
frame. Like modern jet aircraft, which can only fly stably because
of software that is constantly monitoring and adjusting engines and
control surfaces, a data center runs safely and reliably because of
software.
We’ve already touched on a number of the roles software plays
in running a data center. High-resolution monitoring software can
watch temperatures on individual devices, in addition to parameters
like disk retries and noise made by the motor bearings in cooling
fans, and predict when a device is about to fail. Predictive mainte‐
nance extends the lifetime of components, allowing the data center
to wait until hardware is about to fail before replacing it. Replacing
devices just because they’ve reached the elapsed runtime listed on a
specification sheet is wasteful, both economically and ecologically.
This practice also makes the data center more reliable. Of the thou‐
sands of servers, disk drives, routers, power supplies, and other
components that make up a data center, some are going to fail early.
Predictive maintenance allows the data center operator to replace
those devices before they become problems.
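As a concrete illustration of the idea, the sketch below flags drives for proactive replacement when their health telemetry drifts past simple thresholds. The metric names and limits are hypothetical; production systems typically learn failure signatures from rich historical telemetry rather than relying on fixed thresholds.

```python
# A minimal, threshold-based sketch of predictive maintenance for disk drives.
# Metric names and limits are hypothetical; real systems learn failure
# signatures from historical telemetry instead of using fixed thresholds.

from dataclasses import dataclass

@dataclass
class DriveTelemetry:
    device_id: str
    temperature_c: float
    read_retries_per_hour: float
    reallocated_sectors: int

def needs_replacement(t: DriveTelemetry) -> bool:
    """Flag a drive for proactive replacement before it fails outright."""
    return (
        t.temperature_c > 55
        or t.read_retries_per_hour > 20
        or t.reallocated_sectors > 100
    )

fleet = [
    DriveTelemetry("rack12-bay03", temperature_c=41.0, read_retries_per_hour=2, reallocated_sectors=0),
    DriveTelemetry("rack07-bay11", temperature_c=49.5, read_retries_per_hour=35, reallocated_sectors=180),
]

for drive in fleet:
    if needs_replacement(drive):
        print(f"Schedule proactive replacement: {drive.device_id}")
```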
Software also protects devices from failures in their support systems.
If a fan fails in an air-cooled system, the devices it is cooling must
be shut down within minutes, before they are destroyed by thermal
runaway. In a water-cooled system, thermal runaway will destroy
servers in seconds, rather than minutes. In either case, intervention
is needed before a human can possibly take action. Software moni‐
toring the data center’s cooling system is responsible for shutting
down and removing power from the affected servers before it is too
late.
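Conceptually, the control loop is simple: detect the fault, then remove load and power faster than overheating can develop. The sketch below is purely illustrative; the sensor read and power control are hypothetical stand-ins, and the deadlines merely echo the minutes-versus-seconds contrast above.

```python
# Illustrative response loop for a cooling failure. The sensor read and power
# control are hypothetical stand-ins; the deadlines echo the minutes-vs-seconds
# contrast for air- and liquid-cooled equipment.

import random
import time

SHUTDOWN_DEADLINE_S = {"air": 120, "liquid": 5}

def coolant_ok(rack_id: str) -> bool:
    """Stand-in for real telemetry: flow meters, fan tachometers, inlet temps."""
    return random.random() > 0.05   # simulate an occasional fault

def cut_power(rack_id: str) -> None:
    """Stand-in for commanding the rack PDU or busway to drop the load."""
    print(f"{rack_id}: power removed")

def monitor(rack_id: str, cooling_type: str, poll_interval_s: float = 0.1) -> None:
    deadline = SHUTDOWN_DEADLINE_S[cooling_type]
    while True:
        if not coolant_ok(rack_id):
            cut_power(rack_id)   # act first; alert humans afterward
            print(f"{rack_id}: cooling fault handled within the {deadline}s budget")
            return
        time.sleep(poll_interval_s)

monitor("rack42", "liquid")
```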
Monitoring and automation can optimize the distribution of power
within a data center, or within a network of data centers. All data
centers want to minimize stranded power, or power that a data cen‐
ter has committed to but that remains unused. In a colocation situa‐
tion, the data center’s customers will contract for a certain number
of racks and a certain maximum power. Each customer’s actual
usage will only rarely reach its maximum committed power—but
the data center is still committed to provide that power if and when
it’s needed. Combined with artificial intelligence (AI) modeling of
how to distribute loads effectively, software-defined power (SDP)
enables power to be shifted away from racks that are temporarily
underutilized and toward racks that are heavily loaded. Workloads
can even be moved from high-carbon data centers to data centers
powered by renewable resources, possibly allowing some racks to
be powered down completely. Other power sources can be brought
online as needed, possibly on a rack-by-rack basis. The goal is to use
the data center’s most expensive and important resource—power—
as efficiently as possible, minimizing operating costs and producing
savings that can be passed on to customers.
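The heart of the stranded-power problem is the gap between what each customer has committed to and what it actually draws. The sketch below, using hypothetical racks and figures rather than any particular vendor's interface, shows how an SDP layer might identify headroom that can be lent to heavily loaded racks.

```python
# Simplified view of stranded power and software-defined power (SDP).
# Rack names and figures are hypothetical; a real SDP platform also models
# breaker limits, redundancy groups, and contractual commitments.

racks = {
    # rack_id: (committed_kw, drawn_kw)
    "A01": (10.0, 3.2),
    "A02": (10.0, 9.1),
    "B01": (15.0, 4.0),
    "B02": (15.0, 14.2),
}

committed = sum(c for c, _ in racks.values())
drawn = sum(d for _, d in racks.values())
stranded = committed - drawn
print(f"Committed: {committed:.1f} kW, drawn: {drawn:.1f} kW, "
      f"stranded: {stranded:.1f} kW ({stranded / committed:.0%})")

# Headroom that could temporarily be lent to heavily loaded racks, provided it
# can be clawed back if the lightly loaded customer ramps up.
for rack_id, (c, d) in racks.items():
    headroom = c - d
    if headroom / c > 0.5:
        print(f"{rack_id}: {headroom:.1f} kW of lendable headroom")
```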
Back in 2016, Google used AI to model a data center and its
cooling systems and achieved a 40% reduction in energy used for
cooling. This work was particularly important because no two data
centers are alike; every data center represents a different optimiza‐
tion problem that involves the availability of land, water, and power
in addition to customer requirements. Designing to maximize sus‐
tainability only makes the optimization problem more difficult—but
also makes solving it more important. Using AI enabled Google to
get beyond heuristics and intuition. Using AI to design efficient data
centers will certainly become a standard practice.
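A toy version of that approach is to fit a model of cooling energy against a controllable setpoint using historical telemetry, then search the model for a lower-energy operating point. The data and the simple quadratic model below are synthetic; real systems learn from thousands of sensors and must respect hard operational constraints.

```python
# Toy illustration of model-based cooling optimization. The telemetry and the
# quadratic model are synthetic; real systems learn from thousands of sensors
# and must respect hard safety constraints.

import numpy as np

# Historical observations: (chilled-water setpoint in degC, cooling energy in kWh).
setpoints = np.array([7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0])
energy = np.array([520.0, 495.0, 480.0, 472.0, 470.0, 475.0, 490.0])

# Fit a simple quadratic model of energy as a function of setpoint.
model = np.poly1d(np.polyfit(setpoints, energy, deg=2))

# Search a fine grid within safe bounds for the predicted minimum.
candidates = np.linspace(7.0, 13.0, 121)
best = candidates[np.argmin(model(candidates))]
print(f"Predicted lowest-energy setpoint: {best:.1f} degC, "
      f"about {model(best):.0f} kWh vs. {energy.max():.0f} kWh at the worst observed setting")
```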

The Future Begins Now


Even though we’re no longer seeing processor speeds double every
18 months, the data center industry is still driven by Moore’s law.
More cores are being squeezed onto a chip, making servers and data
centers hotter, denser, and more power hungry. Cerebras’ wafer-scale chip for AI applications, with its hundreds of thousands of cores, represents an extreme, but in this industry, extremes soon become the norm.
To meet environmental commitments, data centers must drive PUE
as close to 1.0 as possible (the most advanced designs are currently
below 1.1). They must also find sources of green primary and
backup power. They will have to become water positive, contribu‐
ting more water to the ecosystem than they use. They will have to
minimize embodied carbon and recycle components ranging from
generators to servers as they’re taken out of service. And all of
this has to happen in the context of increasing density and power
consumption.
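PUE itself is a simple ratio: the total energy the facility draws divided by the energy delivered to the IT equipment. A PUE of 1.10 means roughly 10% overhead for cooling, power conversion, lighting, and everything else. A quick illustration with made-up numbers:

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# The figures below are made up for illustration.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

it_kwh = 1_000_000
for overhead_kwh in (600_000, 200_000, 100_000):
    total_kwh = it_kwh + overhead_kwh
    print(f"IT: {it_kwh:,} kWh, overhead: {overhead_kwh:,} kWh, "
          f"PUE = {pue(total_kwh, it_kwh):.2f}")
# Driving PUE toward 1.0 means driving everything that isn't IT load toward zero.
```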
Can we reach net zero now? No. But we can build data centers that
are on the road to net zero. Above all, that’s what the Data Center of
the Future is. It’s not a magic bullet, but a way of thinking about the
optimal design and operation of a data center so that it can achieve
net zero as new technologies become available. Do utilities provide
renewable energy now? Perhaps not, but a data center operator
can use PPAs and VPPAs to encourage a utility to add renewable
energy to its mix. Can backup power systems run on clean fuels
now? In some cases, new data centers can install generators that
can run on multiple cleaner fuels including HVO, methane, natural
gas, and hydrogen, simplifying the transition to better fuels as they
become available. Generating equipment at older data centers can be
upgraded to use multiple fuels or fuel cells when needed.
Data centers have a lifetime of 30 years. Technology has come a long
way in the last 30 years, and the Data Center of the Future will lever‐
age technology and innovation to give rise to the next generation
of both environmentally and socially responsible enterprises at the
center of the digital economy. In the first century BC, Rabbi Hillel spoke the words that have been quoted (and misquoted) thousands of times, eventually becoming “If not us, who? If not now, when?” Those are powerful words. If we don’t start on the path toward sustainability now, who will? And when? The data center
industry is getting on the path toward sustainability now and using
its influence to pull other industries with it. Waiting for a better
time, for better technology, or for other industries to get started first
is the same as saying “no.” A commitment to a sustainable future has
to be made now, or it won’t happen.
That’s why we are building the Data Centers of the Future—today.

About the Authors
A connected world is a better world. It creates opportunity,
reduces poverty, revolutionizes healthcare, revitalizes economies
and democratizes learning.
Jon Lin and Equinix are driving the future of global connectivity,
bringing together people, places, and organizations with the critical
ecosystems that power the world’s digital infrastructure – securely
and sustainably. Equinix is a FORTUNE 500, fast growing, global
tech company that bridges organizations to a “digital first” world.
With nearly 250 data centers across 30 countries, Equinix is fueling
change at an accelerated pace. The data that lives and is shared
across its platform has the power to positively alter our future—be
it the next technology breakthrough, big business idea, or cures for diseases.
Reporting directly to the CEO, Jon drives strategy and execution for
Equinix’s data center services business. In this role, he ensures the
growth and innovation for the company’s interconnected business—
including physical data center services and software development.
Jon is a fierce advocate for diversity, inclusion and belonging. He
believes that different backgrounds and perspectives lead to better
decision making, greater innovation, and higher engagement in the
workplace and in society.
Previously, Jon served as President of the Americas, running the company’s
largest region by revenue. He has also held leadership roles at Tata
Communications and UUNET/Verizon.
Jon graduated from Johns Hopkins University where he studied
biology and computer science. When he’s not working, you’ll find
him in Virginia with his wife and two kids, or on a mountain in
Colorado snowboarding.
Gary Aitkenhead joined Equinix in 2021 as Senior Vice President of
EMEA Operations and is responsible for overseeing the operations
of 80+ data centers, delivering mission-critical platform solutions to
customers in 20 countries across EMEA.
Before joining Equinix, Gary was Chief Executive of the Defence
Science and Technology Laboratory for the UK Ministry of Defence,
playing a leading role in the transformation of Defence into the
information age through advanced technology research and its
application. Prior to that, Gary worked at Motorola as a senior
global business leader in the mission-critical communications sector
serving emergency services, utilities, and transportation users.
Gary grew up in Glasgow and graduated with First Class Honours in
Electrical and Electronic Engineering from Strathclyde University.
He is a Chartered Engineer, Fellow of the Royal Academy of Engi‐
neering, and a Fellow of the Institution of Engineering and Technology.
Married, with three daughters, Gary lives in Hampshire. His inter‐
ests include cooking, the outdoors and mountain biking.