
Thermal Guidelines

for Data Processing


Environments
Third Edition

ASHRAE Datacom Series


Book 1
ISBN 978-1-936504-33-6 (paperback)
ISBN 978-1-936504-36-7 (PDF)

© 2004, 2008, 2012 ASHRAE. All rights reserved.


1791 Tullie Circle, NE · Atlanta, GA 30329 · www.ashrae.org
ASHRAE is a registered trademark of the American Society of Heating, Refrigerating
and Air-Conditioning Engineers, Inc.

Printed in the United States of America


Cover image by Joe Lombardo of DLB Associates
____________________________________________

ASHRAE has compiled this publication with care, but ASHRAE has not investigated, and
ASHRAE expressly disclaims any duty to investigate, any product, service, process, proce-
dure, design, or the like that may be described herein. The appearance of any technical data
or editorial material in this publication does not constitute endorsement, warranty, or guar-
anty by ASHRAE of any product, service, process, procedure, design, or the like. ASHRAE
does not warrant that the information in the publication is free of errors, and ASHRAE does
not necessarily agree with any statement or opinion in this publication. The entire risk of the
use of any information in this publication is assumed by the user.

No part of this publication may be reproduced without permission in writing from


ASHRAE, except by a reviewer who may quote brief passages or reproduce illustrations in
a review with appropriate credit, nor may any part of this publication be reproduced, stored
in a retrieval system, or transmitted in any way or by any means—electronic, photocopying,
recording, or other—without permission in writing from ASHRAE. Requests for permission
should be submitted at www.ashrae.org/permissions.
___________________________________________

Library of Congress Cataloging-in-Publication Data

Thermal guidelines for data processing environments —3rd ed.


p.cm. -- (ASHRAE datacom series; bk. 1)
Includes bibliographical references.
ISBN 978-1-936504-33-6 (softcover)
1. Data processing service centers—Cooling. 2. Data processing service centers—Heating and
ventilation.
3. Buildings—Environmental engineering. 4. Data processing service centers—Design and
construction.
5. Electronic data processing departments—Equipment and supplies—Protection. 6. Electronic
apparatus and appliances—Cooling. I. ASHRAE
TH7688.C64T488 2012
697.9'316—dc23
2012029220

SPECIAL PUBLICATIONS
Mark Owen, Editor/Group Manager of Handbook and Special Publications
Cindy Sheffield Michaels, Managing Editor
Matt Walker, Associate Editor
Roberta Hirschbuehler, Assistant Editor
Sarah Boyle, Editorial Assistant
Michshell Phillips, Editorial Coordinator
PUBLISHING SERVICES
David Soltis, Group Manager of Publishing Services and Electronic Communications
Tracy Becker, Graphics Specialist
Jayne Jackson, Publication Traffic Administrator
PUBLISHER
W. Stephen Comstock
Preface to the Third Edition

Prior to the 2004 publication of the first edition of Thermal Guidelines for Data
Processing Environments, there was no single source in the data center industry for
ITE temperature and humidity requirements. This book established groundbreaking
common design points endorsed by the major IT OEMs. The second edition,
published in 2008, created a new precedent by expanding the recommended temper-
ature and humidity ranges.
This third edition breaks new ground through the addition of new data center
environmental classes that enable near-full-time use of free-cooling techniques in
most of the world’s climates. This exciting development also brings increased
complexity and tradeoffs that require more careful evaluation in their application
due to the potential impact on the IT equipment to be supported.
The newly added environmental classes expand the allowable temperature and
humidity envelopes. This may enable some facility operators to design data centers
that use substantially less energy to cool. In fact, the classes may enable facilities in
many geographical locations to operate year round without the use of mechanical
refrigeration, which can provide significant savings in capital and operating
expenses in the form of energy consumption.
The recommended operating range has not changed from the second edition of
the book. However, a process for evaluating the optimal operating range for a given
data center has been introduced for those owners and operators who have a firm
understanding of the benefits and risks associated with operating outside the recom-
mended range. The third edition provides a method for evaluating ITE reliability and
estimating power consumption and airflow requirements under wider environmental
envelopes while delineating other important factors for further consideration. The
most valuable update to this edition is the inclusion for the first time of IT equipment
failure rate estimates based on inlet air temperature. These server failure rates are the
result of IT OEMs evaluating field data, such as warranty returns, as well as compo-
nent reliability data. These data will allow data center operators to weigh the poten-
tial reliability consequences of operating in various environmental conditions versus
the cost and energy consequences.
A cornerstone idea carried over from previous editions is that inlet temperature
is the only temperature that matters to IT equipment. Although there are reasons to
want to consider the impact of equipment outlet temperature on the hot aisle, it does
not impact the reliability or performance of the IT equipment. Also, each manufac-
turer balances design and performance requirements when determining their equip-
ment design temperature rise. Data center operators should expect to understand the

equipment inlet temperature distribution throughout their data centers and take steps
to monitor these conditions. A facility designed to maximize efficiency by aggres-
sively applying new operating ranges and techniques will require a complex, multi-
variable optimization performed by an experienced data center architect.
Although the vast majority of data centers are air cooled at the IT load, liquid
cooling is becoming more commonplace and likely will be adopted to a greater
extent due to the enhanced operational efficiency, potential for increased density,
and opportunity for heat recovery. Consequently, the third edition of Thermal Guide-
lines for Data Processing Environments for the first time includes definitions of
liquid-cooled environmental classes and descriptions of their applications. Even a
primarily liquid-cooled data center may have air-cooled IT within. As a result, a
combination of air-cooled and liquid-cooled classes will typically be specified.
Contents

Preface to the Third Edition . . . ix

Acknowledgments . . . xi

Chapter 1—Introduction . . . 1
  1.1 Document Flow . . . 3
  1.2 Primary Users of This Document . . . 4
  1.3 Compliance . . . 5
  1.4 Definitions and Terms . . . 5

Chapter 2—Environmental Guidelines for Air-Cooled Equipment . . . 9
  2.1 Background . . . 9
  2.2 New Air-Cooled Equipment Environmental Specifications . . . 10
  2.3 Guide for the Use and Application of the ASHRAE Data Center Classes . . . 22
  2.4 Server Metrics to Guide Use of New Guidelines . . . 24
    2.4.1 Server Power Trend vs. Ambient Temperature . . . 24
    2.4.2 Acoustical Noise Levels in Data Center vs. Ambient Temperature . . . 26
    2.4.3 Server Reliability Trend vs. Ambient Temperature . . . 29
    2.4.4 Server Reliability vs. Moisture, Contamination, and Other Temperature Effects . . . 32
    2.4.5 Server Performance Trend vs. Ambient Temperature . . . 35
    2.4.6 Server Cost Trend vs. Ambient Temperature . . . 35
    2.4.7 Summary of New Air-Cooled Equipment Environmental Specifications . . . 36

Chapter 3—Environmental Guidelines for Liquid-Cooled Equipment . . . 39
  3.1 ITE Liquid Cooling . . . 39
  3.2 Facility Water Supply Characteristics for ITE . . . 43
    3.2.1 Facility Water Supply Temperature Classes for ITE . . . 43
      3.2.1.1 Liquid-Cooling Environmental Class Definitions . . . 43
    3.2.2 Condensation Considerations . . . 45
    3.2.3 Operational Characteristics . . . 46
    3.2.4 Water Flow Rates/Pressures . . . 47
    3.2.5 Velocity Limits . . . 47
    3.2.6 Water Quality . . . 47
  3.3 Liquid-Cooling Deployments in NEBS-Compliant Spaces . . . 48
    3.3.1 NEBS Space Similarities and Differences . . . 49
    3.3.2 Use of CDU in NEBS Spaces . . . 50
    3.3.3 Refrigerant Distribution Infrastructure . . . 50
    3.3.4 Connections . . . 50
    3.3.5 Condensation Consideration . . . 51
    3.3.6 Close-Coupled Cooling Units . . . 51

Chapter 4—Facility Temperature and Humidity Measurement . . . 53
  4.1 Facility Health and Audit Tests . . . 53
    4.1.1 Aisle Measurement Locations . . . 54
    4.1.2 HVAC Operational Status . . . 56
    4.1.3 Evaluation . . . 56
      4.1.3.1 Aisle Temperature and Humidity Levels . . . 56
      4.1.3.2 HVAC Unit Operation . . . 56
  4.2 Equipment Installation Verification Tests . . . 56
  4.3 Equipment Troubleshooting Tests . . . 57

Chapter 5—Equipment Placement and Airflow Patterns . . . 61
  5.1 Equipment Airflow . . . 61
    5.1.1 Airflow Protocol Syntax . . . 61
    5.1.2 Airflow Protocol for Equipment . . . 61
    5.1.3 Cabinet Design . . . 63
  5.2 Equipment Room Airflow . . . 63
    5.2.1 Placement of Cabinets and Rows of Cabinets . . . 63
    5.2.2 Cabinets with Dissimilar Airflow Patterns . . . 64
    5.2.3 Aisle Pitch . . . 65

Chapter 6—Equipment Manufacturers’ Heat and Airflow Reporting . . . 69
  6.1 Providing Heat Release and Airflow Values . . . 69
  6.2 Equipment Thermal Report . . . 70
  6.3 EPA ENERGY STAR® Reporting . . . 71
    6.3.1 Server Thermal Data Reporting Capabilities . . . 74

Appendix A—2008 ASHRAE Environmental Guidelines for Datacom Equipment—Expanding the Recommended Environmental Envelope . . . 75

Appendix B—2011 Air-Cooled Equipment Thermal Guidelines (I-P) . . . 85

Appendix C—Detailed Flowchart for the Use and Application of the ASHRAE Data Center Classes . . . 89

Appendix D—Static Control Measures . . . 95

Appendix E—OSHA and Personnel Working in High Air Temperatures . . . 99

Appendix F—Psychrometric Charts . . . 103

Appendix G—Altitude Derating Curves . . . 107

Appendix H—Practical Example of the Impact of Compressorless Cooling on Hardware Failure Rates . . . 108

Appendix I—IT Equipment Reliability Data for Selected Major U.S. and Global Cities . . . 113

Appendix J—Most Common Problems in Water-Cooled Systems . . . 129

References and Bibliography . . . 133


1
Introduction

Over the years, the power density of electronic equipment has steadily increased, as
shown and projected in Figure 1.1. In addition, the mission critical nature of
computing has sensitized businesses to the health of their data centers. The combi-
nation of these effects makes it obvious that better alignment is needed between
equipment manufacturers and facility operations personnel to ensure proper and
fault-tolerant operation within data centers.
This need was recognized by an industry consortium in 1999 that began a grass-
roots effort to provide a power density road map and to work toward standardizing
power and cooling of the equipment for seamless integration into the data center. The
Industry Thermal Consortium produced the first projection of heat density trends. The
IT subcommittee of ASHRAE Technical Committee (TC) 9.9 is the successor of that
industry consortium. Figure 1.1 shows the latest projection from the IT subcommittee
and is based on best estimates of heat release for fully configured systems. An updated
set of power trend charts is published in Datacom Equipment Power Trends and Cooling Applications, Second Edition (ASHRAE 2012). These updated equipment power trends extend to 2020, as shown for the 1U server in Figure 1.2.
The objective of Thermal Guidelines for Data Processing Environments, Third
Edition is to

• provide standardized operating environments for equipment,


• provide and define a common environmental interface for the equipment and
its surroundings,
• provide guidance on how to evaluate and test the operational health of the data
center,
• provide a methodology for reporting the environmental characteristics of a
computer system,
• guide data center owners and operators in making changes in the data center
environment, and
• provide the basis for measuring the effect of any changes intended to save
energy in the data center.

This book, which was generated by ASHRAE TC 9.9, provides equipment


manufacturers and facility operations personnel with a common set of guidelines for
environmental conditions. It is important to recognize that the IT subcommittee is
made up of subject matter experts from the major IT equipment manufacturers. It is
the intent of ASHRAE TC 9.9 to update this document regularly.
Figure 1.1 Heat density trends, projections for information technology products (ASHRAE 2005a).

Figure 1.2 1U server trends showing 2005 and new 2011 projections
(ASHRAE 2012).

Unless otherwise stated, the thermal guidelines in this document refer to data
center and other data-processing environments. Telecom central offices are discussed
in detail in Telcordia NEBS™ documents GR-63-CORE (2012) and GR-3028-CORE (2001), as well as ANSI T1.304 (1997) and the European ETSI standards
(1994, 1999). The NEBS documents are referenced when there is a comparison
between data centers and telecom rooms. These two equipment environments have
historically been very different. Nevertheless, it is important to show the comparison
where some convergence in these environments may occur in the future.

1.1 DOCUMENT FLOW


Following this introductory chapter, this guide continues as follows:

1. Chapter 2, “Environmental Guidelines for Air-Cooled Equipment” provides

• descriptions of the six environmental classes and the temperature and


humidity conditions that information technology (IT) equipment must
meet,
• the recommended operating environment for four of the information
technology equipment (ITE) classes,
• the opportunity for facility operators to plan excursions into the allowable
range or modify the recommended operating envelope based on details
provided in this book on the effect of data center environmentals on
server operation and reliability,
• the effect of altitude on each data center class, and
• a comparison with the NEBS environment for telecom equipment.
2. Chapter 3, “Environmental Guidelines for Liquid-Cooled Equipment”
provides information on five environmental classes for supply water tempera-
ture and other characteristics.
3. Chapter 4, “Facility Temperature and Humidity Measurement” provides a
recommended procedure for measuring temperature and humidity in a data
center.
Different protocols are described depending on whether the purpose of the
measurement is to perform

• an audit on the data center,


• an equipment installation verification test, or
• an equipment troubleshooting test.
4. Chapter 5, “Equipment Placement and Airflow Patterns” examines

• recommended airflow protocols,


• hot-aisle/cold-aisle configurations, and
• recommended equipment placement.
5. Chapter 6, “Equipment Manufacturers’ Heat and Airflow Reporting,”
provides the manufacturer with a methodology to report sufficient dimensional,
heat load, and airflow data to allow the data center to be adequately designed

to meet equipment requirements but not be overdesigned, as might be the case


if nameplate equipment ratings were used to estimate heat loads.
6. Appendix A, “2008 ASHRAE Environmental Guidelines for Datacom
Equipment—Expanding the Recommended Environmental Envelope,”
describes some of the methodology used in determining the recommended
envelope and also some scenarios for how the recommended and allowable
envelopes can be applied in an operational data center.
7. Appendix B, “Air-Cooled Equipment Thermal Guidelines (I-P)” shows the
new air-cooled equipment classes in I-P units.
8. Appendix C, “Detailed Flowchart for the Use and Application of the
ASHRAE Data Center Classes” provides, in detailed flowcharts, guidance
for data center operators to achieve data center operation within a specific envi-
ronmental envelope.
9. Appendix D, “Static Control Measures” discusses the need for minimum
humidity levels and basic electrostatic discharge (ESD) protection protocols in
the data center.
10. Appendix E, “OSHA and Personnel Working in High Air Temperatures”
provides some information and guidance on personnel working in high-
temperature environments.
11. Appendix F, “Psychrometric Charts” shows various psychrometric charts for
the air-cooled classes in different units.
12. Appendix G, “Altitude Derating Curves” shows the envelopes of tempera-
ture and elevation for Classes A1–A4, B, C, and NEBS.
13. Appendix H, “A Practical Example of the Impact of Compressorless Cool-
ing on Hardware Failure Rates” uses a hypothetical data center implemen-
tation in the city of Chicago to guide the reader through assessing the impact of
a compressorless cooling design on hardware failure rates using the informa-
tion in this book.
14. Appendix I, “IT Equipment Reliability Data for Selected Major U.S. and
Global Cities” uses ASHRAE Weather Data Viewer software (2009d) and the
relative hardware failure rate information in this book to provide localized
metrics on net hardware failure rates and annual hours per year of compres-
sorized cooling needed in selected major U.S. and global cities to comply with
the various environmental envelopes.
15. Appendix J, “Most Common Problems in Water-Cooled Systems”
describes the most common water-cooling problems and failures.
16. References and Bibliography provides references as cited throughout this
book.

1.2 PRIMARY USERS OF THIS DOCUMENT


Primary users of this book are those involved in the design, construction, commis-
sioning, operation, implementation, and maintenance of equipment rooms. Others
who may benefit from this guide are those involved in the development and design

of electronic equipment. Specific examples of the book’s intended audience include


the following:

• computer equipment manufacturers—research and development, marketing,


and sales organizations;
• infrastructure equipment manufacturers—cooling and power;
• consultants;
• general and trade contractors;
• equipment operators, IT departments, facilities engineers, and chief informa-
tion officers.

1.3 COMPLIANCE
It is the hope of TC 9.9 that many equipment manufacturers and facilities managers
will follow the guidance provided in this document. Data center facilities managers
can be confident that these guidelines have been produced by the IT manufacturers.
Manufacturers can self-certify the compliance of specific models of equipment
as intended for operation in data processing Air-Cooling Environmental Classes A1,
A2, A3, A4, B, and C and Liquid Cooling Environmental Classes W1–W5.

1.4 DEFINITIONS AND TERMS


air:
conditioned air: air treated to control its temperature, relative humidity, purity,
pressure, and movement.
supply air: air entering a space from an air-conditioning, heating, or ventilating
apparatus.
ANSI: American National Standards Institute.
cabinet: frame for housing electronic equipment that is enclosed by doors and is
stand-alone; this is generally found with high-end servers.
CDU: coolant distribution unit.
data center: a building or portion of a building whose primary function is to house
a computer room and its support areas; data centers typically contain high-end serv-
ers and storage products with mission-critical functions.
DP: dew point.
equipment: refers, but is not limited to, servers, storage products, workstations,
personal computers, and transportable computers; may also be referred to as elec-
tronic equipment or IT equipment.
equipment room: data center or telecom central office room that houses computer
and/or telecom equipment; for rooms housing mostly telecom equipment, see
Telcordia GR-3028-CORE (2001).

ESD: electrostatic discharge.


ETSI: European Telecommunications Standards Institute.
frameworks: structural portion of a frame.
heat:
total heat (enthalpy): a thermodynamic quantity equal to the sum of the internal
energy of a system plus the product of the pressure-volume work done on the
system.
h = E + pv
where
h = enthalpy or total heat content
E = internal energy of the system
p = pressure
v = volume
For the purposes of this document, h = sensible heat + latent heat.
sensible heat: heat that causes a change in temperature.
latent heat: change of enthalpy during a change of state.
heat load per product footprint: calculated by using product measured power
divided by the actual area covered by the base of the cabinet or equipment.
HPC: high-performance computing.
humidity:
absolute humidity: the mass of water vapor in a specific volume of a mixture
of water vapor and dry air.
relative humidity (RH):
a. Ratio of the partial pressure or density of water vapor to the saturation pres-
sure or density, respectively, at the same dry-bulb temperature and baro-
metric pressure of the ambient air.
b. Ratio of the mole fraction of water vapor to the mole fraction of water
vapor saturated at the same temperature and barometric pressure; at
100% RH, the dry-bulb, wet-bulb, and dew-point temperatures are equal.
humidity ratio: the ratio of the mass of water to the total mass of a moist air
sample; it is usually expressed as grams of water per kilogram of dry air (gw/
kgda) or as pounds of water per pound of dry air (lbw/lbda).

IEC: International Electrotechnical Commission; a global organization that


prepares and publishes international standards for all electrical, electronic, and
related technologies.
ITE: information technology (IT) equipment.
IT OEM: information technology original equipment manufacturer (OEM).
IT space: a space dedicated primarily to computers and servers but with environ-
mental and support requirements typically less stringent than those of a data center.
liquid cooled: the use of direct water or refrigerant for cooling ITE (as opposed to
using air).
Liquid Cooling Guidelines: TC 9.9 publication, Liquid Cooling Guidelines for
Datacom Equipment Centers (ASHRAE 2006).
NEBS: formerly Network Equipment-Building System; provides a set of physical,
environmental, and electrical requirements for a central office of a local exchange
carrier.
OSHA: Occupational Safety and Health Administration.
PCB: printed circuit board.
power:
measured power: the heat release in watts, as defined in Section 6.1, “Providing Heat Release and Airflow Values.”
nameplate rating: term used for rating according to nameplate (IEC 60950
[1999], under clause 1.7.1): “Equipment shall be provided with a power rating
marking, the purpose of which is to specify a supply of correct voltage and
frequency, and of adequate current-carrying capacity.”
rated voltage: the supply voltage as declared by the manufacturer.
rated voltage range: the supply voltage range as declared by the manufacturer.
rated current: “The input current of the equipment as declared by the manu-
facturer” (IEC 1999); the rated current is the absolute maximum current that is
required by the unit from an electrical branch circuit.
rated frequency: the supply frequency as declared by the manufacturer.
rated frequency range: the supply frequency range as declared by the manu-
facturer, expressed by its lower- and upper-rated frequencies.
PUE: power usage effectiveness.
rack: frame for housing electronic equipment.
rack-mounted equipment: equipment that is to be mounted in an Electronic Industry
Alliance (EIA 1992) or similar cabinet; these systems are generally specified in EIA
units, such as 1U, 2U, 3U, etc., where 1U = 44 mm (1.75 in.).

RH: see humidity, relative.

room load capacity: the point at which the equipment heat load in the room no longer
allows the equipment to run within the specified temperature requirements of the
equipment; Chapter 4, “Facility Temperature and Humidity Measurement,” defines
where these temperatures are measured. The load capacity is influenced by many
factors, the primary factor being the room theoretical capacity; other factors, such
as the layout of the room and load distribution, also influence the room load capacity.

room theoretical capacity: the capacity of the room based on the mechanical room
equipment capacity; this is the sensible capacity in kW (tons) of the mechanical
room for supporting the computer or telecom room heat loads.

TCO: total cost of ownership.

temperature:

dew point: the temperature at which water vapor has reached the saturation
point (100% RH).

dry bulb: the temperature of air indicated by a thermometer.

wet bulb: the temperature indicated by a psychrometer when the bulb of one
thermometer is covered with a water-saturated wick over which air is caused to
flow at approximately 4.5 m/s (900 ft/min) to reach an equilibrium temperature
of water evaporating into air, where the heat of vaporization is supplied by the
sensible heat of the air.

Thermal Guidelines: TC 9.9 publication, Thermal Guidelines for Data Processing


Environments.

ventilation: the process of supplying or removing air by natural or mechanical


means to or from any space; such air may or may not have been conditioned.

x-factor: a dimensionless metric that measures the relative hardware failure rate at
a given constant equipment inlet dry-bulb temperature when compared to a baseline
of the average hardware failure rate at a constant equipment inlet dry-bulb temper-
ature of 20°C (68°F).

x-factor, time-weighted (or net): a dimensionless metric indicating a statistical


equipment failure rate over a defined range of environmental temperatures when
compared to a constant baseline temperature of 20°C (68°F). It is calculated by
summing individual time-at-temperature bins multiplied by their associated x-
factor.
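A minimal worked example may help make the net x-factor calculation concrete. The sketch below is illustrative only: the function name and the per-bin x-factor values are hypothetical placeholders rather than the failure rate data presented later in this book, and it assumes each temperature bin is weighted by its share of total operating hours.

```python
# Minimal sketch of a time-weighted (net) x-factor calculation.
# The per-bin x-factor values are hypothetical placeholders, not the
# published ASHRAE values; bins are weighted by their share of total hours.

def net_x_factor(time_at_temperature_bins):
    """time_at_temperature_bins: iterable of (hours_in_bin, x_factor_for_bin) tuples."""
    total_hours = sum(hours for hours, _ in time_at_temperature_bins)
    weighted_sum = sum(hours * x for hours, x in time_at_temperature_bins)
    return weighted_sum / total_hours

# Hypothetical year: 6000 h in a cool bin (x-factor 0.9), 2760 h in a warm bin (1.1)
print(net_x_factor([(6000, 0.9), (2760, 1.1)]))  # approximately 0.96
```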
2
Environmental Guidelines for
Air-Cooled Equipment
Chapters 2 and 3 summarize data center environmental guidelines developed by
members of the TC 9.9 committee representing the IT equipment (ITE) manufac-
turers. These environmental guidelines are for terrestrial-based systems and do not
cover electronic systems designed for aircraft or spacecraft applications. In this
document the term server is used to generically describe any ITE, such as servers,
storage, network products, etc., used in data-center-like applications.

2.1 BACKGROUND
TC 9.9 created the original publication Thermal Guidelines for Data Processing
Environments in 2004 (ASHRAE 2004). At the time, the most important goal was
to create a common set of environmental guidelines that ITE would be designed to
meet. Although computing efficiency was important, performance and availability
took precedence. Temperature and humidity limits were set accordingly. Progress-
ing through the first decade of the 21st century, increased emphasis has been placed
on computing efficiency. Power usage effectiveness (PUE) has become the new
metric by which to measure the effect of design and operation on data center effi-
ciency. To improve PUE, free-cooling techniques, such as air- and water-side econ-
omization, have become more commonplace with a push to use them year-round. To
enable improved PUE capability, TC 9.9 created additional environmental classes,
along with guidance on the use of the existing and new classes. Expanding the capa-
bility of ITE to meet wider environmental requirements can change the equipment’s
reliability, power consumption, and performance capabilities; this third edition of
the book provides information on how these capabilities are affected.
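For reference, PUE is commonly defined as the ratio of total facility energy to the energy delivered to the ITE. The short sketch below simply applies that ratio to hypothetical annual energy figures; the function name and numbers are illustrative, not taken from this book.

```python
# Minimal sketch: PUE as the ratio of total facility energy to ITE energy.
# The annual energy figures below are hypothetical.

def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

# Example: 1.5 GWh of total facility energy against 1.0 GWh of ITE energy
print(pue(1_500_000, 1_000_000))  # 1.5
```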
In the second edition of Thermal Guidelines (ASHRAE 2008), the purpose of the
recommended envelope was to give guidance to data center operators on maintaining
high reliability and also operating their data centers in the most energy efficient
manner. This envelope was created for general use across all types of businesses and
conditions. However, different environmental envelopes may be more appropriate for
different business values and climate conditions. Therefore, to allow for the potential
to operate in a different envelope that might provide even greater energy savings, this
third edition provides general guidance on server metrics that will assist data center
operators in creating an operating envelope that matches their business values. Each
of these metrics is described. Through these guidelines, the user will be able to deter-
mine what environmental conditions best meet their technical and business needs.
Any choice outside of the recommended region will be a balance between the addi-
tional energy savings of the cooling system versus the deleterious effects that may be
created on total cost of ownership (TCO) (total site energy use, reliability, acoustics,
or performance). A simple representation of this process is shown in Figure 2.1 for those who decide to create their own envelope rather than use the recommended envelope for operation of their data center.

Figure 2.1 Server metrics for determining data center operating environment envelope.
A flow chart is also provided to help guide the user through the appropriate eval-
uation steps. Many of these metrics center around simple graphs that describe the
trends. However, the use of these metrics is intended for those who plan to go beyond
the recommended envelope for additional energy savings. Their use will require
significant additional analysis to understand the TCO impact of operating beyond
the recommended envelope.
The other major change in the environmental specification is to the data center
classes. Previously there were two classes (Classes 1 and 2) that applied to ITE used
in data center applications. The new environmental guidelines have more data center
classes to accommodate different applications and priorities of ITE operation. This
is critical because a single data center class forces a single optimization, whereas
each data center needs to be optimized based on the operator’s own criteria (e.g.,
TCO, reliability, performance, etc.).

2.2 NEW AIR-COOLED EQUIPMENT


ENVIRONMENTAL SPECIFICATIONS
Prior to the formation of TC 9.9, each commercial information technology (IT)
manufacturer published its own independent temperature specification. Typical data
centers were operated in a temperature range of 20°C to 21°C (68°F to 69.8°F) under

the common notion that colder is better. Most data centers deployed ITE from multi-
ple vendors. This resulted in designing for ambient temperatures based on the ITE
with the most stringent temperature requirements (plus a safety factor). TC 9.9
obtained informal consensus from the major commercial ITE manufacturers for
both recommended and allowable temperature and humidity ranges and for four
environmental classes, two of which were applied to data centers.
Another critical accomplishment of TC 9.9 was to establish ITE air inlets as the
common measurement point for temperature and humidity compliance; require-
ments in any other location within the data center were optional.
The environmental guidelines/classes are really the domain and expertise of IT
OEMs. TC 9.9’s IT subcommittee is exclusively composed of engineers from
commercial IT manufacturers; the subcommittee is strictly technical.
The commercial IT manufacturers’ design, field, and failure data are shared (to some extent) within the IT subcommittee, which enables greater levels of disclosure and ultimately led to the decision to expand the environmental specifications. Prior to TC 9.9, there were no organizations or forums to remove the barriers to sharing information among competitors. This is critical because if some manufacturers conform while others do not, the result is a multivendor data center where the most stringent requirement plus a safety factor would most likely prevail.
From an end-user perspective, it is also important that options, such as the
following, are provided for multivendor facilities:

• Option 1—Use of ITE optimized for a combination of attributes, including


energy efficiency and capital cost, with the dominant attribute being reliability.
• Option 2—Use of ITE optimized for a combination of attributes, including
some level of reliability, with the dominant attribute being energy and com-
pressorless cooling.

The industry needs both types of equipment but also needs to avoid having
Option 2 inadvertently increase the acquisition cost of Option 1 by increasing
purchasing costs through mandatory requirements not desired or used by all end
users. Expanding the temperature and humidity ranges can increase the physical size
of the ITE (e.g., more heat-transfer area required), increase ITE airflow, etc. This can
impact embedded energy cost, power consumption, and ITE purchase cost but
enables peak performance under high-temperature operation.
By adding new classes rather than mandating that all servers conform to a requirement such as an air inlet temperature of 40°C (104°F), the increased server packaging cost for
energy optimization becomes an option rather than a mandate. Before the new
classes are described and compared to their 2008 version, several key definitions
need to be highlighted.
recommended environmental range: Facilities should be designed to achieve,
under normal circumstances, ambient conditions that fall within the recommended
range. This recommended range may be as defined either in Table 2.3 or by the
process outlined later in this chapter whereby the user can apply the metrics in
Figure 2.1 (described in more detail in this book) to define a different recommended
range more appropriate for particular business objectives.

allowable environmental range: The allowable envelope is where IT manufacturers


test their equipment in order to verify that it will function within those environmental
boundaries. Typically, IT manufacturers perform a number of tests prior to the
announcement of the product to verify that it meets all the functional requirements
within the environmental envelope. This is not a statement of reliability but one of
functionality of the ITE. In addition to the allowable dry-bulb temperature and rela-
tive humidity (RH) ranges, the maximum dew point (DP) and maximum elevation
values are part of the allowable operating environment definitions.
practical application: Prolonged exposure of operating equipment to conditions
outside its recommended range, especially approaching the extremes of the allow-
able operating environment, can result in decreased equipment reliability and
longevity (server reliability values versus inlet air temperature are now included in this 2011 version to give some guidance on operating outside the recommended
range). Exposure of operating equipment to conditions outside its allowable oper-
ating environment risks catastrophic equipment failure. With equipment at high
power density, it may be difficult to maintain air entering the equipment within the
recommended range, particularly over the entire face of the equipment. In these situ-
ations, reasonable efforts should be made to achieve conditions within the recom-
mended range. If these efforts prove unsuccessful, operation within the allowable
environment is likely to be adequate, but facility operators may wish to consult with
the equipment manufacturers regarding the risks involved.

The primary difference between the first edition of Thermal Guidelines, published in 2004, and the second edition, published in 2008, was the change to the recommended envelope shown in Table 2.1. The 2008 recommended guidelines have not changed in the third edition, as shown in Table 2.3. However, as stated above, there is an opportunity to define a different recommended envelope based on
the metrics shown in Figure 2.1 and documented later in this chapter. More infor-
mation is provided later on this subject. For more background on the changes made
in 2008 to the recommended envelope, refer to Appendix A.
To enable improved operational efficiency, ASHRAE TC 9.9 has added two new
ITE environmental classes to Thermal Guidelines that are more compatible with
chillerless cooling. The naming conventions have been updated to better delineate the
types of ITE. Old and new classes are now specified differently (comparisons are
shown in Table 2.2). The 2008 version shows two data center classes (Classes 1 and
2) which have been kept the same in this update but are now referred to as Classes A1
and A2. The new data center classes are referred to as Classes A3 and A4.

Table 2.1 Comparison of 2004 and 2008 Versions of Recommended Envelopes

Parameter | 2004 Version | 2008 Version
Low-end temperature | 20°C (68°F) | 18°C (64.4°F)
High-end temperature | 25°C (77°F) | 27°C (80.6°F)
Low-end moisture | 40% RH | 5.5°C (41.9°F) DP
High-end moisture | 55% RH | 60% RH and 15°C (59°F) DP

Table 2.2 2008 and 2011 Thermal Guideline Comparisons

2011 Class | 2008 Class | Applications | IT Equipment (ITE) | Environmental Control
A1 | 1 | Data center | Enterprise servers, storage products | Tightly controlled
A2 | 2 | Data center | Volume servers, storage products, personal computers, workstations | Some control
A3 | NA | Data center | Volume servers, storage products, personal computers, workstations | Some control
A4 | NA | Data center | Volume servers, storage products, personal computers, workstations | Some control
B | 3 | Office, home, transportable environment, etc. | Personal computers, workstations, laptops, and printers | Minimal control
C | 4 | Point-of-sale, industrial, factory, etc. | Point-of-sale equipment, ruggedized controllers, or computers and PDAs | No control

Environmental Class Definitions for Air-Cooled Equipment

Compliance with a particular environmental class requires full operation of the equipment over the entire allowable environmental range, based on nonfailure conditions.

Class A1: Typically a data center with tightly controlled environmental parameters (dew point, temperature, and RH) and mission-critical operations; types of products typically designed for this environment are enterprise servers and storage products.

Class A2/A3/A4: Typically an information technology space with some control of environmental parameters (dew point, temperature, and RH); types of products typically designed for this environment are volume servers, storage products, personal computers, and workstations. Among these three classes, A2 has the narrowest temperature and moisture requirements and A4 has the widest.

Class B: Typically an office, home, or transportable environment with minimal control of environmental parameters (temperature only); types of products typically designed for this environment are personal computers, workstations, laptops, and printers.

Class C: Typically a point-of-sale or light industrial or factory environment with weather protection and sufficient winter heating and ventilation; types of products typically designed for this environment are point-of-sale equipment, ruggedized controllers, or ruggedized computers and PDAs.
Table 2.3 2011 Thermal Guidelines—SI Version (I-P Version in Appendix B)
Equipment Environment Specifications for Air Cooling

Product operation (b,c) columns: dry-bulb temperature (e,g), °C; humidity range, noncondensing (h,i); maximum dew point, °C; maximum elevation (e,j), m; maximum rate of change (f), °C/h. Product power off (c,d) columns: dry-bulb temperature, °C; RH, %; maximum dew point, °C.

Recommended* (suitable for all four classes; explore the data center metrics in this book for conditions outside this range):
A1 to A4 | 18 to 27 | 5.5°C DP to 60% RH and 15°C DP

Allowable:
A1 | 15 to 32 | 20% to 80% RH | 17 | 3050 | 5/20 | 5 to 45 | 8 to 80 | 27
A2 | 10 to 35 | 20% to 80% RH | 21 | 3050 | 5/20 | 5 to 45 | 8 to 80 | 27
A3 | 5 to 40 | –12°C DP and 8% RH to 85% RH | 24 | 3050 | 5/20 | 5 to 45 | 8 to 80 | 27
A4 | 5 to 45 | –12°C DP and 8% RH to 90% RH | 24 | 3050 | 5/20 | 5 to 45 | 8 to 80 | 27
B | 5 to 35 | 8% to 80% RH | 28 | 3050 | N/A | 5 to 45 | 8 to 80 | 29
C | 5 to 40 | 8% to 80% RH | 28 | 3050 | N/A | 5 to 45 | 8 to 80 | 29

*The 2008 recommended ranges as shown here and in Table 2.1 can still be used for data centers. For potentially greater energy savings, refer to the section “Detailed Flowchart for the Use and Application of the ASHRAE Data Center Classes” in Appendix C for the process needed to account for multiple server metrics that impact overall TCO.
Notes for Table 2.3, 2011 Thermal Guidelines—SI Version (I-P Version in Appendix B)
a. Classes A1, A2, B, and C are identical to 2008 Classes 1, 2, 3, and 4. These classes have simply been renamed to avoid confusion with Classes A1 through A4. The recommended envelope is identical to that published in the 2008 version of Thermal Guidelines.
b. Product equipment is powered ON.
c. Tape products require a stable and more restrictive environment (similar to Class A1). Typical requirements: minimum temperature is 15°C, maximum temperature is 32°C, minimum RH is 20%, maximum RH is 80%, maximum dew point is 22°C, rate of change of temperature is less than 5°C/h, rate of change of humidity is less than 5% RH per hour, and no condensation.
d. Product equipment is removed from the original shipping container and installed but not in use, e.g., during repair, maintenance, or upgrade.
e. Classes A1, A2, B, and C—Derate maximum allowable dry-bulb temperature 1°C/300 m above 900 m. Above 2400 m altitude, the derated dry-bulb temperature takes precedence over the recommended temperature. A3—Derate maximum allowable dry-bulb temperature 1°C/175 m above 900 m. A4—Derate maximum allowable dry-bulb temperature 1°C/125 m above 900 m.
f. 5°C/h for data centers employing tape drives and 20°C/h for data centers employing disk drives.
g. With a diskette in the drive, the minimum temperature is 10°C.
h. The minimum humidity level for Classes A3 and A4 is the higher (more moisture) of the –12°C dew point and the 8% RH. These intersect at approximately 25°C. Below this intersection (~25°C) the dew point (–12°C) represents the minimum moisture level, while above it, RH (8%) is the minimum.
i. Moisture levels lower than 0.5°C DP, but not lower than –12°C DP or 8% RH, can be accepted if appropriate control measures are implemented to limit the generation of static electricity on personnel and equipment in the data center. All personnel and mobile furnishings/equipment must be connected to the ground via an appropriate static control system. The following items are considered the minimum requirements (see Appendix A for additional details):
   1) Conductive materials
      a) conductive flooring
      b) conductive footwear on all personnel that go into the data center, including visitors just passing through
      c) all mobile furnishings/equipment to be made of conductive or static-dissipative materials
   2) During maintenance on any hardware, a properly functioning wrist strap must be used by any personnel who contact ITE.
j. To accommodate rounding when converting between SI and I-P units, the maximum elevation is considered to have a variation of ±0.1%. The impact on ITE thermal performance within this variation range is negligible and enables the use of rounded values of 3050 m (10,000 ft).
New note: Operation above 3050 m requires consultation with the IT supplier for each specific piece of equipment.

For comparison, the NEBS and ETSI Class 3.1 environmental specifications are listed in Tables 2.4 and 2.5.
The European Telecommunications Standards Institute (ETSI) defines stan-
dards for information and communications technologies and is recognized by the
European Union as a European standards organization. ETSI has defined a set of five
environmental classes based on the end-use application. ETSI Classes 3.1 and 3.1e
apply to telecommunications centers, data centers and similar end-use locations.
These classes assume a noncondensing environment, no risk of biological or animal
contamination, normal levels of airborne pollutants, insignificant vibration and
shock, and that the equipment is not situated near a major source of sand or dust.
Classes 3.1 and 3.1e apply to permanently temperature-controlled enclosed loca-
tions where humidity is not usually controlled. A high-level summary of Classes 3.1
and 3.1e is given in Table 2.5 along with a climatogram of those same conditions
(Figure 2.2).
For more details on the Class 3.1 and 3.1e specification requirements, please
consult ETS 300 019-1-3 (ETSI 2009).
The new guidelines (Table 2.3) were developed with a focus on providing as
much information as possible to the data center operators, allowing them to maxi-
mize energy efficiency without sacrificing the reliability required by their business.
The allowable environmental ranges for the four data center classes, including the
two new ones (A3 and A4), are shown in Figures 2.3 and 2.4 (SI and I-P, respec-
tively). Derating of the maximum dry-bulb temperature must be applied as previously described.
Class A3 expands the temperature range to 5°C to 40°C (41°F to 104°F) while
also expanding the moisture range to extend from a low moisture limit of 8% RH and
–12°C (10.4°F) dew point to a high moisture limit of 85% RH.
Class A4 expands the allowable temperature and moisture range even further than
Class A3. The temperature range is expanded to 5°C to 45°C (41°F to 113°F), while
the moisture range extends from 8% RH and –12°C (10.4°F) dew point to 90% RH.
Based on the allowable lower moisture limits for Classes A3 and A4, there are
some added minimum requirements that are listed in note “i” in the table that pertain
to the protection of the equipment from electrostatic discharge (ESD) failure-inducing
events that could possibly occur in low-moisture environments.
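To illustrate how these limits interact, the sketch below checks a measured inlet condition against the Class A3 allowable envelope from Table 2.3, assuming an altitude at or below 900 m so that no dry-bulb derating applies. The dew point is estimated with the Magnus approximation; the constants and function names are assumptions of this example, not part of the ASHRAE specification.

```python
# Minimal sketch: check an inlet condition against the Class A3 allowable
# envelope of Table 2.3 (no altitude derating applied). The Magnus constants
# are a common approximation and are an assumption of this example.
import math

def dew_point_c(dry_bulb_c, rh_percent):
    """Approximate dew point (Magnus formula) from dry-bulb temperature and RH."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + a * dry_bulb_c / (b + dry_bulb_c)
    return b * gamma / (a - gamma)

def within_class_a3(dry_bulb_c, rh_percent):
    """True if the condition meets the Class A3 allowable limits at or below 900 m."""
    dp = dew_point_c(dry_bulb_c, rh_percent)
    return (
        5.0 <= dry_bulb_c <= 40.0       # allowable dry-bulb range
        and 8.0 <= rh_percent <= 85.0   # 8% RH lower and 85% RH upper limits
        and -12.0 <= dp <= 24.0         # -12 C DP lower limit and 24 C maximum dew point
    )

print(within_class_a3(35.0, 50.0))  # True
print(within_class_a3(35.0, 60.0))  # False: dew point (~26 C) exceeds the 24 C maximum
```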
The recommended envelope is highlighted as a separate row in Table 2.3
because of some misconceptions regarding the use of the recommended envelope.
When it was first created, it was intended that within this envelope the most reliable,
acceptable and reasonably power-efficient operation could be achieved. Data from
the manufacturers were used to create the recommended envelope. It was never
intended that the recommended envelope would represent the absolute limits of inlet
air temperature and humidity for ITE. As stated in the second edition of Thermal
Guidelines, the recommended envelope defined the limits under which ITE would
operate most reliably while still achieving reasonably energy-efficient data center
operation. However, in order to use economizers as much as possible to save energy
Table 2.4 NEBS Environmental Specifications
Equipment Environmental Specifications for Air Cooling

Product operation columns: dry-bulb temperature, °C; humidity range, noncondensing; maximum dew point, °C; maximum elevation, m; maximum rate of change, °C/h. Product power off columns: dry-bulb temperature, °C; RH, %; maximum dew point, °C.

Recommended:
NEBS (a) | 18 to 27 (e) | 55% RH maximum (f)

Allowable:
NEBS (a) | 5 to 40 (b,c,d) | 5% to 85% RH (b,d) | 28 (b) | 4000 (b) | N/A | N/A | N/A | N/A

a. The product operation values given for NEBS are from GR-63-CORE (Telcordia 2006) and GR-3028-CORE (Telcordia 2001). GR-63-CORE also addresses conformance testing of new equipment for adequate robustness. Some of the test conditions are summarized below; for complete test details, please review GR-63-CORE. Conformance test conditions (short-term) of new equipment:
   Dry-bulb temperature, frame level: –5°C to 50°C; 16 hours at –5°C, 16 hours at 50°C (GR-63-CORE)
   Dry-bulb temperature, shelf level: –5°C to 55°C; 16 hours at –5°C, 16 hours at 55°C (GR-63-CORE)
   Maximum rate of change: 96°C/h warming and 30°C/h cooling (GR-63-CORE and GR-3028-CORE)
   RH: 5% to 90%; 3 hours at <15% RH, 96 hours at 90% RH (GR-63-CORE)
   Maximum dew point: 28°C (GR-63-CORE)
b. Requirements for continuous operating conditions that new equipment will tolerate (GR-63-CORE). A feature or function that, in the view of Telcordia, is necessary to satisfy the needs of a typical client company is labeled “Requirement” and is flagged by the letter “R.” The conformance testing described in note “a” is designed to ensure that equipment tolerates the specified continuous operating conditions.
c. Derate maximum dry-bulb temperature 10°C at and above 1800 m.
d. Also ANSI T1.304-1997 (ANSI 1997).
e. Recommended facility operation per GR-3028-CORE. No NEBS requirements exist.
f. Generally accepted telecom practice; the major regional service providers have shut down almost all humidification based on Telcordia research. Personal grounding is strictly enforced to control ESD failures. No NEBS requirements exist.

Table 2.5 ETSI Class 3.1 and 3.1e Environmental Requirements

Parameter | Continuous Operation | Class 3.1: 10% of Operational Hours | Class 3.1e: 1% of Operational Hours
Temperature Ranges | 10°C to 35°C (50°F to 95°F) | 5°C to 10°C (41°F to 50°F) and 35°C to 40°C (95°F to 104°F) | –5°C to 5°C (23°F to 41°F) and 40°C to 45°C (104°F to 113°F)
Humidity Ranges | 10% to 80% RH (a) | 5% to 10% RH (b) and 80% to 85% RH (c) | 5% to 10% RH (b) and 80% to 85% RH (c)

a. With a minimum absolute humidity of no less than 1.5 g/m3 (0.04 g/ft3) and a maximum absolute humidity of no more than 20 g/m3 (0.57 g/ft3).
b. With a minimum absolute humidity of no less than 1 g/m3 (0.03 g/ft3).
c. With a maximum absolute humidity of no more than 25 g/m3 (0.71 g/ft3).
Note: The maximum rate of temperature change for continuous operation, Class 3.1 and Class 3.1e, is 0.5°C/min (0.9°F/min) averaged over a period of 5 minutes.
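The absolute humidity limits in the notes above are given in g/m3, while field measurements are usually dry-bulb temperature and RH. The sketch below converts between the two using a Magnus-type saturation pressure approximation and the ideal gas law for water vapor; the constants and function names are assumptions of this example, not taken from the ETSI standard.

```python
# Minimal sketch: convert dry-bulb temperature and RH to absolute humidity
# (g/m3) so a measured condition can be compared with the ETSI limits above.
# The Magnus-type constants are a common approximation (an assumption here).
import math

def saturation_vapor_pressure_pa(dry_bulb_c):
    """Approximate saturation vapor pressure over water, in Pa."""
    return 610.94 * math.exp(17.625 * dry_bulb_c / (243.04 + dry_bulb_c))

def absolute_humidity_g_per_m3(dry_bulb_c, rh_percent):
    """Water vapor density from the ideal gas law for water vapor."""
    vapor_pressure_pa = saturation_vapor_pressure_pa(dry_bulb_c) * rh_percent / 100.0
    r_water_vapor = 461.5  # J/(kg*K), specific gas constant of water vapor
    kelvin = dry_bulb_c + 273.15
    return 1000.0 * vapor_pressure_pa / (r_water_vapor * kelvin)

# Example: 35 degC at 80% RH is roughly 31.6 g/m3, above the 25 g/m3 cap of note c
print(round(absolute_humidity_g_per_m3(35.0, 80.0), 1))
```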

Figure 2.2 Climatogram of the ETSI Class 3.1 and 3.1e environmental
conditions (ETSI 2009).

Figure 2.3 Environmental classes for data centers—SI Version (see Table 2.3 for dry-bulb temperature derating for altitude).

Figure 2.4 Environmental classes for data centers—I-P Version (see Table 2.3 for dry-bulb temperature derating for altitude).

during certain times of the year, the inlet server conditions may fall outside the
recommended envelope but still within the allowable envelope. The second edition
of Thermal Guidelines also states that it is acceptable to operate outside the recom-
mended envelope for short periods of time without risk of affecting the overall reli-
ability and operation of the ITE. However, some still felt the recommended envelope
was mandatory, even though that was never the intent.
Equipment inlet air temperature measurements are specified in Chapter 4.
However, to aid in data center layout and inlet rack temperature monitoring, manu-
facturers of electronic equipment should include temperature sensors within their
equipment that monitor and display or report the inlet air temperature. For product
operation, the environmental specifications given in Table 2.3 refer to the air enter-
ing the electronic equipment. Air exhausting from electronic equipment is not rele-
vant to the manufacturers of such equipment. However, the exhaust temperature is
a concern, for example, for service personnel working in the hot exhaust airstream.
Some information and guidance from OSHA is given in Appendix E for personnel
working in high-temperature environments.
The allowable and recommended envelopes for Classes A1–A4, B, C, and
NEBS are depicted in psychrometric charts beginning with Appendix F. The recom-
mended operating environment specified in Table 2.3 is based in general on reliabil-
ity aspects of the electronic hardware:

1. High RH levels have been shown to affect failure rates of electronic compo-
nents. Examples of failure modes exacerbated by high RH include conductive
anodic failures, hygroscopic dust failures, tape media errors and excessive
wear, and corrosion. The recommended upper RH limit is set to limit this effect.
2. Electronic devices are susceptible to damage by electrostatic discharge, while
tape products and media may have excessive errors in rooms that have low RH.
The recommended low RH limit is set to limit this effect.
3. High temperature affects the reliability and life of electronic equipment. The
recommended upper ambient temperature limit is set to limit these temperature-
related reliability effects.
4. The lower the temperature in the room that houses the electronic equipment, the
more energy is required by the HVAC equipment. The recommended lower
ambient temperature limit is set to limit extreme overcooling.

For data center equipment, each individual manufacturer tests to specific envi-
ronmental ranges, and these may or may not align with the allowable ranges spec-
ified in Table 2.3.
For telecommunications equipment, the test environment ranges (short-term) are
specified in NEBS GR-63-CORE (Telcordia 2012), together with the allowable oper-
ating conditions (long-term or continuous). To ensure adequate robustness for tele-
communications equipment, the test ranges for new equipment are based on
environmental conditions that may—with a certain low probability—occur in various
telecommunications environments. These limits shall not be considered continuous
operating conditions but do include short-term operation of 96 consecutive hours at extreme temperature.

Figure 2.5 World population distribution versus altitude (Cohen and Small 1998).
In selecting the maximum altitude for which data center products should oper-
ate, Figure 2.5 shows that the majority of the population resides below 3000 m
(9840 ft); therefore, the maximum altitude for Classes A1–A4 was chosen as
3050 m (10,000 ft). Telecom NEBS (GR-63 CORE) uses a maximum altitude of
4000 m (13,120 ft) to account for telecommunications central offices that are located
at the higher elevations.
The purpose of specifying a derating on the maximum dry-bulb temperature for
altitude (see Table 2.3 footnote “e”) is to identify acceptable environmental limits that
compensate for degradation in air cooling performance at high altitudes. The rate of
heat transfer in air-cooled electronics is a function of convective heat transfer and
coolant mass flow rates, both of which decrease as a result of reduced air density,
which accompanies the lower atmospheric pressure at high altitudes. An altitude
derating restricts the maximum allowable upper operating temperature limit when the
system is operated at higher altitudes and permits a higher operating temperature limit
when the system is operated at lower altitudes. Altitude derating thus ensures that
system component temperatures stay within functional limits, while extending the
useful operating range to the maximum extent possible for a given cooling design.
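For illustration, the altitude derating can be scripted as a simple linear adjustment. The starting altitude and slope used below are placeholder assumptions; the governing values for each class are those in Table 2.3, footnote "e," and the derating curves in Appendix F.

```python
# Illustrative sketch only: the actual derating curves are defined in Table 2.3
# (footnote "e") and Appendix F; the rate and threshold below are assumptions.

def derated_max_dry_bulb(max_dry_bulb_c, altitude_m,
                         start_altitude_m=950.0, derate_c_per_m=1.0 / 300.0):
    """Return the altitude-adjusted maximum allowable dry-bulb temperature (deg C).

    max_dry_bulb_c   -- class maximum dry-bulb temperature at low altitude
    altitude_m       -- installation altitude in meters
    start_altitude_m -- assumed altitude above which derating begins
    derate_c_per_m   -- assumed derating slope (e.g., 1 deg C per 300 m)
    """
    excess = max(0.0, altitude_m - start_altitude_m)
    return max_dry_bulb_c - excess * derate_c_per_m

# Example: a hypothetical product with a 35 deg C class maximum installed at 2000 m
print(round(derated_max_dry_bulb(35.0, 2000.0), 1))  # about 31.5 with these assumed values
```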
One area that needed careful consideration was the application of the altitude
derating for the new classes. Had the same derating curve defined for Classes A1 and
A2 simply been reused, the new Classes A3 and A4 would have imposed undesirable
increases in server energy on users at all altitudes in order to support the higher
altitudes.

In an effort to provide both a relaxed operating environment and a focus on the
solution with the lowest TCO for the client, a modification to this derating was
applied. The new derating curves for Classes A3 and A4 maintain significant
relaxation while mitigating the extra expense incurred both during acquisition of the
ITE and during operation due to increased power consumption. See Appendix F for
the derating curves.
The relationship between dry-bulb temperature, altitude, and air density for the
different environments is depicted graphically in Appendix G.
The impact of two key factors (reliability and power versus ambient tempera-
ture) driving the previous inclusion of the recommended envelope is now provided,
as well as several other server metrics to aid in defining an envelope that more closely
matches each user’s business and technical needs. The relationships between these
factors and inlet temperature are now provided, allowing data center operators to
decide how they can optimally operate within the allowable envelopes.

2.3 GUIDE FOR THE USE AND APPLICATION OF THE ASHRAE DATA CENTER CLASSES
The addition of further data center classes significantly complicates the decision
process for the data center owner/operator when trying to optimize efficiency,
reduce total cost of ownership, address reliability issues, and improve performance.
Table 2.6 attempts to capture many of the characteristics involved in the decision-
making process. Data center optimization is a complex multivariable problem and
requires a detailed engineering evaluation for any significant changes to be success-
ful. An alternative operating envelope should be considered only after appropriate
data are collected and interactions within the data center are understood. Each
parameter’s current and planned status could lead to a different endpoint for the data
center optimization path.
The worst-case scenario would be for an end user to carelessly assume that ITE
is capable of operating in Class A3 or A4, or that the mere definition of these
classes, with their improved environmental capability, magically solves existing
data center thermal management, power density, or cooling problems. While some
new ITE may operate in these classes, other ITE, including legacy equipment, may
not. Data center problems would most certainly be compounded if the user
erroneously assumes that Class A3 or A4 conditions are acceptable. The rigorous use
of the tools and guidance in this chapter should preclude such errors. Table 2.6
summarizes the key characteristics and potential options to be considered when
evaluating the optimal operating range for each data center.
By understanding the characteristics described above, along with the data center
capability, one can follow the general steps necessary in setting the operating
temperature and humidity range of the data center:

1. Consider the state of best practices for the data center. Most best practices,
including airflow management and cooling-system controls strategies, should
be implemented prior to the adoption of higher server inlet temperature.

Table 2.6 Range of Options to Consider for Optimizing Energy Savings
(Characteristic: Range of options)

Project type: New, retrofit, existing upgrade
Architectural aspects: Layout and arrangement, economizer airflow path, connections between old and new sections
Airflow management: Extensive range, from none to full containment (see Notes 1 and 2); room's performance in enabling uniform ITE inlet temperatures and reducing or eliminating undesired recirculation
Cooling controls sensor location: Cooling system return, under floor, in-row, IT inlet
Temperature/humidity rating of all existing equipment: Temperature/humidity rating of power distribution equipment, cabling, switches and network gear, room instrumentation, humidification equipment, cooling unit allowable supply and return temperatures, personnel health and safety requirements
Economizer: None, to be added, existing, water-side, air-side
Chiller: None, existing
Climate factors—Temperature: Range of temperature in the region (obtain bin data and/or design extremes), number of hours per year above potential ASHRAE class maximums
Climate factors—Humidity: Range of humidity in the region (obtain bin data and/or design extremes for RH and dew point), coincident temperature and humidity extremes, number of hours per year outside potential ASHRAE class humidity ranges
Cooling architecture: Air, liquid, perimeter, row, rack level
Data center type (see Note 3): HPC, internet, enterprise, financial

Note 1: Some CRAC/CRAH units have limited return temperatures, as low as 30°C (86°F).
Note 2: With good airflow management, server temperature rise can be on the order of 20°C (36°F); with an inlet temperature of 40°C (104°F), the hot aisle could be 60°C (140°F).
Note 3: Data center type affects reliability/availability requirements.

2. Determine the maximum allowable ASHRAE class environment from Table 2.3
based on review of all ITE environmental specifications.
3. Use default recommended operating envelope (see Table 2.3), or if more energy
savings is desired, use the following information to determine the operating
envelope:
a. Climate data for locale (only when utilizing economizers)
b. Server power trend versus ambient temperature (see section 2.4.1)
c. Acoustical noise levels in the data center versus ambient temperature (see
section 2.4.2)
d. Server reliability trend versus ambient temperature (see section 2.4.3)
e. Server reliability versus moisture, contamination and other temperature
effects (see section 2.4.4)
f. Server performance trend versus ambient temperature (see section 2.4.5)
g. Server cost trend versus ambient temperature (see section 2.4.6)

The steps above provide a simplified view of the flowchart in Appendix C. The
use of Appendix C is highly encouraged as a starting point for the evaluation of the
options. The flowchart provides guidance to the data center operators seeking to
minimize TCO on how best to position their data center for operating in a specific
environmental envelope. Possible endpoints range from optimization of TCO within
the recommended envelope as specified in the 2008 version of the ASHRAE enve-
lopes (see Table 2.1) to a chillerless data center using any of the data center classes.
More importantly, Appendix C describes how to achieve even greater energy savings
through a TCO analysis using the server metrics provided in the next sections.
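As a complement to the flowchart in Appendix C, the sketch below shows one way steps 2 and 3a might be organized programmatically. The class limits shown are commonly cited allowable maxima and should be verified against Table 2.3; the equipment list and climate samples are purely hypothetical.

```python
# Minimal sketch of step 2 (and part of step 3a): find the most restrictive class
# across the installed ITE, then count climate hours above that class's allowable
# maximum. Limits are commonly cited values to be confirmed against Table 2.3;
# the fleet and bin data are hypothetical examples.

ALLOWABLE_MAX_C = {"A1": 32.0, "A2": 35.0, "A3": 40.0, "A4": 45.0}

def governing_class(ite_class_ratings):
    """Return the class whose allowable maximum is the lowest among the fleet."""
    return min(ite_class_ratings, key=lambda c: ALLOWABLE_MAX_C[c])

def hours_above_limit(hourly_dry_bulb_c, data_center_class):
    """Count climate hours exceeding the governing class's allowable maximum."""
    limit = ALLOWABLE_MAX_C[data_center_class]
    return sum(1 for t in hourly_dry_bulb_c if t > limit)

fleet = ["A3", "A2", "A4"]           # rated classes of installed ITE (example)
bin_data = [18.0, 26.5, 33.0, 36.5]  # hourly outdoor dry-bulb samples (example)

cls = governing_class(fleet)                  # -> "A2"
print(cls, hours_above_limit(bin_data, cls))  # -> A2 1
```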

2.4 SERVER METRICS TO GUIDE USE OF NEW GUIDELINES


The development of the recommended envelopes for the 2004 and 2008 versions
was based on the IT manufacturers' knowledge of both the reliability and equipment
power consumption trends of servers as a function of inlet air temperature. In order
to use a different envelope providing greater flexibility in data center operation,
some knowledge of these two factors must be provided. The next sections provide
trend data for ITE power and reliability over a wide range of ambient temperatures.
In addition, some aspects of server performance, acoustics, corrosion, and cost
versus ambient temperature and humidity are also discussed.
A number of server metrics are presented in the sections below and are shown
as ranges for the parameter of interest. The ranges are meant to capture most of the
volume server market. If specific server information is desired, contact the IT manu-
facturer.

2.4.1 Server Power Trend versus Ambient Temperature


Data were collected from a number of ITE manufacturers covering a wide range of
products. Most of the data collected for the A2 environment fell within the envelope
displayed in Figure 2.6. The power increase is a result of fan power, component
power, and the power conversion for each. The component power increase is a result
of an increase in leakage current for some silicon devices. As an example of the use
of Figure 2.6, if a data center is normally operating at a server inlet temperature of
15°C (59°F) and the operator wants to raise this temperature to 30°C (86°F), it could
be expected that the server power would increase in the range of 3% to 7%. If the
inlet temperature increases to 35°C (95°F), the ITE power could increase in the
range of 7% to 20% compared to operating at 15°C (59°F).

Figure 2.6 Server power increase (Class A3 is an estimate) versus ambient temperature for Classes A2 and A3.
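The example above can be generalized in a short script. The percentage ranges below are simply the two data points quoted from Figure 2.6 (relative to a 15°C [59°F] baseline); they are illustrative and are not a substitute for the figure or for manufacturer-specific data.

```python
# Hedged sketch: the fractional ranges come from the two example readings of
# Figure 2.6 described in the text (vs. a 15 deg C baseline) and are illustrative only.

FIGURE_2_6_RANGES = {
    30.0: (0.03, 0.07),   # roughly 3% to 7% increase at 30 deg C inlet vs. 15 deg C
    35.0: (0.07, 0.20),   # roughly 7% to 20% increase at 35 deg C inlet vs. 15 deg C
}

def estimated_power_range_kw(baseline_power_kw, target_inlet_c):
    """Return (low, high) estimated ITE power at the target inlet temperature."""
    low_frac, high_frac = FIGURE_2_6_RANGES[target_inlet_c]
    return (baseline_power_kw * (1 + low_frac), baseline_power_kw * (1 + high_frac))

# Example: a 500 kW IT load measured at 15 deg C inlet, raised to 35 deg C
low, high = estimated_power_range_kw(500.0, 35.0)
print(round(low, 1), round(high, 1))  # -> 535.0 600.0
```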

Figure 2.7 Server flow rate increase versus ambient temperature increase.

Since very few data center class products currently exist for the Class A3 envi-
ronment, the development of the Class A3 envelope shown in Figure 2.6 was simply
extrapolated from the Class A2 trend. (Note: Equipment designed for NEBS envi-
ronments would likely meet the new class requirements but typically have limited
features and performance in comparison with volume ITE). New products for this
class would likely be developed with improved heat sinks and/or fans to properly cool
the components within the new data center class, so the power increases over the
wider range would be very similar to that shown for Class A2.
With the increase in fan speed over the range of ambient temperatures, ITE flow
rate also increases. An estimate of the increase in server airflow rates over the
temperature range up to 35°C (95°F) is displayed in Figure 2.7. This consideration
is very important when designing data centers to take advantage of temperatures
above the 25°C to 27°C (77°F to 80.6°F) inlet ambient temperature range. With
higher temperatures as an operational target, the design must be analyzed for its
capability to handle the higher volumes of airflow. This includes all aspects of the
airflow system. The base system may be called upon to meet 250% (per Figure 2.7)
of the nominal airflow (the airflow when in the recommended range). This may
include the outdoor air inlet, filtration, cooling coils, dehumidification/
humidification, fans, underfloor plenum, raised floor tiles/grates, and containment
systems. A detailed engineering evaluation of the data center systems at the higher
flow rate is a requirement to ensure successful operation at elevated inlet
temperatures.

2.4.2 Acoustical Noise Levels in Data Center versus Ambient Temperature
Expanding the operating envelope for datacom facilities may have an adverse effect
on acoustical noise levels. Noise levels in high-end data centers have steadily

Table 2.7 Expected Increase in A-Weighted Sound Power Level (in decibels)

Ambient:   25°C (77°F)   30°C (86°F)   35°C (95°F)   40°C (104°F)   45°C (113°F)
Increase:  0 dB          4.7 dB        6.4 dB        8.4 dB         12.9 dB

increased over the years and have become, or at least will soon become, a serious
concern to data center managers and owners. For background and discussion on this,
see Chapter 9 “Acoustical Noise Emissions” in ASHRAE’s Design Considerations
for Datacom Equipment Centers, Second Edition (2009b). The following section
addresses ITE noise as opposed to total data center noise, which would also include
computer room cooling noise sources that contribute to overall noise exposure. The
increase in noise levels is the obvious result of the significant
increase in cooling requirements of modern IT and telecommunications equipment.
The increase of concern results from noise levels in data centers approaching or
exceeding regulatory workplace limits, such as those imposed by OSHA (1980) in
the U.S. or by EC Directives in Europe (Europa 2003). TELCO equipment level
sound power requirements are specified in GR-63-CORE (Telcordia 2012). Empir-
ical fan laws generally predict that the sound power level of an air-moving device
increases with the fifth power of rotational speed, and this behavior has generally
been validated over the years for typical high-end rack-mounted servers, storage
units, and I/O equipment normally found on data center floors. This means that a 20%
increase in speed (e.g., 3000 to 3600 rpm) equates to a 4 dB increase in noise level.
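The fifth-power relationship can be written compactly as ΔLw ≈ 50 log10(N2/N1), where N1 and N2 are the initial and final fan speeds. The short sketch below reproduces the 20% speed-increase example from the text.

```python
import math

# The fifth-power fan law described above: the change in sound power level (dB)
# for a fan speed ratio n2/n1 is approximately 50*log10(n2/n1).

def noise_increase_db(rpm_initial, rpm_final):
    """A-weighted sound power level increase predicted by the fifth-power law."""
    return 50.0 * math.log10(rpm_final / rpm_initial)

# Example from the text: a 20% speed increase (3000 to 3600 rpm) is about 4 dB
print(round(noise_increase_db(3000, 3600), 1))  # -> 4.0
```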
The first part of the second edition of Thermal Guidelines addressed raising the
recommended operating temperature envelope by 2°C (3.6°F) (from 25°C to 27°C
[77°F to 80.6°F]). While it is not possible to predict a priori the effect on noise levels
of a potential 2°C (3.6°F) increase in data center temperatures, it is not unreasonable
to expect to see increases in the range of 3 to 5 dB for such a rise in ambient temper-
atures as a result of the air-moving devices speeding up to maintain the same cooling
effect. Data center managers and owners should therefore weigh the potential
benefits in energy efficiency of this original, proposed new recommended operating
environment against the potential risks associated with increased noise levels.
Because of the new ASHRAE air-cooled-equipment guidelines described in
this chapter, specifically the addition of Classes A3 and A4 with widely extended
operating temperature envelopes, it becomes instructive to look at the allowable
upper temperature ranges and their potential effect on data center noise levels. Using
the fifth power empirical law mentioned above, coupled with current practices for
increasing air-moving device speeds based on ambient temperature, the A-weighted
sound power level increases shown in Table 2.7 were predicted for typical air-cooled
high-end server racks containing a mix of compute, I/O, and water-cooled units.
Of course, the actual increase in noise level for any particular ITE rack depends
not only on the specific configuration of the rack but also on the cooling schemes and
fan speed algorithms used for the various rack drawers and components. Differences
would exist between high-end equipment that employs sophisticated fan speed
control and entry-level equipment using fixed fan speeds or rudimentary speed
control. However, the above increases in noise emission levels with ambient temper-
ature can serve as a general guideline for data center managers and owners
concerned about noise levels and noise exposure for employees and service person-
nel. The IT industry has developed its own internationally standardized test codes
for measuring the noise emission levels of its products (ISO 7779 [2010]) and for
declaring these noise levels in a uniform fashion (ISO 9296 [1988]). Noise emission
limits for ITE installed in a variety of environments (including data centers) are
stated in Statskontoret Technical Standard 26:6 (2004).
The above discussion applies to potential increases in noise emission levels, i.e.,
the sound energy actually emitted from the equipment, independent of listeners in the
room or the environment in which the equipment is located. Ultimately, the real
concern is about the possible increase in noise exposure, or noise immission levels,
experienced by personnel in the data center. With regard to the regulatory workplace
noise limits and protection of employees against potential hearing damage, data
center managers should check whether potential changes in noise levels in their envi-
ronment will cause them to trip various action level thresholds defined in the local,
state, or national codes. The actual regulations should be consulted, as they are
complex and beyond the scope of this document to explain in full. The noise levels
of concern in workplaces are stated in terms of A-weighted sound pressure levels (as
opposed to A-weighted sound power levels used above for rating the emission of
noise sources). For instance, when noise levels in a workplace exceed a sound pres-
sure level of 85 dB(A), hearing conservation programs, which can be quite costly,
are mandated, generally involving baseline audiometric testing, noise level moni-
toring or dosimetry, noise hazard signage, and education and training. When noise
levels exceed 87 dB(A) (in Europe) or 90 dB(A) (in the U.S.), further action, such
as mandatory hearing protection, rotation of employees, or engineering controls,
must be taken. Data center managers should consult with acoustical or industrial
hygiene experts to determine whether a noise exposure problem will result when
ambient temperatures are increased to the upper ends of the expanded ranges
proposed in this book.
In an effort to provide some general guidance on the effects of the proposed
higher ambient temperatures on noise exposure levels in data centers, the following
observations can be made (though, as noted above, it is advised that one seek profes-
sional help in actual situations, because regulatory and legal requirements are at
issue). Modeling and predictions of typical ITE racks in a typical data center with
front-to-back airflow have shown that the sound pressure level in the center of a typi-
cal aisle between two rows of continuous racks will reach the regulatory trip level
of 85 dB(A) when each of the individual racks in the rows has a measured (as
opposed to a statistical upper limit) sound power level of roughly 8.4 B (84 dB sound
power level). If it is assumed that this is the starting condition for a 25°C (77°F)
ambient data center temperature—and many fully configured high-end ITE racks
today are at or above this 8.4-B level—the sound pressure level in the center of the
aisle would be expected to increase to 89.7 dB(A) at 30°C (86°F) ambient, to 91.4
dB(A) at 35°C (95°F) ambient, to 93.4 dB(A) at 40°C (104°F) ambient, and to 97.9
dB(A) at 45°C (113°F) ambient, using the predicted increases to sound power level
above. Needless to say, these levels are extremely high. They are not only above the
regulatory trip levels for mandated action (or fines, in the absence of action), but they
clearly pose a risk of hearing damage unless controls are instituted to avoid exposure
by data center personnel.
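The aisle estimates above follow directly from adding the Table 2.7 increases to the assumed 85 dB(A) starting sound pressure level. The sketch below reproduces that arithmetic; it assumes, as the text does, that the increase in rack sound power translates one-for-one into aisle sound pressure.

```python
# Sketch reproducing the aisle sound pressure estimates above. The Table 2.7
# increases in rack sound power are assumed to apply directly to the sound
# pressure level at the center of the aisle.

TABLE_2_7_INCREASE_DB = {25: 0.0, 30: 4.7, 35: 6.4, 40: 8.4, 45: 12.9}
BASELINE_AISLE_SPL_DBA = 85.0   # starting condition at 25 deg C, per the text

for ambient_c, delta in TABLE_2_7_INCREASE_DB.items():
    print(ambient_c, round(BASELINE_AISLE_SPL_DBA + delta, 1))
# -> 25 85.0, 30 89.7, 35 91.4, 40 93.4, 45 97.9 (matching the values in the text)
```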

2.4.3 Server Reliability Trend versus Ambient Temperature


Before extensively using data center economizers or wider environmental operating
limits, it is important to understand the reliability (failure rate) impact of those
changes. The hardware failure rate within a given data center is determined by the
local climate, the type of economization being implemented, and the temperature
and humidity range over which economization is being carried out. Most econo-
mized facilities have a means of mixing hot exhaust air with incoming cold air, so
the minimum data center temperature is usually tempered to something in the range
of 15°C to 20°C (59°F to 68°F). All of the ITE (servers, storage, networking, power,
etc.) in a data center using an economizer must be rated to operate over the planned
data center class of temperature and humidity range. This section describes the
process for evaluating the effect of temperature on ITE reliability. Actual ITE reli-
ability for any given data center could be better or worse due to exposure to a wider
range of ambient temperatures through the use of economization. No guidance is
provided with respect to the reliability of equipment other than ITE. Such equipment
must be separately evaluated in combination with the ITE to determine overall data
center reliability.
To understand the impact of temperature on hardware failure rates one can
model different economization scenarios. First, consider the ways economization
can be implemented and how this would impact the data center temperature. For
purposes of this discussion, consider three broad categories of economized facilities:

1. Economization over a narrow temperature range with little or no change
to the data center temperature: This is the primary control methodology,
where the data center is properly configured to control the air temperature at or
near the IT inlet to the data center operator's chosen temperature. The econo-
mizer modulates to bring in more or less cool air (air side) or adjust the cool
water flow or temperature (water side) to meet this required temperature. If the
external conditions or internal load change such that the economizer can no
longer handle the task on its own, the chiller ramps up to provide additional
cooling capacity, thereby keeping the space at the desired temperature. This is
the most benign implementation of economizing, because the temperature of
the data center is essentially the same as if the data center was not economized.
If there is little or no temperature change, then there should be little or no
temperature-related failure rate impact on the data center hardware. This
economization scenario probably represents the vast majority of economized
data centers.

2. Expanded temperature range economization, where there may be a net
increase in the data center temperature some of the time: Some data center
owners/operators may choose to reduce cooling energy by expanding econo-
mizer hours or raising computer room air-conditioner (CRAC) setpoints,
thereby widening the temperature range over which they will operate their facil-
ities. Facilities using this operational mode may be located in an environment
where expanded economizer hours are available but typically have mechanical
cooling as part of their system configuration.
3. A chillerless data center facility, where the data center temperature is
higher and varies with the outdoor air temperature and local climate:
Some data centers in cool climates may want to reduce their data center
construction capital costs by building a chillerless facility. In chillerless data
center facilities, the temperature in the data center varies over a much wider
range that is determined, at least in part, by the temperature of the outdoor air
and the local climate. These facilities may use supplemental cooling methods
that are not chiller based, such as evaporative cooling.

Since there are so many different variables and scenarios to consider for ITE
reliability, the approach taken by TC 9.9 was to initially establish a baseline failure
rate (x-factor) of 1.00 that reflected the average probability of failure under a
constant ITE inlet temperature of 20°C (68°F). Table 2.8 provides x-factors at other
constant ITE inlet temperatures for 7 × 24 × 365 continuous operation conditions.
The key to applying the x-factors in Table 2.8 is to understand that they represent a
relative failure rate compared to the baseline of a constant ITE inlet temperature of
20°C (68°F). Finally, Table 2.8 provides x-factor data at the average, upper, and
lower bounds. This table was created using manufacturers' reliability data, which
included all components within the volume server package. In addition, because
there are many variations within a server package among the number of processors,
memory DIMMs, hard drives, etc., the resulting table contains lower and upper
bounds to account for these variations. The data set that one opts to use should depend on
the level of risk tolerance for a given application.
It is important to note that the 7 × 24 × 365 use conditions corresponding to the
x-factors in Table 2.8 are not a realistic reflection of the three economization scenar-
ios outlined above. For most climates in the industrialized world, the majority of the
hours in a year are spent at cool temperatures, where mixing cool outdoor air with
air from the hot aisle exhaust keeps the data center temperature in the range of 15°C
to 20°C (59°F to 68°F) (x-factor of 0.72 to 1.00). Furthermore, these same climates
spend only 10% to 25% of their annual hours above 27°C (80.6°F), the upper limit
of the ASHRAE recommended range. The correct way to analyze the reliability
impact of economization is to use climate data to construct a time-weighted average
x-factor. An analysis of time-weighted x-factors will show that, even for the harshest
economization scenario (chillerless), the reliability impact of economization is
much more benign than the 7 × 24 × 365 x-factor data in Table 2.8 would indicate.
A summary of time-weighted x-factors for air-side economization for a variety of
U.S. cities is shown in Figure 2.8. The data assume a 1.5°C (2.7°F) temperature rise

Table 2.8 Relative ITE Failure Rate x-Factor as a Function of Constant ITE Air Inlet Temperature
(Temperature Impact on Volume Server Hardware Failure Rate)

Dry-Bulb Temp,   Failure Rate x-Factor
°C (°F)          Lower Bound   Average Bound   Upper Bound
15.0 (59.0)      0.72          0.72            0.72
17.5 (63.5)      0.80          0.87            0.95
20.0 (68.0)      0.88          1.00            1.14
22.5 (72.5)      0.96          1.13            1.31
25.0 (77.0)      1.04          1.24            1.43
27.5 (81.5)      1.12          1.34            1.54
30.0 (86.0)      1.19          1.42            1.63
32.5 (90.5)      1.27          1.48            1.69
35.0 (95.0)      1.35          1.55            1.74
37.5 (99.5)      1.43          1.61            1.78
40.0 (104.0)     1.51          1.66            1.81
42.5 (108.5)     1.59          1.71            1.83
45.0 (113.0)     1.67          1.76            1.84

Note: Relative hardware failure rate x-factor for volume servers as a function of continuous operation.

between the outdoor air temperature and the equipment inlet air temperature. More
than half of the cities have x-factor values at or below 1.0, and even the warmest cities
show an x-factor of only about 1.25 relative to a traditional air-conditioned data
center that is kept at 20°C (68°F).
For more information on how to analyze the reliability impact of climate data,
refer to Appendix H. For detailed analyses and bar charts of time-weighted averages
for U.S. and world-wide cities see Appendix I.
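As a simple illustration of the time-weighting calculation (the authoritative method and city-level results are in Appendices H and I), the sketch below combines hypothetical annual temperature bins with linearly interpolated average-bound x-factors from Table 2.8.

```python
# Minimal sketch of a time-weighted x-factor calculation, assuming hourly bin
# data for ITE inlet temperature and linear interpolation of the Table 2.8
# average-bound x-factors. The bin data below are hypothetical.

TABLE_2_8_AVG = {15.0: 0.72, 17.5: 0.87, 20.0: 1.00, 22.5: 1.13, 25.0: 1.24,
                 27.5: 1.34, 30.0: 1.42, 32.5: 1.48, 35.0: 1.55}

def x_factor(inlet_c):
    """Interpolate the average-bound x-factor for a given inlet temperature."""
    temps = sorted(TABLE_2_8_AVG)
    inlet_c = min(max(inlet_c, temps[0]), temps[-1])   # clamp to the table range
    for lo, hi in zip(temps, temps[1:]):
        if lo <= inlet_c <= hi:
            frac = (inlet_c - lo) / (hi - lo)
            return TABLE_2_8_AVG[lo] + frac * (TABLE_2_8_AVG[hi] - TABLE_2_8_AVG[lo])

def time_weighted_x_factor(bins):
    """bins: list of (hours, inlet_temperature_C) covering the year."""
    total_hours = sum(h for h, _ in bins)
    return sum(h * x_factor(t) for h, t in bins) / total_hours

# Hypothetical year: mostly tempered to 15 to 20 deg C, with a few warm hours
example_bins = [(6000, 17.5), (2000, 20.0), (600, 25.0), (160, 30.0)]
twx = time_weighted_x_factor(example_bins)
print(round(twx, 2))
# A value near or below 1.0 implies roughly the same failure rate as a data center
# held at a constant 20 deg C; e.g., 4 failures per 1000 servers per year would
# scale to about 4 * twx.
```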
It is important to be clear what the relative failure rate values mean. We have
normalized the results to a data center run continuously at 20°C (68°F), which has
the relative failure rate of 1.0. For those cities with values below 1.0, the implication
is that the economizer still functions and the data center is cooled below 20°C (68°F)
(to 15°C [59°F]) for those hours each year. In addition, the relative failure rate shows
the expected increase in the number of failed servers, not the percentage of total serv-
ers failing; e.g., if a data center that experiences 4 failures per 1000 servers incor-
porates warmer temperatures, and the relative failure rate is 1.2, then the expected
failure rate would be 5 failures per 1000 servers. To provide an additional frame of
reference on data center hardware failures, sources showed blade hardware server
failures were in the range of 2.5% to 3.8% over 12 months in two different data
centers with supply temperatures of approximately 20°C (68°F) (Patterson et al. 2009;
Intel 2008). In a similar data center that included an air-side economizer with
temperatures occasionally ranging to 35°C (95°F) (at an elevation around 1600 m
[5250 ft]), the failure rate was 4.5%. These values are provided solely for guidance
as an example of failure rates. In these studies, a failure was deemed to have
occurred each time a server required hardware attention. No attempt to categorize
the failure mechanisms was made.

Figure 2.8 Time-weighted x-factor estimates for air-side economizer for selected U.S. cities.
To provide additional guidance on the use of Table 2.8, Appendix H gives a prac-
tical example of the impact of a compressorless cooling design on hardware failures,
and Appendix I provides ITE reliability data for selected major U.S. and global cities.

2.4.4 Server Reliability versus Moisture, Contamination, and Other Temperature Effects
While the preceding discussion has been almost entirely about temperature, other
factors, such as pollution and humidity, can cause failures in data center equipment.
The types of equipment failures that gaseous pollution, particulates, and humidity
can cause are well documented. One of the best sources of information on pollution
is ASHRAE's Particulate and Gaseous Contamination in
Datacom Environments (2009c). When selecting a site for a new data center or when
adding an air-side economizer to an existing data center, the air quality and building
materials should be checked carefully for sources of pollution and particulates. Addi-
tional filtration should be added to remove gaseous pollution and particulates, if
needed. In addition to pollution, recent information (Hamilton et al. 2007; Sood
2010; Hinaga 2010) has shown that both temperature and humidity affect dielectric
properties of printed circuit board (PCB) dielectric materials. The dielectric (e.g.,
FR4) provides the electrical isolation between board signals. With either increased
moisture in the PCB or higher temperature within the PCB, transmission line losses
increase. Signal integrity may be significantly degraded as the board’s temperature
and moisture content increase. Moisture content changes relatively slowly, on the
order of hours and days, based on the absorption rate of the moisture into the board.
Outer board layers are affected first, and longer-term exposure allows moisture to
penetrate deeper into the board. Temperature changes occur on the order of minutes and can quickly affect
performance. As more high-speed signals are routed in the PCB, both temperature
and humidity will become even greater concerns for ITE manufacturers. The cost of
PCB material may increase significantly and may increase the cost of Classes A3 and
A4 rated ITE. The alternative for the ITE manufacturer is to use lower speed bus
options, which will lower performance.
Excessive exposure to high humidity can induce performance degradations or
failures at various circuitry levels. At the PCB level, conductive anodic filament
grows along the delaminated fiber/epoxy interfaces where moisture facilitates the
formation of a conductive path (Turbini and Ready 2002; Turbini et al. 1997). At the
substrate level, moisture can cause surface dendrite growth between pads of opposite
bias due to electrochemical migration. This is a growing concern due to continuing
C4 pitch refinement. At the silicon level, moisture can induce degradation or loss of
the adhesive strength in the dielectric layers, while additional stress can result from
hygroscopic swelling in package materials. The combination of these two effects
often causes delamination near the die corner region, where thermal-mechanical stress
is inherently high and the package is more vulnerable to moisture. It is worth noting that temperature
plays an important role in moisture effects. On one hand, higher temperature
increases the diffusivity coefficients and accelerates the electrochemical reaction. On
the other hand, the locally higher temperature due to self-heating also reduces the local
RH, thereby drying out the circuit components and enhancing their reliability.
In addition to the above diffusion-driven mechanism, another obvious issue
with high humidity is condensation. This can result from sudden ambient tempera-
ture drop or the presence of a lower temperature source for water-cooled or refrig-
eration-cooled systems. Condensation can cause failures in electrical and
mechanical devices through electrical shorting and corrosion. Other examples of
failure modes exacerbated by high RH include hygroscopic dust failures (Comizzoli
et al. 1993), tape media errors and excessive wear (Van Bogart 1995), and corrosion.
These failures are found in environments that exceed 60% RH for extended periods
of time.
As a rule, the typical mission-critical data center must give the utmost consid-
eration to the trade-offs before operating with an RH that exceeds 60%, for the
following reasons:

• It is well known that moisture and pollutants are necessary for metals to
corrode. Moisture alone is not sufficient to cause atmospheric corrosion.
Pollution aggravates corrosion in the following ways:
• Corrosion products, such as oxides, may form and protect the metal
and slow down the corrosion rate. In the presence of gaseous pollutants
like SO2 and H2S and ionic pollutants like chlorides, the corrosion-
product films are less protective, allowing corrosion to proceed some-
what linearly. When the RH in the data center is greater than the deli-
quescent RH of the corrosion products, such as copper sulfate, cupric
chloride, and the like, the corrosion-product films become wet, dramat-
ically increasing the rate of corrosion. Cupric chloride, a common cor-
rosion product on copper, has a deliquescence RH of about 65%. A
data center operating with RH greater than 65% would result in the
cupric chloride absorbing moisture, becoming wet, and aggravating
the copper corrosion rate.
• Dust is ubiquitous. Even with our best filtration efforts, fine dust will be pres-
ent in a data center and will settle on electronic hardware. Fortunately, most
dust has particles with high deliquescent RH, which is the RH at which the
dust absorbs enough water to become wet and promote corrosion and/or ion
migration. When the deliquescent RH of dust is greater than the RH in the
data center, the dust stays dry and does not contribute to corrosion or ion
migration. However, on the rare occurrence when the dust has a deliquescent
RH lower than the RH in the data center, the dust will absorb moisture,
become wet, and promote corrosion and/or ion migration, degrading hardware
reliability. A study by Comizzoli et al. (1993) showed that, for various loca-
tions worldwide, leakage current due to dust that had settled on printed circuit
boards increased exponentially with RH. This study leads to the conclusion
that maintaining the RH in a data center below about 60% will keep the leak-
age current from settled fine dust at an acceptably low level.

Gaseous contamination concentrations that lead to silver and/or copper corrosion
rates greater than about 300 angstroms/month have been known to cause the two
most common recent failure modes: copper creep corrosion on circuit boards and
the corrosion of silver metallization in miniature surface-mounted components. As
explained, corrosion rate is greatly increased when RH is greater than the deliques-
cence RH of the corrosion products formed on metal surfaces by gaseous attack.
In summary, if protection of mission-critical data center hardware is paramount,
equipment can best be protected from corrosion by maintaining an RH of less than
60% and limiting the particulate and gaseous contamination concentration to levels
at which the copper and silver corrosion rates are less than 300 and 200 angstroms/
month, respectively.
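The thresholds summarized above lend themselves to a simple monitoring check. The sketch below compares hypothetical coupon and humidity readings against the 60% RH and 300/200 angstrom-per-month guideline values; it is an illustration, not a specification.

```python
# Simple illustration of the thresholds summarized above: RH below about 60%
# and copper/silver corrosion rates below 300/200 angstroms per month. The
# coupon readings are hypothetical; this is a monitoring aid, not a specification.

LIMITS = {"rh_percent": 60.0,
          "copper_angstrom_month": 300.0,
          "silver_angstrom_month": 200.0}

def corrosion_check(rh_percent, copper_rate, silver_rate):
    """Return the list of parameters at or above the guideline thresholds."""
    readings = {"rh_percent": rh_percent,
                "copper_angstrom_month": copper_rate,
                "silver_angstrom_month": silver_rate}
    return [name for name, value in readings.items() if value >= LIMITS[name]]

print(corrosion_check(rh_percent=55.0, copper_rate=320.0, silver_rate=150.0))
# -> ['copper_angstrom_month']  (copper coupon rate above the 300 angstrom/month guideline)
```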
Given these reliability concerns, data center operators need to pay close atten-
tion to the overall data center humidity and local condensation concerns, especially
when running economizers on hot/humid summer days. When operating in polluted
geographies, data center operators must also consider particulate and gaseous
contamination, because the contaminants can influence the acceptable temperature
and humidity limits within which data centers must operate to keep corrosion-related
hardware failure rates at acceptable levels. Dehumidification, filtration, and gas-
phase filtration may become necessary in polluted geographies with high humidity.

2.4.5 Server Performance Trend versus Ambient Temperature


The capability for supporting an environment depends on the thermal design and
management implementation of the ITE. Each component within the ITE has ther-
mal limits that must be met based on the intended use. Components such as proces-
sors have features that enable maximizing performance within power and thermal
constraints based on a thermal design power (TDP). That TDP is provided to guide
the IT thermal design engineer during the design phase so that cooling is sufficiently
sized. If the ITE is not designed to meet the full capability implied by the TDP,
performance can be impacted.
With some components, power consumption and performance reductions are
handled gracefully with somewhat predictable results. For example, processors can
automatically limit their power consumption if they threaten to become too hot, based
on real-time, on-chip thermal measurements. Other components have little or no
power management capability. Many components have no thermal sensors and no
mechanism for power management, and therefore no way to stay within their thermal
limits. Consequently, if environmental specifications are not met, the temperature
limits of such devices may be exceeded, resulting in loss of data integrity. A system
designed for one class but used in another class may continue to operate with light
workloads but may experience performance degradation with heavy workloads.
Performance degradation is driven by power management features. These features
are used for protection and typically will not be triggered in a well-designed system
that remains within the allowable operating ranges of the environmental parameters.
The exception occurs when a system is configured in an energy-saving mode
where power management features are triggered to enable adequate but not peak
performance. A configuration setting such as this may be acceptable for some
customers and applications but is generally not the default configuration that will,
in most cases, support full operation.
To enable the greatest latitude in use of all the classes, and with the new guid-
ance on recommended ranges, “full performance operation” has been replaced with
“full operation.” ITE is designed with little-to-no margin at the extreme upper limit
of the allowable range. The recommended range enabled a buffer for excursions to
the allowable limits. That buffer is now removed and, consequently, power and ther-
mal management features may be triggered within the allowable range to ensure no
thermal excursions outside the capability of the ITE under extreme load conditions.
ITE is designed based on the probability of an event occurring, such as the combi-
nation of extreme workloads simultaneously with room temperature excursions.
Because of the low probability of simultaneous worst-case events occurring, IT
manufacturers will skew their power and thermal management systems to ensure
that operation is guaranteed. The IT purchaser must consult with the equipment
manufacturer to understand the performance capability at extreme upper limits of
the allowable thermal envelopes.

2.4.6 Server Cost Trend versus Ambient Temperature


With ITE designed to Classes A3 or A4, the IT manufacturer has a number of ways
to support the wider environmental requirements. The trade-offs include cooling
solution capability, component selection based on temperature ratings, and perfor-
mance capability. With some components, such as processors, an increased temper-
ature capability will come at either a significant cost increase or a reduced
performance capability. The silicon must be tested to the temperature specification,
and if that specification is higher, the capability to produce a high-performance part
is reduced and it becomes more valuable, thereby increasing cost.
Higher-temperature-rated parts may or may not be available for all components.
As mentioned, improved PCB materials are available but could increase cost signif-
icantly over lower-performing materials. Improved heat sinks may be used to improve
cooling performance, but such improvement is limited and will normally be used in
conjunction with increased air-mover speeds. The effect of increased air-mover
speeds is evident in the previous power versus temperature guidance provided. One
must be aware that the need for higher air-mover speeds will only occur when the
system inlet temperature moves towards the high range of the thermal envelope. Typi-
cal speeds will still remain relatively low under more normal room temperatures.
Assuming that performance was maintained through cooling improvements,
the cost of a server would likely increase moving from Classes A2 to A3 and then
from Classes A3 to A4. Many server designs may require improved, noncooling
components (e.g., processors, memory, storage, etc.) to achieve Class A3 or A4
operation, because the cooling system may be incapable of improvement within the
volume constraints of the server, and the changes required to these components may
also affect server cost. In any case, the cost of servers supporting the new ASHRAE
classes should be discussed with the individual server manufacturer to understand
whether this will factor into the decision to support the new classes within an indi-
vidual data center.

2.4.7 Summary of New Air-Cooled Equipment Environmental Specifications
Classes A3 and A4 have been added primarily for facilities wishing to avoid the
capital expense of compressor-based cooling. The new classes may offer some addi-
tional hours of economization above and beyond Classes A1 and A2, but there is no
guarantee that operation at the extremes of Classes A1 and A2 actually results in a
minimal energy condition. Fan power, both in the ITE and the facility, may push the
total energy to a higher level than is experienced when chilling the air. One of the
important reasons for the initial recommended envelope was that its upper bound
was typical of minimized IT fan energy. Moreover, higher-temperature operation
increases the leakage power of CMOS-based electronics, partially (or in extreme
environments, completely) offsetting the energy savings achieved by compressor-
less cooling.
This chapter points out that ITE failure rates can increase with temperature in
some cases, but those failure rate increases are moderated by the short time periods
spent at elevated temperature. For many locations and economization scenarios, the
net increase in ITE failure rate will be negligible during short-term periods of
elevated temperatures. For longer-term periods of elevated temperatures, equipment
may experience significant reductions in mean time between failures (MTBF). The
potential reduction in MTBF is directly related to the level of the elevated temper-
atures. Diligent management of elevated temperatures to minimize event duration
should minimize any residual effect on MTBF. The guidance provided here should
allow the user to quantify the ITE failure rate impact of both their economization
scenarios and the climate where their data center facility is located. Refer to Appen-
dix F for specific recommended and allowable temperature limits.
The reasons for the original recommended envelope have not gone away. Oper-
ation at wider extremes will have energy and/or reliability impacts. Nevertheless, in
cool climates a compressorless data center could actually provide better reliability
than its tightly controlled counterpart, because the time-weighted x-factor can fall
at or below 1.0.
3
Environmental Guidelines for
Liquid-Cooled Equipment
Chapter 2 documented the expanded data center environmental guidelines by
adding two more envelopes that are wider in temperature and humidity. However,
those guidelines are for air-cooled IT equipment (ITE) and do not address water
temperatures provided by facilities for supporting liquid-cooled equipment (here
“liquid-cooled ITE” refers to any liquid, such as water, refrigerant, dielectric, etc.,
within the design control of the IT manufacturers). TC 9.9 published Liquid Cooling
Guidelines for Datacom Equipment Centers in 2006 (ASHRAE 2006), which
focused mostly on the design options for liquid-cooled equipment and did not
address the various facility water temperature ranges possible for supporting liquid-
cooled equipment. This chapter describes the classes for the temperature ranges of
the facility supply of water to liquid-cooled ITE. The location of this interface is the
same as that defined in Liquid Cooling Guidelines and detailed in Chapter 4 of that
book. In addition, Chapter 3 of this book and several appendices reinforce some of
the information provided in Liquid Cooling Guidelines on the interface between the
ITE and infrastructure in support of the liquid-cooled ITE. Since the classes cover
a wide range of facility water temperatures supplied to the ITE, a brief description
is provided for the possible infrastructure equipment that could be used between the
liquid-cooled ITE and the outdoor environment.
The global interest in expanding the temperature and humidity ranges for air-
cooled ITE continues to increase, driven by the desire to achieve higher data center
operating efficiency and lower total cost of ownership (TCO). For the same reasons,
liquid cooling of ITE is attractive: it can provide high performance and high energy
efficiency at power densities beyond those of air-cooled equipment while simultaneously
enabling use of waste heat when facility supply water temperatures are high enough. Chapter 3
specifies the environmental classes for the temperature of water supplied to ITE.
By creating these new facility water-cooling classes and not mandating use of
a specific class, TC 9.9 provides server manufacturers the ability to develop products
for each class depending on customer needs and requirements.
Developing these new classes with the commercial IT manufacturers, in consul-
tation with the Energy Efficient High Performance Computing (EE HPC) Working
Group, should produce better results, since the sharing of critical data has resulted
in broader environmental specifications than would otherwise be possible.

3.1 ITE LIQUID COOLING


The increasing heat density of modern electronics is stretching the ability of air to
adequately cool the electronic components within servers and within the datacom
facilities that house them. To meet this challenge, the use of direct water or refrig-
erant cooling at the rack or board level is now being deployed. The ability of water
and refrigerant to carry much larger amounts of heat per volume or mass also offers
tremendous advantages. The heat from these liquid-cooling units is in turn rejected
to the outdoor environment by using either air or water to transfer heat out of the
building, or, in some facilities, it is used for local space heating. Because of the oper-
ating temperatures involved with liquid-cooling solutions, water-side economiza-
tion fits in well.
Liquid cooling can also offer advantages in terms of lower noise levels and close
control of electronics temperatures. However, liquid in electronic equipment raises
a concern about leaks. This is an issue because maintenance, repair, and replacement
of electronic components result in the need to disconnect and reconnect the liquid
carrying lines. To overcome this concern, IT OEM designers sometimes use a
nonconductive liquid, such as a refrigerant or a dielectric fluid, in the cooling loop
for the ITE.
In the past, high-performance mainframes were often water-cooled, and the
internal piping was supplied by the IT OEM. Components available today have simi-
lar factory-installed and leak-tested piping that can accept the water from the
mechanical cooling system, which may also employ a water-side economizer.
Increased standardization of liquid-cooled designs for connection methods and loca-
tions will also help expand their use by minimizing piping concerns and allowing
interchangeability of diverse liquid-cooled IT products.
The choice to move to liquid cooling may occur at different times in the life of
a data center. There are three main times, discussed below, when the decision
between air and liquid cooling must be made. Water’s thermal properties were
discussed earlier as being superior to air. This is certainly the case but does not mean
that liquid cooling is invariably more efficient than air cooling. Both can be very effi-
cient or inefficient, and which is best generally has more to do with design and appli-
cation than the cooling fluid. In fact, modern air-cooled data centers with air
economizers are often more efficient than many liquid-cooled systems. The choice
of liquid-cooled versus air-cooled generally has more to do with factors other than
efficiency.
New Construction
In the case of a new data center, the cooling architect must consider a number of
factors, including data center workload, availability of space, location-specific
issues, and local climate. If the data center will have an economizer, and the climate
is best suited to air-side economizers because of mild temperatures and moderate
humidity, then an air-cooled data center may make the most sense. Conversely, if the
climate is primarily dry, then a water-side economizer may be ideal, with the cooling
fluid conveyed either to the racks or to a coolant distribution unit (CDU).
Liquid cooling more readily enables the reuse of waste heat. If a project is
adequately planned from the beginning, reusing the waste energy from the data
center may reduce energy use of the site or campus. In this case, liquid cooling is the
obvious choice because the heat in the liquid can most easily be transferred to other
locations. Also, the closer the liquid is to the components, the higher the quality of
the recovered heat available for alternate uses.
Expansions
Another common application for liquid cooling is adding or upgrading equipment
in an existing data center. Existing data centers often do not have large raised floor
heights, or the raised floor plenum is full of obstructions such as cabling. If a new
rack of ITE is to be installed that is of higher power density than the existing raised-
floor air-cooling can support, liquid cooling can be the ideal solution. Current typi-
cal air-cooled rack powers can range from 6 to 30 kW. In many cases, rack powers
of 30 kW are well beyond what legacy air cooling can handle. Liquid cooling to a
datacom rack, cabinet-mounted chassis, cabinet rear door, or other localized liquid-
cooling system can make these higher-density racks nearly room neutral by cooling
the exhaust temperatures down to room temperature levels.
High Density and HPC
Because of the energy densities found in many high-performance computing (HPC)
applications and next-generation high-density routers, liquid cooling can be a very
appropriate technology if the room infrastructure is in place to support it. One of the
main cost and performance drivers for HPC is the node-to-node interconnect.
Because of this, HPC typically is driven toward higher power density than is a typi-
cal enterprise or internet data center. Thirty-kilowatt racks are typical, with densities
extending as high as 80 to 120 kW. Without some implementation of liquid cooling,
these higher powers would be very difficult if not impossible to cool. The advan-
tages of liquid cooling increase as the load densities increase. More details on the
subject of liquid cooling can be found in Liquid Cooling Guidelines for Datacom
Equipment Centers (ASHRAE 2006).
Several implementations of liquid cooling may be deployed. The most common
are as follows:

• Rear-door, in-row, or above-rack heat exchanger that removes a large percentage
of the ITE waste heat from air to liquid.
• Totally enclosed cabinet that uses air as the working fluid and an air-to-liquid
heat exchanger.
• Direct delivery of the cooling fluid to the components in the system using
cold plates directly attached to processors, application-specific integrated
circuits, memory, power supplies, etc., in the system chassis or rack, whether
they be servers or telecom equipment.

Note that new options, such as single- and two-phase immersion baths, are
becoming available that may become more prevalent in the near future.
The CDU may be external to the datacom rack, as shown in Figure 3.1, or below
or within the datacom rack, as shown in Figure 3.2.
Figures 3.1 and 3.2 show the interfaces for a liquid-cooled rack with remote heat
rejection. The interface is located at the boundary of the facility water system loop
and does not impact the datacom equipment cooling system loops, which are
controlled and managed by the cooling equipment and datacom manufacturers.

Figure 3.1 Liquid-cooled rack or cabinet with external CDU.
Figure 3.2 Combination air- and liquid-cooled rack or cabinet with internal CDU.


However, the definition of the interface at the loop affects both the datacom equip-
ment manufacturers and the facility where the datacom equipment is housed. For
that reason, all of the parameters that are key to this interface are described in detail
here. Liquid Cooling Guidelines describes the various liquid-cooling loops that
could exist within a data center and its supporting infrastructure. These liquid loops
are shown in Figure 3.3. Two liquids are shown in each of Figures 3.1 and 3.2—first,
the coolant contained in the technology cooling systems (TCS), and second, the
coolant contained in the datacom equipment cooling system (DECS). The TCS may
include in-row and overhead forced air-to-liquid heat exchangers. If the TCS liquid
is a dielectric coolant, the external CDU pump may potentially be used to route the
TCS coolant directly to cold plates attached to DECS internal components in addition
to or in place of a separate internal DECS. As seen in Figure 3.3, the water guidelines
that are discussed in this document are at the chilled-water systems (CHWS) loop. If
chillers are not installed, then the guidelines would apply to the condenser water
systems (CWS) loop.

Figure 3.3 Liquid-cooling systems/loops for a data center.
Although not specifically noted, a building-level CDU may be more appropriate
where there are a large number of racks connected to liquid cooling. In this case, the
location of the interface is defined the same as in Figure 3.1, but the CDU as shown
would be a building-level unit rather than a modular unit. Building-level CDUs
handling many megawatts of power have been built for large HPC systems. Although
Figure 3.1 shows liquid cooling using a raised floor, liquid could be distributed
above the ceiling just as efficiently.

3.2 FACILITY WATER SUPPLY CHARACTERISTICS FOR ITE


The facility water is anticipated to support any liquid-cooled ITE using water, water
plus additives, refrigerants, or dielectrics. The following sections focus on these
applications.
3.2.1 Facility Water Supply Temperature Classes for ITE
3.2.1.1 Liquid-Cooling Environmental Class Definitions
Compliance with a particular environmental class requires full operation of the
equipment within the class specified, based on nonfailure conditions. ITE designed
for each class requires different design points for the cooling components
(cold plates, thermal interface materials, liquid flow rates, piping sizes, etc.) utilized
within the ITE. For IT designs that meet the higher supply temperatures, as refer-
enced by the ASHRAE classes in Table 3.1, enhanced thermal designs are required
to maintain the liquid-cooled components within the desired temperature limits.
Generally, the higher the supply water temperature is, the higher the cost of the cool-
ing solutions.

Table 3.1 2011 ASHRAE Liquid-Cooled Guidelines
(Equipment Environment Specifications for Liquid Cooling)

Class  Main Cooling Equipment       Supplemental Cooling     Facility Supply Water
                                    Equipment                Temperature, °C (°F)
W1     Chiller/cooling tower        Water-side economizer    2 to 17 (35.6 to 62.6)
W2     Chiller/cooling tower        Water-side economizer    2 to 27 (35.6 to 80.6)
W3     Cooling tower                Chiller                  2 to 32 (35.6 to 89.6)
W4     Water-side economizer        N/A                      2 to 45 (35.6 to 113)
       (with dry-cooler or cooling tower)
W5     Building heating system      Cooling tower            >45 (>113)

Note: The main and supplemental cooling equipment columns describe the typical infrastructure design for each class.

Class W1/W2: This is typically a data center that is traditionally cooled using chill-
ers and a cooling tower, but with an optional water-side economizer to improve
energy efficiency, depending on the location of the data center (see Figure 3.4).

Class W3: For most locations, these data centers may be operated without chillers.
Some locations will still require chillers (see Figure 3.4).

Class W4: These data centers are operated without chillers to take advantage of
energy efficiency and reduce capital expense (see Figure 3.5).

Class W5: Water temperature is high enough to make use of the water exiting the
ITE to heat local buildings in order to take advantage of energy efficiency, reduce
capital expense through chillerless operation, and also make use of the waste energy
(see Figure 3.6).

The facility supply water temperatures specified in Table 3.1 are requirements
to be met by the ITE for the specific class of hardware manufactured. For the data
center operator, the use of the full range of temperatures within the class may not be
required or even desirable given the specific data center infrastructure design.
There is currently no widespread availability of ITE in Classes W3–W5. Future
product availability in this range will be based on market demand. It is anticipated
that future designs in these classes may involve trade-offs between IT cost and
performance. At the same time, these classes will allow lower-cost data center
infrastructure in some locations. The choice of IT liquid-cooling class should involve
a TCO evaluation of the combined infrastructure and IT capital and operational costs.

Figure 3.4 Class W1/W2/W3 liquid-cooling classes typical infrastructure.

Figure 3.5 Class W4 liquid-cooling class typical infrastructure.

Figure 3.6 Class W5 liquid-cooling class typical infrastructure.

3.2.2 Condensation Considerations


Liquid-cooling Classes W1, W2, and W3 allow the water supplied to the ITE to be
as low as 2°C (36°F), which is below the ASHRAE allowable room dew-point
guideline of 17°C (63°F) for Class A1 enterprise datacom centers. Electronics
equipment manufacturers are aware of this and are taking it into account in their
designs. Data center relative humidity and dew point should be managed according
to the guidelines in this book. If low fluid operating temperatures are expected,
condensation must be considered carefully. It is suggested that a CDU (as shown in
Figures 3.1 and 3.2) with a heat exchanger be employed either to raise the coolant
temperature to at least 18°C (64.4°F), which eliminates condensation issues, or to
provide an adjustable water supply temperature that is set 2°C (3.6°F) or more above
the dew point of the data center space.
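Because this guidance is expressed relative to the room dew point, it can be convenient to compute that dew point from a measured dry-bulb temperature and relative humidity. The following is a minimal Python sketch of that calculation; it uses the Magnus approximation for saturation vapor pressure, and the function names, coefficients, and example values are illustrative assumptions rather than values taken from this guideline. A psychrometric chart or the ASHRAE Handbook formulations may be used instead.

import math

def dew_point_c(dry_bulb_c, rh_percent):
    # Magnus approximation for dew point (deg C); coefficients are a common
    # published approximation, not an ASHRAE formulation.
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + a * dry_bulb_c / (b + dry_bulb_c)
    return b * gamma / (a - gamma)

def min_coolant_supply_c(room_dry_bulb_c, room_rh_percent, margin_c=2.0):
    # Lowest adjustable coolant supply temperature that keeps the suggested
    # 2 deg C (3.6 deg F) margin above the room dew point.
    return dew_point_c(room_dry_bulb_c, room_rh_percent) + margin_c

# Example: a 24 deg C space at 50% RH has a dew point near 12.9 deg C,
# so the coolant supply should be kept at or above roughly 14.9 deg C.
print(round(min_coolant_supply_c(24.0, 50.0), 1))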

3.2.3 Operational Characteristics


For Classes W1 and W2, the datacom equipment should accommodate chilled-water
supply temperatures that can be set by a campus-wide operational requirement. An
optimal path might balance the lower operational costs of higher-temperature
chilled-water systems against the lower capital cost of lower-temperature
chilled-water systems. Condensation prevention must also be considered. In the
chilled-water loop, insulation is typically required. In connecting loops, condensa-
tion control is typically provided by an operational temperature above the dew point.
The rate of change of the chilled-water supply temperature, measured at the inlet of
the datacom equipment or the CDU, should not exceed 3°C (5.4°F) in any five-minute
period. This may require that the infrastructure be powered by an uninterruptible
power supply (UPS).
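Facilities that log the chilled-water supply temperature can screen the recorded data against this rate-of-change limit automatically. The short Python sketch below is one illustrative way to do so; the function name, the sampling format, and the example readings are assumptions for illustration only.

def supply_temp_rate_ok(samples, limit_c=3.0, window_s=300):
    # samples: list of (time_in_seconds, temperature_in_deg_C) tuples sorted by time.
    # Returns False if any two readings within a five-minute window differ by
    # more than the 3 deg C (5.4 deg F) limit described above.
    for i, (t_i, temp_i) in enumerate(samples):
        for t_j, temp_j in samples[i + 1:]:
            if t_j - t_i > window_s:
                break
            if abs(temp_j - temp_i) > limit_c:
                return False
    return True

readings = [(0, 14.0), (60, 14.4), (120, 15.1), (180, 16.0), (240, 16.8), (300, 17.2)]
print(supply_temp_rate_ok(readings))  # False: 14.0 to 17.2 is a 3.2 deg C change in 5 min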
The maximum allowable water pressure supplied by the facility water loop to
the interface of the liquid-cooled ITE should be 690 kPa (100 psig) or less, even
under surge conditions.
The chilled-water flow rate requirements and pressure-drop values of the data-
com equipment vary depending on the chilled-water supply temperature and
percentage of treatment (antifreeze, corrosion inhibitors, etc.) in the water. Manu-
facturers typically provide configuration-specific flow rate and pressure differential
requirements based on a given chilled-water supply temperature and rack heat dissi-
pation to the water.
For Classes W3, W4, and W5, the infrastructure will probably be specific to the
data center; therefore, the water temperature supplied to the water-cooled ITE will
depend on the climate zone. It may be necessary in these classes to operate without
a chiller installed, so it is critical to understand the limits of the water-cooled ITE and
its integration within the support infrastructure. The reliable operation of the data
center infrastructure will need to accommodate the local climate, where extremes in
temperature and humidity may occur.
The temperature of the water for Classes W3 and W4 depends on the cooling
tower design, the heat exchanger between the cooling tower and the secondary water
loop, the design of the secondary water loop to the ITE, and the local climate. To
accommodate a large geographic region, the range of water temperatures was chosen
to be 2°C to 45°C (35.6°F to 113°F).
For Class W5, the infrastructure will be such that the waste heat from the warm
water can be redirected to nearby buildings. Accommodating water temperatures
nearer the upper end of the temperature range is more critical to those applications
where retrieving a large amount of waste energy is critical. The water supply temper-
atures for this class are specified as greater than 45°C (113°F) since the water
temperature may depend on many parameters, such as climate zone, building heat-
ing requirements, distance between data center and adjacent buildings, etc. Of
course, the components within the ITE need to be cooled to within their temperature
limits while using the hotter water as the heat-sink temperature. In many cases, the
hotter water heat-sink temperature will be a challenge to the ITE thermal designer.
There may be opportunities for heat recovery for building use even in Classes W3 and
W4, depending on the configuration and design specifications of the systems to which
the waste heat would be supplied.

3.2.4 Water Flow Rates/Pressures

Water flow rates are shown in Figure 3.7 for given heat loads and given temperature
differences. Temperature differences typically fall between 5°C and 10°C (9°F and
18°F). The facility pressure differential (drop) should not be lower than 0.4 bar
(5.8 psi).

Figure 3.7 Typical water flow rates for constant heat load.
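The curves in Figure 3.7 follow from a simple sensible heat balance: the heat load equals the mass flow rate times the specific heat times the temperature difference. As a rough illustration (not a sizing tool), the Python sketch below estimates the volumetric water flow needed for a given heat load and temperature difference; the function name and the nominal water property values are assumptions for illustration.

def water_flow_lpm(heat_load_kw, delta_t_c, rho_kg_per_l=0.998, cp_kj_per_kg_k=4.18):
    # Volumetric flow (L/min) from the sensible heat balance Q = m_dot * cp * dT,
    # using nominal density and specific heat for water near typical supply temperatures.
    mass_flow_kg_s = heat_load_kw / (cp_kj_per_kg_k * delta_t_c)
    return mass_flow_kg_s / rho_kg_per_l * 60.0

# Example: a 20 kW heat load with a 6 deg C supply/return temperature difference
print(round(water_flow_lpm(20.0, 6.0), 1))  # about 48 L/min (roughly 12.7 gpm)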

3.2.5 Velocity Limits


The velocity of the water in the piping supplied to the ITE must be controlled to
ensure that mechanical integrity is maintained over the life of the system. Velocities
that are too high can lead to erosion, sound/vibration, water hammer, and air entrain-
ment. Particulate-free water causes less velocity-related damage to the tubes and
associated hardware. Table 3.2 provides guidance on maximum water piping
velocities for systems that operate over 8000 hours per year. Water velocities in
flexible tubing should be maintained below 1.5 m/s (5 ft/s).

Table 3.2 Maximum Velocity Requirements

Pipe Size, mm (in.)        Maximum Velocity, m/s (ft/s)
>75 (>3)                   2.1 (7.0)
38 to 75 (1.5 to 3)        1.8 (6.0)
25 (<1)                    1.5 (5.0)
All flexible tubing        1.5 (5.0)
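For a given flow rate, the mean velocity in a pipe or hose follows directly from the inner diameter, which makes the limits in Table 3.2 straightforward to check during design. The Python sketch below is illustrative only; the function name, dictionary keys, and example values are assumptions.

import math

MAX_VELOCITY_M_S = {          # limits from Table 3.2
    "pipe_over_75_mm": 2.1,
    "pipe_38_to_75_mm": 1.8,
    "pipe_25_mm": 1.5,
    "flexible_tubing": 1.5,
}

def water_velocity_m_s(flow_lpm, inner_diameter_mm):
    # Mean velocity = volumetric flow divided by the pipe cross-sectional area.
    area_m2 = math.pi * (inner_diameter_mm / 1000.0) ** 2 / 4.0
    return (flow_lpm / 1000.0 / 60.0) / area_m2

# Example: 48 L/min through 25 mm inner-diameter flexible tubing
v = water_velocity_m_s(48.0, 25.0)
print(round(v, 2), v <= MAX_VELOCITY_M_S["flexible_tubing"])  # 1.63 m/s, False (too fast)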

3.2.6 Water Quality

Table 3.3 identifies the water quality requirements that are necessary to operate the
liquid-cooled system. Figure 3.3 shows a chilled-water system loop. The reader is
encouraged to reference Chapter 49, “Water Treatment,” of the ASHRAE Handbook—
HVAC Applications (ASHRAE 2011), which provides a more in-depth discussion of
the mechanisms and chemistries involved. The most common problems in cooling
systems are described in Appendix J.

Table 3.3 Water Quality Specifications Supplied to ITE

Parameter                    Recommended Limits
pH                           7 to 9
Corrosion                    Inhibitor(s) required
Sulfides                     <10 ppm
Sulfate                      <100 ppm
Chloride                     <50 ppm
Bacteria                     <1000 CFUs/mL
Total hardness (as CaCO3)    <200 ppm
Residue after evaporation    <500 ppm
Turbidity                    <20 NTU (nephelometric)
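Facilities that trend water chemistry can compare each sample against the limits in Table 3.3 automatically. The Python sketch below is a minimal illustration of that comparison; the dictionary keys and the example sample are assumptions, and the corrosion-inhibitor requirement is omitted because it is not a numeric limit.

WATER_QUALITY_LIMITS = {      # from Table 3.3 (upper bounds unless a range is given)
    "ph": (7.0, 9.0),
    "sulfides_ppm": 10,
    "sulfate_ppm": 100,
    "chloride_ppm": 50,
    "bacteria_cfu_per_ml": 1000,
    "total_hardness_ppm_caco3": 200,
    "residue_after_evaporation_ppm": 500,
    "turbidity_ntu": 20,
}

def out_of_spec(sample):
    # Return the names of any measured parameters that violate Table 3.3.
    failures = []
    lo, hi = WATER_QUALITY_LIMITS["ph"]
    if not lo <= sample.get("ph", 7.0) <= hi:
        failures.append("ph")
    for key, limit in WATER_QUALITY_LIMITS.items():
        if key != "ph" and sample.get(key, 0) >= limit:
            failures.append(key)
    return failures

print(out_of_spec({"ph": 8.1, "chloride_ppm": 65, "turbidity_ntu": 4}))  # ['chloride_ppm']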

3.3 LIQUID-COOLING DEPLOYMENTS IN NEBS-COMPLIANT SPACES


The use of liquid close-coupled cooling systems has extended from the IT environ-
ment to include active deployment in the NEBS space. In 2012 ATIS began an initia-
tive to standardize the components utilized in liquid close-coupled systems. As
communications and data transport systems have evolved, heat rejection require-
ments have risen considerably. Common heat signatures of 400 to 800 W/frame are
being replaced with cabinets supporting 20 kW and more. While this range is rela-
tively low compared to common ITE space described earlier, existing office cooling
infrastructures are often unable to support the increased demand. Liquid-cooling
systems provide a bridge system that supplements existing cooling infrastructure
and provides primary cooling in both greenfield and brownfield applications. Addi-
tional details on liquid cooling in NEBS spaces have been included due to the present
lack of detailed information on this topic in ASHRAE’s Liquid Cooling Guidelines.
Future editions of that publication will be updated with this information.

Figure 3.8 Liquid cooling systems/loops for a NEBS space.

Temperature and humidity ranges for NEBS spaces may differ from those
shown in Table 2.3. The specific NEBS ranges are specified in the GR-63-CORE
standard (Telcordia 2012). The facility supply water temperature identified in
Table 3.1 should be reviewed and adjusted as needed to ensure compliance with the
GR-63 equipment inlet temperature parameters.
While the use of distributed refrigerant systems is supportive of NEBS envi-
ronments, deployment of these systems is not limited specifically to these environ-
ments. The use of distributed refrigerant or other dielectrics is suitable for
deployment in IT and other equipment spaces.

3.3.1 NEBS Space Similarities and Differences


As shown in Figure 3.8, the backbone cooling infrastructure up to the CDU is essen-
tially the same for NEBS space deployment as for non-NEBS data centers. Cooling
systems presently use economizers to help reduce energy use.
NEBS space deployments, however, strictly restrict the distribution of water
within the active equipment space. While water-side connections are
permitted, they are typically connected outside of the space to cooling units (e.g.,
computer room air handlers) or are restricted to the perimeter of the supported space.
This limitation eliminates the risk of equipment contamination or failure due to
water solutions leaking from distribution piping, connectors, and hoses.
The typical NEBS space is different in basic construction in that it utilizes
almost exclusively slab floor deployments with overhead cable racking and piping.
This overhead piping arrangement increases the risk of equipment damage due
to water solution leakage.

Liquids other than water provide an amenable solution that supports enhanced
close-coupled cooling while meeting water distribution restrictions. The typical use
of refrigerant systems provides an effective heat rejection transport medium while
assuring that leaks will not cause equipment damage, because the escaping
refrigerant is an inert gas. Other refrigerants or dielectrics may be effective in
supporting the unique nature
of NEBS spaces.
The separation that the CDU provides between the water-based cooling feed and the
refrigerant-based distribution to the close-coupled cooling units also affords the
option of using a direct-exchange (compressor-based) cooling feed in place of the
water-based cooling feed.
The following subsections provide information for the components used in the
liquid-cooled NEBS space illustrated in Figure 3.8.

3.3.2 Use of CDU in NEBS Spaces


The use of refrigerant-based liquid-cooling systems in NEBS spaces requires the
deployment of a CDU. The CDU provides a cost-effective thermal transfer point
between the chilled-water supply systems and the refrigerant distribution infrastruc-
ture, including close-coupled cooling units.
Because the CDU is supported via a water feed, the same limitations that apply to
typical computer room air-handler placement are in effect. The CDU is typically
placed outside of the equipment area or along the perimeter of the supported space.
Full leak containment is typically deployed as part of the common infrastructure.
Multiple CDUs may be placed to provide adequate capacity and to comply
with refrigerant distribution effective line length requirements. These deployments
are often arranged in groups of cabinets with common power, space, and cooling
capacities.

3.3.3 Refrigerant Distribution Infrastructure


Refrigerant distribution infrastructure is used to supply cooling units over, near, and
within equipment racks. The infrastructure includes supply and return piping, port
interfaces, and hard copper line interconnects or flexible hoses for system connec-
tivity. Supply and return lines may be placed under a raised floor or, more typically,
above equipment racks.

3.3.4 Connections
Refrigerant distribution systems use rigid copper piping compliant with ASTM
Type L/EN0157, Type Y, or Type ACR. Connections are required to be brazed, not
soldered, to ensure reliability. Type ACR is recommended for full interoperability
between manufacturers of distributed refrigerant systems.
Connections to the distribution infrastructure follow guidelines similar to those for
water (ASHRAE 2006), with the ability to connect via a direct drop of rigid copper
or via a quick-connection port and associated flexible hose arrangement.

3.3.5 Condensation Consideration


The CDU supporting a refrigerant distribution system actively maintains a defined
temperature separation between the equipment area dew point and the distributed
refrigerant temperature to ensure that condensation does not form on the distribution
infrastructure. While this design should be sufficient, it is recommended to cover the
distribution infrastructure with a minimum 12.7 mm (0.5 in.) of closed-cell elasto-
meric covering to ensure protection of the equipment located below the piping and
also to provide a measure of physical protection from impact or abrasion.

3.3.6 Close-Coupled Cooling Units


Close-coupled cooling units are used for local heat rejection. A variety of units are
available on the market to meet specific cooling considerations. They may be
deployed above equipment hot aisles, adjacent to equipment (in-row), directly
attached (spot cooling), behind the equipment (rear-door heat exchangers), or inte-
grated directly with equipment. Different units may be integrated into a single
system.

4 Facility Temperature and Humidity Measurement
Data centers and telecommunications central offices can be a challenge to effec-
tively cool. In many cases, the aggregate internal heat load is less than the theoretical
room capacity, but localized overheating may still occur. Humidity conditions that
are out of specification may also be a problem. Temperature and humidity measure-
ments are the best way to assess the data center environment. These measurements
may be carried out manually or by utilizing automated data collection systems built
into IT equipment (ITE) or mounted on equipment racks. Care should be taken to
make measurements systematically and to ensure that they accurately represent
equipment intake conditions.
Facilities designed to operate with varying operating temperatures, such as
those making extensive use of free air cooling, are encouraged to install automated
aisle temperature and humidity monitoring.
Temperature and humidity measurement in a facility is generally required for
the following three reasons:

1. Facility health and audit tests (refer to Section 4.1)
2. Equipment installation verification tests (refer to Section 4.2)
3. Equipment troubleshooting tests (refer to Section 4.3)

The three tests listed above are hierarchical in nature, and the user should read
all of them prior to choosing the one that best fits their application. In some cases,
the proper test may be a mix of the above. For instance, a data center with low overall
power density but with localized high-density areas may elect to perform the test
listed in Section 4.1, “Facility Health and Audit Tests,” for the entire facility but also
perform the test listed in Section 4.2, “Equipment Installation Verification Tests,” for
the area with localized high power density.
The following sections outline the three recommended tests for measuring
temperature and humidity.

4.1 FACILITY HEALTH AND AUDIT TESTS


Facility health and audit tests are used to proactively determine the health of the data
center to avoid temperature- and humidity-related electronic equipment failures.
These tests can also be used to evaluate the facility’s cooling system for availability
of spare capacity for the future. It is recommended that these tests be conducted on
a regular basis.

Figure 4.1 Measurement points in aisle.

4.1.1 Aisle Measurement Locations


Establish temperature and humidity measurement locations in each aisle that has
equipment air inlets. Standard temperature and humidity sensors mounted on walls
and columns are not deemed adequate for this testing. Lacking more elaborate
arrays of temperature and humidity sensors placed at the intakes of individual pieces
of equipment, manual measurement and recording of ambient temperature and
humidity is recommended.
Use the following guidelines to establish locations for measuring aisle ambient
temperature and humidity. It is suggested that points be permanently marked on the
floor for consistency and ease in repetition of measurements.

• Establish at least one point for every 3 to 9 m (10 to 30 ft) of aisle or every
fourth rack position, as shown in Figure 4.1.
• Locate points midway along the aisle, centered between equipment rows, as
shown in Figure 4.2.
• Where a hot aisle/cold aisle configuration is employed, establish points in
cold aisles only,1 as shown in Figure 4.3.

Points picked should be representative of the ambient temperature and humidity.
Telcordia GR-63-CORE (2012) suggests measuring aisle temperature at 1.5 m
(4.9 ft) above the floor, which can be useful in some equipment configurations. This
will depend on the type of cabinet or rack deployed near the area where the measure-
ment is being observed. Lacking a more elaborate measurement system, this is
considered a minimum measurement.

1. Hot-aisle temperature levels do not reflect equipment inlet conditions and, therefore, may
be outside the ranges defined in Table 2.3. Hot-aisle temperature levels may be measured
to help understand the facility, but significant temperature variation with measurement
location is normal.

Figure 4.2 Measurement points between rows.

Figure 4.3 Measurement points in a hot-aisle/cold-aisle configuration.



The objective of these measurements is to ensure that the aisle temperature and
humidity levels are all being maintained within the recommended operating condi-
tions of the class environment, as noted in Table 2.3 (see Chapter 2).

4.1.2 HVAC Operational Status


Measure and record the following status points at all units, as applicable:

• Operating status of unit: ON, OFF.
• Supply fan: status (ON/OFF) and fan speed if variable.
• Temperatures: supply air temperature, return air temperature.
• Humidity: supply air humidity, return air humidity.

Automatic logging of HVAC equipment parameters can provide valuable
insight into operational trends and may simplify data collection. The objective of
these measurements is to confirm proper HVAC operation.

4.1.3 Evaluation

4.1.3.1 Aisle Temperature and Humidity Levels


The temperature and/or humidity of any aisle with equipment inlets that is found to
be outside the desired operating range for the class environment should be investi-
gated and the resolution fully documented. The investigation should involve iden-
tification of the source of the out-of-range condition and possible corrective action.
The corrective action could be as simple as minor air balancing or more complex,
involving major rework of the cooling system. A decision to take no action must be
made with the recognition that prolonged operation outside of the recommended
operating ranges can result in decreased equipment reliability and longevity.

4.1.3.2 HVAC Unit Operation


Temperature and humidity levels at the HVAC unit should be consistent with design
values. Return air temperature significantly below room ambient temperatures is
indicative of short-circuiting of supply air, which is a pathway that allows cold
supply air to bypass equipment and return directly to an HVAC unit. The cause of
any short-circuiting should be investigated and evaluated for corrective action.

4.2 EQUIPMENT INSTALLATION VERIFICATION TESTS


Equipment installation verification tests are used to ensure proper installation of
equipment into the room environment. The objective of this step is to ensure that the
bulk temperature and humidity in front of the cabinet or rack is acceptable.

• Measure and record temperature and humidity at the geometric center of the
air intake of the top, middle, and bottom racked equipment at 50 mm
(approximately 2 in.) from the front of the equipment. For example, if there
are 20 servers in a rack, measure the temperature and humidity at the center
of the first, tenth or eleventh, and twentieth server. Figure 4.4 shows example
monitoring points for configured racks. For configurations with three or fewer
pieces of equipment per cabinet, measure the inlet temperature and humidity
of each piece of equipment at 50 mm (approximately 2 in.) from the front at
the geometric center of each piece of equipment, as shown in Figure 4.4.
• All temperature and humidity levels should fall within the specifications for
the class environment specified in Table 2.3 (see Chapter 2). If any measure-
ment falls outside of the desired operating condition as specified by
Chapter 2, the facility operations personnel may wish to consult with the
equipment manufacturer regarding the risks involved.

Figure 4.4 Monitor points for configured racks.

Facilities managers will sometimes use Telcordia GR-63-CORE to measure and
record the temperature at 1.5 m (4.9 ft) high and 380 mm (15 in.) from the front of
the frame or cabinet. However, this measurement method was not designed for
computer equipment. It is instead recommended that the preceding tests be used to
verify your installation.

4.3 EQUIPMENT TROUBLESHOOTING TESTS


Equipment troubleshooting tests are used to determine if the failure of equip-
ment is potentially due to environmental effects.

• This test is the same as that in paragraph 1 of Section 4.2 above, except that
the temperature and humidity across the entire intake of the problematic piece
of equipment are monitored. The objective here is to determine if air is being
drawn into the equipment within the allowable conditions specified for the
class environment shown in Table 2.3 (see Chapter 2).

Figure 4.5 Monitor points for 1U to 3U equipment.

Figure 4.6 Monitor points for 4U to 6U equipment.

• Case A: For equipment that is 1U to 3U in height, arrange the monitor
points as shown in Figure 4.5.
• Case B: For equipment that is 4U to 6U in height, arrange the monitor
points as shown in Figure 4.6.
• Case C: For equipment that is 7U and larger in height, arrange the
monitor points as shown in Figure 4.7.
• Case D: For equipment that has a localized area for inlet air, arrange
the monitor points in a grid pattern on the inlet as shown in Figure 4.8.
• Case E: For equipment cabinets with external doors, monitor the tem-
perature and humidity with the cabinet in its normal operational
mode, which typically will be with the doors closed.
• All temperature and humidity levels should fall within the specifications for
the class environment specified in Table 2.3 (see Chapter 2). If all measure-
ments are within limits, equipment failure is most likely not the result of poor
environmental conditions. If any measurement falls outside the recommended
operating condition, the facility operations personnel may wish to consult with
the equipment manufacturer regarding the risks involved or to correct the
out-of-range condition.

Figure 4.7 Monitor points for 7U and larger equipment.

Figure 4.8 Monitor points for equipment with localized cooling.

Note: In some facilities, in particular pressurized facilities that control humidity
levels prior to the introduction of air into the data center, the absolute humidity in the
space is typically uniform. This is because significant humidity sources do not usually
exist inside data centers. If this is the case, humidity does not have to be measured at
every point, because it can be calculated as a function of the localized temperature
and the (uniform) absolute humidity in the space at large.
ASHRAE Handbook—Fundamentals (2009a) provides the equations that relate
temperature and absolute humidity to the relative humidity and/or dew-point values
needed to determine compliance with Table 2.3 of this guide (most psychrometric
charts could be used to perform the same calculations).
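As one illustration of that calculation, the Python sketch below estimates the relative humidity at a measurement point from its local dry-bulb temperature and the (uniform) dew point of the space. It uses the Magnus approximation for saturation vapor pressure rather than the ASHRAE Handbook formulation, and the function names, coefficients, and example values are assumptions for illustration only.

import math

def saturation_vp_kpa(t_c):
    # Saturation water-vapor pressure (kPa), Magnus approximation.
    return 0.61094 * math.exp(17.625 * t_c / (t_c + 243.04))

def local_rh_percent(local_dry_bulb_c, space_dew_point_c):
    # With a uniform absolute humidity, the vapor pressure everywhere equals the
    # saturation pressure at the space dew point, so RH = e / e_sat(local T).
    return 100.0 * saturation_vp_kpa(space_dew_point_c) / saturation_vp_kpa(local_dry_bulb_c)

# Example: space dew point of 12 deg C; a cold-aisle inlet measured at 24 deg C
print(round(local_rh_percent(24.0, 12.0)))  # about 47% RH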

5 Equipment Placement and Airflow Patterns
Chapter 5 provides airflow guidelines to align equipment manufacturers with facility
designers, operators, and managers on the placement of data processing and
communication equipment. Aisle pitch and equipment placement in aisles are also
addressed. It is important to note that this document focuses on developing funda-
mental airflow protocols and the general concept of hot aisle/cold aisle. Detailed or
best-practices engineering is covered by other books in the ASHRAE Datacom Series.
Note: Airflow in a high-density environment is a complex and often nonintui-
tive phenomenon. Following this guideline does not guarantee adequate equipment
cooling, as detailed airflow design and fluid dynamics are beyond the scope of this
document. Facility managers must perform the appropriate engineering analysis to
include the effects of static pressure, dynamic (velocity) pressure, occupancy,
delta T, turbulence, etc. (For example, for an underfloor supply air system, raised
floor height is a critical parameter, and locating floor grilles near “downflow”
computer room air-conditioning units often has a negative impact.) In addition,
emerging technologies enable localized equipment cooling that may or may not be
compatible with these guidelines. Such technologies require further analysis.

5.1 EQUIPMENT AIRFLOW


This section addresses the recommended location of the air intake and air exhaust
for electronic equipment.

5.1.1 Airflow Protocol Syntax


The airflow protocol used here adopts the syntax detailed in Telcordia GR-3028-
CORE (2001) on how the air intake and air exhaust shall be specified, and it is
consistent with Figure 5.1. The same document also defines levels that help describe
the location of an air inlet and exhaust.

5.1.2 Airflow Protocol for Equipment


In order to be consistent with and to complement a hot-aisle/cold-aisle configuration
in an equipment room, it is advantageous to design equipment using one of the three
airflow protocols shown in Figure 5.2.
The front of the equipment is typically defined as the surface that has cosmetic
skin and/or display. Rack-mounted equipment should follow the F-R protocol only,
and cabinet systems can follow any of the three protocols shown. The recommended
airflow protocols for data center equipment in Figure 5.2 closely follow those
recommended for telecom equipment in Telcordia GR-3028-CORE.

Figure 5.1 Syntax of face definitions.

Figure 5.2 Recommended airflow protocol.

Per Telcordia GR-63-CORE (2012), forced-air-cooled equipment is required
to utilize only a rear aisle exhaust. If approved by exception, top exhaust airflow
equipment may be deployed in support of specialized airflow requirements.
Forced-air-cooled equipment should use a front aisle air inlet. Forced-air-cooled
equipment with other than front-aisle-to-rear-aisle airflow may be approved for use
when fitted with manufacturer-provided air baffles/deflectors that effectively
reroute the air to provide front-aisle-to-rear-aisle airflow. Equipment requiring air
baffles/deflectors for airflow compliance shall be tested by the manufacturer for
compliance with GR-63-CORE with such hardware in place. Forced-
air-cooled equipment may be approved for use with other than front-aisle air inlet
and shall not sustain any damage or deterioration of functional performance during
its operating life when operated at elevated air inlet temperatures, defined as front-
aisle inlet air temperature plus 10°C (18°F), including increased front-aisle air
ambient temperatures specified for NEBS 96-hour emergency operation.
Non-front-aisle air inlet equipment is required to be tested by the manufacturer for
compliance with GR-63-CORE at the specified increased air inlet temperatures.

Figure 5.3 View of a hot-aisle/cold-aisle configuration.

5.1.3 Cabinet Design


Blanking panels should be installed in all unused rack and cabinet spaces to maximize
and improve the functionality of the hot-aisle/cold-aisle air system. The blanking
panels should be added to the front cabinet rails, thereby preventing the recirculation
of hot air to the equipment inlet. Vented front and rear doors for the cabinet must be
nonrestrictive to airflow to reduce the load on ITE fans, which can otherwise cause
undesired ITE power consumption. Generally, a 60% open ratio or greater is
acceptable. To assist with hot-aisle/cold-aisle isolation, solid-roofed cabinets are
preferred.

5.2 EQUIPMENT ROOM AIRFLOW


In order to maximize the thermal and physical capabilities of the equipment room,
the equipment and the equipment room need to have compatible airflow schemes.
The following sections address guidelines that should be followed.

5.2.1 Placement of Cabinets and Rows of Cabinets


For equipment that follows the airflow protocol outlined in Section 5.1.2, a hot-aisle/
cold-aisle layout is recommended. Figure 5.3 shows the recommended layout of
aisles to meet the hot-aisle/cold-aisle configuration. The arrows in the cold aisle and
the hot aisle depict the intake airflow and the exhaust airflow, respectively. The intent
of the hot-aisle/cold-aisle concept is to maximize the delivery of cooled air to the
intakes of the electronic equipment and allow for the efficient extraction of the
warmed air discharged by the equipment.

Figure 5.4 View of a hot-aisle/cold-aisle configuration.

Figure 5.5 Example of hot and cold aisles for nonraised floor.
Recirculation can be reduced through tight cabinet placement and the use of
equipment blanking panels, as described in Section 5.1.3. It is the responsibility of
the facility operations personnel to determine the best way to implement hot-aisle/
cold-aisle configurations. Figure 5.4 shows an example of this configuration using
underfloor cooling found in a typical data center.
Figure 5.5 shows a typical non-raised-floor implementation. The overhead
ventilation system uses multiple air diffusers that inject cool air vertically (down-
ward) into the cold aisles.

5.2.2 Cabinets with Dissimilar Airflow Patterns


It is important to emphasize that the risks of not deploying cabinets with a front-to-
back airflow design in a hot-aisle/cold-aisle configuration are significant, especially
in rooms with high heat densities. The complexities of airflow dynamics are difficult
to predict without training and tools. To make the task easier, keep equipment with
the same type of airflow patterns together, with all exhausts in the same direction
toward the hot aisle.

Figure 5.6 Seven-tile aisle pitch, equipment aligned on cold aisle.
In implementations that do not use the hot-aisle/cold-aisle configuration,
warmed air discharged from the rear of one cabinet can be drawn into the front of
a nearby cabinet. This warmed air can be further warmed by the next row of equip-
ment and so on. This can create a potentially harmful situation for the equipment in
the cabinets farther to the rear. If not addressed, this condition would contribute to
increased equipment failures and system downtime. Therefore, place cabinets that
cannot use hot-aisle/cold-aisle configurations together in another area of the data
center, being careful to ensure that exhaust patterns from various equipment sections
are not drawn into equipment inlets. Again, the temperature measurements can
document the effect of recirculated hot air and should be compared to the recom-
mended and allowable temperature ranges.

5.2.3 Aisle Pitch

Aisle pitch is defined as the distance between the center of the reference cold aisle
and the center of the next cold aisle in either direction. A common aisle pitch for data
centers is seven floor tiles, based on two controlling factors. First, it is advisable to
allow a minimum of one complete floor tile in front of each rack. Second, maintain-
ing a minimum of three feet in any aisle with wheelchair access may be required by
Section 5.2 of the Americans with Disabilities Act (ADA), Document 28, CFR Part
36 (DOJ 1994). Based on the standard-sized domestic floor tile, these two factors
result in a seven-tile pitch, allowing two accessible tiles in the cold aisle, 914.4 mm
(3 ft) in the hot aisle, and reasonably deep rack equipment, as shown in Figure 5.6.
Table 5.1 lists potential equipment depth for a seven-tile pitch.

Table 5.1 Aisle Pitch Allocation

Region   Tile Size     Aisle Pitch          Nominal Cold     Maximum Space Allocated      Hot Aisle
                       (cold aisle to       Aisle Size(b)    for Equipment with           Size
                       cold aisle)(a)                        No Overhang(c)

U.S.     610 mm        4267 mm              1220 mm          1067 mm                      914 mm
         (2 ft)        (14 ft)              (4 ft)           (42 in.)                     (3 ft)

Global   600 mm        4200 mm              1200 mm          1043 mm                      914 mm
         (23.6 in.)    (13.78 ft)           (3.94 ft)        (41 in.)                     (3 ft)

a. If considering a pitch other than seven floor tiles, it is advised to increase or decrease the pitch in
whole tile increments. Any overhang into the cold aisle should take into account the specific design
of the front of the rack and how it affects access to and flow through the tile.
b. Nominal dimension assumes no overhang; less if front door overhang exists.
c. Typically a one metre rack is 1070 mm deep with the door and would overhang the front tile 3 mm
for a U.S. configuration and 27 mm for a global configuration.
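The equipment-depth column in Table 5.1 follows from simple arithmetic: the pitch, less the cold-aisle tiles and the hot-aisle width, is split between the two facing rows of equipment. The Python sketch below reproduces that arithmetic; the function name and default arguments are illustrative assumptions.

def max_equipment_depth_mm(tile_mm, pitch_tiles=7, cold_aisle_tiles=2, hot_aisle_mm=914):
    # One row of equipment faces each side of the cold aisle, so the depth
    # remaining in the pitch is divided between the two rows.
    pitch_mm = pitch_tiles * tile_mm
    return (pitch_mm - cold_aisle_tiles * tile_mm - hot_aisle_mm) / 2.0

print(max_equipment_depth_mm(610))  # 1068.0 mm (Table 5.1 lists 1067 mm, which uses exact 2 ft and 3 ft dimensions)
print(max_equipment_depth_mm(600))  # 1043.0 mm for global 600 mm tiles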

Figure 5.7 Seven-tile aisle pitch, equipment aligned on hot aisle.

Some installations require that the rear of a cabinet line up with the edge of a
removable floor tile to facilitate underfloor service, such as pulling cables. Adding
this constraint to a seven-tile pitch results in a 4 ft-wide hot aisle and forces a cold
aisle of less than 4 ft, with only one row of vented tiles and more limited cooling
capacity, as shown in Figure 5.7.
For larger cabinet sizes and/or higher power density equipment, it may be
advantageous to use an eight-tile pitch. Similarly, smaller equipment, especially
telecom form factors, can take advantage of tighter pitches. For example, a published
ANSI standard defines a Universal Telecommunication Framework (UTF) (ANSI
2009), where the baseline depth of the frame shall be 600 mm (23.6 in.); deeper
equipment may be permitted in special deeper lineups of 750 mm (29.5 in.) or 900
mm (35.4 in.) depths. All configurations need to be examined on a case-by-case
basis.
Aisle pitch determines how many perforated floor tiles can be placed in a cold
aisle. The opening in the tile together with the static pressure in the raised floor
plenum determines how much supply airflow is available to cool the ITE. Chapter
6 discusses how to determine the volume of airflow required by the design of the ITE.

6 Equipment Manufacturers’ Heat and Airflow Reporting
Chapter 6 provides guidance to users for estimating heat release from information
technology equipment (ITE) similar to what was developed by Telcordia in GR-
3028-CORE (2001) for the telecom market. Some ITE manufacturers provide
sophisticated tools to more accurately assess power and airflow consumption. When
available, the manufacturer should be consulted and data from their tools should be
used to provide more specific information than may be available in the thermal
report.

6.1 PROVIDING HEAT RELEASE AND AIRFLOW VALUES


This section contains a recommended process for ITE manufacturers to provide heat
release and airflow values to end users that result in more accurate planning for data
center air handling. It is important to emphasize that the heat release information is
intended for thermal management purposes.
Note: Nameplate ratings should at no time be used as a measure of equipment
heat release. The purpose of a nameplate rating is solely to indicate the maximum
power draw for safety and regulatory approval. Similarly, the heat release values
should not be used in place of the nameplate rating for safety and regulatory purposes.
Please refer to the definition of power in Section 1.4 (see Chapter 1).
In determining the correct equipment power and airflow characteristics, the
goal is to have an algorithm that works with variations in configurations and that is
reasonably accurate. The actual method of algorithm development and the defini-
tions of the equipment configurations are up to the manufacturer. The algorithm can
be a combination of empirically gathered test data and predictions, or it may consist
only of measured values. During equipment development, the algorithm may consist
only of predictions, but representative measured values must be factored into the
algorithm by the time the product is announced.
Heat release numbers, in watts, should be based on the following conditions:

• Steady state
• User controls or programs set to a utilization rate that maximizes the number
of simultaneous components, devices, and subsystems that are active
• Nominal voltage input
• Ambient temperature between 18°C and 27°C (64.4°F and 80.6°F)
• Air-moving devices at nominal speed

Airflow values should be reflective of those that would be seen in a data center.
Representative racking, cabling, and loading should be taken into account in airflow
reporting. Some ITE manufacturers employ variable-speed fans, which can result in
a large variance in airflow due to equipment loading and ambient conditions.
Airflow reporting should be based on the following conditions:

• Representative mounting (i.e., inside rack with doors shut)


• Representative cabling (cabling commensurate with the configuration level)
• Steady state
• User controls or programs set to a utilization rate that maximizes the number
of simultaneous components, devices, and subsystems that are active
• Nominal voltage input
• All normally powered fans operating
• Ambient temperature between 18°C and 27°C (64.4°F and 80.6°F)
• Sea level: report airflow values at an air density of 1.2 kg/m3 (0.075 lb/ft3)
(this corresponds to air at 18°C [64.4°F], 101.3 kPa [14.7 psia], and 50%
RH)

For equipment with variable-speed fans, in addition to the nominal airflow
value, it is recommended that a maximum airflow value be given for each configu-
ration. The conditions that yield the reported maximum flow values should be indi-
cated in the report. An example is shown in Table 6.1.
Once representative configurations have been tested, other values may be
obtained through a predictive algorithm. For predicted heat release and airflow
values, the accuracy should adhere to the following guidelines:

• The values predicted for tested configurations are within 10% of the measured
values.
• When the predicted values vary by more than 10% from the measured values,
the predictive algorithm is updated and revalidated (see the sketch below).
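The 10% criterion is straightforward to automate when comparing predicted and measured values. The short Python sketch below is one illustrative way to express it; the function name and example values are assumptions.

def within_tolerance(predicted, measured, tol=0.10):
    # True if the predicted heat-release or airflow value is within 10% of the
    # measured value; False indicates the predictive algorithm needs revalidation.
    return abs(predicted - measured) <= tol * measured

print(within_tolerance(5200, 5040))  # True (about 3.2% above the measured value)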

6.2 EQUIPMENT THERMAL REPORT


The report should include the following items (see Table 6.1 for an example thermal
report):

• Power for representative configurations: A table of configuration options
should always be included. The table of configuration options may be repre-
sentative or exhaustive, but it should span from minimum to maximum con-
figurations. Listed options should only be those that are orderable by the
customer. The table should include each of the following for each listed con-
figuration:
• Description of configuration.
• Steady-state heat release for equipment in watts for conditions defined in
Section 6.1. A calculator may also be provided at the discretion of the
manufacturer.
• Dimensions of configurations: height, width, and depth of the rack-
mountable or stand-alone equipment in I-P and SI units.
• Weight for configuration: weight in pounds and kilograms of the rack-
mountable or stand-alone equipment.
• Airflow characteristics of each configuration in cfm and m3/h for condi-
tions defined in Section 6.1.
• Airflow diagram showing intake and exhaust of system (side, top, front or
back). Specify scheme using syntax defined in Figures 5.1 and 5.2 (see
Chapter 5).
• Applicable ASHRAE environmental class designation(s): Compliance with a
particular environmental class requires full-performance operation of the
equipment over the entire allowable environmental range, based on nonfailure
conditions.

6.3 EPA ENERGY STAR REPORTING


ASHRAE TC 9.9 has recommended better thermal reporting for a number of years.
Recently the United States Environmental Protection Agency (EPA) has adopted
many of ASHRAE’s recommendations into their ENERGY STAR program, partic-
ularly the recent development of the ENERGY STAR requirements for servers.
Note that not all servers are required to meet these documentation requirements,
only those for which the manufacturer desires an ENERGY STAR rating. The
ENERGY STAR program is constantly being refined, so the reader is encouraged
to check the EPA website for the latest information. The current version (as of this
writing) is 1.1 and can be found on the ENERGY STAR Web site (DOE 2012b).
The ENERGY STAR Rev 1 requirements for typical 1U and 2U servers include
the use of a Power and Performance Data Sheet (PPDS), which, at a minimum, must
include the following:

1. Model name and number, identifying SKU, and/or configuration ID
2. System characteristics (form factor, available sockets/slots, power specifica-
tions, etc.)
3. System configuration(s) (including maximum, minimum, and typical configu-
rations for product family qualification)
4. Power data for idle state and full load, estimated energy consumption in kWh/
year, and link to power calculator (where available)
5. Additional power and performance data for at least one benchmark chosen by
the partner
6. Available and enabled power saving features (e.g., power management)
7. Information on the power measurement and reporting capabilities of the
computer server
8. Select thermal information from the ASHRAE thermal report
9. A list of additional qualified SKUs or configuration IDs, along with specific
configuration information (for product family qualification only)
Table 6.1 Example Thermal Report

XYZ Co. Model abc Server: Representative Configurations

                Typical Heat    Airflow(a),      Airflow, Maximum                  Overall System Dimensions(b)
Condition       Release         Nominal          @ 35°C (95°F)      Weight         (W × D × H)
                (@110 V), W     cfm    m3/h      cfm     m3/h       lbs     kg     in.             mm

Minimum         1765            400    680       600     1020       896     406    30 × 42 × 72    762 × 1016 × 1828
Configuration
Full            10,740          750    1275      1125    1913       1528    693    61 × 40 × 72    1549 × 1016 × 1828
Configuration
Typical         5,040           555    943       833     1415       1040    472    30 × 40 × 72    762 × 1016 × 1828
Configuration

Table 6.1 Example Thermal Report (Continued)

XYZ Co. Model abc Server: Representative Configurations

Condition                  System Configuration                             Airflow Diagram             ASHRAE Class

Minimum Configuration      1 CPU-A, 1 GB, 2 I/O                             Cooling scheme F-R          A1, A2
Full Configuration         8 CPU-B, 16 GB, 64 I/O (2 GB cards, 2 frames)    (diagram not reproduced)
Typical Configuration      4 CPU-A, 8 GB, 32 I/O (2 GB cards, 1 frame)

a. Airflow values are for an air density of 1.2 kg/m3 (0.075 lb/ft3). This corresponds to air at 18°C (64.4°F),
101.3 kPa (14.7 psia), and 50% RH.
b. Footprint does not include service clearance or cable management, which is zero on the sides, 46 in. (1168 mm)
in the front, and 40 in. (1016 mm) in the rear.

A template for the current recommended PPDS can be found on the ENERGY
STAR Web site (DOE 2012a).
Note that in item 8 above, the ASHRAE thermal report is called out directly.
The EPA requires that, beyond the server configuration, the manufacturer also
report the following:

• Total power dissipation in watts (Note that this is not the same value as the
electrical nameplate data, which should not be used for sizing cooling systems.)
• Delta temperature at 35°C (95°F) inlet (see the sketch following this list)
• Airflow (cfm) at max fan speed at max inlet temperature (35°C [95°F])
• Airflow (cfm) at nominal fan speed at nominal temperature (18°C to 27°C
[64.4°F to 80.6°F])
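The reported delta temperature, heat release, and airflow are tied together by the sensible heat balance for air (heat release equals air density times volumetric flow times specific heat times temperature rise). The Python sketch below shows that relationship as a rough cross-check, not as a required calculation; the function name and the nominal air properties are assumptions, and the example values are taken from the Table 6.1 illustration.

def exhaust_delta_t_c(heat_release_w, airflow_cfm, air_density_kg_m3=1.2, cp_j_per_kg_k=1006.0):
    # Air temperature rise implied by the reported heat release and airflow
    # (Q = rho * V_dot * cp * dT; 1 cfm = 0.000471947 m^3/s).
    airflow_m3_s = airflow_cfm * 0.000471947
    return heat_release_w / (air_density_kg_m3 * airflow_m3_s * cp_j_per_kg_k)

# Example: the "typical" configuration in Table 6.1 (5040 W at 555 cfm nominal airflow)
print(round(exhaust_delta_t_c(5040, 555), 1))  # about a 16 deg C (29 deg F) rise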

The version 1.0 ENERGY STAR specification is built around a Class A2 server.
The version 2.0 specification is still under development but can be expected to add
additional reporting requirements to what is already in place.

6.3.1 Server Thermal Data Reporting Capabilities


The EPA has also mandated that the server self-report data while operational to
provide greater benefit and information in the power and cooling areas.
1.0 specification (see link above) also requires that the server provide, through stan-
dard nonproprietary reporting protocols, the following information:

• Input power in watts, with recommended accuracy at the system level of
±10% with a cutoff of ±10 W (i.e., accuracy is not required to be better than
±10 W).
• Inlet air temperature in degrees Celsius, with accuracy of ± 3°C (5.4°F).
• Estimated processor utilization for each logical CPU that is visible to the OS.
This data shall be reported to the operator or user of the computer server
through the operating environment (operating system or hypervisor).

As ENERGY STAR servers (or any servers with the above capability)
become more prevalent, the ability to provide a higher level of integration between
IT management and the building management systems will allow the data center
designer and operator to more fully optimize the data center for maximum efficiency.
References and Bibliography

ACGIH. 1992. 1992–1993 Threshold Limit Values for Chemical Substances and
Physical Agents and Biological Exposure Indices. Cincinnati: American Con-
ference of Governmental Industrial Hygienists.
ANSI. 1997. ANSI T1.304-1997, Ambient Temperature and Humidity Require-
ments for Network Equipment in Controlled Environments. New York: Amer-
ican National Standards Institute.
ANSI. 2009. Engineering Requirements for a Universal Telecommunication
Framework (UTF), Standards Committee T1 Telecommunications, Working
Group T1E1.8, Project 41. New York: American National Standards Institute.
ASHRAE. 2004. Thermal Guidelines for Data Processing Environments. Atlanta:
ASHRAE.
ASHRAE. 2005a. Datacom Equipment Power Trends and Cooling Applications.
Atlanta: ASHRAE.
ASHRAE. 2005b. Design Considerations for Datacom Equipment Centers.
Atlanta: ASHRAE.
ASHRAE. 2006. Liquid Cooling Guidelines for Datacom Equipment Centers.
Atlanta: ASHRAE.
ASHRAE. 2008. Thermal Guidelines for Data Processing Environments, Second
Edition. Atlanta: ASHRAE.
ASHRAE. 2009a. ASHRAE Handbook—Fundamentals. Atlanta: ASHRAE.
ASHRAE. 2009b. Design Considerations for Datacom Equipment Centers, Sec-
ond Edition. Atlanta: ASHRAE.
ASHRAE. 2009c. Particulate and Gaseous Contamination in Datacom Environ-
ments. Atlanta: ASHRAE.
ASHRAE. 2009d. Weather Data Viewer Software, Version 4. Atlanta: ASHRAE.
ASHRAE. 2010. ANSI/ASHRAE/IES Standard 90.1-2010, Energy Standard for
Buildings Except Low-Rise Residential Buildings. Atlanta: ASHRAE.
ASHRAE. 2011. ASHRAE Handbook—HVAC Applications. Atlanta:
ASHRAE.
ASHRAE. 2012. Datacom Equipment Power Trends and Cooling Applications,
Second Edition. Atlanta: ASHRAE.
Atwood, D., and Miner J. 2008. Reducing data center cost with an air economizer.
Brief, Intel Corporation, Santa Clara, CA. http://www.intel.com/content/
www/us/en/data-center-efficiency/data-center-efficiency-xeon-reducing-data-
center-cost-with-air-economizer-brief.html.
Blinde, D., and L. Lavoie. 1981. Quantitative effects of relative and absolute
humidity on ESD generation/suppression. Proceedings of EOS/ESD Sympo-
sium, vol. EOS-3, pp. 9–13.
Comizzoli, R.B., R.P. Frankenthal, R.E. Lobnig, G.A. Peins, L.A. Psato-Kelty,
D.J. Siconolfi, and J.D. Sinclair. 1993. Corrosion of electronic materials and
devices by submicron atmospheric particles. The Electrochemical Society
Interface 2(3):26–34.
Cohen, J.E., and C. Small. 1998. Hypsographic demography: The distribution of
human population by altitude. Proceedings of the National Academy of Sci-
ences of the United States of America 95(24):14009–14.
DOE. 2012a. ENERGY STAR Enterprise Servers. U.S. Environmental Protection
Agency, U.S. Department of Energy, Washington, D.C. http://www.energys-
tar.gov/index.cfm?c=archives.enterprise_servers.
DOE. 2012b. ENERGY STAR Server Data Sheet. U.S. Environmental Protection
Agency, U.S. Department of Energy, Washington, D.C. http://www.energys-
tar.gov/ia/partners/prod_development/new_specs/downloads/servers/
Server_Data_Sheet_04-24-09.xls?88d8-ab69.
DOJ. 1994. Code of Federal Regulations, Standards for Accessible Design, 28
CFR Part 36, Section 4.3.3 Width. Revised as of July 1, 1994. U.S. Depart-
ment of Justice, ADA Standards for Accessible Design, Washington, D.C.
http://www.usdoj.gov/crt/ada/adastd94.pdf.
EIA. 1992. EIA-310, Revision D, Sept. 1, 1992, Racks, Panels and Associated
Equipment. Electronic Industries Association.
ETSI. 2009. ETSI EN 300 753 V1.2.1 (2009-03), Equipment Engineering (EE);
Acoustic Noise Emitted by Telecommunications Equipment. http://
webapp.etsi.org/workprogram/Report_WorkItem.asp?WKI_ID=3392.
ETSI. 1994. ETSI 300 019-1-0 (1994-05), Equipment Engineering (EE); Environ-
mental Conditions and Environmental Tests for Telecommunications Equip-
ment Part 1-0: Classification of Environmental Conditions, European
Telecommunications Standards Institute, France.
ETSI. 1999. ETSI EN 300 019-2-3 V2.1.2 (1999-09), Equipment Engineering
(EE), Environmental Conditions and Environmental Tests for Telecommunica-
tions Equipment; Part 2-3: Specification of Environmental Tests; Stationary
Use at Weather Protected Locations. European Telecommunications Stan-
dards Institute, France.
ETSI. 2009. ETSI EN 300 019-1-3 V2.3.2, Equipment Engineering (EE); Envi-
ronmental Conditions and Environmental Tests for Telecommunications
Equipment; Part 1–3: Classification of Environmental Conditions; Stationary
Use at Weatherprotected Locations. European Telecommunications Standards
Institute, France.
Europa. 2003. Directive 2003/10/EC on the minimum health and safety require-
ments regarding the exposure of workers to the risks arising from physical
agents (noise). European Agency for Safety and Health at Work, Bilbao,
Spain. http://osha.europa.eu/en/legislation/directives/exposure-to-physical-
hazards/osh-directives/82.
GPO. 1976. U.S. Standard Atmosphere. U.S. Government Printing Office, Wash-
ington, D.C.
Hamilton, P., G. Brist, G. Barnes Jr., and J. Schrader. 2007. Humidity-dependent
loss in PCB substrates. Proceedings of the Technical Conference, IPC Expo/
APEX 2007, February 20–22, Los Angeles, CA.
Herrlin, M.K. 2005. Rack cooling effectiveness in data centers and telecom central
offices: The rack cooling index (RCI). ASHRAE Transactions 111(2).
Hinaga, S., M.Y. Koledintseva, J.L. Drewniak, A. Koul, and F. Zhou. 2010. Ther-
mal effects on PCB laminate material dielectric constant and dissipation fac-
tor. IEEE Symposium on Electromagnetic Compatibility, July 25–30, Fort
Lauderdale, FL.
HSE. 1992. Workplace Health, Safety and Welfare; Workplace (Health, Safety and
Welfare) Regulations 1992; Approved Code of Practice. Liverpool, Mersey-
side, England: Health and Safety Executive.
IEC. 1999. IEC 60950, Safety of Information Technology Equipment, 3d ed.
Geneva, Switzerland: International Electrotechnical Commission.
IEEE. 1997. IEEE Standard 1156.4-1997, IEEE Standard for Environmental Spec-
ifications for Spaceborne Computer Modules. New York: Institute of Electri-
cal and Electronics Engineers.
ISO 1988. ISO 9296, Acoustics—Declared Noise Emission Values of Computer
and Business Equipment. Geneva, Switzerland: International Organization for
Standardization.
ISO 2010. ISO 7779, Acoustics Measurement of Airborne Noise Emitted by Infor-
mation Technology and Telecommunications Equipment, Third Edition.
Geneva, Switzerland: International Organization for Standardization.
Jones, D.A. 1996. Principles and Prevention of Corrosion, 2nd Edition. Upper
Saddle River, NJ: Prentice Hall.
Montoya. 2002. Sematech electrostatic discharge impact and control workshop,
Austin, TX. http://ismi.sematech.org/meetings/archives/other/20021014/mon-
toya.pdf.
NIOSH. 2012. NIOSH Publications and Products, Working in Hot Environments.
Centers for Disease Control and Prevention, National Institute for Occupa-
tional Safety and Health (NIOSH), Atlanta, GA. http://www.cdc.gov/niosh/
docs/86-112/.
OSHA. 1980. Noise Control: A Guide for Workers and Employers. U.S. Depart-
ment of Labor, OSHA, Office of Information, Washington, D.C. http://
www. nonoise.org/hearing/noisecon/noisecon.htm.
OSHA. 1999. OSHA Technical Manual, Section III, Chapter 4, “Heat stress.”
Directive #TED 01-00-015, U.S. Department of Labor, Occupational Safety
and Health Administration, Washington, D.C. http://www.osha.gov/dts/osta/
otm/otm_iii/otm_iii_4. html.
OSHA. 2012a. Occupational Heat Exposure. U.S. Department of Labor, Occupa-
tional Safety and Health Administration, Washington, D.C. http://
www.osha.gov/SLTC/heatstress/index.html.
OSHA. 2012b. Occupational Safety and Health Act of 1970, General Duty Clause,
Section 5(a)(1). U.S. Department of Labor, Occupational Safety and Health
Administration, Washington, D.C. http://www.osha.gov/pls/oshaweb/owa-
disp.show_document?p_id=3359&p_table=OSHACT.
Patterson, M.K. 2008. The effect of data center temperature on energy efficiency.
Proceedings of Itherm Conference, Orlando, Florida.
Patterson, M.K., D. Atwood, and J.G. Miner. 2009. Evaluation of air-side econo-
mizer use in a compute-intensive data center. Interpack’09, July 19–23, San
Francisco, CA, pp. 1009–14.
Sauter, K. 2001. Electrochemical migration testing results—Evaluating printed
circuit board design, manufacturing process and laminate material impacts on
CAF resistance. Proceedings of IPC Printed Circuits Expo, Anaheim, CA.
Simonic, R. 1982. ESD event rates for metallic covered floor standing information
processing machines. Proceedings of the IEEE EMC Symposium, Santa
Clara, CA, pp. 191–98.
Singh, P. 1985. Private communication with the author, Poughkeepsie, NY.
Singh, P., G.T. Gaylon, J.H. Dorler, J. Zahavi, and R. Ronkese. 1992. Potentiody-
namic polarization measurements for predicting pitting of copper in cooling
waters. Corrosion 92, The NACE Annual Conference and Corrosion Show,
Nashville, TN.
Sood, B. 2010. Effects of Moisture Content on Permittivity and Loss Tangent of
PCB Materials. Webinar, Center for Advanced Life Cycle Engineering
(CALCE), University of Maryland, College Park.
Statskontoret. 2002. Technical Standard 26:5, Acoustical Noise Emissions of
Information Technology Equipment. Stockholm, Sweden. http://www-
05.ibm.com/se/ibm/environment/pdf/buller-TN26-5.pdf.
Statskontoret. 2004. Technical Standard 26:6, Acoustical Noise Emission of Infor-
mation Technology Equipment. Stockholm, Sweden. http://arkiv.edelega-
tionen.se/verva/upload/publikationer/2004/2004-TN26-6-Acoustical-Noice-
Emission.pdf.
Telcordia. 2001. GR-3028-CORE, Thermal management in telecommunications
central offices. Telcordia Technologies Generic Requirements, Issue 1,
December 2001. Piscataway, N.J.: Telcordia Technologies, Inc.
Telcordia. 2012. GR-63-CORE, Network equipment—Building system (NEBS)
requirements: Physical protection. Telcordia Technologies Generic Require-
ments, Issue 4, April 2012. Piscataway, N.J.: Telcordia Technologies, Inc.
Turbini, L.J., and W.J. Ready. 2002. Conductive anodic filament failure: A materi-
als perspective. School of Materials Science and Engineering, Georgia Insti-
tute of Technology, Atlanta, GA. http://130.207.195.147/Portals/2/12.pdf.
Turbini, L.J., W.J. Ready, and B.A. Smith. 1997. Conductive anodic filament
(CAF) formation: A potential reliability problem for fine-line circuits. The
Reliability Lab School of Materials Science and Engineering, Georgia Insti-
tute of Technology.
Van Bogart, W.C. 1995. Magnetic tape storage and handling: A guide for libraries
and archives. National Media Laboratory, Oakdale, MN. http://www.clir.org/
pubs/reports/pub54/5premature_degrade.html.
Appendix A
2008 ASHRAE Environmental Guidelines
for Datacom Equipment—Expanding the
Recommended Environmental Envelope
Most of the discussion in Appendix A is taken from the 2008 second edition of Ther-
mal Guidelines. The information here has been updated for the current edition.
The recommended environmental envelope for IT equipment (ITE) is listed in
Table 2.1 of ASHRAE’s Thermal Guidelines for Data Processing Environments
(2004) (see Table A.1 for a comparison between the 2004 and 2008 versions). These
recommended conditions, as well as the allowable conditions, refer to the inlet air
entering the datacom equipment. Specifically, the 2004 edition lists for data centers
in ASHRAE Classes 1 and 2 a recommended environmental range of 20°C to 25°C
(68°F to 77°F) dry-bulb temperature and a relative humidity (RH) range of 40% to
55% (refer to Thermal Guidelines for details on data center type, altitude, recom-
mended vs. allowable, etc.). The allowable and recommended envelopes for Class 1
are shown in Figure A.1.
To provide greater flexibility in facility operations, particularly with the goal of
reduced energy consumption in data centers, ASHRAE TC 9.9 undertook an effort
to revisit the recommended equipment environmental specifications (ASHRAE
2008), specifically the recommended envelope for Classes 1 and 2 (the recom-
mended envelope is the same for both of these environmental classes). The result of
this effort, detailed in this appendix, was to expand the recommended operating envi-
ronment envelope as shown in Table A.1. The purpose of the recommended envelope
was to give guidance to data center operators on maintaining high reliability and also
operating data centers in the most energy-efficient manner.
The allowable envelope is where IT manufacturers test their equipment in order
to verify that it will function within those environmental boundaries. Typically,
manufacturers perform a number of tests prior to the announcement of a product to
verify that it meets all the functionality requirements within this environmental enve-
lope. This is not a statement of reliability but one of functionality of the ITE.

Table A.1 Comparison of 2004 and 2008 Versions of Recommended Envelopes

                           2004 Version    2008 Version
  Low-end temperature      20°C (68°F)     18°C (64.4°F)
  High-end temperature     25°C (77°F)     27°C (80.6°F)
  Low-end moisture         40% RH          5.5°C (41.9°F) DP
  High-end moisture        55% RH          60% RH and 15°C (59°F) DP

However, the recommended envelope is a statement of reliability. IT manufacturers
recommend that data center operators maintain their environment within the recom-
mended envelope for extended periods of time. Exceeding the recommended limits
for short periods of time should not be a problem, but running near the allowable
limits for months could result in increased reliability issues. In reviewing the avail-
able data from a number of IT manufacturers, the 2008 expanded recommended
environmental envelope became the agreed-upon envelope that is acceptable to all
IT manufacturers, and operation within this envelope does not compromise overall
reliability of ITE.
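As a rough, illustrative aid only (not part of the guideline itself), the 2008 recommended
boundaries summarized in Table A.1 can be expressed as a simple check on measured ITE
inlet conditions. The short Python sketch below assumes inlet dry-bulb and dew-point
temperatures are available from sensors and uses a common Magnus-type approximation for
saturation vapor pressure to evaluate the 60% RH ceiling; the function names and constants
are illustrative choices, not values taken from this book.

    import math

    def saturation_vapor_pressure_hpa(t_celsius):
        # Magnus-type approximation for saturation vapor pressure over water (hPa).
        return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

    def in_2008_recommended_envelope(dry_bulb_c, dew_point_c):
        """Check an ITE inlet condition against the 2008 recommended envelope:
        18-27 degC dry bulb, 5.5-15 degC dew point, and RH not exceeding 60%."""
        rh = 100.0 * (saturation_vapor_pressure_hpa(dew_point_c) /
                      saturation_vapor_pressure_hpa(dry_bulb_c))
        return (18.0 <= dry_bulb_c <= 27.0 and
                5.5 <= dew_point_c <= 15.0 and
                rh <= 60.0)

    # Example: 24 degC dry bulb with a 12 degC dew point (about 47% RH) falls inside
    # the recommended envelope; 26 degC with a 3 degC dew point is too dry.
    print(in_2008_recommended_envelope(24.0, 12.0))  # True
    print(in_2008_recommended_envelope(26.0, 3.0))   # False

Any such check should, of course, defer to the manufacturer-specified ranges where ITE
vendors publish requirements that differ from the general envelope.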
The previous and 2008 recommended envelope data are shown in Table A.1.
This envelope was created for general use across all types of businesses and condi-
tions. However, different environmental envelopes may be more appropriate for
different business values and climate conditions. Therefore, to allow for the potential
to operate in a different envelope that might provide even greater energy savings, this
third edition provides general guidance on server metrics that will assist data center
operators in creating a different operating envelope that matches their business
values. Each of these metrics is described in Chapter 2. Through these guidelines,
the user will be able to determine what environmental conditions best meet their
technical and business needs. Any choice outside of the recommended region will
be a balance between the additional energy savings of the cooling system and the
deleterious effects on total cost of ownership (TCO), including total site
energy use, reliability, acoustics, and performance.
Neither the 2004 nor the 2008 recommended operating environment ensures that
the data center is operating at optimum energy efficiency. Depending on the cooling
system, design, and outdoor environmental conditions, there will be varying degrees
of efficiency within the recommended zone. For instance, when the ambient temper-
ature in the data center is raised, the thermal management algorithms within some
datacom equipment increase the speeds of air-moving devices to compensate for the
higher inlet air temperatures, potentially offsetting the gains in energy efficiency due
to the higher ambient temperature. It is incumbent upon each data center operator to
review and determine, with appropriate engineering expertise, the ideal operating
point for their system. This includes taking into account the recommended range and
site-specific conditions. The full recommended envelope is not the most energy-effi-
cient environment when a refrigeration cooling process is being used. For example,
the high dew point in the upper areas of the envelope results in latent cooling (conden-
sation) on refrigerated coils, especially in direct expansion (DX) units. Latent cooling
decreases the available sensible cooling capacity for the cooling system and, in many
cases, makes it necessary to humidify to replace moisture removed from the air.
The ranges included in this document apply to the inlets of all equipment in the
data center (except where IT manufacturers specify other ranges). Attention is
needed to make sure the appropriate inlet conditions are achieved for the top portion
of ITE racks. The inlet air temperature in many data centers tends to be warmer at
the top portion of racks, particularly if the warm rack exhaust air does not have a
direct return path to the computer room air conditioners (CRACs). This warmer air
also affects the RH, resulting in lower values at the top portion of the rack.
Air temperature generally follows a horizontal line on the psychrometric chart,
where the absolute humidity remains constant but the RH decreases.

Figure A.1 2008 recommended environmental envelope (new Class 1 and 2).

Finally, it should be noted that the 2008 change to the recommended upper
temperature limit from 25°C to 27°C (77°F to 80.6°F) can have detrimental effects
on acoustical noise levels in the data center. See the section “Acoustical Noise
Levels” later in this appendix for a discussion of these effects.
The 2008 recommended environmental envelope is shown in Figure A.1. The
reasoning behind the selection of the boundaries of this envelope is described below.
DRY-BULB TEMPERATURE LIMITS
Part of the rationale in choosing the new low- and high-temperature limits stemmed
from the generally accepted practice for the telecommunication industry’s central
office, based on NEBS GR-3028-CORE (Telcordia 2001), which uses the same dry-
bulb temperature limits as specified here. In addition, this choice has prece-
dent in the reliable operation of telecommunications electronic equipment, based on
a long history of central office installations all over the world.
Low End
From an IT point of view, there is no concern in moving the lower recommended
limit for dry-bulb temperature from 20°C to 18°C (68°F to 64.4°F). In equipment
with constant-speed air-moving devices, a facility temperature drop of 2°C (3.6°F)
results in about a 2°C (3.6°F) drop in all component temperatures. Even if variable-
speed air-moving devices are deployed, typically no change in speed occurs in this
temperature range, so component temperatures again experience a 2°C (3.6°F) drop.
One reason for lowering the recommended temperature is to extend the control
range of economized systems by not requiring a mixing of hot return air to maintain
the previous 20°C (68°F) recommended limit. The lower limit should not be inter-
preted as a recommendation to reduce operating temperatures, as this could increase
hours of chiller operation and increase energy use. A non-economizer-based cooling
system running at 18°C (64.4°F) will most likely carry an energy penalty. (One
reason to use a non-economizer-based cooling system would be a wide range of inlet
rack temperatures due to poor airflow management; however, fixing the airflow
would likely be a good first step toward reducing energy.) Where the setpoint for the
room temperature is taken at the return to cooling units, the recommended range
should not be applied directly, as this could drive energy costs higher from over-
cooling the space. The recommended range is intended for the inlet to the ITE. If the
recommended range is used as a return air setpoint, the lower end of the range (18°C
to 20°C [64.4°F to 68°F]) increases the risk of freezing the coils in a DX cooling
system.
High End
The greatest justification for increasing high-side temperature is to increase hours
of economizer use per year. For non-economizer systems, there may be an energy
benefit by increasing the supply air or chilled-water temperature setpoints.
However, the move from 25°C to 27°C (77°F to 80.6°F) can have an impact on the
ITE’s power dissipation. Most IT manufacturers start to increase air-moving device
speed around 25°C (77°F) to improve the cooling of the components and thereby
offset the increased ambient air temperature. Therefore, care should be taken before
operating at the higher inlet conditions. The concern that increasing the IT inlet air
temperatures might have a significant effect on reliability is not well founded. An
increase in inlet temperature does not necessarily mean an increase in component
temperatures. Figure A.2 shows a typical component temperature relative to an
increasing ambient temperature for an IT system with constant-speed fans.
In Figure A.2, the component temperature is 21.5°C (38.7°F) above the inlet
temperature of 17°C (62.6°F), and it is 23.8°C (42.8°F) above an inlet ambient
temperature of 38°C (100.4°F). The component temperature tracks the air inlet
ambient temperature very closely.
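To make this tracking behavior concrete, the two operating points quoted above (a 21.5°C
rise at a 17°C inlet and a 23.8°C rise at a 38°C inlet) can be joined by a simple linear
interpolation of component temperature versus inlet temperature for a constant-fan-speed
system. This is only a sketch of the trend in Figure A.2, not a vendor thermal model.

    def component_temp_fixed_fan(inlet_c):
        # Linear interpolation between the two operating points cited in the text:
        # a 21.5 degC rise above a 17 degC inlet and a 23.8 degC rise above a 38 degC inlet.
        rise = 21.5 + (23.8 - 21.5) * (inlet_c - 17.0) / (38.0 - 17.0)
        return inlet_c + rise

    # A 2 degC drop in inlet temperature (e.g., 20 -> 18 degC) lowers the estimated
    # component temperature by roughly 2 degC as well.
    print(round(component_temp_fixed_fan(20.0) - component_temp_fixed_fan(18.0), 2))  # ~2.22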
Now consider the response of a typical component in a system with variable-
speed fan control, as depicted in Figure A.3. Variable-speed fans decrease the fan
flow rate at lower temperatures to save energy. Ideal fan control optimizes the reduc-
tion in fan power to the point that component temperatures are still within vendor
temperature specifications (i.e., the fans are slowed to the point that the component
temperature is constant over a wide range of inlet air temperatures).
This particular system has a constant fan flow up to approximately 23°C
(73.4°F). Below this inlet air temperature, the component temperature tracks closely
to the ambient air temperature. Above this inlet temperature, the fan adjusts flow rate
such that the component temperature is maintained at a relatively constant temperature.

Figure A.2 Inlet and component temperatures with fixed fan speed.
Figure A.3 Inlet and component temperatures with variable fan speed.

This data brings up several important observations:

• Below a certain inlet temperature (23°C [73.4°F] in the case described above),
IT systems that employ variable-speed air-moving devices have constant fan
power, and their component temperatures track fairly closely to ambient tem-
perature changes. Systems that don’t employ variable-speed air-moving
devices track ambient air temperatures over the full range of allowable ambi-
ent temperatures.
• Above a certain inlet temperature (23°C [73.4°F] in the case described
above), the speed of the air-moving device increases to maintain fairly con-
stant component temperatures and, in this case, inlet temperature changes have
little to no effect on component temperatures and thereby no effect on reliability.
• The introduction of ITE that employs variable-speed air-moving devices has
minimized the effect on component reliability as a result of changes in ambi-
ent temperatures and allowed for the potential of large increases in energy
savings, especially in facilities that deploy economizers.

As shown in Figure A.3, IT fan power can increase dramatically as the fans ramp
up speed to counter the increased inlet ambient temperature. The graph shows a typi-
cal power increase that results in the near-constant component temperature. In this
case, the fan power increased from 11 W at ~23°C (73.4°F) inlet temperature to over
60 W at 35°C (95°F) inlet temperature. The inefficiency in the power supply results
in an even larger system power increase. The total room power (facilities + IT) may
actually increase at warmer temperatures. IT manufacturers should be consulted
when considering system ambient temperatures approaching the upper recom-
mended ASHRAE temperature specification. See Patterson (2008) for a technical
evaluation of the effect of increased environmental temperature, where it was shown
that an increase in temperature can actually increase energy use in a standard data
center but reduce it in a data center with economizers in the cooling system.
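The fan power figures quoted above can be tied together with the common fan affinity
relationship in which power varies roughly with the cube of speed. The sketch below uses
the 11 W and 60 W values from the text as anchor points and assumes, purely for
illustration, that fan speed ramps linearly between 23°C and 35°C; the actual ramp shape
is set by each vendor's thermal control algorithm and will differ.

    def fan_power_w(inlet_c, p_base=11.0, t_base=23.0, t_max=35.0, p_max=60.0):
        """Estimate IT fan power versus inlet temperature for the example above.

        Below t_base the fans run at constant speed (constant power). Above it,
        speed is assumed to ramp linearly up to the ratio implied by the cube law,
        (p_max / p_base) ** (1/3), at t_max; power then follows the cube of speed.
        """
        if inlet_c <= t_base:
            return p_base
        max_speed_ratio = (p_max / p_base) ** (1.0 / 3.0)   # about 1.76x at 35 degC
        frac = min((inlet_c - t_base) / (t_max - t_base), 1.0)
        speed_ratio = 1.0 + frac * (max_speed_ratio - 1.0)
        return p_base * speed_ratio ** 3

    for t in (20, 23, 27, 31, 35):
        print(t, round(fan_power_w(t), 1))   # 11.0, 11.0, ~21.7, ~37.6, 60.0 W

Note that this sketch covers fan power only; as stated above, power supply inefficiency
adds to the total system power increase.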
Because of the derating of the maximum allowable temperature with altitude for
Classes 1 and 2, the recommended maximum temperature is derated by 1°C/300 m
(1.8°F/984 ft) above 1800 m (5906 ft).
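A minimal sketch of this altitude derating rule, assuming the 27°C (80.6°F) recommended
maximum from Table A.1 as the sea-level starting point (the function name is illustrative):

    def recommended_max_temp_c(altitude_m, sea_level_max_c=27.0):
        # Derate the recommended maximum dry-bulb temperature by 1 degC per 300 m
        # above 1800 m, per the rule stated in the text.
        if altitude_m <= 1800.0:
            return sea_level_max_c
        return sea_level_max_c - (altitude_m - 1800.0) / 300.0

    print(recommended_max_temp_c(1000))   # 27.0 degC
    print(recommended_max_temp_c(3000))   # 23.0 degC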
MOISTURE LIMITS
High End
Based on extensive reliability testing of printed circuit board laminate materials, it
was shown that conductive anodic filament (CAF) growth is strongly related to RH
(Sauter 2001). As humidity increases, time to failure rapidly decreases. Extended
periods of RH exceeding 60% can result in failures, especially given the reduced
conductor-to-conductor spacing common in many designs today. The CAF mecha-
nism involves electrolytic migration after a path is created. Path formation could be
due to a breakdown of inner laminate bonds driven by moisture, which supports the
electrolytic migration and explains why moisture is so key to CAF formation. The
upper moisture region is also important for disk and tape drives. In disk drives, there
are head flyability and corrosion issues at high humidity. In tape drives, high humid-
ity can increase frictional characteristics of tape and increase head wear and head
corrosion. High RH, in combination with common atmospheric contaminants, is
required for atmospheric corrosion. The humidity forms monolayers of water on
surfaces, thereby providing the electrolyte for the corrosion process. Sixty percent
RH is associated with sufficient monolayer buildup for the adsorbed water to begin
taking on fluid-like properties. When the humidity also exceeds the critical equi-
librium humidity of a contaminant's saturated salt, a hygroscopic corrosion product
is formed, further enhancing the buildup of acid-electrolyte surface wetness and
greatly accelerating the corrosion process. Although disk drives do contain internal
means to control and neutralize pollutants, maintaining humidity levels below the
critical humidity levels of multiple monolayer formation retards initiation of the
corrosion process.
A maximum recommended dew point of 15°C (59°F) is specified to provide an
adequate guard band between the recommended and allowable envelopes.

Low End
The motivation for lowering the moisture limit is to allow a greater number of hours
per year where humidification (and its associated energy use) is not required. The
previous recommended lower limit was 40% RH. This correlates on a psychrometric
chart to 20°C (68°F) dry-bulb temperature and a 5.5°C (41.9°F) dew point (lower
left) and a 25°C (77°F) dry-bulb and a 10.5°C (50.9°F) dew point (lower right). The
drier the air is, the greater the risk of electrostatic discharge (ESD). The main
concern with decreased humidity is that the intensity of static electricity discharges
increases. These higher-voltage discharges tend to have a more severe impact on the
operation of electronic devices, causing error conditions requiring service calls and,
in some cases, physical damage. Static charges of thousands of volts can build up on
surfaces in very dry environments. When a discharge path is offered, such as a main-
tenance activity, the electric shock of this magnitude can damage sensitive electron-
ics. If the humidity level is reduced too far, static dissipative materials can lose their
ability to dissipate charge and then become insulators.
The mechanism of the static discharge and the impact of moisture in the air are
not widely understood. Montoya (2002) demonstrates, through a parametric study,
that ESD charge voltage level is a function of dew point or absolute humidity in the
air and not a function of RH. Simonic (1982) studied ESD events across various
temperature and moisture conditions over a period of a year and found significant
increases in the number of events (20×) depending on the level of moisture content
(winter vs. summer months). It was not clear whether the important parameter was
absolute humidity or RH.
Blinde and Lavoie (1981) studied electrostatic charge decay (vs. discharge) of
several materials and showed that it is not sufficient to specify environmental ESD
protection in terms of absolute humidity; nor is a RH specification sufficient, since
temperature affects ESD parameters other than atmospheric moisture content.
The 2004 recommended range includes a dew-point temperature as low as 5.5°C
(41.9°F). Discussions with ITE manufacturers indicated that there have been no
known reported ESD issues within the 2004 recommended environmental limits. In
addition, the referenced information on ESD mechanisms (Montoya 2002; Simonic
1982; Blinde and Lavoie 1981) does not suggest a direct RH correlation with ESD
charge creation or discharge, but Montoya (2002) does demonstrate a strong corre-
lation of dew point to charge creation. A lower humidity limit based upon a mini-
mum dew point (rather than a minimum RH) is therefore proposed: the 2008
recommended lower limit is a line from 18°C (64.4°F) dry-bulb temperature and
5.5°C (41.9°F) dew-point temperature to 27°C (80.6°F) dry-bulb temperature and a
5.5°C (41.9°F) dew-point temperature. Over this range of dry-bulb temperatures and
a 5.5°C (41.9°F) dew point, the RH varies from approximately 25% to 45%.
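The approximate end-point RH values quoted for this constant dew-point line can be
reproduced with a standard Magnus-type saturation-vapor-pressure approximation (the
constants below are one common published choice, not values from this book):

    import math

    def saturation_vapor_pressure_hpa(t_celsius):
        # Magnus-type approximation for saturation vapor pressure over water (hPa).
        return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

    def relative_humidity_pct(dry_bulb_c, dew_point_c):
        return 100.0 * (saturation_vapor_pressure_hpa(dew_point_c) /
                        saturation_vapor_pressure_hpa(dry_bulb_c))

    # Along the 5.5 degC dew-point line of the 2008 recommended envelope:
    print(round(relative_humidity_pct(18.0, 5.5)))  # ~44%, i.e., roughly 45% at 18 degC
    print(round(relative_humidity_pct(27.0, 5.5)))  # ~25% at 27 degC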
Another practical benefit of this change is that process changes in data centers
and their HVAC systems, in this area of the psychrometric chart, are generally sensi-
ble only (i.e., horizontal on the psychrometric chart). An RH-based lower limit greatly
complicates the control and operation of the cooling systems and could require
added humidification, at a cost of increased energy, to maintain an
RH level when the space is already above the needed dew-point temperature. To avoid
these complications, operators often restricted the hours of economizer operation
available under the 2004 guidelines.
ASHRAE has a research project to investigate moisture levels and ESD with the
hope of driving the recommended range to a lower moisture level in the future. Apart
from ESD, low moisture levels can result in drying out of lubricants, which can adversely
affect some components. Possible examples include motors, disk drives, and tape
drives. While manufacturers indicated acceptance of the environmental extensions
documented here, some expressed concerns about further extensions. Another
concern for tape drives at low moisture content is the increased tendency to collect
debris on the tape and around the head and tape transport mechanism due to static
buildup.

ACOUSTICAL NOISE LEVELS


The ASHRAE 2008 recommendation to expand the environmental envelope for
datacom facilities may have an effect on acoustical noise levels. Noise levels in
high-end data centers have steadily increased over the years and are becoming a seri-
ous concern for data center managers and owners. For background and discussion
on this, see Chapter 9, “Acoustical Noise Emissions,” in ASHRAE’s Design Consid-
erations for Datacom Equipment Centers (2005). The increase in noise levels is the
obvious result of the significant increase in cooling requirements of new, high-end
datacom equipment. The increase in concern results from noise levels in data centers
approaching or exceeding regulatory workplace noise limits, such as those imposed
by OSHA in the U.S. or by EC directives in Europe. Empirical fan laws generally
predict that the sound power level of an air-moving device increases with the fifth
power of rotational speed. This means that a 20% increase in speed (e.g., 3000 to
3600 rpm) equates to a 4 dB increase in noise level. While it is not possible to predict
a priori the effect on noise levels of a potential 2°C (3.6°F) increase in data center
temperatures, it is not unreasonable to expect to see increases in the range of 3–5 dB.
Data center managers and owners should, therefore, weigh the trade-offs between
the potential energy efficiencies with the recommended new operating environment
and the potential increases in noise levels.
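As a quick check of the arithmetic above, the fifth-power fan law can be written as a
sound-power change of 50 times the base-10 logarithm of the speed ratio. The sketch below
evaluates the 20% speed increase cited in the text and the speed ratios that would
correspond to 3 and 5 dB increases:

    import math

    def noise_increase_db(speed_ratio):
        # Empirical fan law: sound power rises with roughly the fifth power of speed.
        return 50.0 * math.log10(speed_ratio)

    print(round(noise_increase_db(3600 / 3000), 1))   # ~4.0 dB for a 20% speed increase
    # Speed ratios implied by 3 dB and 5 dB increases (about 15% and 26% faster):
    for db in (3.0, 5.0):
        print(db, round(10 ** (db / 50.0), 3))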
With regard to the regulatory workplace noise limits, and to protect employees
against potential hearing damage, data center managers should check whether
potential changes in noise levels in their environments will cause them to trip various
“action level” thresholds defined in local, state, or national codes. The actual regu-
lations should be consulted, because they are complex and beyond the scope of this
document to explain fully. For instance, when levels exceed 85 dB(A), hearing
conservation programs are mandated, which can be quite costly and generally
involve baseline audiometric testing, noise level monitoring or dosimetry, noise
hazard signage, and education and training. When levels exceed 87 dB(A) (in
Europe) or 90 dB(A) (in the U.S.), further action, such as mandatory hearing protec-
tion, rotation of employees, or engineering controls, must be taken. Data center
managers should consult with acoustical or industrial hygiene experts to determine
whether a noise exposure problem will result from increasing ambient temperatures
to the 2008 upper recommended limit.
Data Center Operation Scenarios for
ASHRAE’s 2008 Recommended Environmental Limits
The recommended ASHRAE guideline is meant to give guidance to IT data center
operators on the inlet air conditions to the ITE for the most reliable operation. Four
possible scenarios where data center operators may elect to operate at conditions
that lie outside the recommended environmental window are listed as follows.

1. Scenario #1: Expand economizer use for longer periods of the year where
hardware failures are not tolerated.
• For short periods of time, it is acceptable to operate outside this recom-
mended envelope and approach the allowable extremes. All manufactur-
ers perform tests to verify that the hardware functions at the allowable
limits. For example, if during the summer months it is desirable to oper-
ate for longer periods of time using an economizer rather than turning on
the chillers, this should be acceptable, as long as the period of warmer
inlet air temperatures to the datacom equipment does not exceed several
days each year; otherwise, the long-term reliability of the equipment
could be affected. Operation near the upper end of the allowable range
may result in temperature warnings from the ITE.
2. Scenario #2: Expand economizer use for longer periods of the year where
limited hardware failures are tolerated.
• All manufacturers perform tests to verify that the hardware functions at
the allowable limits. For example, if during the summer months it is
desirable to operate for longer periods of time using the economizer
rather than turning on the chillers, and if the data center operation is such
that periodic hardware failures are acceptable, then operating for extended
periods of time near or at the allowable limits may be acceptable. This, of
course, is a business decision of where to operate within the allowable
and recommended envelopes and for what periods of time. Operation
near the upper end of the allowable range may result in temperature
warnings from the ITE.
3. Scenario #3: Failure of cooling system or servicing cooling equipment.
• If the system was designed to perform within the recommended environ-
mental limits, it should be acceptable to operate outside the recom-
mended envelope and approach the extremes of the allowable envelope
during the failure. All manufacturers perform tests to verify that the hard-
ware functions at the allowable limits. For example, if a modular CRAC
unit fails in the data center, and the temperatures of the inlet air of the
nearby racks increase beyond the recommended limits but are still within
the allowable limits, this is acceptable for short periods of time until the
failed component is repaired. As long as the repairs are completed within
normal industry times for these types of failures, this operation should be
acceptable. Operation near the upper end of the allowable range may
result in temperature warnings from the ITE.
4. Scenario #4: Addition of new servers that push the environment beyond the
recommended envelope.
• For short periods of time, it should be acceptable to operate outside the
recommended envelope and approach the extremes of the allowable enve-
lope. All manufacturers perform tests to verify that the hardware func-
tions at the allowable limits. For example, if additional servers are added
to the data center in an area that would increase the inlet air temperatures
to the server racks above the recommended limits but adhere to the allow-
able limits, this should be acceptable for short periods of time until the
ventilation can be improved. The length of time operating outside the rec-
ommended envelope is somewhat arbitrary, but several days would be
acceptable. Operation near the upper end of the allowable range may
result in temperature warnings from the ITE.
