International Journal of Numerical Methods for Heat & Fluid Flow

Rack level transient CFD modeling of data center

Yogesh Fulpagare, Yogendra Joshi, Atul Bhargav

To cite this document:
Yogesh Fulpagare, Yogendra Joshi, Atul Bhargav, "Rack level transient CFD modeling of data center", International Journal of Numerical Methods for Heat & Fluid Flow, https://doi.org/10.1108/HFF-10-2016-0426

Permanent link to this document:
https://doi.org/10.1108/HFF-10-2016-0426
Downloaded on: 30 January 2018, At: 03:55 (PT)


RACK LEVEL TRANSIENT CFD MODELING OF DATA CENTER

1. Introduction
About 2% of the total electricity generated in the USA is consumed by data centers [1]. Power consumption in data centers is dominated by the servers and the cooling units [2]. Server power consumption is tied directly to the computational workload and cannot easily be reduced, whereas cooling unit consumption can. In raised floor plenum data centers, the cooling units supply airflow to the servers. The data center environment involves continuous circulation of hot and cold air streams, which makes the system highly dynamic. An effective way to understand these dynamics is through response experiments and Computational Fluid Dynamics (CFD) modeling.

1.1 Literature background


Raised floor plenum (RFP) data centers have a distinctive airflow pattern for supplying cold air to each server in a rack. Computer Room Air Conditioning (CRAC) units deliver cold air to the server racks through perforated tiles above the plenum. Heated air leaving the rear of the racks collects toward the ceiling and is directed back to the top of the CRAC units. This airflow pattern creates alternating cold and hot aisle zones inside an RFP data center. Because of this pattern, hot air can recirculate into the cold aisle, and a proper ceiling return is therefore necessary [3]. Accordingly, parameters such as rack inlet and outlet temperatures, tile airflow rates, and CRAC supply air temperature have been studied for different data center configurations using steady state CFD analysis [4]–[10].

Thermal conditions within a data center are dynamic due to many interacting parameters and hence cannot be captured through steady state analysis alone [11]. The first transient numerical analysis was performed by Beitelmal et al. [12] to study the impact of CRAC failure on temperature variations within the data center. Thermal mass characteristics with detailed transient boundary conditions for data center system-level simulations were highlighted by Ibrahim et al. [13], [14]. However, the airflow rate entering a rack cabinet can vary with the pressure difference across the servers, which can be quantified by characterizing the server fans [15]. Tracking transient performance inside the data center is essential, especially during cooling failure. Based on such an experimental study, Alissa et al. recommended modifications to the ASHRAE guidelines [16]. Another room level transient CFD study, by Erden et al. [17], captured the effect of rack shutdown in terms of the server time constant and thermal mass, and recommended further investigation of the unexplored data center dynamics.

The CRAC set points play a very important role in achieving dynamic thermal management of a data center. HP Laboratories addressed this by proposing a control scheme that adjusts the CRAC set points based on sensor data in the system [18]. In another study, Choi et al. [19] developed the CFD-based ThermoStat tool to generate thermal profiles of servers and to design cooling systems for optimized thermal management. Dynamic response studies and thermal control for different operating conditions were suggested as future research scope, with ThermoStat released as an open source tool.

A transient numerical model of data center clusters under power variation and CRAC failure was presented to demonstrate that steady state CFD cannot predict the dynamic behavior of the system [20]. With a similar data center layout, cold aisle containment configurations were studied by Alkharabsheh et al. [21]. Continuing the same work, Alkharabsheh et al. [22] updated the server heat capacity, rack frames, and CRAH models, and found that the modeling details of heat capacity and CRAH affect the system significantly more than rack frame modeling.

Erden et al. studied the transient thermal response of an air-cooled data center using experiments and CFD simulations [23]. The server characteristics were determined experimentally and used to develop the models. The transient response of server inlet air temperatures was recorded for various cases, viz. rack shutdown, chilled water interruption, CRAH airflow change, and repeated CRAH fan failure. Room level transient CFD results were validated against experiments along with a lumped capacitance model [17], [24]. A data center room level transient study by Zhang et al. found a significant impact on energy consumption from transient variation of environmental conditions and system load [22].

Therefore, if the server and rack level dynamics are well understood, system level transient behavior can be interpreted more easily. In this scenario, the accuracy of the numerical model of the server rack becomes imperative. Zhang et al. studied the effect of rack modeling details in a data center test cell [25] and found that the numerical details of the rack have a significant impact on CFD predictions. That conclusion was based on steady state CFD investigations with leakage and turbulence flow modeling.

A black box compact server model with physical properties was first demonstrated by VanGilder et al. based on experiments and numerical modeling [26], [27]. A correlation of server capacitance with server thermal effectiveness (the ratio of actual heat transfer to the maximum possible heat transfer between the air stream and the thermal mass) was demonstrated for specific servers to provide the server thermal mass properties. A porous media approach for the server was successfully implemented and validated against experiments by Almoli et al. [28].
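The thermal effectiveness defined above reduces to a ratio of temperature differences, because the air mass flow rate and specific heat cancel between the numerator and the denominator. A minimal sketch of that definition (the temperatures used below are hypothetical values for illustration, not data from any cited study):

```python
def server_thermal_effectiveness(t_in, t_out, t_mass):
    """Thermal effectiveness = actual / maximum heat transfer between the
    air stream and the thermal mass. With q = m_dot * cp * dT on both sides,
    m_dot * cp cancels and only the temperature ratio remains."""
    return (t_out - t_in) / (t_mass - t_in)

# Hypothetical case: 17 degC inlet air, 38 degC outlet air, 45 degC thermal mass.
eps = server_thermal_effectiveness(17.0, 38.0, 45.0)   # 0.75
```

An effectiveness near 1 means the air leaves close to the thermal mass temperature; low values indicate the air bypasses most of the stored heat.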

In summary, dynamic studies of data centers have addressed server characteristics, CRAC failure, and rack shutdown, and have observed the effect on the rack inlet air temperature. We have not found a study that highlights the response of the rack outlet air temperature. The majority of transient studies have focused on the server and room levels, and very few on the rack level. The focus over the past three to four years on developing detailed boundary conditions, especially server characteristics, has added significant contributions to the literature. However, the rack level local dynamics are yet to be understood, and the recent literature highlights the need to investigate data center dynamics and to reframe the ASHRAE guidelines. In this context, the rack level transient study of local dynamics in the data center is the focus of this paper.

1.2 Research objective and approach


Based on the literature survey, especially on transient studies of data centers, a dynamic study is imperative to achieve self-sensing of the system. An extensive transient study at the rack level would add further insight to the existing literature. Therefore, the main objective of this paper is to study the rack level dynamics in a raised floor plenum data center.

To fulfill this objective, the research approach is as follows. 1) Dynamic response experiments: this part describes the lab facility, design of experiments, and measurement details in a raised floor plenum data center. The transient variation of rack air temperatures on a server simulator rack is captured for heat load variation. 2) CFD modeling: two computational domains were considered for transient CFD modeling, viz. a single rack and three racks, each with their inflow and outflow aisles. Based on the rack outlet air temperature response and the airflow uniformity, the single rack domain was extended to three racks. 3) Insights of the study: the thermal behavior and the model validation are discussed based on the experiments and numerical simulations.

2. Experimental details
This section describes the experimental lab layout, measurement details, and design of the experiment. A server rack in a raised floor plenum data center room was considered for the analysis (Figure 1 [29]). The 42U (1U = 4.45 cm) rack houses four server simulators, each 10U high, with control settings for heating load and fan flow rates [29], [30]. One down-flow unit, CRAC 2, was operational during the experiments; the other two were off. The tests were performed on the simulator rack and the two neighboring racks, with their cold and hot aisle zones as highlighted in Figure 1.

Sensor placement plays a key role in the experiments. We placed eight calibrated T-type thermocouple sensors (uncertainty ±0.4%) on each of the front and rear sides of each rack (48 in total), positioned so that the temperature in the vicinity of every server would be covered (Figure 2). The sensors were connected via wireless modules to an OSIsoft PI system network. The rack inlet and outlet air temperatures were recorded until the system reached steady state. The heat loads of the simulator rack were then incremented as a step input once steady state was achieved.

2.1 Measurement details

To capture the effect of server heat load on the rack inlet and outlet air temperatures, several parameters were held constant: the total heat load of all the racks, the CRAC supply air temperature, the server fan speeds, and the perforated tile airflow rates. All racks were kept at a constant 10 kW heat load before starting the experiments. All four simulators were kept at a 2.5 kW heat load with a fan flow rate of 0.15 m3/s. The CRAC supply air temperature was kept at 17 °C with a blower flow rate of ~3 m3/s. The perforated tile airflow rates were nearly constant, with an average velocity of 1.54 m/s.
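As a sanity check on these settings, the steady-state air temperature rise across one simulator follows from a simple energy balance, ΔT = Q / (ρ cp V̇). A minimal sketch, where the air density and specific heat are assumed textbook values at room temperature rather than values reported in the paper:

```python
def air_temperature_rise(q_w, vdot_m3s, rho=1.184, cp=1005.0):
    """Steady-state air temperature rise (degC) across a heat source:
    dT = Q / (rho * cp * Vdot), with air density rho in kg/m^3 and
    specific heat cp in J/(kg K), both assumed at ~25 degC."""
    return q_w / (rho * cp * vdot_m3s)

# One simulator: 2.5 kW heat load with a 0.15 m^3/s fan flow rate.
dt = air_temperature_rise(2500.0, 0.15)   # roughly a 14 degC rise
```

This back-of-envelope rise is consistent with air entering near the 17 °C supply temperature and leaving the rack in the low 30s under the baseline load.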

An anemometric tool matching the perforated tile size was used for the tile airflow rate measurements, as shown in Figure 3. The tool has four plexiglass walls, with open top and bottom sides. The top side was covered by a stainless-steel wire grid (4 × 4) holding sixteen non-directional thermal anemometers (UAS 1200LP) with an accuracy of ±5% of the measured value (operating range: 0.5-5 m/s, temperature range: 0-70 °C) [31].

Though we did not measure the airflow rates across the server simulators in this study, the measured average fan speed for each dial setting is presented in Figure 4. An optical tachometer was used to measure the average server simulator fan speed for different dial settings (Figure 4 (b)). The uncertainty in the simulator fan speed was ±2.5% [32]. The CRAC unit (model: FH740CMAJEIS055) was maintained at a constant 38% blower speed, and the corresponding CRAC flow rate was computed from the manufacturer's data [33].

2.2 Design of experiments

To record the response of the rack inlet and outlet air temperatures, the initial conditions described in the previous section were held constant until steady state was reached. The variation of the input heat source used as the perturbation is one of the crucial steps in designing the experiment. The input heat source was stepped and ramped as shown in Figure 5, with each input held constant for an hour after the perturbation. The experimental results are presented in the results and discussion section.
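A stepped schedule of this kind can be expressed programmatically. The sketch below builds a piecewise-constant heat input sampled once per minute; the specific levels are illustrative, not the exact schedule of Figure 5:

```python
import numpy as np

def stepped_heat_input(t_h, levels_kw, hold_h=1.0):
    """Piecewise-constant heat input: levels_kw[i] is applied at t = i * hold_h
    hours and held until the next step; the last level persists afterwards."""
    q = np.empty_like(t_h, dtype=float)
    for i, level in enumerate(levels_kw):
        q[t_h >= i * hold_h] = level   # later steps overwrite earlier ones
    return q

t = np.arange(0.0, 4.0, 1.0 / 60.0)              # four hours, one-minute sampling
q = stepped_heat_input(t, [2.5, 3.5, 4.5, 5.5])  # kW levels, each held for 1 h
```

The same helper can drive either a physical load controller or a transient CFD boundary condition, which keeps the experimental and numerical perturbations identical by construction.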

3. Computational model
This section presents the mathematical modeling, meshing, and boundary condition details of the computational domains considered for the transient CFD analysis. For the rack level transient CFD analysis, we selected two computational domains, viz. a single simulator rack and three racks, each with their inflow and outflow aisles. The governing equations and boundary conditions are the same for both computational domains.
3.1 Governing equations and boundary conditions

The mathematical model for fluid flow in the computational domain was formulated from the incompressible form of the Navier-Stokes equations with the k-epsilon turbulence model [34]. The three-dimensional, transient CFD analysis of the current model involves momentum forces coupled with complex thermo-fluid interactions. The governing equations were discretized using the finite volume method, and the commercial tool STAR-CCM+ from Siemens was used for the analysis [35]. Hexahedral meshes with cell counts of about 0.9 and 2 million were used for the single and three rack domains respectively, based on a mesh sensitivity analysis [36] (Figure 6).

The transient CFD runs were started with a time step of 1 millisecond and constant initial boundary conditions. The solution stabilized in ~24 hours of wall-clock time on a 2.5 GHz i5 processor with parallel runs on 4 cores. After reaching a steady state solution at around 2 hours of total solution time, the input perturbation was initialized as shown in Figure 5, using the Algebraic Multigrid (AMG) method.

The AMG method exploits the relationship between grid spacing and error frequency, which increases the computational speed, especially for nonlinear equations, by improving the convergence rate [37]. A plain AMG solver can lose efficiency and take longer to converge, but AMG with acceleration takes fewer cycles. Sakaino showed that using the multigrid method as an accelerated solver saves computational time [38], by comparing it with MICCG (the conjugate gradient method with modified incomplete Cholesky preconditioning) in conjunction with a GPU. Therefore, we implemented the AMG solver with acceleration in our case to enhance the computational speed. Though the experiments were performed with one simulator rack, the computational domain was first taken as the single simulator rack and then extended to three racks, including the two neighboring server racks with their cold and hot aisles, as shown in Figure 6.
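The speed-up mechanism behind multigrid can be illustrated with a geometric two-grid cycle on a 1-D Poisson problem: smoothing damps high-frequency error on the fine grid, and the coarse-grid correction removes the smooth error that plain iteration handles poorly. This is only a sketch of the multigrid idea, not the AMG implementation inside STAR-CCM+:

```python
import numpy as np

def poisson_matrix(n, h):
    """1-D Poisson operator -u'' with Dirichlet BCs on n interior points."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h ** 2

def weighted_jacobi(A, u, f, sweeps, omega=2.0 / 3.0):
    """Weighted Jacobi relaxation: a good smoother for high-frequency error."""
    dinv = 1.0 / np.diag(A)
    for _ in range(sweeps):
        u = u + omega * dinv * (f - A @ u)
    return u

def prolongation(n_coarse, n_fine):
    """Linear interpolation from the coarse grid (n_fine = 2*n_coarse + 1)."""
    P = np.zeros((n_fine, n_coarse))
    for j in range(n_coarse):
        P[2 * j + 1, j] = 1.0        # coincident fine node
        P[2 * j, j] += 0.5           # left fine neighbor
        if 2 * j + 2 < n_fine:
            P[2 * j + 2, j] += 0.5   # right fine neighbor
    return P

def two_grid_cycle(A, u, f, P, Ac):
    u = weighted_jacobi(A, u, f, 3)      # pre-smoothing
    r = f - A @ u                        # fine-grid residual
    rc = 0.5 * P.T @ r                   # full-weighting restriction
    ec = np.linalg.solve(Ac, rc)         # exact coarse-grid solve
    u = u + P @ ec                       # coarse-grid correction
    return weighted_jacobi(A, u, f, 3)   # post-smoothing

n = 31                        # fine interior points; coarse grid has (n-1)/2
nc = (n - 1) // 2
h = 1.0 / (n + 1)
A = poisson_matrix(n, h)
P = prolongation(nc, n)
Ac = 0.5 * P.T @ A @ P        # Galerkin coarse operator R A P
f = np.ones(n)

u_mg = np.zeros(n)
for _ in range(5):            # five two-grid cycles (30 smoothing sweeps total)
    u_mg = two_grid_cycle(A, u_mg, f, P, Ac)
u_j = weighted_jacobi(A, np.zeros(n), f, 30)   # same sweep budget, no coarse grid

res_mg = np.linalg.norm(f - A @ u_mg)
res_j = np.linalg.norm(f - A @ u_j)
```

With the same number of smoothing sweeps, the two-grid residual drops by orders of magnitude while plain Jacobi barely reduces the smooth error component, which is exactly why an accelerated AMG solver converges in far fewer cycles.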

Cold inflow air passes from the plenum chamber through the perforated tiles, with the tile flow rate specified by an average inlet velocity of 1.54 m/s at a temperature of 17 °C. To reduce computational complexity, the plenum chamber itself was not modeled; instead, a vertical velocity inlet was applied at the perforated tiles. The top surface of the rear domain was modeled as an exhaust at zero gauge pressure. All the server racks and simulators were modeled as porous media with 75% porosity. The method for determining the inertial and viscous resistances of the porous media was adopted from [28], which closely replicates the flow conditions of the actual flow inside the servers (Table 1). Each server simulator was assigned a total heat source of 2500 W, contributing 10 kW of heat to the rack. The interface between the rear side of the rack and the outflow aisle was modeled as a server fan using characteristics provided by the manufacturer [29]. Buoyancy effects were included using the Boussinesq approximation.
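The porous resistances in Table 1 translate into a pressure gradient across the server. A minimal sketch, assuming the usual Darcy-Forchheimer form in which CFD codes apply a viscous (linear) and an inertial (quadratic) resistance to the superficial velocity:

```python
def pressure_gradient(v, viscous=4.1, inertial=0.91):
    """Streamwise pressure gradient (Pa/m) through the server porous medium,
    assuming dp/dx = Pv * v + Pi * v**2 with the viscous resistance Pv in
    kg/(m^3 s) and the inertial resistance Pi in kg/m^4 (Table 1 values)."""
    return viscous * v + inertial * v ** 2

v = 1.54                       # average approach velocity, m/s
dpdx = pressure_gradient(v)    # ~8.5 Pa per metre of server depth
```

Multiplying by the server depth gives the pressure drop the server fan model must overcome, which is how the porous coefficients and the fan curve interact in the simulation.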
4. Results and discussion

This section presents the results of the experiments and the transient CFD analysis for the single rack and three rack computational domains, each with their cold and hot aisles.

4.1 Experimental results

The responses were recorded over about 12 hours of continuous runs for the stepped and ramped heat loads applied after steady state. The variation in rack inlet air temperature was negligible, because no hot air mixed into the cold aisle.

The most affected rack outlet air temperature response (highlighted in Figure 7) shows an increase in temperature from 23 to 54 °C. This significant rise of more than 30 °C demonstrates the need for rack level dynamic studies. In general, as heating increases, the server fan speed increases to enhance convective heat transfer. Here, the server simulator fan speed was held constant, so there was no mechanism to enhance convective heat transfer as the heat load increased. This explains the large temperature rise, which was confirmed by repeating the same experiment. In contrast, the corresponding responses of the two neighboring racks showed an increase of only ~2 °C for the same input.

4.2 Transient CFD results

The flow and thermal dynamics of the first system (simulator rack only) were captured on horizontal and vertical sections of the domain, as shown in Figure 8. Temperature increases with height, with a maximum difference of ~10 °C between the top and bottom sections, and the rear side of the rack is at a higher temperature than the front. The flow passes from the tiles through the servers and then toward the ceiling, as shown by the velocity streamlines in the second part of Figure 8.

From the velocity field, three distinct zones can be identified. The inlet aisle temperatures lie in a narrow band of ~3 °C with low-velocity recirculation. The high-temperature zone at the bottom of the outlet aisle coincides with a low-velocity zone in the same area. A similar analysis performed on the three rack domain showed more uniform flow and temperature fields than the single rack model (Figure 9).

The experimental and computational responses of the simulator rack to the input heat source were compared as shown in Figure 10. The initial single rack simulation performed poorly because of the small extent of the domain and its inlet-outlet aisles. The experimental temperature response rises sharply with the stepped input, whereas the simulation produces a curved response, because the thermal inertia of the server mass contributes a slower response than in the experiments. Due to the restricted airflow within the one rack domain, temperatures ~5-6 °C higher were observed in both the steady and transient variations. In the three rack domain, however, the simulation results were validated against the experiments. This validation within an acceptable range was achieved by expanding the computational domain as well as by improving the server thermal characteristics (Table 1).
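The curved simulation response described above is characteristic of a first-order lumped thermal system approaching a new equilibrium. A minimal sketch of that behavior; the time constant below is an illustrative assumption, not a value fitted to Figure 10:

```python
import numpy as np

def first_order_response(t, t0, t_inf, tau):
    """Lumped-capacitance step response:
    T(t) = T_inf + (T0 - T_inf) * exp(-t / tau)."""
    return t_inf + (t0 - t_inf) * np.exp(-t / tau)

# Illustrative: outlet air stepping from 23 to 54 degC with an assumed
# 0.2 h thermal time constant for the rack and server mass.
t = np.linspace(0.0, 1.0, 61)                       # one hour, in hours
T = first_order_response(t, 23.0, 54.0, 0.2)        # degC
```

A larger effective thermal mass (larger tau) stretches the curve, which is why adjusting the server thermal characteristics in Table 1 changes how closely the simulated transient tracks the sharp experimental step.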

The recommended design envelope for data center environments is 5-45 °C [39]. In the current study, however, a maximum temperature of 55 °C was observed, which is beyond the specified range. In such situations, the hot aisle thermal profile will raise an alert to decrease the supply air temperature, ultimately increasing power consumption. Thus, the major contribution of this study is an understanding of the rack outlet air temperature response to heat load variations. The air temperature should therefore be monitored at the outlet of the server racks, in addition to the rack inlet air and cooling unit temperatures, to understand the dynamic effects.

The rack level dynamics can be explored further by monitoring the rack inlet and outlet air temperatures inside a data center with stepped or ramped heat loads applied after the system reaches steady state. The same domain can readily be simulated in CFD with an extended domain (including neighboring racks) to understand the effects of other parameters such as cold airflow rates, cold air temperature, and different server configurations, which is not possible via experiments. This study can be extended to the aisle level once these effects are understood on a single rack. Though ASHRAE provides a standard safe range of operating variables for RFP data centers, understanding the effective critical limits of the system variables is imperative. This study not only provides the details of the response experiments but also provides simple steps to understand the rack level dynamics.

5. Conclusions
The dynamics of a data center depend on the server workload (heat released due to the workload) and the cold airflow rate through the CRAC units. The rack level transient effects were studied through CFD as well as response experiments, characterizing the rack outlet air temperatures under variations of the server rack heat load. During the experimental runs, the system was found to stabilize within an hour. The rack level dynamics have a significant effect on the thermal and fluid characteristics in the hot and cold aisles. The air velocity increases as the flow passes through the servers, which enhances convective heat transfer at the rear end of the rack. Additional workload or heating does not significantly affect the thermal profiles of neighboring racks but strongly affects the same rack. The rise in rack outlet air temperature was significant and may cross the critical limit. The airflow at the exhaust of the servers is influenced by the server fans as well as by the temperature differential.

The computational study on three racks with their cold and hot aisles showed better performance than the single rack domain. The porous media approach with thermal mass characteristics improved the computational results, which were validated against experiments. The multigrid method helped reduce the computational time by increasing the convergence rate. These validated transient CFD results can be further used to examine the effect of perforated tile airflow rate on the rack inlet and outlet air temperatures. The response data, with all possible input variations, will form the basis for data-driven modeling to be used in automation of the system.

References

[1] R. Brown et al., "Report to Congress on Server and Data Center Energy Efficiency: Public Law 109-431," 2008.

[2] J. G. Koomey, "Growth in Data Center Electricity Use 2005 to 2010," Oakland, CA: Analytics Press.

[3] Y. U. Makwana, A. R. Calder, and S. K. Shrivastava, "Benefits of properly sealing a cold aisle containment system," Thermomechanical Phenom. Electron. Syst. - Proceedings Intersoc. Conf., pp. 793–797, 2014.

[4] R. R. Schmidt, E. E. Cruz, and M. Iyengar, "Challenges of data center thermal management," IBM J. Res. Dev., vol. 49, no. 4.5, pp. 709–723, Jul. 2005.

[5] S. Shrivastava, B. Sammakia, R. Schmidt, and M. Iyengar, "Comparative Analysis of Different Data Center Airflow Management Configurations," Adv. Electron. Packag. Parts A, B, C, pp. 329–336, 2005.

[6] Y. Fulpagare and A. Bhargav, "Advances in data center thermal management," Renew. Sustain. Energy Rev., vol. 43, pp. 981–996, 2015.

[7] Y. Fulpagare, G. Mahamuni, and A. Bhargav, "Effect of Plenum Chamber Obstructions on Data Center Performance," Appl. Therm. Eng., vol. 80, pp. 187–195, 2015.

[8] X. Zhang, J. W. VanGilder, M. Iyengar, and R. R. Schmidt, "Effect of rack modeling detail on the numerical results of a data center test cell," in 2008 11th Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems, 2008, pp. 1183–1190.

[9] J. Rambo and Y. Joshi, "Multi-scale modeling of high power density data centers," in IPACK'03, Kauai, HI, 2003, pp. 1–7.

[10] S. Bhopte, M. K. Iyengar, B. Sammakia, R. Schmidt, and D. Agonafer, "Numerical Modeling of Data Center Clusters - Impact of Model Complexity," in Proceedings of IMECE2006, November 5-10, Chicago, Illinois, USA, 2006, pp. 1–10.
[11] M. Jonas, R. R. Gilbert, J. Ferguson, G. Varsamopoulos, and S. K. S. Gupta, “A transient
model for data center thermal prediction,” 2012 Int. Green Comput. Conf., pp. 1–10, Jun.
2012.

[12] A. H. Beitelmal and C. D. Patel, "Thermo-fluids provisioning of a high performance high density data center," Distrib. Parallel Databases, vol. 21, no. 2–3, pp. 227–238, 2007.

[13] M. Ibrahim, S. Bhopte, B. Sammakia, B. Murray, M. Iyengar, and R. Schmidt, "Effect of transient boundary conditions and detailed thermal modeling of data center rooms," IEEE Trans. Components, Packag. Manuf. Technol., vol. 2, no. 2, pp. 300–310, 2012.

[14] M. Ibrahim, S. Bhopte, B. Sammakia, B. Murray, M. Iyengar, and R. Schmidt, "Effect of Thermal Characteristics of Electronic Enclosures on Dynamic Data Center Performance," in IMECE2010, November 12-18, Vancouver, British Columbia, Canada, 2010, pp. 1–10.

[15] K. Nemati, H. A. Alissa, B. T. Murray, B. G. Sammakia, and M. Seymour, "Experimentally Validated Numerical Model of a Fully-Enclosed Hybrid Cooled Server Cabinet," in ASME 2015 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems (InterPACK), 2015.

[16] H. A. Alissa et al., “Chip to chiller experimental cooling failure analysis of data centers:
The interaction between IT and facility,” IEEE Trans. Components, Packag. Manuf.
Technol., vol. PP, no. 99, pp. 1–18, 2016.

[17] H. S. Erden, H. E. Khalifa, and R. R. Schmidt, “Room-Level Transient CFD Modeling of Rack
Shutdown,” in ASME 2013 International Technical Conference and Exhibition on
Packaging and Integration of Electronic and Photonic Microsystems InterPACK, 2013, pp.
1–8.

[18] C. E. Bash, C. D. Patel, and R. K. Sharma, “Dynamic Thermal Management of Air Cooled
Data Centers,” Therm. Thermomechanical Proc. 10th Intersoc. Conf. Phenom. Electron.
Syst. 2006. ITHERM 2006., pp. 445–452, 2006.

[19] J. Choi, Y. Kim, A. Sivasubramaniam, J. Srebric, Q. Wang, and J. Lee, "A CFD-Based Tool for Studying Temperature in Rack-Mounted Servers," IEEE Trans. Comput., vol. 57, no. 8, pp. 1129–1142, 2008.

[20] S. Gondipalli et al., “Numerical modeling of data center with transient boundary
conditions,” 2010 12th IEEE Intersoc. Conf. Therm. Thermomechanical Phenom. Electron.
Syst., pp. 1–7, Jun. 2010.

[21] S. Alkharabsheh, M. Ibrahim, S. Shrivastava, R. Schmidt, and B. Sammakia, "Transient Analysis for Contained-Cold-Aisle Data Center," in Proceedings of the ASME 2012 International Mechanical Engineering Congress & Exposition (IMECE2012), pp. 1–8.

[22] S. Alkharabsheh, B. Sammakia, S. Shrivastava, and R. Schmidt, "Dynamic models for server rack and CRAH in a room level CFD model of a data center," Thermomechanical Phenom. Electron. Syst. - Proceedings Intersoc. Conf., pp. 1338–1345, 2014.

[23] H. S. Erden, "Experimental and Analytical Investigation of the Transient Thermal Response of Air Cooled Data Centers," PhD thesis, Syracuse University, 2013.

[24] H. S. Erden, H. E. Khalifa, and R. R. Schmidt, “A hybrid lumped capacitance-CFD model for
the simulation of data center transients,” HVAC&R Res., vol. 20, no. 6, pp. 688–702,
2014.

[25] X. Zhang, J. W. VanGilder, M. Iyengar, and R. R. Schmidt, “Effect of rack modeling detail
on the numerical results of a data center test cell,” 2008 11th Intersoc. Conf. Therm.
Thermomechanical Phenom. Electron. Syst., pp. 1183–1190, 2008.

[26] J. W. VanGilder, C. M. Healey, Z. M. Pardey, and X. Zhang, “A compact server model for
transient data center simulations,” ASHRAE Trans., vol. 119, no. PART 2, pp. 358–370,
2013.

[27] Z. M. Pardey, D. W. Demetriou, J. W. VanGilder, H. E. Khalifa, and R. R. Schmidt, "Proposal for Standard Compact Server Model for Transient Data Center Simulations," ASHRAE Trans., vol. 121, pp. 413–422, 2015.

[28] A. Almoli, "Air flow management inside data centers," PhD thesis, University of Leeds, 2013.

[29] G. Nelson, "Development of an experimentally-validated compact model of a server rack," Master's thesis, Georgia Institute of Technology, 2007.

[30] V. K. Arghode, P. Kumar, Y. Joshi, T. Weiss, and G. Meyer, “Rack Level Modeling of Air
Flow Through Perforated Tile in a Data Center,” J. Electron. Packag., vol. 135, no. 3, p.
30902, 2013.

[31] V. K. Arghode, T. Kang, Y. Joshi, W. Phelps, and M. Michaels, “Measurement of Air Flow
Rate Through Perforated Floor Tiles in a Raised Floor Data Center,” J. Electron. Packag.,
vol. 139, no. 1, p. 11007, 2017.

[32] V. K. Arghode and Y. Joshi, Air Flow Management in Raised Floor Data Centers. Springer, 2016.

[33] Liebert, “Liebert Deluxe System/3 - Chilled Water - System Design Manual,” 2007.

[34] K. Karki, A. Radmehr, and S. Patankar, “Use of Computational Fluid Dynamics for
Calculating Flow Rates Through Perforated Tiles in Raised-Floor Data Centers,” HVAC&R
Res., vol. 9, no. 2, pp. 153–166, Apr. 2003.

[35] Siemens PLM Software, "STAR-CCM+: Commercial CFD software."


[36] I. B. Celik, U. Ghia, P. J. Roache, C. J. Freitas, H. Coleman, and P. E. Raad, “Procedure for
estimation and reporting of uncertainty due to discretization in CFD applications,” J.
Fluids Eng., vol. 130, no. 7, p. 78001, 2008.

[37] W. L. Briggs, V. E. Henson, and S. F. McCormick, A Multigrid Tutorial, 2nd ed. SIAM, 2000.

[38] H. Sakaino, "A Dynamic Stochastic Optimization Control Model for Data Centers based on Numerical Modeling," in iTHERM 2016, 2016.

[39] ASHRAE, “Data Center Networking Equipment – Issues and Best Practices,” Ashrae TC9.9,
2013.

Author Biographies

Yogesh Fulpagare completed his B.Tech in Mechanical Engineering and M.Tech in Thermal & Fluids Engineering at Dr. Babasaheb Ambedkar Technological University, Lonere. He is pursuing a PhD in Mechanical Engineering at IIT Gandhinagar in the Energy Systems Research Laboratory as an INSPIRE Research Fellow of the Government of India. His research projects focus on thermal management of data centers. He worked as a doctoral research visitor at the CEETHERM Laboratory at the Georgia Institute of Technology, Atlanta, as an overseas research fellow. His research interests include heat transfer and fluid flow applications, including system thermal management.

Dr. Yogendra Joshi is a Professor at the Georgia Institute of Technology, Atlanta, USA. He holds a B.Tech from IIT Kanpur, an MS from the State University of New York at Buffalo, and a PhD from the University of Pennsylvania. He has worked at the University of Maryland, at the Naval Postgraduate School, and in the semiconductor assembly industry on process thermal model development. He was named to the McKenney/Shiver Chair in 2004. His research deals with transport processes associated with emerging technologies, such as electronic devices, packages, and systems. He is a recipient of the ASME Electronic and Photonic Packaging Division Outstanding Contribution Award in Thermal Management. He is a founding Co-Director of the Consortium for Energy Efficient Thermal Management (CEETHERM) and the Emerging Technologies Thermal Laboratory (METTL).

Dr. Atul Bhargav completed his B.Tech at IIT Madras in 2002 and received his MS and PhD from the University of Maryland, College Park, USA in 2008 and 2010 respectively. He has worked on diesel-, NG-, and LPG-based fuel cell systems at Ballard Power Systems and the University of Maryland. He has been a faculty member at IIT Gandhinagar since 2011 and currently leads research and development efforts (through sponsored projects) in electronic cooling and hydrocarbon-based fuel cell systems.
Figure 1. The overall system layout (a 28.5 ft × 21 ft room with down-flow and up-flow CRAC units, PDU, Inferno racks, and the server simulator rack inside the highlighted experimental zone) with the area of interest for experimentation and modeling, and details of the server simulator rack with its control settings.

Figure 2. 3D experimental domain for the three racks with detailed sensor placement.
Figure 3. Tile airflow rate measurement tool: photograph and schematic [31].
Figure 4. (a) Server simulator rack fan details; (b) measured fan speed [32].
Figure 5. The input heat source variation for the experiments (stepped and ramped inputs applied to the 2nd simulator).
Figure 6. Computational domain with meshing details for the simulator rack and three rack models.
Figure 7. Rack outlet air temperature response for the heat input to the 2nd simulator, for the highlighted temperature sensors (sensors 6-8 on the rear side of the rack).
Figure 8. Temperature and velocity contours on horizontal and vertical planes for the one rack domain. Each horizontal plane is a midsection of a server simulator.

Figure 9. Temperature and velocity contours on horizontal and vertical planes for the three rack domain. Each horizontal plane is a midsection of a server simulator and the overhead region.
Figure 10. Experimental and CFD response comparison of the server outlet temperature (sensor 8, as shown in Figure 7) for the one rack (first graph) and three rack (second graph) computational domains, with the same heat source input on the 2nd simulator.
Table 1. Thermal property values assigned in the computational model

Parameter                                    Value   Unit
Supply air temperature from tile             17      °C
Tile airflow rate (velocity)                 1.54    m s-1
Server simulator porous inertial resistance  0.91    kg m-4
Server simulator porous viscous resistance   4.1     kg m-3 s-1
Server specific heat                         800     J kg-1 K-1
Server density                               2100    kg m-3
Server thermal conductivity                  150     W m-1 K-1
Server simulator heat load                   2500    W
Neighboring rack heat load                   10000   W
