Data Centre Design: Tel: 0800 0830 646 Fax: 0870 2429 825
Contents
1.0 - Purpose
2.0 - Disclaimer
3.0 - Introduction
9.0 - Heating, Ventilation and Air Conditioning (HVAC) within the data centre
12.0 - Fire detection, alarm and suppression within the data centre
1.0 - Purpose
This document is a design tool to assist designers to identify all the processes and activities required to fully define the
requirements of a data centre to industry standards and best practice parameters. It will allow a preliminary design stage to be
reached, with a client feedback loop, enabling full costed design proposals to be undertaken.
2.0 - Disclaimer
This document is intended for the use of persons qualified in the electrical, mechanical and construction requirements of a data
centre. This document quotes figures and extracts from international standards but this does not absolve the user from full
knowledge and usage of the original standards themselves. Every effort has been made to supply a complete and up-to-date
technical précis of the current international, European and British standards and regulations concerned, but the fitness-for-purpose
and final design remains the responsibility of the document user.
Except where other documents have been quoted, this document remains the copyright of Engineering Education Ltd and its
reproduction is forbidden under the Copyright, Designs and Patents Act 1988. Licences may be obtained from licenses@
engineeringeducation.co.uk.
3.0 - Introduction
A data centre is;
This design guide is based upon the requirements of TIA 942 Telecommunications Infrastructure Standard for Data Centers,
April 2005.
Although this is an American standard invoking other American standards and codes, it is far more substantive than the
equivalent CENELEC EN 50173-5 data centre standard, which is still at draft stage. However, this document expands upon
the TIA 942 standard and incorporates all the requirements of European and British standards, Directives and Regulations.
These include EN 50173, EN 50174, EN 50310, BS 5839, BS 6701, BS 7671, the UK Building Regulations, the Disability
Discrimination Act and many others. They are all detailed in Appendix 1.
Many diverse areas need to be addressed to fully design and specify a data centre. It is essential that it is agreed at the start
of the project exactly who is responsible for every item or else the final build will be severely compromised if a vital design
element has been overlooked or is incompatible with other services.
A data centre design project can be split into the following sections;
1. Location.
2. Construction.
3. Definition of the spaces and size available.
4. Planning the layout of the computer room floor.
5. Designing the raised floor.
6. Calculating day one and future IT requirements.
7. Calculating the day one and future air conditioning requirements.
8. Deciding upon the type and location of the air conditioning units.
9. Calculating day one and future power supply requirements.
10. Sizing and location of UPS and standby generators.
11. Designing the earth bonding and signal reference grid.
12. Designing the power distribution system within the computer room and within the equipment racks.
13. Lighting, emergency lighting and signage.
14. Access control, security and CCTV requirements.
15. Fire detection, alarm and suppression system, including hand-held fire extinguishers.
16. Specifying and designing the structured cabling system and its containment system.
17. Organising connections to external telecommunications providers and the ‘Entrance room’.
18. Integration of Building Management Systems with other command and monitoring networks and their appearance at
a control room.
19. Project management issues, health & safety and ongoing operational and maintenance issues.
Data centre projects are either green-field new-build projects or conversion/renovation projects. In either case it is advisable to
undertake a complete audit of what already exists, or of the proposed designs.
Apart from meeting the day one designs and proposed expansion plans it is also necessary to decide upon the level of
backup or redundancy to be built into the finished location. For data centres these levels are now designated as Tier
1, 2, 3 or 4, with Tier 4 being the highest level of redundancy.
The Tiering level is described in great detail in the TIA 942 standard which in turn has taken much of its philosophy from the
Uptime Institute. A very brief summary is given in the table below. In the terminology of redundant systems, ‘N’ means enough
equipment to do the job, ‘N+1’ means one additional unit to act as a redundant supply, whereas ‘2(N+1)’ means two
independent paths to complete the job.
[Table: summary of the Tier 1-4 redundancy requirements for the data centre and the computer room.]
The ‘standard’ model has been defined by TIA 942, ASHRAE and other authoritative sources as being based on a front-to-back
cooling regime based on rows of racks facing each other. Cold air is supplied to the front of these racks through air vents
placed in the raised floor in front of them. The chilled air is fed to these vents from air conditioning units blowing into the plenum
space formed by the raised floor. The vented aisle is thus known as the cold aisle, and the cold air is drawn through the
equipment racks by the I.T. equipment’s own fans and expelled out of the back into what is now the hot aisle. The rising hot
air from this aisle finds its way back to the air conditioning unit to be chilled and then to repeat the cycle.
The fronts of the two facing racks are two whole floor tiles apart. When the depth of the rack and the necessary access
clearance space behind it are taken into account, we can see that the minimum realistic pitch before the process repeats itself
is seven tiles.
Feeding cold air through standard 25% open floor vents into a rack with no additional cooling methods normally limits the heat
dispersion to about 2kW per rack, or about five average servers. Other upgrade paths are available to get more air through
the rack and this will be explained in more detail later.
A lot of communications equipment is designed for side-to-side cooling and so additional consideration needs to be given to
cope with this variation but in general the hot aisle/cold aisle, 7-tile pitch system is generally considered to be the ‘base’ model
by the relevant standards and industry sources.
[Diagram: cross-section of the hot aisle/cold aisle scheme, with an air conditioning unit blowing into the plenum space under the raised floor and rows of racks above.]
The Property Services Agency (PSA) Method of Building Performance Specification 'Platform Floors (Raised Access Floors)',
MOB PF2 PS, became the de facto industry standard in the UK for about 20 years until the recent arrival of the BS EN
12825:2001 specification.
In July 2001 a European Standard EN 12825 Raised access floors, was approved by CEN as a voluntary specification for
private projects and mandatory for public projects.
For the floor strength the minimum distributed floor-loading capacity shall be 7.2 kPa. The recommended distributed floor-loading
capacity is 12 kPa (TIA 942). From MOB PF2 PS and BS EN 12825 this means specifying ‘Heavy Duty’ or preferably
‘Extra Heavy Duty’ floor grade.
The plenum area formed under the raised floor must be clean, dust free, fitted with a vapour barrier and sealed to an air
permeability of no more than 3 m³/h/m² at 50 Pa (Building Regs Part F).
An aspirating (early warning) smoke detection system shall be placed in the plenum zone (TIA 942). Where a need for a fire
suppression system in a sub floor space is deemed appropriate, consideration should be given to clean agent systems as a
means to accomplish this protection (TIA 942).
The under-floor area must not be used for any purpose other than the supply of air and the distribution of cables. Cables
must be fire rated according to the local jurisdiction and must be placed so as not to impede airflow. All redundant cables must
be removed (National Electrical Code 2002).
The main frame of the rack can be based on a four-post construction, i.e. to make a rectangular frame, or the space-saving
two-post system, which is essentially two pieces of vertically placed metal spaced 19 inches apart (apologies for mixing
metric and imperial units here, but that is the common practice!). A server rack needs to be a four-post enclosed unit.
8.1 - Size
Racks/cabinets are usually 600 mm wide, with a usable internal space of 42U for 19-inch rack-mounted equipment. This
gives a rack height of just over two metres. Slightly larger (and of course smaller) versions are available, but 42U seems a
popular choice. Depth is at least 800 mm but may be up to 1.2 m. A one-metre depth allowance seems average.
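The ‘just over two metres’ figure can be checked from the rack-unit definition (1U = 1.75 in = 44.45 mm); a minimal sketch in Python:

```python
# One rack unit (1U) is 1.75 inches = 44.45 mm in the 19-inch rack convention.
U_MM = 44.45

def usable_height_m(rack_units):
    """Usable internal mounting height, in metres, for a given U count."""
    return rack_units * U_MM / 1000.0

height = usable_height_m(42)  # ~1.87 m of mounting space; the enclosing
                              # frame, plinth and top panel take the overall
                              # cabinet height to just over two metres
```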
Cabinets should have adjustable front and rear rails. The rails should provide 42 or more rack units (RUs) of mounting space.
Rails may optionally have markings at rack unit boundaries to simplify positioning of equipment. Active equipment and
connecting hardware should be mounted on the rails on rack unit boundaries to most efficiently utilize cabinet space.
If patch panels are to be installed on the front of cabinets, the front rails should be recessed at least 100 mm (4 in) to provide
room for cable management between the patch panels and doors.
8.2 - Ventilation
This is a key area of differentiation between ‘standard’ equipment racks and server racks. A server rack must cope with the
ventilation demands of many kilowatts worth of electrical equipment. A standard glass-fronted rack with horizontal fan tray
fitted can only cope with the cooling demands of less than a kilowatt.
It would appear that a suitably ventilated rack, supplied with adequate chilled air through a standard floor tile, can cope with
about two kilowatts of heat dissipation, where the motive force through the rack is only provided by the fans within the server
units themselves.
The amount of ventilation required is stated by several sources and is expressed as a ratio of ‘open’ space to overall door
area, e.g.;
• ...servers require the front and back cabinet doors to be at least 63% open for adequate airflow. (Sun)
• One method of ensuring proper cooling is to specify rack doors that provide over 830 in² (0.53 m²) of ventilation
area or doors that have a perforation pattern that is at least 63% open. (APC)
• Racks (cabinets) are a critical part of the overall cooling infrastructure. HP enterprise-class cabinets provide 65
percent open ventilation using perforated front and rear door assemblies. To support the newer high-performance
equipment, glass doors must be removed from older HP racks and from any third-party racks. (HP)
• …the cabinet should either have no doors or, if required for security, doors with a minimum 60% open mesh for
maximum airflow, and is best not equipped with top-mounted fan kits. (Chatsworth)
• Ventilation through slots or perforations of front and rear doors to provide a minimum of 50% open space. Increasing
the size and area of ventilation openings can increase the level of ventilation. (TIA 942)
When the heat load goes above about 2 kW (about 5 average servers) then an escalation policy is required, which can take
the form of;
• Increasing floor tile vent size up to 75% open area.
• Replacing floor tiles with fan assisted grate tiles.
• Adding specialised fan units to the top and/or bottom of the rack.
• Using cabinets where the entire rear door is a fan unit.
The above solutions will take the heat dissipation capability up to about 6 kW per rack. Above that then more specialised racks
need to be used where the whole rack is fed by a chilled water supply. These designs can cope with loads in excess of 20
kW. New designs using liquid carbon dioxide claim cooling capacities of over 30 kW per rack.
It is also important that the front-to-back cooling scheme adopted in such racks is not compromised by gaps in the rack
allowing cooled air to mix with hot air drawn back through the gaps (Thermal Guidelines for Data Processing Environments,
ASHRAE). For this reason all gaps in the rack must be filled with blanking plates. Excessive gaps for cabling at the
side of the racks should be sealed with an air dam kit, and any cable entry points at the bottom of the rack should be
sealed with a brush strip.
8.3 - Power
The rack needs to be powered, and in Europe this would generally be provided by a 16 or 32 amp, 230 V single-phase feed
through an IEC 60309 connector. At least two feeds are required for redundancy and backup purposes, so a dual 32 amp feed
would still be counted as supplying only 32 × 230 = 7.36 kVA (remember that useful power is measured in watts, which is
volts × amps × power factor).
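The kVA arithmetic above can be sketched as follows; the 0.95 power factor used in the example is an assumption for illustration, not a figure from the text:

```python
def feed_kva(amps, volts=230.0):
    """Apparent power (kVA) of a single-phase feed: volts x amps / 1000."""
    return amps * volts / 1000.0

def real_power_kw(kva, power_factor):
    """Useful power (kW): apparent power x power factor."""
    return kva * power_factor

per_feed = feed_kva(32)                 # 7.36 kVA from one 32 A, 230 V feed
useful = real_power_kw(per_feed, 0.95)  # ~7.0 kW at an assumed 0.95 power factor
```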
For loads above 7 kVA, either more 32 amp feeds are supplied or a three-phase supply is provided, which would
normally deliver at least 22 kW through a five-pin version of the IEC 60309 connector. For a three-phase supply, Regulation
514-10-01 of BS 7671 requires a warning notice to be secured in such a position that the warning is seen before access is
gained to live parts.
Within the rack the power is distributed by what is widely known as a power distribution unit, or PDU. There does not seem
to be a widely accepted definition of a PDU; at its simplest it is just a power strip of sockets that distributes the incoming
electricity to the rack equipment. However, more functionality is available in the form of;
• Sequential start up.
• Automatic crossover switch between two supplies.
• Power line conditioning.
• Reporting function about status and power usage. This in turn may be a simple LED readout on the unit or part of an
IP addressable managed system.
[Diagram: plan view of the 7-tile pitch. The fronts and rears of the cabinets align with the edges of the floor tiles; the facing cabinet fronts are separated by a two-tile cold aisle, the hot aisle occupies one full tile behind the cabinets, and the rows of tiles in both aisles can be lifted. The pattern repeats every seven tiles.]
The 7-tile pitch requires that the front edges of the two facing cabinets are placed in line with the edge of a floor tile, and two
complete floor tiles, i.e. 1.2 m, separates the two facing cabinets, thus forming the cold aisle. The depth of the rack will cover
about one and a half floor tiles and so a complete floor tile is needed in the hot aisle for access. This arrangement means that
the set will repeat itself every seven tiles, or 4.2 metres.
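As a sanity check, the tile counts above can be added up; rounding the ~1.5-tile rack depth up to two whole tiles is an assumption that follows from aligning cabinet fronts and rears with tile edges:

```python
TILE_M = 0.6  # a standard raised-floor tile is 600 mm square

# Tile counts for one repeating unit of the layout. The ~1.5-tile rack
# depth is rounded up to two whole tiles because cabinet fronts and
# rears are aligned with tile edges (an assumption, see above).
cold_aisle_tiles = 2   # two full vented tiles between facing cabinet fronts
rack_tiles = 2         # each cabinet row spans two tile widths front-to-back
hot_aisle_tiles = 1    # one full tile behind the cabinets for access

pitch_tiles = cold_aisle_tiles + 2 * rack_tiles + hot_aisle_tiles  # = 7
pitch_m = pitch_tiles * TILE_M                                     # = 4.2 m
```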
Apart from the 7-tile arrangement TIA 942 also requires clearances of a minimum of 1 m of front clearance for installation of
equipment and a minimum of 0.6 m of rear clearance for service access at the rear although a rear clearance of 1 m (3 ft) is
preferable. Some racks have split rear doors to facilitate rear clearance.
IEEE 1100, referenced in TIA 942, suggests a clearance of two metres from building structural steel in case of lightning
flashovers.
All cables shall be neatly dressed and secured with minimum bend radii protected according to the standards or
manufacturers’ instruction. All cables must be adequately labelled as described in TIA 942, TIA 606 and elsewhere.
A vertical cable manager shall be installed between each pair of racks and at both ends of every row of racks. The vertical
cable managers shall be not less than 83 mm in width. Where single racks are installed, the vertical cable managers should
be at least 150 mm wide. The cable managers should extend from the floor to the top of the racks.
Horizontal cable management panels should be installed above and below each patch panel. The preferred ratio of horizontal
cable management to patch panels is 1:1.
The precision air conditioning facility must be available 24 hours a day, 365 days per year and connected to the standby
generator in the event of a mains failure.
The ambient temperature and humidity shall be measured after the equipment is in operation. Measurements shall be done at
a distance of 1.5 m above the floor level every three to six metres along the center line of the cold aisles and at any location
at the air intake of operating equipment. Temperature measurements should be taken at several locations of the air intake of
any equipment with potential cooling problems. Details are contained in Thermal Guidelines for Data Processing Environments.
Small to medium sized data centres tend to go for the direct expansion, DX, CRAC units placed in the computer room. Larger
facilities tend to go towards the centralized chiller and cold water distribution. Directly cooled racks have so far tended to be
an upgrade path when conventional room cooling runs out of capacity but there is no reason why they couldn’t be designed
in from the start, especially when floor space is at a premium.
The mathematics of air conditioning shows that to remove one kilowatt of heat and cool an item by around 11°C,
approximately 160 cfm (cubic feet per minute) or 74 litres/second of air needs to flow through that equipment.
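The 160 cfm (74 litres/second) per kilowatt figure falls out of the sensible-heat equation Q = ṁ·cp·ΔT; a sketch, assuming typical room values for the density (1.2 kg/m³) and specific heat (1005 J/kg·K) of air:

```python
def airflow_lps(load_kw, delta_t_c=11.0, rho=1.2, cp=1005.0):
    """Airflow (litres/second) needed to remove load_kw of heat at a
    temperature rise of delta_t_c, from Q = m_dot * cp * dT."""
    mass_flow = load_kw * 1000.0 / (cp * delta_t_c)  # kg/s of air
    return mass_flow / rho * 1000.0                  # convert m3/s to L/s

lps = airflow_lps(1.0)     # ~75 L/s per kilowatt at an 11 degC rise
cfm = lps / 0.471947       # ~160 cubic feet per minute
```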
The literature suggests that in practice an adequately constructed and sealed raised floor, supplied with adequate chilled air,
can supply about 320 cfm of air through a standard 25% floor vent, which implies that one floor vent, in these circumstances,
can cool around 2 kW of equipment if placed in front of an equipment rack. There are many variables in this equation, e.g.
• Are the CRAC units supplying a sufficient volume of air at the correct temperature?
• Is the underfloor plenum area deep enough and clutter free to allow free airflow?
• Is the underfloor plenum sealed enough to maintain the correct excess air pressure?
• Is the excess pressure evenly distributed around the floor area? This in turn depends upon the above factors, plus
depth of floor void and number, size and location of other floor vents.
Up until the early part of this century the average heat load developed in a rack was only around 1 kW, and cooling did not
need to be a closely controlled activity, as simple whole-room cooling would suffice. But now, with 1U servers and blade servers,
the potential heat generation is enormous. The average server has a running load of about 400 watts, meaning that a 2 kW
cooling capacity equates to only five servers per rack. Putting 42 of these servers in a rack, just because they fit, would
develop over 16 kW of heat, and blade servers would generate over 20 kW.
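The per-rack arithmetic is worth making explicit; a sketch using the 400 W running-load figure quoted above:

```python
SERVER_W = 400  # typical running load of one server, per the text

# How many average servers fit within a 2 kW underfloor cooling budget.
servers_per_2kw = 2000 // SERVER_W        # 5 servers

# Heat developed if all 42U of a rack were filled with such servers.
full_rack_kw = 42 * SERVER_W / 1000.0     # 16.8 kW
```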
Underfloor plenum cooling can supply about six kW of cooling capacity by the use of one or more of the following upgrade
methods;
• Use a larger floor tile, up to 75% open area.
• Use a fan assisted floor grate.
• Use specialised blowers in the rack to bring more airflow into the rack and distribute it across the front face of the
equipment.
• Use rear doors on the racks that are full length blower units.
Beyond about six kW, underfloor plenum cooling of racks becomes impractical and the next stage is water-cooling of the entire
rack.
Water is much more effective at removing heat than air. A water-cooled rack can dissipate in excess of 20 kW of heat. These
racks need to be plumbed into an existing chilled water generation and distribution system that would need to be placed
outside of the equipment room. Liquid carbon dioxide cooling plants are also available now. CO2 is even more efficient than
water and can remove in excess of 30 kW of heat from a rack.
Directly cooled racks are thus much more efficient in terms of floor space used but they are more expensive to buy, need
plumbing in, and an external chiller plant still needs to be built.
For air conditioning applications for more than a medium sized rectangular computer room, it is advisable to use a
computational fluid dynamics software program to model the airflow and cooling capacity of an HVAC design.
[Diagram: a Computer Room Air Conditioning (CRAC) unit feeding chilled air into the underfloor plenum, up through the floor vents into the cold aisle, through the racks and back via the hot aisle.]
The diagram above shows the CRAC unit as the source of the chilled air and pumping it into the underfloor plenum space. Air
escapes into the cold aisle through the floor vents, passes through the racks, cooling them on the way, and appears in the hot
aisle, where it rises. It then returns back to the CRAC unit to repeat the process. The CRAC units are located at the end of the
hot aisles to facilitate the shortest return path back to the CRAC. Once the room goes over a certain size it is advisable to
improve the return path by adding a ceiling plenum, with fans, to scavenge the hot air and direct it back to the CRAC units. It
has been suggested that this would be beneficial once the floor area extends beyond 400 m², although a dedicated return
plenum would benefit any size computer room.
Another item to take into account is locating the floor vents at the correct distance from the CRAC unit. Too close and the air
velocity will cause a negative pressure at the vent relative to the air in the room above and suck in hot air instead of blowing
cold air out. The minimum distance is about two metres before effective cooling takes place. The maximum distance from the
CRAC unit again depends upon factors such as air volume from the CRAC unit, floor depth, obstructions, number and size of
floor vents etc., but a figure of ten metres seems to be commonly accepted.
Some items, particularly communications equipment, are not designed for front-to-rear cooling but side-to-side cooling, or
even both at the same time!
APC, a major supplier of IT air conditioning, offers the following estimating tool to help calculate the cooling capacity required
for a computer room. Note that the usual running load should be used for the IT equipment, not the nameplate rating, which is
usually one third higher than the normal running load.
The battery/UPS calculation is only required if the battery/UPS system is in the same computer room. TIA 942 recommends
that UPS systems greater than 100 kVA be placed in another room.
Note that allowance should also be made for future expansion and redundancy in air conditioning calculations.
Fresh air
Even with air conditioning, the computer room needs to be ventilated. Air should be changed at least ten times per hour. British
building regulations also require an air supply of ten litres per second per person, doubling if printers or photocopiers are in
use.
Incoming air must be filtered with airborne particulate levels maintained within the limits of Federal Standard 209E, Airborne
Particulate Cleanliness Classes in Cleanrooms and Clean Zones, Class 100,000.
Air from sources outside the building should be filtered using High Efficiency Particulate Air (HEPA) filtration rated at 99.97%
efficiency (DOP Efficiency MIL-STD-282) or greater.
As the external temperature at British latitudes is below 22°C for about 70% of the year some of the huge electricity bills
associated with cooling data centres can be mitigated by taking even larger volumes of outside air during the autumn, winter
and spring months, with the minimum ventilation rate maintained for the summer months.
The power distribution system also needs to be planned in accordance with the Tier 1 to 4 requirements of TIA 942.
The first step is to understand the quantity of power required, at day one and when expansion plans are taken into account.
The normal I.T. running load is therefore the sum of all the nameplate ratings of all the equipment, multiplied by about 0.67.
To size the power supply requirements, however, a number of conservative assumptions are made, such as allowing for the
inrush current when the equipment starts and general overrating factors as a margin of safety.
1. Add up all the nameplate ratings of all the equipment and multiply by 0.67, this is the day 1 running load.
2. Multiply this by whatever expansion factor is expected to apply to the data centre.
3. Add 50% to the above to allow for inrush current.
4. Add 32% to allow for the UPS inefficiency and battery charging requirement.
5. Add 21.5 W per square metre of floor space to allow for lighting.
6. Double the amount reached so far to allow for air conditioning power requirements.
7. Multiply the total so far by 1.25 to provide a further overrating factor, so that cables aren’t expected to work at their
full safe load.
8. Add a figure, say 5%, for power factor correction*. Modern I.T. equipment is usually power factor corrected, but there
will be some power factor loss.
The figure thus arrived at is the amount of power that needs to be available in the data centre, even though it is unlikely to
need this full amount under normal conditions. This figure also leads to correct choice of the standby generator.
Let’s take the example of a 200 square metre computer room with a day one nameplate load of 100 kW and a required
expansion capacity of 100%.
So we can see that the power supply to be designed in is more than ten times the day-one running load.
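The eight-step escalation can be reproduced for the worked example (200 square metres, 100 kW nameplate, 100% expansion); a sketch using the percentages exactly as quoted, which ends up at just over ten times the day-one running load:

```python
def sized_supply_kw(nameplate_kw, floor_m2, expansion=1.0):
    """Apply the eight sizing steps, using the figures quoted in the text."""
    running = nameplate_kw * 0.67     # 1. day-one running load
    load = running * (1 + expansion)  # 2. expansion factor (100% -> x2)
    load *= 1.5                       # 3. +50% for inrush current
    load *= 1.32                      # 4. +32% for UPS losses and charging
    load += 0.0215 * floor_m2         # 5. 21.5 W per square metre of lighting
    load *= 2.0                       # 6. double for air conditioning
    load *= 1.25                      # 7. cable overrating margin
    load *= 1.05                      # 8. ~5% for power factor losses
    return running, load

running, supply = sized_supply_kw(100.0, 200.0, expansion=1.0)
ratio = supply / running   # comes out at just over ten
```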
*Power factor. Remember that current times voltage equals volt-amperes, usually expressed as kVA. Useful work, or power, is
measured in watts, and volts x amps x power factor = watts. The power factor is the cosine of the phase difference between
the voltage and the current in an alternating current circuit. This phase separation is caused by a reactive, i.e. capacitive or
inductive, load. UPS systems are always measured in kVA output, as they do not know the power factor of the load they will
be connected to, and hence the real power, in watts, deliverable.
Remember that;
‘N’ means only enough items to do the task at hand. Any one point of failure will stop the system.
‘N+1’ means one more item than is necessary, thus allowing for one point of failure.
‘2N’ means two complete independent paths.
Going to 2N, or even better 2N+1, will give the resilience that a data centre requires, but obviously at some major cost;
not surprisingly, 2N costs at least twice as much as the provision of the minimum required service.
An uninterruptible power supply system (UPS) needs to be defined to back up the power supply system. This is usually based
on batteries and a double conversion on-line UPS. In this method the incoming AC is rectified and permanently charges a
battery pack which is also connected in parallel back into an inverter, to make available the mains voltage AC again. This is a
very reliable method and also isolates the I.T. load from sags, surges, spikes and most harmonics coming in from the mains
supply. The downside of this method is that it is very inefficient with up to 12% of the input power wasted in the rectification-
inversion cycle.
Other kinds of UPS are available and one is based on the kinetic energy of a large rotating mass connected to a device which
acts as a motor when input power is available and a generator when the AC input fails. The kinetic energy stored in the
rotating flywheel will then produce electricity for a short time. Kinetic energy devices are smaller and cheaper and have less
maintenance associated with them but usually have back-up times measured in tens of seconds rather than the minutes
offered by a battery system.
Cabling may be fed into the top or bottom of racks, or both. Cabling run in the underfloor plenum space should be laid in the
cold aisle at low level. Cabling entering through the bottom of the rack should be sealed with a brush strip to prevent entry of
chilled air in an uncontrolled manner.
Cable should be terminated and presented on IEC 60309 connectors, of appropriate size for the current and suitable for
single or three phase connection as appropriate. Usual ratings are 16 or 32 amp. The higher power ratings of today’s servers
would suggest that two 32 amp feeds would be required, giving around 7 kW. Higher power rating would require a three-phase
connection, providing around 22 kW.
The power distribution units can extend beyond simple distribution of the power and may offer;
• Sequential start up to lower inrush current.
• Simple filtering.
• Monitoring of current with an LED readout.
• Automatic switching between feeds.
• Network reporting and remote control ability through a TCP/IP connection.
Other systems take in mains voltage and distribute 48 V DC around the rack to remove the need for each item of
equipment to have its own dedicated power supply.
Correct earthing is required by law and described in various standards such as;
• BS 6701 Telecommunication cabling and equipment installations.
• BS 7671 Requirements for electrical installations: IEE wiring regulations 16th Edition.
It is essential that all metallic elements are correctly earthed according to the most relevant standard above. This includes all
equipment racks, cable containment and the metallic sheaths and armour of communications cables.
Note that whereas ‘earthing’ means, “the connection of the exposed conductive parts of an installation to the main earthing
terminal of that installation (BS 7671)”, ‘bonding’ means, “the electrical connection putting various exposed conductive parts
and extraneous conductive parts at a substantially equal potential (EN 50174-2).” Thus the connection for bonding must be
capable of offering a low enough impedance that a potential difference of not more than 1 volt rms can be maintained across
the frequency range of interest.
This leads on to the requirement for the Signal Reference Grid, SRG, or a System Reference Potential Plane, SRPP, as it is
referred to in CENELEC standards.
The SRG is there to offer a suitable low-impedance path to ground for high frequency interference signals that cannot be
achieved by simple ‘earthing’.
The SRG should therefore be laid on the floor below the IT equipment and constructed of copper tapes
approximately 50 mm wide. The dimensions of the grid have typically been 24 x 24 inches (610 x 610 mm); however, this only
effectively gives protection up to around 30 MHz.
With gigabit Ethernet operating at up to 100 MHz this needs to be reduced to 200 mm to be effective whereas ten gigabit
Ethernet, operating at 500 MHz, would ideally need an almost complete surface. When using 50-mm copper tape a grid
spacing of about 100 mm is the practical limit.
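The grid-spacing figures quoted above are consistent with a rule of thumb of roughly one fifteenth of a wavelength per grid cell; that fraction is an inference from the numbers in the text, not a figure stated in the standards:

```python
C = 3.0e8  # speed of light in m/s

def grid_spacing_mm(freq_hz, fraction=1.0 / 15.0):
    """Rule-of-thumb maximum SRG grid spacing (mm) as a fraction of the
    wavelength at the highest frequency of interest. The lambda/15
    fraction is an assumption inferred from the figures in the text."""
    wavelength_m = C / freq_hz
    return wavelength_m * fraction * 1000.0

s30 = grid_spacing_mm(30e6)    # ~667 mm: close to the traditional 610 mm grid
s100 = grid_spacing_mm(100e6)  # 200 mm: gigabit Ethernet frequencies
s500 = grid_spacing_mm(500e6)  # 40 mm: approaching a continuous surface
```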
The SRG must be effectively bonded to the building steel and the main electrical and telecommunications grounding busbar,
and all items on top or crossing the SRG must be connected to it.
The principal fire safety legislation in the UK is the Fire Precautions (Workplace) Regulations 1997/1999. This is obviously a
major subject, and one governed by law and building regulations. TIA 942 Telecommunications Infrastructure Standard for Data
Centers, April 2005, requires the following for a data centre;
12.1 - Detection
The recommended smoke detection system for critical data centers where high airflow is present is one that will provide early
warning via continuous air sampling and particle counting and have a range up to that of conventional smoke detectors.
…the system has four levels of alarm that range from detecting smoke in the invisible range up to that detected by
conventional detectors. The system at its highest alarm level would be the means to activate the pre-action system valve.
One system would be at the ceiling level of the computer room, entrance facilities, electrical rooms, and mechanical rooms as
well as at the intake to the computer room air-handling units.
A second system would cover the area under the access floor in the computer room, entrance facilities, electrical rooms, and
mechanical rooms.
A third system is also recommended for the operations center and printer room to provide a consistent level of detection for
these areas.
[Diagram: fire detection control loop, linking call points, detectors and sounders back to the control panel.]
The cables must be fire survivable as described in BS 5839-1:2002 Clause 26.2 (d & e), which invokes, amongst others;
• BS EN 60702-1:2002 Mineral insulated cables and their terminations with a rated voltage not exceeding 750 V.
• BS 6387:1994 Performance requirements for cables required to maintain circuit integrity under fire
conditions.
Fire detectors come in a number of guises, such as ionisation smoke detectors, optical detectors, flame and heat detectors etc.,
but the smoke detection system recommended for computer rooms is a highly sensitive system that gives very early warning,
known as Aspirating Smoke Detection (ASD).
BS 6266-2002 recommends, “A dedicated smoke detection system interfaced with the main building system, …and an
aspirating smoke detection to monitor return air flows,” for critical equipment areas such as “centralised computer facilities.”
BS 5839 describes many different types of smoke and flame detectors and most importantly, where they should be sited. The
siting of aspirating smoke detector inlets follows exactly the same rules as more conventional smoke detectors.
ASD is a high sensitivity, aspirating type laser-based optical smoke detection system that continually draws air within the
protected area through a network of pipes where it is passed through a calibrated detection chamber. It is capable of
providing very early warning of fire conditions thereby providing invaluable time to investigate and respond to a potential threat
of fire. ASD is very often referred to by a brand name, VESDA, Very Early Smoke Detection Apparatus. VESDA is a trademark
of Vision Products Pty Ltd of Australia.
A ‘VESDA’ system can detect a fire within 70 seconds and activate a fire suppression response in under two minutes. A
sprinkler system would take four to six minutes under the same circumstances.
Conclusion
Various standards, such as TIA 942 and BS 6266 recommend aspirating smoke detectors for data processing applications
such as data centres because of their quick reaction time. The detection system should be able to give various levels of alarm
and needs to be optimised for the different areas encountered within a data centre. A data centre should have two levels of
fire detection and suppression: an aspirating smoke detector linked to a gaseous fire suppression system as the first response,
and a pre-action sprinkler system as the last resort.
According to the Fire Safety Advice Centre, (http://www.firesafe.org.uk/advicent.htm) the following methods are considered for
computer rooms;
The halon replacement market for clean agent gaseous suppression systems splits into inert gases and halocarbon gases.
Inert Gases
Inert gas agents are electrically non-conductive clean fire suppressants used in design concentrations of 35-50% by
volume to reduce the ambient oxygen concentration to between 14% and 10%. Oxygen concentrations below 14% will not support
the combustion of most fuels (though human exposure must be limited).
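The dilution arithmetic behind those figures can be sketched as follows. This is a simple volumetric model assuming normal air at roughly 20.9% oxygen; the function name is illustrative and does not come from any standard.

```python
AMBIENT_O2_PERCENT = 20.9  # oxygen content of normal air, % by volume

def residual_oxygen_percent(agent_percent: float) -> float:
    """Oxygen left after an inert agent displaces part of the room air.

    Simple volumetric dilution: the agent occupies agent_percent of the
    room volume, so the remaining oxygen scales with the remaining air.
    """
    return AMBIENT_O2_PERCENT * (1 - agent_percent / 100)

# The 35-50% design concentrations quoted above give roughly 13.6% and
# 10.5% residual oxygen, straddling the ~14% combustion threshold.
```

This is why the quoted design range maps onto the quoted 14-10% oxygen band.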
The DETR has published a document giving guidance on halon replacements, Advice on Alternatives and Guidelines for Users
of Fire Fighting and Explosion Protection Systems, although products are not officially approved or recognised by this route.
Inert gases
Halocarbon gases
In general, inert gas systems appear to take up more space and be slightly more expensive than the halocarbon alternatives.
Manual means of fire suppression system discharge should also be installed. These should take the form of manual pull
stations at strategic points in the room. In areas where gas suppression systems are used, there is normally also a means of
manual abort for the suppression system.
See also;
BS 6266:2002 Code of practice for fire protection for electronic equipment installations.
BS ISO 14520-1:2000 Gaseous fire-extinguishing systems. Physical properties and system design. General requirements.
The smoke detection system can set the first phase of the sprinkler system by letting water enter the piping, but it still needs
the additional heat of the fire to set off the sprinkler heads themselves. This is sometimes known as a 'double-knock' system.
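The two-stage interlock can be sketched as simple logic. This is an illustrative model only; the function name and state labels are invented for the example.

```python
def preaction_state(smoke_detected: bool, head_fused: bool) -> str:
    """State of a 'double-knock' pre-action sprinkler zone (illustrative).

    First knock: the smoke detection system opens the pre-action valve,
    letting water charge the pipework. Second knock: heat fuses an
    individual sprinkler head, which alone permits discharge.
    """
    if not smoke_detected:
        # Valve shut: pipes hold only air, so even a fused head cannot discharge.
        return "dry"
    if not head_fused:
        return "charged"       # water in the pipes, but no head has operated
    return "discharging"       # both knocks satisfied
```

Note that a fused head alone produces no discharge, which is the point of the pre-action arrangement: accidental damage to a head does not release water over the equipment.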
Conclusion
The fire safety plan is a multilayered approach that requires a coordinated plan for;
• Designing for low flammability and fire risk.
• Operating with low risk.
• Emergency exits.
• Emergency lighting.
• Emergency exit signage.
• Fire detection, appropriate to the area covered.
• Fire alarm.
• Multi-level automatic fire suppression.
• Manual fire alarm and portable fire extinguishers.
• Staff training and fire drills in place.
• Maintenance plan for all equipment involved.
The communications protocols within the data centre nowadays revolve mostly around Ethernet and Fibre Channel.
Communications speeds of at least 1 Gb/s should be designed for, and ten gigabit speeds now need to be considered. Design
issues revolve around the selection of copper and/or optical fibre, grades of copper and fibre to be used, screened or
unscreened copper cabling and levels of redundancy and resilience to be built in to the cabling model.
Alignment of terminology

TIA-942                                                                     EN 50173-5
Cross-connect in the entrance room                                          ENI (external network interface)
Main cross-connect in the MDA (main distribution area)                      MD (main distributor)
Horizontal cross-connect in the MDA or HDA (horizontal distribution area)   ZD (zone distributor)
Zone outlet or consolidation point in the ZDA (zone distribution area)      LDP (local distribution point)
Outlet in the EDA (equipment distribution area)                             EO (equipment outlet)
Horizontal cabling                                                          Zone distribution cabling
Backbone cabling (between MDA and HDAs)                                     Main distribution cabling
Backbone cabling (from MDA to entrance room or from MDA to telecom room)    Network access cabling
Telecommunications room                                                     Distributor

[Figure: EN 50173-5 cabling hierarchy: the main distribution cabling subsystem feeds zone distributors (ZD) in the horizontal distribution areas (LAN/SAN/KVM switches); zone distribution cabling then runs to equipment outlets (EO) in the equipment distribution areas (racks/cabinets).]
EN 50173-5 recognises any of the cabling media addressed in EN 50173, e.g. Cat 5, Cat 6, Cat 7 etc, but Class E/Cat 6 is
recommended for the main distribution and zone distribution cabling.
It would seem that within the Data Centre/Computer Room, a cable less than Category 6 performance should not be used.
Note that the American standards do not recognise Category 7/Class F.
None of the standards discuss 10GBASE-T or the forthcoming Augmented Category 6 standard as this has not yet been
published, or even finalised at the time of writing, but is expected later in 2006.
Products claiming Cat6A performance are already on sale but whether unscreened (UTP) products can meet the Alien
Crosstalk requirements and EMC regulations when operating at the 500 MHz frequencies invoked by 10GBASE-T is still a
matter of debate within the industry. Certainly a screened Cat 6 or Cat6A system is going to cope much better with the EMC
and Alien Crosstalk issues.
• Optical fibre
- ISO 11801 and EN 50173 now classify optical fibres as OM1, OM2, OM3 and OS1. OM means multimode fibre
and OS means singlemode fibre.
- OM3 is a very high bandwidth fibre optimised for ten gigabit operation and is the obvious choice for new data
centre installations.
- Singlemode fibre, OS1, is not needed within the data centre but it may be needed to connect to the outside world
of telecommunications and should be put in place to allow for direct high speed communications from routers
and SAN devices.
- Optical connectors must also be specified. There are many Standards-recognised types to choose from; the
market leader for high speed data communications is now the LC connector.
Preconnectorised cabling is most popular when time on site is at an absolute premium. This may be in a new build, such as
a data centre, where time scales are critical and many different trades are vying for the right to work on any particular bit of
floor space at any time.
Other time-critical areas are live sites that need additional cabling but where the costs and implications of downtime are
horrendous, such as a trading floor or call centre. Such a facility may want to have all its cabling upgraded or extended in one
overnight operation.
Busy city centre facilities will also suffer from a lack of parking and loading bays, on-site storage restrictions and security
worries associated with cable installers needing weeks of access time to the site.
Preconnectorised cabling should reduce time needed on site by around 75% compared to traditional installation.
Quality of the terminations should also be improved by allowing sophisticated Category 6 copper and optical fibre terminations
to be made in a clean factory environment by skilled people. Each cable assembly can be 100% checked in the factory and
whatever is sent to site is known to be of the highest quality.
There are no particular disadvantages to preconnectorised cabling, and it should be cost-neutral to the end-user; however,
accurate surveys need to be carried out to ensure correct cable lengths are made up and installed.
[Figure: Example labelling scheme: patch panel ports A0101 to A0112; a panel-to-panel link (Cable A) to Panel 02 ports CD011 to CD014; Cable C serving desk pod Desk 01; a panel-to-floor link (Cable B) with ports BF011 to BF014 serving Floor 01; and a floor box with outlets 1 to 4.]
All cabling, patch panels, earthing and containment system must be adequately labelled and marked and records kept. This
aspect of cabling is described in the following;
EN 50174-1 Information technology – cabling installation – Part 1:Specification and quality assurance.
ISO/IEC 14763-1: Information Technology – Implementation and operation of customer premises cabling –
Part 1:Administration.
TIA 942 Telecommunications Infrastructure Standard for Data Centers
All of these require that all cables and components be suitably marked to uniquely identify them. The durability of all labelling
must also be suitable for the rigours of the environment in which it is placed and the expected timescale of
the installation, usually in excess of ten years.
The cables need to be contained and protected and separated from other services. For example EN 50174-2 requires a
separation of at least 200 mm between unscreened data and unscreened power cables, although distances can come down
if any of the cables are screened. BS6701 requires a 50 mm separation at all times between cables unless there is a non-
metallic divider separating the two groups. In the UK, BS6701 and EN 50174-2 requirements need to be overlaid and the
worst-case separation distances used for a correct installation.
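In practice the worst-case overlay amounts to taking the larger of the two required distances. A minimal sketch, using only the 200 mm and 50 mm figures quoted above; the function and constant names are illustrative, and a real design must read the full EN 50174-2 tables.

```python
BS6701_MINIMUM_MM = 50.0  # applies unless a non-metallic divider separates the groups

def required_separation_mm(en50174_2_mm: float) -> float:
    """Worst-case overlay of the two UK separation requirements.

    en50174_2_mm is the distance read from the EN 50174-2 tables for the
    cable combination in question, e.g. 200 mm for unscreened data run
    alongside unscreened power (the standard permits smaller figures when
    either group is screened).
    """
    return max(en50174_2_mm, BS6701_MINIMUM_MM)

# Unscreened data beside unscreened power: EN 50174-2's 200 mm governs.
# A screened combination whose table figure drops below 50 mm is still
# held at BS 6701's 50 mm floor.
```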
TIA 942 defines the cabling hierarchy for data centres and states the permissible range of cables. TIA 942 only invokes other
American standards such as ANSI/TIA/EIA-568-B.
EN 50174 parts 1,2 and 3 describe installation and quality assurance techniques.
EN 50310 describes the equipotential bonding system for information technology installations.
EN 50346 describes the testing methodology to prove compliance of the installed cabling.
[Figure: Example main distribution frame cabling: Cat 6 copper (5 x 12) and 8-fibre OM3 runs between the main distribution frame and the distribution areas, plus 8-fibre OM3 and 8-fibre singlemode links to the telecoms room and external network interface.]
Modular designs and cluster concepts are bound to be more popular as the rate of change in Data Centres increases. The
cluster concept incorporates the air conditioning as well by rating each rack with a minimum 2 kW load dissipation and a
planned upgrade path up to 20 kW per rack.
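As a rough sizing sketch, the cluster's heat rejection requirement scales directly with rack count and per-rack load. This is illustrative only and ignores lighting, people and building fabric gains; the function name is invented for the example.

```python
def cluster_heat_load_kw(racks: int, kw_per_rack: float) -> float:
    """Heat to be rejected by the cluster's air conditioning.

    Virtually all electrical power drawn by IT equipment ends up as heat,
    so the cooling load tracks the rack power budget directly.
    """
    return racks * kw_per_rack

# A hypothetical ten-rack cluster, day one versus the full upgrade path:
# 10 racks x 2 kW  = 20 kW of cooling
# 10 racks x 20 kW = 200 kW of cooling
```

The tenfold spread between the baseline and the upgrade path is why the air conditioning must be planned for modular growth alongside the racks.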
CCTV requirements.
Individual systems should remain in operation upon failure of the central Building Management System (BMS) or head end.
Consideration should be given to systems capable of controlling (not just monitoring) building systems as well as historical
trending. 24-hour monitoring of the Building Management System (BMS) should be provided by facilities personnel, security
personnel, paging systems, or a combination of these. Emergency plans should be developed to enable quick response to
alarm conditions.
We can consider a Data Centre as being in three layers for the BMS requirement;
• Incorporation into a larger and pre-existing site BMS.
• A BMS dedicated to the Data Centre facility.
• Rack level monitoring and control.
With IP-based networks, more and more of these systems come together over one common cabling system. The exception
is the fire detection loop cabling, which must be dedicated and of fire survival grade. Many of the control systems rely on
automation protocols such as LonWorks and BACnet to communicate with and control the end equipment, but the higher levels
of communication between controllers now rely on TCP/IP and Ethernet.
[Figure: BMS layers: building-based systems (CCTV, access control and monitoring, fire alarms), room-based BMS (HVAC and lighting) and rack-based environmental monitoring; the systems share common IP cabling except the fire alarms, which use dedicated cabling, and alarm/control may be local or remote.]
N - Base requirement
System meets base requirements and has no redundancy.
N+1 redundancy
N+1 redundancy provides one additional unit, module, path, or system in addition to the minimum required to satisfy the base
requirement. The failure or maintenance of any single unit, module, or path will not disrupt operations.
2N redundancy
2N redundancy provides two complete units, modules, paths, or systems for every one required for a base system. Failure or
maintenance of one entire unit, module, path, or system will not disrupt operations.
2(N+1) redundancy
2(N+1) redundancy provides two complete (N+1) units, modules, paths, or systems. Even in the event of failure or maintenance
of one complete unit, module, path, or system, some redundancy will remain and operations will not be disrupted.
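The installed unit counts implied by these four topologies can be sketched as follows. This is an illustrative helper, not taken from TIA 942.

```python
def units_required(n: int, scheme: str) -> int:
    """Installed unit count (UPS modules, CRACs, feeds...) for each
    redundancy topology, where n units satisfy the base requirement."""
    return {
        "N": n,                  # base requirement, no redundancy
        "N+1": n + 1,            # one spare unit
        "2N": 2 * n,             # a complete duplicate system
        "2(N+1)": 2 * (n + 1),   # two complete N+1 systems
    }[scheme]

# A load needing four UPS modules: N gives 4, N+1 gives 5, 2N gives 8,
# and 2(N+1) gives 10 installed modules.
```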
Safety Audit
The installation must be audited for safety at the design stage, at project handover and during routine inspections. The requirements of
the fire safety programme are already outlined in section 12. Additional safety audit points are;
For the last point it is worth noting that permitted sound levels at work in Europe were reduced in February 2006. The EC Noise at Work
Directive 2003/10/EC was made on 6th February 2003 and repeals and replaces 86/188/EEC as from (mainly) 15th February
2006.
Where is the money likely to go in a Data centre? An American example;

[Pie chart: indicative cost breakdown covering management fees and insurance, architects' fees, electrical design, facilities including cabling design, control room, raised floor, BMS system, other building works, security system, plumbing, data cabling, and sprinkler and FM200 suppression, with individual shares ranging from about 1% to 11%.]
Appendix I