
DATA CENTER AND SERVER ROOM STANDARDS

PURPOSE: 

The purpose of the Data Center and Server Room Standards is to describe the minimum requirements
for designing, installing, securing, monitoring, maintaining, protecting, and decommissioning a data
center or server room at the University of Kansas.
APPLIES TO: 

University employees (faculty, staff, and student employees), students, and other covered individuals
(e.g., University affiliates, vendors, independent contractors, etc.) in their access and usage of
University technology resources during the course of conducting University business (administrative,
financial, teaching, research, or service).
CAMPUS: 
Lawrence
TABLE OF CONTENTS: 
1. Physical Plant Layout and Management
1. HVAC
2. Electrical Systems
3. Access Control and Safety
4. Raised Floor Systems
5. Server Cabinet Systems
6. Cable Plant
2. Support Services
1. Server Installation
2. Network Layout
3. Server Removal
4. Emergency Response Management
5. Procedure and Policy Development
6. Management of Site Support Tools and Equipment
POLICY STATEMENT: 
I. Physical Plant Layout and Management
I. HVAC
I. CRAC (Computer Room Air Conditioner) Units:
I. Cooling and related equipment must be sized to account for:
I. The size of the cooling load of all equipment.
II. The size of the cooling load of the building (lighting,
power equipment, personnel, building envelope).
III. Oversizing to account for humidification effects.
IV. Oversizing to account for redundancy should a unit fail.
V. Oversizing to account for appropriate future growth
projections.
II. All cooling equipment must be designed, installed, and
maintained by qualified technicians who meet local and state codes. All
cooling equipment must follow the vendor’s recommended maintenance
schedule.
III. Air filtration media should be installed at air intake points. Media
should be replaced on a regular schedule based on the manufacturer
recommended filter lifespan.
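As a rough illustration of the sizing factors above, the base equipment and building loads can be summed and padded with humidification and growth margins, with redundancy handled as spare unit capacity. The sketch below is a hypothetical back-of-the-envelope calculation; all loads, margins, and unit capacities are invented examples, and actual sizing must be performed by qualified HVAC engineers per these standards.

```python
import math

def required_cooling_kw(equipment_kw, building_kw,
                        humidification_margin=0.10,
                        growth_margin=0.25):
    """Total cooling capacity needed, before redundancy spares."""
    base_load = equipment_kw + building_kw
    return base_load * (1 + humidification_margin + growth_margin)

def crac_units_needed(total_kw, unit_capacity_kw, redundancy_spares=1):
    """Number of CRAC units, including N+1-style spare capacity."""
    return math.ceil(total_kw / unit_capacity_kw) + redundancy_spares

# Hypothetical site: 120 kW of equipment, 30 kW building load, 60 kW units.
total = required_cooling_kw(equipment_kw=120.0, building_kw=30.0)
units = crac_units_needed(total, unit_capacity_kw=60.0)
print(round(total, 1), units)  # prints: 202.5 5
```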
II. Humidity/temperature control:
I. Humidity and temperature must be maintained at a level that is
compliant with the equipment installed on the data center floor.
II. Humidity injection units must have separate drains and be fed
by conditioned water.
III. Cooling towers:
I. Units must be maintained by qualified maintenance technicians
following factory guidelines.
II. Units must be in a secure mechanical yard.
III. Units should be designed and installed to eliminate single point
of failure.
IV. Tower restart after power failure must be automatic.
V. Towers must have a redundant power source to allow time for a
controlled shutdown of supported areas.
IV. Pump systems:
I. Units must be located in a secure mechanical room.
II. Units should be designed and installed to eliminate single point
of failure.
III. Pumps must restart automatically after a power failure.
IV. Pumps must have an emergency power source to allow time for
a controlled shutdown of supported areas.
V. Pipe system:
I. Pipe must be constructed of high quality rust- and coolant-
resistant material.
II. Pipe loops must have valves in several locations that allow
sections of the loop to be isolated without interruption to the rest of the loop.
III. Pipe loops must have isolation valves for each CRAC unit.
VI. Air delivery and return management:
I. Cold air delivery must be managed such that the required
amount of air can be delivered to any necessary equipment location.
II. Hot air return must be managed to extract air directly to CRAC
units without mixing with cold air delivery.
VII. System monitoring:
I. All infrastructure systems supporting machine space services
must be monitored on a continual basis.
II. Monitoring must be at a central location such as a Network
Operations Center.
III. Monitoring system must support a master reporting console that
can also be accessed remotely (including history logs) and must notify
support staff of alarms at central and remote sites.
II. Electrical Systems
I. Main and step down transformers:
I. Must be located in a secure mechanical room.
II. Must have HVAC systems to support heat load and correct
humidity levels for each unit.
III. Must be maintained by a qualified technician to factory
standards and be supportable by extended factory warranty.
II. Main power control panel and PLC (Programmable Logic Controller):
I. Must be maintained by a qualified technician to factory
standards.
II. Must be located in a secure mechanical room.
III. Must have HVAC systems to support heat load and correct
humidity levels for each unit.
IV. Must have surge suppression sufficient to prevent large surges
from damaging panels and equipment supported by panel.
V. PLC must have password security.
VI. PLC must have UPS support for power failure.
III. Motor control panels:
I. All controls must have automatic restart after power failure.
II. Must be maintained by a qualified technician to factory
standards.
III. Must be located in a secure mechanical room.
IV. Must have HVAC systems to support heat load and correct
humidity levels for each unit.
IV. UPS systems:
I. UPS systems in the data center must be sized to meet current
and future needs, with sufficient battery backup to allow for a controlled
shutdown of primary servers.
II. UPS systems must be designed, installed, and maintained by
authorized electricians and technicians and housed in a secure location. UPS
systems must follow the manufacturer’s recommended maintenance schedule.
III. UPS systems must have bypass capability to allow for periodic
maintenance.
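The battery-backup requirement above can be illustrated with a simple estimate: runtime is roughly battery energy times inverter efficiency divided by the load. The figures below are hypothetical examples, not KU values; real UPS selection must follow the manufacturer's sizing tools and the standard above.

```python
def ups_runtime_minutes(battery_wh, load_w, inverter_efficiency=0.92):
    """Estimated battery runtime at a given load, in minutes."""
    return (battery_wh * inverter_efficiency) / load_w * 60

def supports_controlled_shutdown(battery_wh, load_w, shutdown_minutes):
    """True if the battery string can carry the load long enough for a
    controlled shutdown of primary servers."""
    return ups_runtime_minutes(battery_wh, load_w) >= shutdown_minutes

# Hypothetical: 20 kWh of batteries on a 40 kW load, 15-minute shutdown.
runtime = ups_runtime_minutes(battery_wh=20_000, load_w=40_000)
print(round(runtime, 1))  # prints: 27.6
print(supports_controlled_shutdown(20_000, 40_000, shutdown_minutes=15))
```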
V. Backup batteries:
I. Must follow the manufacturer’s recommendations to ensure the
system is of sufficient quality and capacity for a long service life, thus
limiting breaks in the battery strings.
II. Must be located in secure area with proper ventilation as
required.
III. Must be installed and maintained by authorized technicians.
IV. Must be approved for use in computer equipment UPS systems.
VI. Sub-panels:
I. Must be sized to meet current and future needs.
II. Must be located in the data center to minimize power runs to
desired equipment.
III. Panel maps must be maintained to reflect their most current
usage.
IV. Sub-panels must never be opened at the face plate by anyone
other than qualified electricians.
V. All materials must be at least three feet away from sub-panels.
VII. RPP (Remote Power Panel) units:
I. Must be located to maximize ease of distribution to equipment.
II. Must comply with BS/IEC/EN 60439-1.
VIII. Power strips:
I. Must be sized to meet the power requirements of the cabinet in
which they are installed.
II. Power receptacles for power strips must be installed by
qualified electricians.
III. Monitoring systems must be IP capable.
IX. Power cable layout:
I. The power pathways must maintain a minimum separation from
data cable pathways in accordance with ANSI/TIA-569-B Standards and the
University of Kansas Design and Construction Standards Division 27 for
Telecommunication Systems.
II. Equipment power cables should be the minimum required length
and slack/strain management must be employed.
III. Cables must be arranged to minimize air flow disruptions.
X. Grounding systems:
I. All data center equipment must be grounded in compliance with
state and local codes.
II. Data center equipment grounds must be independent of all other
building grounds (such as lightning protection systems).
III. All metal objects must be bonded to ground including cabinets,
racks, PDUs, CRACs, cable pathway, and any raised floor systems.
IV. Ground resistance should be < 1 Ohm.
XI. Monitoring system:
I. All electrical equipment must be monitored.
II. Monitoring systems must be IP capable.
III. System must have a central monitoring console located in an
area such as a NOC and be remotely accessible.
IV. System must be able to report alarms at the central and remote
consoles by email and send recorded cell phone messages.
V. Monitoring system must have analysis and reporting function.
VI. System must be able to retain log files of equipment
performance and incident history.
XII. Generator management:
I. Generator must be start tested and run for at least one hour
once a month.
II. A full load test and switching test must be conducted at least
yearly.
III. Maintenance logs must be kept on all tests and reflect all
maintenance performed.
IV. All maintenance must be performed by a qualified technician to
factory specifications.
V. Management must include a remote alarm panel (annunciator
panel).
XIII. Maintenance and testing:
I. All electrical system components should be regularly inspected.
II. Main power switches, transformers, automatic transfer switches,
and other major electrical system equipment must be maintained by qualified
technicians per factory specifications and recommendations for service
cycles.
III. Access Control and Safety
I. Door security:
I. Door access control must be maintained 24/7 and should
conform to ISO/IEC 27001 standards.
II. An electronic access control system should be in place and log
all access to secure data center areas.
III. Access logs should be maintained for a minimum of one year or
longer as specified by site security policy.
IV. Enforcement of strict policies and sign in/out logs is mandatory.
V. Review of procedures and sign in/out logs must be done on a
regular basis.
VI. Secured doors must fail open in a fire emergency.
II. Video security:
I. Allows for local and remote surveillance of secured and public
spaces.
II. Recording device (tape or hard disk) must be located in a secure
area.
III. Recording must be done on a regular basis to ensure proper
operation of the video security system.
IV. All security recordings must be saved for no less than 30 days.
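The retention minimums above (30 days for security recordings, one year for access logs) can be sketched as a simple pruning routine. The file-based layout below is a hypothetical illustration; production sites would normally rely on the recorder's or access-control system's own retention settings.

```python
import os
import time

RECORDING_RETENTION_DAYS = 30    # minimum per the video security standard
ACCESS_LOG_RETENTION_DAYS = 365  # minimum per the door security standard

def expired(path, retention_days, now=None):
    """True if the file is older than its retention window."""
    now = time.time() if now is None else now
    age_days = (now - os.path.getmtime(path)) / 86400
    return age_days > retention_days

def prune(directory, retention_days, dry_run=True):
    """List (and, when dry_run is False, delete) files past retention."""
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and expired(path, retention_days):
            removed.append(path)
            if not dry_run:
                os.remove(path)
    return removed
```

Running `prune(recordings_dir, RECORDING_RETENTION_DAYS)` in dry-run mode first lets staff verify what would be deleted before purging anything.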
III. Granting security access:
I. Data center locations must have a visitor/non-essential staff
access policy.
II. Access must only be granted to essential personnel.
III. Visitors must be signed in and out and be supervised at all
times.
IV. Visitor logs should be maintained for a minimum of one year or
longer as specified by site security.
IV. Emergency procedures:
I. All sites must maintain published emergency procedures that
address:
I. Emergency contact information
II. Various emergency scenarios and the respective site’s planned responses
III. Ongoing testing and staff awareness
V. Fire alarm and suppression systems:
I. Must be designed specifically for use in data centers.
II. Must comply with all state and local building codes.
III. Suppression systems must use chemicals that do not damage
sensitive equipment.
IV. Suppression systems must not pose harm to building
occupants.
V. Must be maintained by qualified technicians.
IV. Raised Floor Systems
I. Under floor space management:
I. Must remain clean and corrosion free.
II. Constant air pressure must be maintained at all times.
III. Must remain obstruction free for proper air flow.
II. Cleaning:
I. Must be done with vacuum cleaners equipped with HEPA/S-
class filters.
II. Must be done on a continual basis.
III. Floor structure maintenance:
I. Must be corrosion and rust free.
II. Damaged pedestals, cross members, tiles, or missing fasteners
must be replaced immediately to maintain floor integrity.
IV. Floor grounding:
I. Must be separate from building ground.
II. Must comply with all state and local codes.
V. Server Cabinet Systems
I. Cabinet standards:
I. Data center rack enclosures must have 42U vendor-neutral
mounting rails that are fully adjustable and compatible with all EIA-310
(Electronic Industries Alliance) compliant 19” equipment.
II. Cabinets must have access points for power and data pathways
at the top and bottom of the cabinet.
III. The data center site must have a standardized set of cabinets
tailored to the site’s specific needs.
II. Cabinet layout:
I. The cabinets will be configured in a standard hot aisle cold aisle
configuration.
II. The cold aisle edge of the equipment enclosures must line up
with the edge of the floor tiles.
III. Hot and cold aisles must be wide enough to ensure adequate
access to equipment and safe staff work space.
IV. In cases where vented floor tiles alone are insufficient to handle
the heat load for an area, additional cooling measures will be used.
V. Blanking panels will be installed in any unused rack space to
minimize cold/hot air mixing.
III. Cabinet security:
I. All cabinets must be lockable.
II. All cabinets must reside in a secure area within the data center.
IV. Cabinet loading:
I. Rack loading must not exceed the weight rated capacity for the
location’s raised floor.
II. Rack heat load must not exceed the cooling capacity of the
location.
III. Large servers and equipment must be installed at the bottom of
the rack.
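The loading limits above amount to two simple checks: rack weight against the floor's rated capacity, and rack heat load against the location's cooling capacity. A minimal sketch, with hypothetical capacity figures:

```python
def cabinet_load_ok(rack_weight_kg, floor_capacity_kg,
                    rack_heat_w, cooling_capacity_w):
    """True if the rack stays within both the floor's weight rating
    and the location's cooling capacity."""
    return (rack_weight_kg <= floor_capacity_kg
            and rack_heat_w <= cooling_capacity_w)

print(cabinet_load_ok(850, 1_000, 5_500, 6_000))    # prints: True
print(cabinet_load_ok(1_200, 1_000, 5_500, 6_000))  # prints: False
```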
VI. Cable Plant
I. Overhead delivery system cable layout:
I. The data room must have a system to support overhead delivery
of data connections to the equipment cabinets.
II. The data pathways must maintain a minimum separation from
high voltage power and lighting in accordance with ANSI/TIA-569-B Standards
(American National Standards Institute/Telecommunications Industry
Association) and the University of Kansas Design and Construction Standards
Division 27 for Telecommunication Systems.
II. Fiber standards:
I. Fiber installation must use 50-micron OM3 laser-optimized fiber.
II. All fiber installations must be labeled and comply with the KUIT
Labeling Standard.
III. Copper standards:
I. Copper jumpers must be CAT 6 with booted RJ45 connectors.
II. All copper data cables must be labeled and comply with the
KUIT Labeling Standard.
IV. Grounding:
I. All cabinets and cable delivery pathways must be grounded in
compliance with the University of Kansas Design and Construction Standards
Division 27 for Telecommunication Systems.
II. Support Services
I. Server Installation
I. Power:
I. Systems with redundant power supplies must have their power
cords plugged into separate power strips.
II. Power must be isolated from data cables.
III. Power cords must be factory certified.
IV. Power cords must be clearly labeled and comply with the KUIT
Labeling Standard.
II. Rack space:
I. Servers must be installed from the bottom up in the rack
enclosures.
II. Equipment must be clearly labeled and comply with the KUIT
Labeling Standard.
III. Data connections:
I. Cable must not exceed required length by more than one foot.
II. Must be isolated from the system and rack power delivery
system.
III. Must be clearly labeled and comply with the KUIT Labeling
Standard.
IV. Fiber connections:
I. Fiber must not exceed required length by more than one meter.
II. Must be clearly labeled and comply with the KUIT Labeling
Standard.
III. Must not exceed minimum bend radius as specified by the
manufacturer.
II. Network Layout
I. Standard switch layout:
I. All networking equipment will be installed by KUIT staff
regardless of ownership.
II. Switches must be installed in a fashion to minimize the length of
data cables required to provision a data connection.
II. Highly critical system switch layout and redundancy:
I. In the case of highly critical systems where network path
redundancy is required, the systems must have redundant data circuits that
connect to separate switches.
II. Redundant switches must be plugged into separate power
strips.
III. Server Removal
I. Power reclaim:
I. The asset management database must be used to create a
removal list of all hardware, power, and connections related to the server(s).
II. All equipment to be removed must be powered down before
removal.
III. The power management database entries must be updated.
IV. The PDU (Power Distribution Unit)/Wall Breaker Panel map must
be updated.
V. All breakers must be turned off.
II. Removal from rack:
I. All power, data circuits, management circuits, and fiber
connections must be reclaimed and removed.
II. All power cords, fiber and copper cables, and management
system connection parts must be inspected and returned to inventory if in
acceptable condition.
III. All management and support software entries must be updated.
IV. Blanking panels must be installed in the vacated rack space.
V. All servers and components must be labeled, inventoried, and
properly bundled for delivery to owner or eWaste.
III. Documentation:
I. A change request documenting removal must be completed and
approved before work begins.
II. The asset database and all other records relating to this server
must be updated to reflect the change.
III. If this unit will go to eWaste, all inventory removal and eWaste
forms must be completed.
IV. Disposition:
I. The disposition of the server after removal must be documented
before the process starts.
II. All components must be inventoried and a list created for the
history file and turnover to client or eWaste service.
III. All university asset removal/repurpose forms must be
completed.
IV. All items will be processed using eWaste procedures for the
disposal of electronic equipment.
IV. Emergency Response Management
I. On call policy:
I. A policy must be in place for each Data Center/Server Room
defining staff call back requirements.
II. Policy will include call back information for all support staff that
might be needed to reach a solution.
III. Policy will define call back authorization needed to request
billable support.
II. Emergency procedure maintenance:
I. Policies and procedures must be developed to define areas of
responsibility, interactions with vendors and other support teams, standard
recovery methods, and problem documentation.
II. Policies and procedures will be reviewed yearly.
III. Policies and procedures will be stored in a central repository,
which can be accessed remotely.
III. Emergency equipment management:
I. Response kits must be available to support staff equipment and
tool needs for each site.
II. Kits will be stored on site in a secure location.
III. A master list of all site kits and their locations will be created
and a copy kept at each site.
V. Procedure and Policy Development
I. Process documentation development:
I. Each site will have policies defining roles, responsibilities, and
performance standards.
II. Each site change will require a review and update of all
documentation.
III. Site Books will be developed for each site covering all tasks and
responsibilities required to support that site. This will include all policies, site
standards, and procedures.
II. Review, update, and replacement of existing documentation:
I. Policies and procedures will be reviewed and updated yearly.
II. Each policy and procedure will have an author responsible for
maintaining the documents.
VI. Management of Site Support Tools and Equipment
I. Definition of equipment required:
I. Each site will create an inventory of support equipment required
for that site.
II. Each site’s needs will be evaluated to determine if support
equipment can be shared between sites.
II. Storage, maintenance, and update of equipment:
I. Procedures will be developed for maintenance of site
equipment.
II. Site support equipment needs will be reviewed yearly.
III. Each site will have a defined area for storage of site equipment.
IV. A list of all sites and their equipment will be kept at each site to
allow quick location of equipment that can be shifted in case of need.
EXCLUSIONS OR SPECIAL CIRCUMSTANCES: 

Exceptions to these standards shall be allowed only if previously approved by the KU Information
Technology Security Office and such approval documented and verified by the Chief Information
Officer.
CONSEQUENCES: 

Faculty, staff, and student employees who violate these University standards may be subject to
disciplinary action for misconduct and/or performance based on the administrative process
appropriate to their employment.

Students who violate these University standards may be subject to proceedings for non-academic
misconduct based on their student status.

Faculty, staff, student employees, and students may also be subject to the discontinuance of specified
information technology services based on standards violation.
CONTACT: 
Chief Information Officer
345 Strong Hall
1450 Jayhawk Blvd
Lawrence, KS 66045
785-864-4999
kucio@ku.edu
APPROVED BY: 
Provost and Executive Vice Chancellor
APPROVED ON: 
Thursday, December 10, 2009
EFFECTIVE ON: 
Thursday, December 10, 2009
REVIEW CYCLE: 
Annual (As Needed)
BACKGROUND: 

The attached standards are designed to represent the baseline to be used by the Data Centers and
Server Rooms located on the University of Kansas main and satellite campuses. While specific
standards organizations are referenced for examples of best practices, it should be noted that site
conditions, special requirements, and cost of modification will be taken into consideration when
implementing the final configuration of a site. These standards will be regularly reviewed and
updated based on new industry standards, new technology, and lessons learned.
RELATED STATUTES, REGULATIONS, AND/OR POLICIES: 
Data Center and Server Room Policy
Data Classification and Handling Policy
Data Classification and Handling Procedures Guide
Information Access Control Policy
Virtual Private Network (VPN) Remote Access Procedure
Virtual Private Network (VPN) Service on the University of Kansas Data Network
Information Technology Security Policy
Network Policy
Security Policy: Assessment for Local IT Environments and Outline for Risk and Vulnerability
Assessments
Telecommunications Physical Infrastructure
Telecommunications Wiring Policy
Wireless Local Area Network (LAN) Systems Policy
DEFINITIONS: 

These definitions apply to these terms as they are used in this document.

CAT 6: Category 6 cable, commonly referred to as Cat-6, is a cable standard for Gigabit Ethernet
and other network protocols that feature more stringent specifications for crosstalk and system noise.
CRAC: Computer room air conditioner
Data Center: A facility designed to support large numbers of servers in a large conditioned
room. Data Centers are usually composed of a large number of racks (25 or more) and are staffed
24/7/365.
EMF: Electromagnetic fields
eWaste: Electronic equipment disposal service provided by Information Technology on the KU
campus
HEPA/S: High Efficiency Particulate Air filters are used in vacuum cleaners in computer rooms to
collect fine dust particles.
Hot/Cold Aisles: A method of arranging computer racks which focuses cold air delivery at the front
intake of a rack and expels hot air at the back. Rack rows are arranged so the backs of rows face each
other and hot air is collected above the row by a ceiling plenum, which returns the air to the CRAC
unit directly. The fronts of the racks face each other in a row that has vented tiles in the raised floor
to deliver cold air to the rack fronts from the CRAC units.
HVAC: Heating, ventilation, and air conditioning
IP: Internet Protocol is a term used to indicate a connection to the network.
Level I information: University Information with a high risk of significant financial loss, legal
liability, public distrust or harm if this data is disclosed
Level II information: University Information with a moderate requirement for Confidentiality
and/or moderate or limited risk of financial loss, legal liability, public distrust, or harm if this data is
disclosed.
NOC: Network Operations Center is the location in the Data Center that is staffed 24/7/365 and
monitors and responds to all incidents that affect service availability.
OM3: Fiber optic cable used to support high-speed communication at rates up to 10 Gb/s
PDU: Power distribution unit
PLC: Programmable Logic Controller is a computer-based control system used to manage main power
distribution switching panels.
Response Kit: A special tool kit used by Floor Space Planning technicians to support services on the
Data Center Machine Room Floor.
Server Room: Typically a small conditioned space designed to support computing equipment. These
are usually satellite processing centers supporting a specific department and not the entire enterprise.
A server room at KU can also be defined as any room containing a server or servers critical to the
support and operations of a unit or department and/or contains any Level I or II information as
defined by the KU Data Classification and Handling Policy and/or Procedures Guide.
SLA/VRLA: Sealed Lead Acid/Valve-Regulated Lead Acid are two types of batteries that are used
to support Data Center Machine rooms during loss of utility power. They are attached to a UPS
system.
University Information: Data collected or managed by the University to support University
activities. University Information may include records as well as other data and documents.
UPS: Uninterruptible Power Supply is a system used to condition utility power before it is fed to
computer systems and provides power failure ride-thru when the main utility fails. These systems
have a battery bank attached, which will provide a set number of minutes of ride-thru time. The UPS
monitors the batteries and keeps them at full charge. It reports on power and battery problems.

Standards Organizations:

ANSI: American National Standards Institute


TIA: Telecommunications Industry Association
BS/IEC/EN: British Standard/International Electrotechnical Commission/European
Standards.
The University of Kansas Design and Construction Standards Division 27 for
Telecommunication Systems: Campus construction standards
EIA: Electronic Industries Alliance
IEEE: Institute of Electrical and Electronics Engineers
ISO: International Organization for Standardization
KEYWORDS: 
data center, server room, server installation, server removal, physical plant, HVAC
CHANGE HISTORY: 

05/19/2015: Policy formatting cleanup (e.g., bolding, spacing).


CATEGORIES:
Information Access & Technology Categories: 
Information Technology
Privacy & Security
