
GROUP ASSIGNMENT

CT109-3-2 DCI

DATA CENTRE INFRASTRUCTURE

APU2F2202IT(NC)

HAND IN DATE: 30 OCTOBER 2022

WEIGHTAGE: 40%

INSTRUCTIONS TO CANDIDATES:

1 Submit your assignment at the administrative counter

2 Students are advised to underpin their answers with the use of references (cited using the
Harvard Name System of Referencing)

3 Late submissions will be awarded zero (0) unless Extenuating Circumstances (EC) are
upheld

4 Cases of plagiarism will be penalized

5 The assignment should be bound in an appropriate style (comb-bound or stapled).

6 The assignment should be submitted in both hardcopy and softcopy; the softcopy of the written
assignment and source code (where appropriate) should be on a CD in an envelope / CD
cover and attached to the hardcopy.

7 You must obtain 50% overall to pass this module.


CT109-3-2-DCI

DATA CENTRE INFRASTRUCTURE

GROUP ASSIGNMENT

NAME TP NUMBER
NAVITHA A/P PUSBANATHAN TP068096

CHAN CHUN YEW TP057374

GERARD SANJEEV RAJ STEPHEN TP056823

Contents
1.0 INTRODUCTION
2.0 TECHNICAL GOALS OF DATA CENTER

2.1 SECURITY
2.2 SCALABILITY
2.3 MANAGEABILITY
2.4 AFFORDABILITY
2.5 CAPACITY
2.6 DATA INTEGRITY
2.7 RELIABILITY
2.8 PERFORMANCE
3.0 DATA CENTER DESIGN OVERVIEW
4.0 HARDWARE OVERVIEW
5.0 PHYSICAL DESIGN OVERVIEW
6.0 ADDITIONAL COMPONENTS
7.0 CONCLUSION
8.0 REFERENCES
1.0 INTRODUCTION

In today's world of information technology, data centres are in high demand. As technology and
modern equipment advance, the volume of information and data generated grows ever larger, creating
a need to protect that data in purpose-built data centres. The proposed Quantum Telekom (QT) Data
Centre at Selangor Hi-Tech Park Industries will establish a centrally organised facility for IT
operations and equipment where the company's data is stored, managed and disseminated. The
hardware infrastructure comprises physical components such as servers, storage systems, networking
components, power supplies, cooling systems, and everything else needed to support data centre
operations. Data centres have grown rapidly in recent years, employing technologies such as
virtualization to optimise resource utilisation and increase IT agility. As enterprise IT needs
continue to evolve toward on-demand services, many organisations are turning to cloud-based
services and infrastructure.
2.0 TECHNICAL GOALS OF DATA CENTER

2.1 SECURITY

The type of data expected to be processed by the QT data centre is customer or client
records; therefore, the highest level of data security is required for the planned data
centre. Data security must combine physical security with cybersecurity measures to defend
against both internal and external threats. The data centre will provide an integrated security
policy control mechanism to enforce this. A high-security data centre safeguards data integrity
and maintains client confidence.

When picking a location for a data centre, security is a crucial factor. Guards and CCTV
monitoring are examples of passive and active deterrents used in physical security. To protect
the compound from dangers, a Defense-in-Depth approach should be used.

2.2 SCALABILITY

Future operations of Quantum Telekom (QT) may involve more personnel. Because of
this, the proposed data centre must be able to support ongoing scaling in addition to providing
colocation services. If scalability is not considered while setting up a data centre, it could
affect the facility's architecture in the long run. The ability to grow and to manage an
increase in the volume of data or users is important: future upgrades to the data centre that
demand more space, hardware, or other technological requirements must be managed effectively
without compromising the key elements already in operation.
2.3 MANAGEABILITY

When it comes to data centres, setting up is the first hurdle, but it can be accomplished
rather easily with the right plan. The next most demanding part of running a data centre is its
manageability. The servers in a data centre are very sensitive: even a slight change in
temperature could bring a server down. So how do we manage it in the long run?
Management of a server entails a wide variety of activities. Depending on the application,
everything may be handled by a single person or a small team. In addition to
differentiating servers by their intended purposes, they can also be grouped according to the
physical locations in which they are housed. For the data centre to run without any hitches,
we suggest that Quantum Telekom (QT) have a team of on-site engineers maintaining the
servers 24/7, with on-call specialists as a backup in case of emergencies. This
way QT can run the data centre 24/7, all year round, with minimal downtime.

2.4 AFFORDABILITY

Affordability is one of the key factors when designing a data centre. We call this
price to performance: the cheaper the hardware, the lower the performance tends to be, and
likewise, more expensive hardware generally performs far better. Our job as
consultants is to find the sweet spot for Quantum Telekom, where we can build a data
centre for them without breaking the bank. To understand affordability we cannot
simply pick the cheapest hardware and call it a day; there are repercussions to this, so we must
first understand how it affects QT in the long term. There are five important costs in a data
centre configuration. First, we need to calculate how much it costs to purchase a server;
from there, we need to work out the installation and configuration costs. The third cost is
the warranty cost, which is essentially a precaution in the event that a server becomes unusable
or dies. The fourth cost is server maintenance. This is important in the long term, as all hardware
needs to be maintained for optimum performance, and it is vital for servers, which
run twenty-four hours a day, every day, all year round; there cannot be even an hour of
downtime. Lastly there is the cost of replacing a server when one dies. No matter how well you
take care of your server, how much money you spend on a warranty and support plan, or how
reliable a service provider is, eventually your server will fail.
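
To make this trade-off concrete, the sketch below adds up the five cost categories described above over a planned service life. All figures (unit prices, server count, lifespan) are illustrative assumptions, not quotes for QT.

```python
# Illustrative total-cost-of-ownership sketch for the five server cost
# categories discussed above. All prices and counts are placeholder
# assumptions, not vendor quotes.

def server_tco(purchase, install_config, warranty_per_year,
               maintenance_per_year, replacement_reserve_per_year,
               years=5):
    """Return the estimated cost of owning one server over its lifespan."""
    recurring = (warranty_per_year + maintenance_per_year +
                 replacement_reserve_per_year) * years
    return purchase + install_config + recurring

if __name__ == "__main__":
    cost_per_server = server_tco(purchase=6000,             # assumed
                                 install_config=800,         # assumed
                                 warranty_per_year=500,      # assumed
                                 maintenance_per_year=400,   # assumed
                                 replacement_reserve_per_year=300,  # assumed
                                 years=5)
    servers = 200  # assumed fleet size
    print(f"Estimated 5-year TCO per server: {cost_per_server:,.0f}")
    print(f"Estimated 5-year TCO for {servers} servers: "
          f"{cost_per_server * servers:,.0f}")
```

Comparing this figure across candidate hardware tiers is what lets the price-to-performance "sweet spot" be chosen on numbers rather than purchase price alone.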

2.5 CAPACITY

The Quantum Telekom data centre covers an area of 1000 square metres. The standard
cabinet for the data centre measures 600 mm × 1200 mm × 2200 mm. Allowing for the minimum
aisle between two rows of cabinets and the area occupied by the computer-room air
conditioners, the space occupied by one cabinet is generally about 3 square metres, so
roughly 350 cabinets can be placed. In addition to the computer room, the data
centre also has functional rooms such as a monitoring room, corridors, conference rooms,
storage rooms, a spare-parts warehouse, and toilets. These rooms can be large or small and
can be sized according to actual needs and assessed against the actual construction cost. Beyond
the cabinets themselves, the equipment in the data centre cannot run without electricity,
so the power supply is another important factor that must be considered in the
construction of the data centre. Calculated per server, the power draw of most servers is about
180 W/U, and a cabinet will not exceed 30U when full of servers, so the maximum power
consumption of such a cabinet will not exceed 5,400 W. A data centre with 350 such racks
therefore consumes roughly 1,900-2,000 kW. About 95% of the energy consumed by data centre
equipment is ultimately converted into heat, which becomes the load on the air conditioners;
the number of air-conditioning units in the computer room can be set to 3 or 4. However, it is
impossible to fill all the cabinets in the data centre with servers: there is also a considerable
amount of network, security, storage, load-balancing and other equipment. Per rack unit (1U),
such equipment is more expensive than servers, and its power consumption per 1U is also
higher.
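
The space and power figures above can be reproduced with a short calculation. The sketch below uses the same assumptions as the text (about 3 m² per cabinet, 180 W/U, 30U of servers per cabinet); it is a rough estimate, not a detailed facility design.

```python
# Rough capacity and power estimate for the QT data centre, using the
# assumptions stated in the text above.

FLOOR_AREA_M2 = 1000          # total computer-room area
AREA_PER_CABINET_M2 = 3       # cabinet + aisle + cooling share (approx.)
WATTS_PER_U = 180             # typical per-U server draw (from the text)
MAX_U_PER_CABINET = 30        # usable rack units when full of servers

cabinets = FLOOR_AREA_M2 // AREA_PER_CABINET_M2       # ~333, in line with the ~350 quoted above
cabinet_power_w = WATTS_PER_U * MAX_U_PER_CABINET     # 5,400 W per full cabinet
it_load_kw = cabinets * cabinet_power_w / 1000        # total IT load, ~1,800 kW

# About 95% of the energy drawn by the equipment ends up as heat that the
# air-conditioning system must remove.
heat_load_kw = 0.95 * it_load_kw

print(f"Cabinets that fit:      {cabinets}")
print(f"Peak power per cabinet: {cabinet_power_w} W")
print(f"Total IT load:          {it_load_kw:.0f} kW")
print(f"Heat to be removed:     {heat_load_kw:.0f} kW")
```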
2.6 DATA INTEGRITY

Data integrity is important in the data centre and can save organizations the time,
effort and money that would otherwise be wasted making critical decisions with incorrect or
incomplete data. Data integrity also protects the information and the reputation of the data
subject. For example, QT may collect personally identifiable information (PII) from customers,
such as their full names, social security numbers, addresses, and credit card information.

We recommend a Distributed Data Centre (DDC). A DDC is a virtual super storage
server that uses the network to connect and combine thousands of storage servers. Beyond the
virtual super storage server, a distributed data centre is also composed of multiple data
centres networked together to form a multi-centre service network, changing the IT
infrastructure of the data centre by exploiting developments in cloud computing. Owing to
constraints of equipment and network organisation, the traditional centralised data centre is
gradually becoming inadequate in terms of storage, processing, security, network delay, and so
on. Cloud computing can solve such problems, enabling the evolution from a centralised
architecture to a distributed one. Distributed data centres greatly break through the limitation
of scale: through cloud connection and cloud networking, multiple data centres are combined to
realise data sharing, multi-centre operation and high business coverage, while greatly reducing
operation and maintenance costs and the risk of business interruption.
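
As a simple illustration of integrity checking at the file level, the sketch below records a SHA-256 checksum and later re-verifies it, for example after replication between data centres. This is a generic technique sketched under assumed file names, not a feature of any specific product mentioned above.

```python
# Minimal file-integrity check using a SHA-256 checksum.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    path = "customer_records.db"          # hypothetical file name
    with open(path, "wb") as fh:          # create sample data for the demo
        fh.write(b"example customer records")

    recorded = sha256_of(path)            # checksum taken at ingest time
    # ... later, e.g. after replication to another data centre ...
    ok = sha256_of(path) == recorded
    print("Integrity OK" if ok else "Integrity check FAILED")
```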

2.7 RELIABILITY

For most data centre providers, 100 percent availability is a key performance indicator,
since the IT loads they support are critical and the impact and cost of downtime is high, not
only for the providers but for all the organisations they serve. Data centres today support
practically the entire UK and worldwide digital economy, even more so now that nearly everybody
has turned to online services for remote working and/or entertainment. Flexibility is
similarly essential for performance, as clients rely on operators to provide more or less space
as and when required. This is especially key for colocation providers, which are expected to flex
their provision alongside their various clients' evolving requirements. For instance,
High-Performance Computing (HPC) environments need a great deal of power and the agility to rapidly
change their utilisation profile in line with demand. At the same time, modern data
centres are expected to keep power usage within environmental requirements (Watkins, 2021).
2.8 PERFORMANCE

These are the main parameters of a reliable data centre. Reliability is classified into four
tiers, each with its own characteristics.

Downtime: Downtime is the period during which the data centre's servers are out of service.
The reasons for downtime may be technical faults, but can also include, for instance,
maintenance of the data centre, during which the servers must be stopped. It is measured
in hours per year.

Availability: Availability is an indicator of the dependability of the data centre. It expresses
the proportion of time the servers are operational relative to the total operating time of the
data centre:

Tier IV: 99.995% availability — roughly 26 minutes of downtime per year

Tier III: 99.982% availability — roughly 1.6 hours of downtime per year
Tier II: 99.741% availability — roughly 22.7 hours of downtime per year
Tier I: 99.671% availability — roughly 28.8 hours of downtime per year
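
These downtime figures follow directly from the availability percentage applied to an 8,760-hour year. A minimal sketch of that conversion, using only the percentages listed above:

```python
# Convert tier availability percentages into expected downtime per year.
HOURS_PER_YEAR = 8760  # 365 days

tiers = {
    "Tier I":   99.671,
    "Tier II":  99.741,
    "Tier III": 99.982,
    "Tier IV":  99.995,
}

for tier, availability in tiers.items():
    downtime_hours = (1 - availability / 100) * HOURS_PER_YEAR
    print(f"{tier}: {availability}% available -> "
          f"{downtime_hours:.1f} h ({downtime_hours * 60:.0f} min) downtime/year")
```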

Note that Tier III and Tier IV have considerably less downtime. This is achieved because
these data centres do not need to be stopped during repair and maintenance.
To determine which tier a data centre meets, various factors are considered. The
main ones are as follows:
Redundancy, meaning the presence of spare parts for the active components of the
data centre (e.g., spare power supplies, generators, cooling systems, network equipment, and so
on)

Distribution paths, such as communication channels, refrigeration piping, electrical equipment,
and so on

The security level of the data centre — not just the security of the data stored on the servers,
but also the physical security of the facility, including security systems and video
surveillance

Compliance with building regulations and requirements
Tier I

This is the lowest level of reliability. In such data centres, a standstill of operations
may occur both during maintenance and repair of hardware, and in the event of
technical failures.

There may be no backup systems at all, for example:

·        Uninterruptible power supplies (UPS)

·        Diesel generator units (DGU)

·        Raised floor — a dedicated floor under which cooling systems, piping, and
power supply are located

If such systems are present in the data centre, they are normally the
simplest versions with many weak points.

Likewise, in data centres of this level, human error is likely. There is very little
protection from staff mistakes.

Tier II

The fundamental difference between this tier and Tier I is the availability of N + 1
redundancy for active hardware, which means that every active piece of equipment has
one backup copy.

This also provides at least some (though minimal) protection against human
error.

Notwithstanding this, Tier II data centres are still quite prone to downtime. To carry out
scheduled maintenance work, they must be stopped.


Tier III
Starting from this level of reliability, data centres can be serviced without
interrupting their operation. This reduces downtime tenfold.

Like Tier II, data centres of the third tier provide N + 1 redundancy of
active components.

But there is more:

·        Distributed 2N paths, which means that each link, line, wire, and other
similar component has one duplicate, and the load is evenly distributed between the
two components, providing redundancy and reducing channel load

·        Two power inputs rather than one

·        Protection from practically all of the most common mistakes by
operations staff

Tier IV

This is the highest level of reliability. As in Tier III, scheduled maintenance
work in such data centres is carried out without stopping the systems. This data centre is
also able to withstand one accident.

This is achieved through 2(N + 1) redundancy, as there are two parallel
components that share the load, and each has a backup; the redundancy schemes used across
the tiers are compared in the short sketch after the list below.

In addition, this data centre will have the following:

·        A separate area for fuel storage dedicated to the diesel generator sets

·        Protection against all issues related to human error

·        Protection from natural phenomena, such as earthquakes, hurricanes, storms, floods,
fires, and so on

·        Considerations for the possibility of terrorist attacks
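
The redundancy schemes mentioned for the tiers (N+1, 2N, 2(N+1)) can be compared with a small helper that counts how many units must be installed to serve a given load. This is a simplified illustration under an assumed example load, not a formal tier-certification calculation.

```python
# How many power/cooling units must be installed under each redundancy
# scheme, given the number N actually needed to carry the load.
# Simplified illustration of the N+1 / 2N / 2(N+1) concepts above.

def installed_units(n_required: int, scheme: str) -> int:
    schemes = {
        "N":      n_required,            # no redundancy (Tier I)
        "N+1":    n_required + 1,        # one spare (Tier II / III)
        "2N":     2 * n_required,        # fully duplicated path
        "2(N+1)": 2 * (n_required + 1),  # duplicated path, each with a spare (Tier IV)
    }
    return schemes[scheme]

if __name__ == "__main__":
    n = 4  # e.g. four CRAC units needed to cool the room (assumed example)
    for scheme in ("N", "N+1", "2N", "2(N+1)"):
        print(f"{scheme:7s} -> install {installed_units(n, scheme)} units")
```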

3.0 DATA CENTER DESIGN OVERVIEW

The factors and justifications for implementing a data centre are discussed in this
section. When creating a data centre, the following aspects are taken into account:

3.1 LOCATION

According to research from the National Centers for Environmental Information (2015),
weather and climate disasters cost the United States more than $1 billion in losses in 2014.
As a result, the location of the data centre must be protected against weather and natural
disasters. Analysis of local weather patterns is necessary to identify variables such as
earthquake activity, rainfall patterns, and the tendency for flooding.

Examples of the data that must be gathered when selecting a data centre location are
shown in the figures below. FIGURE 1 depicts rainfall at a specific moment, while FIGURE 2
depicts the area's seismic activity. Based on this analysis, it is best to stay
away from Sabah and other seismically active regions, as well as places with heavy rainfall,
such as the Klang Valley.
3.2 RAISED FLOOR

When planning standardised, flexible modular spaces for IT and communications
hardware, such as a data centre, server room, or computer room, particular thought should be
given to rack placement, power supply, communications, and air cooling. Standard models based
on front-to-back, rack-oriented cooling arrangements have been developed by TIA-942,
ASHRAE, and other recognised standards. Cold air is delivered to the front of the rack through
the raised-floor vents: air cooled by the air-conditioning units is pushed into the
high-pressure plenum formed beneath the raised floor and out through the perforated tiles.
This channel is therefore called a cold-air duct. The cold air is drawn through the IT
equipment by the equipment's own fans and discharged at the rear to form a hot-air aisle, and
the hot air rises back to the air-conditioning indoor units through the hot-air return path,
where it is cooled again and the cycle repeats. The two opposing front ends of the rack rows
are separated by two whole floor tiles.
Considering the depth of the rack and the necessary aisle clearance, the minimum ideal
spacing for this airflow cycle is 8 tiles. For an area of 1000-2000 square metres, a raised
floor of about 800 mm is required. In this case, Quantum Telekom has 1000 sqm of area for the
data centre, and the entire area will be raised approximately 800 mm. Ramps will also be
installed in accordance with TIA-569, with a minimum of 300 mm. The high-pressure ventilation
zone formed under the raised floor must be clean, sealed, dust-free and fitted with a vapour
barrier. The enclosed plenum's air permeability is specified at 50 m³/h/m².

Cooling air can escape through poorly sealed tiles and openings, resulting in:

- More energy consumed to replenish the air

- Inability to deliver the expected flow of cold air to the floor vents

- Excessive air-pressure variations under the floor, preventing cold air from reaching
the vents

The penetration of unsealed openings into the high-pressure ventilation area (cables/pipes,
and so on) is a fire hazard and can cause fire to spread inside and outside the machine room
(see Building Regulations Part B). Gas fire-extinguishing systems suppress the fire by reducing
oxygen levels and isolating the air.
As specified in BS ISO 14520-1:2000 (E), "Gaseous fire-extinguishing systems", the physical
characteristics and overall system design requirements are tested periodically. As per TIA-942,
Telecommunications Infrastructure Standard for Data Centers, aspirating (early-warning)
smoke detectors should be placed in the high-pressure ventilation area. If a
fire-suppression system is required in the underfloor space, the means of cleaning the
media system should be considered to complete the protection (TIA-942). The space under
the floor should not be used for purposes other than ventilation and
cabling.
4.0 HARDWARE OVERVIEW

4.1 SERVER RECOMMENDATION

We recommend that the Quantum Telekom (QT) Data Centre use blade servers, housed in a
single-storey structure with a floor area of 1000 square metres. A blade
server is a modularly constructed server computer designed to use the least amount of
energy and physical space while still packing a powerful punch. Blade servers also provide the
possibility of placing numerous servers into a single rack for high processing power as the
facility expands in the future. Because the majority of blade servers are dedicated, we have more
control over how clients and customers access and use them, as well as how data is
transferred between machines; a dedicated blade server is one focused on a single
application.

As for the QT data centre specification, cooling fans that cool each blade separately are
added in order to construct a superior, intelligent, standard data centre. Due
to their stackability, the servers can also be kept in more compact, air-conditioned areas that
maintain the proper temperature for all of the mechanical components. The second benefit is easy
rack movement and minimal wiring: reduced cabling for housing blade servers is an
advantage for businesses such as QT that employ them. The design is modular and compact,
allowing individual modules to be moved easily within or across systems. Staff such as IT
administrators can then devote more effort to ensuring high availability,
which means maximising server uptime even in the event of a disaster, and less
time managing the QT data centre's infrastructure.
5.0 PHYSICAL DESIGN OVERVIEW

5.1 COOLING SYSTEMS

The process of managing the temperature inside a data centre in order to remove the
heat generated by its equipment is known as "data centre cooling." A company's failure to
properly regulate the heat and airflow within a data centre can have catastrophic
consequences. Not only is there a significant decrease in energy efficiency as
a result of the large amount of resources spent on maintaining a lower
temperature, but the chance of servers overheating also rises quickly.

In order to reach the highest possible level of productivity, the cooling system in a
contemporary data center is responsible for regulating a number of parameters that guide the
flow of heat and cooling. These parameters may include, but are not restricted to, the
following items:

● Temperature
● Cooling performance
● Energy usage
● Characteristics of the cooling-fluid flow

Each individual component of the data center cooling system is connected to others within
the system and has an effect on the system's overall effectiveness. No matter what kind of
configuration you choose for your data center or server room, cooling is essential to
achieving a data center that is functional and available for use in operating a company.

This is why we recommend liquid cooling for Quantum Telekom. When it comes
to the transfer of heat, water and other liquids are significantly superior to air; depending on
the specific situation, they can be anywhere from fifty to a thousand times more efficient.
Data centre liquid-cooling systems include direct-to-chip, rear-door, and immersion cooling.
Direct-to-chip cooling integrates the cooling system into the computer's chassis:
liquid-cooled cold plates are piped in next to the CPUs, GPUs, and memory cards, and small tubes
carry cool liquid to each plate, where it absorbs heat from the components. The warm liquid is
pumped to a heat exchanger or chiller and, after cooling, pumped back to the cold plates. The
concept works much like a car radiator. We believe that, given the space allocated for the data
centre and the server hardware Quantum Telekom is using, liquid cooling
would be the perfect cooling solution for their centre.
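
To show why liquid is so effective, the sketch below estimates the coolant flow needed to carry away a given heat load using the basic relation Q = ṁ · c_p · ΔT. The heat load and temperature rise are assumed example figures, not measured values for QT's facility.

```python
# Estimate the water flow rate needed to remove a given heat load with a
# direct-to-chip liquid-cooling loop, using Q = m_dot * c_p * delta_T.

HEAT_LOAD_KW = 300        # total IT heat to remove (assumed, see section 5.3)
CP_WATER = 4.186          # kJ/(kg*K), specific heat of water
DELTA_T = 10.0            # K, coolant temperature rise across the loop (assumed)
WATER_DENSITY = 1000      # kg/m^3

mass_flow = HEAT_LOAD_KW / (CP_WATER * DELTA_T)       # kg/s of coolant
volume_flow_m3_h = mass_flow / WATER_DENSITY * 3600   # converted to m^3/h

print(f"Required mass flow:   {mass_flow:.1f} kg/s")
print(f"Required volume flow: {volume_flow_m3_h:.1f} m^3/h")
```

A modest flow of roughly 25 m³/h can move the same heat that would otherwise require very large volumes of chilled air, which is the practical reason liquid cooling suits a dense 1000 sqm footprint.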

5.2 FIRE FIGHTING

One of the most vital parts of operating a data centre is having a viable fire suppression
system. Fire suppression is essential because servers run at very high
temperatures, which can make the data centre susceptible to fires. As the centre houses a great
deal of critical hardware, the fire suppression system should prevent fires while also not
damaging the servers. There are three areas of protection:
1. Building Level
2. Room Level 
3. Rack Level

There are two options to consider: the first is a water-based sprinkler system and the
other is a gaseous system. Based on our research, we recommend that
Quantum Telekom use a gaseous fire suppression system. Inert gas, clean agent gas, or
chemicals are used in gaseous fire suppression to put out fires rapidly and effectively. The
three inert gases nitrogen, argon, and carbon dioxide are combined to create Ansul
INERGEN, a clean agent. Heat, oxygen, and a fuel source are the three components that
combine to create fire. The fire can be put out and its growth stopped if any one of
these factors is removed through the use of inert gases such as INERGEN.

This form of fire suppression system is completely safe to use around people and in any area
where humans are present. Additionally, it will not cause any damage to sensitive equipment
such as computers, servers, or other types of electronic technology. Furthermore, it will not
leave any contaminants behind, and there will be no need for any cleanup once an incident
has occurred. INERGEN is also more cost-effective than comparable clean agent
systems.

5.3 ELECTRICAL POWER 

The figure above shows a rough draft of the power consumption for a 1000 sqm data
centre area. Assuming approximately 200 servers running 24/7, populated entirely with
1-2 CPU socket servers, with high internal and external storage as well as servers that boot
from SAN/NAS, the total power consumption will be about 300 kW, with 44% of the power going
to the main physical infrastructure of the data centre.
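
From the figures above (about 300 kW total, with 44% going to physical infrastructure), a rough IT load and Power Usage Effectiveness (PUE) can be derived. This is only an estimate based on the stated breakdown, not a measured value.

```python
# Rough breakdown of the 300 kW estimate above into IT load and
# supporting infrastructure, and the implied PUE.

TOTAL_POWER_KW = 300
INFRASTRUCTURE_SHARE = 0.44   # cooling, UPS losses, lighting, etc. (from the text)

infrastructure_kw = TOTAL_POWER_KW * INFRASTRUCTURE_SHARE
it_load_kw = TOTAL_POWER_KW - infrastructure_kw
pue = TOTAL_POWER_KW / it_load_kw   # Power Usage Effectiveness

print(f"Infrastructure load: {infrastructure_kw:.0f} kW")
print(f"IT load:             {it_load_kw:.0f} kW")
print(f"Implied PUE:         {pue:.2f}")
```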
The expression "AC three-phase system" means that the three-phase AC voltage is
supplied by each of the three separate coils (windings) of a transformer.
Transformers found in data centres are normally three-phase (three separate windings) and step
down from a higher input voltage (also known as the primary voltage) to a lower output voltage
(also known as the secondary voltage). The expressions "3-wire" and "4-wire" are frequently
used to describe the electrical system design; the term 3-wire means that there are three hot
conductors, lines 1, 2 and 3.

Commercial buildings generally use a three-phase system. More real power is
delivered through the building with the same sized wires and circuit breakers,
leading to significant savings on electrical costs. The principles are the same as
described above, but phase balancing is trickier.

Three-phase power service is delivered by three hot wires and one neutral.
The voltage between any phase (hot wire) and neutral is still 120 V, just like at
home or in older data centres. However, three-phase power shifts the legs 120
degrees apart, so the voltage between any two phases (hot legs) is 208 V.
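
The 120 V / 208 V relationship described above comes from the √3 factor between line-to-neutral and line-to-line voltage in a three-phase system. A minimal check, with the circuit current and power factor as assumed example values:

```python
# Relationship between line-to-neutral and line-to-line voltage in a
# 120/208 V three-phase system, and the real power delivered per circuit.
import math

V_LINE_TO_NEUTRAL = 120.0                              # volts, phase to neutral
V_LINE_TO_LINE = V_LINE_TO_NEUTRAL * math.sqrt(3)      # ~208 V between any two phases

# Example: real power available on a 30 A three-phase circuit at 0.9 power
# factor (current and power factor are assumed example values).
CURRENT_A = 30.0
POWER_FACTOR = 0.9
power_kw = math.sqrt(3) * V_LINE_TO_LINE * CURRENT_A * POWER_FACTOR / 1000

print(f"Line-to-line voltage:          {V_LINE_TO_LINE:.0f} V")
print(f"Real power on a 30 A circuit:  {power_kw:.1f} kW")
```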
6.0 ADDITIONAL COMPONENTS

6.1 GREEN COMPUTING

Green computing is also known as green technology. Green computing uses
computers and other related resources, such as monitors, printers, hard disks, floppy disks
and networks, in an efficient and environmentally friendly way with less impact on the
environment. Green computing enables more efficient use of computing and information
technology resources while maintaining or improving overall performance, so it is important
for all types of systems. Most IT companies use green computing to reduce the environmental
impact of their IT operations.

The first and most important step in going green is to virtualize the technology.
Virtualization is an everyday part of modern technology: multiple applications and multiple
operating systems can be run simultaneously using virtual machine technology.
Virtualization increases the utilization of computer systems by consolidating applications onto
a small number of servers, so the number of servers required is reduced, which in turn
reduces power consumption and cooling requirements in the data centre.

Second, virtualize the storage system. Storage virtualization can take all the available
capacity from multiple storage systems and enable it to be shared without changing the storage
hardware. By enabling sharing, it avoids trapped capacity that is physically available but
cannot be used by applications.

Third, allocate the right amount of storage using thin provisioning, which can eliminate unused
capacity. Storage space can be used efficiently by allocating only what is required when it is
needed and adjusting it with storage-management software. Because storage is dynamically
allocated, it can grow or shrink depending on the application being stored.
Finally, use data deduplication to scan the hard drive or storage system, find duplicates and
remove them. This effectively reduces the storage space allocated to some data files, frees it
for other data and increases the capacity of the system.
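
A minimal sketch of block-level deduplication as described above: identical blocks are detected by hashing and stored only once. This is a simplified illustration of the idea, not the algorithm used by any particular storage product.

```python
# Toy block-level deduplication: store each unique block once, keyed by
# its SHA-256 hash, and keep only references for duplicates.
import hashlib

BLOCK_SIZE = 4096  # bytes per block (assumed)

def deduplicate(data: bytes):
    store = {}    # hash -> block contents, stored once
    layout = []   # ordered list of hashes needed to reconstruct the data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        key = hashlib.sha256(block).hexdigest()
        store.setdefault(key, block)   # only the first copy is kept
        layout.append(key)
    return store, layout

if __name__ == "__main__":
    sample = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # contains repeated content
    store, layout = deduplicate(sample)
    saved = len(sample) - sum(len(b) for b in store.values())
    print(f"Blocks written: {len(layout)}, unique blocks stored: {len(store)}")
    print(f"Bytes saved by deduplication: {saved}")
```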

6.2 BUILDING AUTOMATION

A building automation system (BAS) is an intelligent system of hardware and
software that connects heating, ventilation and air conditioning (HVAC), lighting,
security and other systems so that they communicate on a single platform. Automated systems can
provide important safety information about the operational performance of a building. The
main purpose of a building automation system is to reduce energy consumption, reduce
maintenance costs, and extend the life cycle of building services.

Quantum Telekom can incorporate several systems into its BAS, such as the HVAC
system, electrical systems (including lighting), security systems (including surveillance
cameras and sirens), plumbing, and fire or other emergency systems.

The controllers are the brain of the building automation system. They collect all the
information from the sensors and, based on this information, send commands to the connected
systems such as the HVAC and lighting systems. Sensors track occupancy, humidity,
temperature and lighting levels to gather information to pass to the controllers.
When a sensor collects information and transmits it to the controller, the
controller can issue commands based on the data collected.

When the controller issues a command, the command or request is carried out by a
specific relay or actuator. For example, the air conditioning can be turned on at 7 am so that
workers do not feel stuffy when they arrive at 9 am, or an abnormal temperature sensed in the
data centre can trigger a warning or a check for problems.
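
As a simple illustration of the controller logic described above, the sketch below switches the HVAC on ahead of working hours and raises an alert when a sensed temperature drifts out of range. The schedule and thresholds are assumed example values, not a specification of any particular BAS product.

```python
# Toy building-automation controller: pre-cool before working hours and
# alert on abnormal data-centre temperature. Thresholds are assumed values.
from dataclasses import dataclass

@dataclass
class SensorReading:
    hour: int             # 0-23, current hour of day
    temperature_c: float  # sensed room temperature in degrees Celsius

def control(reading: SensorReading) -> list[str]:
    commands = []
    # Pre-cool from 07:00 so the office is comfortable by 09:00.
    if 7 <= reading.hour < 19:
        commands.append("HVAC: ON")
    else:
        commands.append("HVAC: SETBACK")
    # Alert if the computer-room temperature leaves the assumed safe band.
    if not (18.0 <= reading.temperature_c <= 27.0):
        commands.append(f"ALERT: abnormal temperature {reading.temperature_c} C")
    return commands

if __name__ == "__main__":
    for reading in (SensorReading(hour=7, temperature_c=23.5),
                    SensorReading(hour=2, temperature_c=29.1)):
        print(reading, "->", control(reading))
```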
7.0 CONCLUSION

In conclusion, although a large-scale data centre may look uncomplicated from the outside,
building and managing such a facility are anything but routine. It is essential to have a
comprehensive understanding of the construction process, including the requirements of the
contract, in order to keep the project on track and reduce the potential risks of late
completion, equipment malfunction or failure, or even termination by a contractor.

In the same vein, in order to maximise the possibility of an energy efficient centre being
successful, one must give special consideration to the selection of the location, the layout, the
consumption of energy and its sustainability, the purchase of equipment, and the level of
safety. Even though all of these factors should be taken into account when designing a data
centre, the significance of each may vary depending on the kinds of information that are kept
at the facility. The key is thorough planning in advance to ensure a smooth construction process.
8.0 REFERENCES

1. Fitzgibbons, L. (2019, September 11). Blade server. SearchDataCenter. Retrieved October
19, 2022, from https://www.techtarget.com/searchdatacenter/definition/blade-serveR

2. Bose, N. C. (2011, November 18). 3 essential data center network cabling considerations.
ComputerWeekly.com. Retrieved October 19, 2022, from
https://www.computerweekly.com/tip/3-essential-data-center-network-cabling-considerations

3. Equipment Distribution Area (EDA). (n.d.). Smart Data Center Insights. Retrieved October
19, 2022, from https://dc.mynetworkinsights.com/tag/equipment-distribution-area-eda/

4. Borgini, J. (2022, May 3). Data center cooling systems and technologies and how they
work. SearchDataCenter. Retrieved October 29, 2022, from
https://www.techtarget.com/searchdatacenter/tip/Data-center-cooling-systems-and-technologies-and-how-they-work

5. What is server management? Definition, best practices, and best software. DNSstuff. (2021,
April 16). Retrieved October 29, 2022, from https://www.dnsstuff.com/server-management
