
Table of Contents

• Phase 1: Initiation
  - Business Case
  - Approach and Solution
  - Project Charter
  - Migration Methodology
  - Project Plan and Schedule
  - Kick-off
• Phase 2: Discovery
  - Collect Existing Documentation
  - Run Auto Discovery
  - Distribute Questionnaires
  - Conduct Interviews
  - Direct Manual Inventory
  - Consolidate Existing Assets
  - Ensure Compliance
  - Confirm Continuity
• Phase 3: Planning
  - Determine Disposition
  - Plan Move Groups
  - Design New Infrastructure
  - Formulate Test Plans
  - Develop Move Plans
  - Mitigate Migration Risks
  - Prepare Target Location
• Phase 4: Execution
  - Execute Pilot Migration(s)
  - Run Backups
  - Execute Migration
  - Complete Post-Migration Testing
• Phase 5: Closeout
  - Decommission Assets
  - Decommission Space
  - Update Contracts & SLA Agreements
  - Conduct Debrief
  - Close Project

(Kenneth 2010)
Requirements:
1. Set up a program structure
a. Show the project structure within the program
b. Show the organization, plan the kick-off, name responsible persons, calculate budget needs, and create a high-level business case
c. Plan the data center consolidation of the France data center (as a pilot) and the consolidation of the data center from Paris to Berlin
2. Create and show the migration approach. The migration approach should be described in more detail: which steps are needed, from analyzing the data center until the full migration is done.

Business Case
Construction Machinery FS (CMFS) is one of the world's leading manufacturers of construction and mining equipment and diesel and natural gas engines, and a provider of leasing, insurance and logistics services. It currently operates 28 data centers worldwide in 15 countries, which results in a high proportion (38.7%) of total IT cost being related to data centers. It plans to consolidate all its data centers into two locations, Berlin and Singapore.

The starting phase will be the migration from Paris to Berlin, the success of which will determine the strategy for the remaining migrations.

Background Information
- France, Brazil and Portugal still have major issues with the patch management process.
- Vulnerability Management (MVM successor): no proof-of-concept candidate has been rated as technically sufficient (functional and architectural issues). Vendors have been asked for improvements.
- Germany and France both have low data center security.

Business Drivers
- Business demands
- Market trends
- Competition

IT HQ Berlin DC
Development of the Berlin DC into a highly available data center (active-active)
• Stabilization project in Berlin in progress until Q4/2016
• New building for a second DC in Berlin currently being planned, go-live Q1/2019

- Number of DCs for Germany: 2

- Migration from Paris to Berlin
- 28 data centers worldwide in 15 countries
- 38.7% of total IT cost for CMFS is related to data centers
- France uses 31-60% of HQ IT services
- Shift the 3 DCs of France to Berlin
- An active data center located in Berlin (one building), to be joined by a second building in Q1/2019

- Infrastructure services
o E-mail
o Storage
o Internet access
o Active Directory ("phone book")
o Vulnerability scanner (for recognizing weak points)
- Data centers, hosting & network
o Area
o Hardware
o Network
o Energy
o Cooling
o Building security

Berlin Strengths
- Considerably improved system stability (high-priority incidents: -32% vs. the previous year)
- In April 2016, training for enterprise architects in Berlin was attended by 18 participants from 16 countries.
- Modernized hardware through extensive investments (an additional EUR 16 m of investment, 80% already used)
- Extensive improvement of technical building equipment under way
- Hardware inventory, patch management and server hardening addressed
- Professional approach to service provision, e.g. by transferring delivery capacities to global partners with precise SLAs and penalty regulations
- Established IT operations at several locations; Singapore operating since 02/2015
- Improved IT security and compliance: extensive audits undertaken, more than 30 IT security and compliance actions currently being implemented

Berlin Weaknesses
- Technology roadmap still to be approved (draft for the infrastructure level available)
- Processes for cooperation between HQ IT and the subsidiaries need improvement
Problem Identification

- The current setup offers no possibility of differentiation for subsidiaries
- Insufficient handling in a subsidiary implies high risk for the whole group (esp. compliance and IT security)
- In future, faster implementation of requirements is also to be achieved by HQ IT, through precise service agreements and a standardized target infrastructure
- Low IT security compliance of the France DCs
- France and Germany both have red audit reports
- Every subsidiary has to ensure a decent skill level to run and enhance its IT appropriately
- Permanently growing compliance and security requirements
- A big technology wave is imminent (e.g. private cloud, digitalization)
- It is unusual for a financial service provider (FSP) to operate local DCs in every subsidiary
- Decentralized provision of IT infrastructure and DC services is counterproductive
- To some extent, local data centers have massive faults regarding IT compliance and IT security
- IT architecture and standards and data center security are rated high risk as of Q2 2016
- Providing specialist know-how is particularly challenging for small subsidiaries
- Achieving a homogeneous level across 28 data centers is extensive work
- HQ IT has strongly improved quality
- Other financial service providers have consolidated their data centers
- The implementation of SLAs with continuity requirements is a major issue
- 46% of OHB documents are outdated (Q1 2016: 51%)
- What will happen to the France employees?

DC Strategy of other FSPs
- Improve the risk profile
- Implement compliance requirements more easily
- Simplify the implementation of new applications and services
- Reduce and adapt running costs
- Reduce the "carbon footprint"

DC Strategy of CMFS
- Improve availability and service & infrastructure quality
- Cut costs by consolidating IT systems
- Critical operating and control systems stay local at the manufacturing base

Objectives
- The target is a homogeneous, appropriate level of compliance and IT security, with complex implementation across all 28 data centers
- First consolidate services (quick win), then applications
- Definition of a homogeneous target infrastructure
- Porting or replacement of applications (new development or use of portfolio applications) onto the target infrastructure
- Synergy from homogeneous technology
- Where necessary, investment in porting applications or using portfolio applications
- Hardware should be homogeneous (but not to France)
- Software: standard and highly maintainable / Service: highly available and easy to obtain / Hardware: standard and highly maintainable
- Backup & recovery: the standard for these processes is very high in the financial services sector. Backup/restore should be completed within a few minutes and data versions should be retained for 7 years; most important for disaster recovery are two active-active data centers. Operations management should be able to maintain all services for all countries, but there are no further special requirements (a minimal configuration sketch follows below).
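To make the backup and recovery objectives above concrete, the minimal sketch below records them as a machine-checkable policy. The class name, field names and the 15-minute figure (standing in for "within a few minutes") are illustrative assumptions, not values from the case.

```python
from dataclasses import dataclass

@dataclass
class BackupRecoveryPolicy:
    """Illustrative backup & recovery requirements for the target data centers."""
    restore_time_minutes: int       # backup/restore should complete within a few minutes
    retention_years: int            # data versions retained for 7 years
    active_active_datacenters: int  # two active-active DCs for disaster recovery

    def is_met_by(self, restore_minutes: float, retention_years: int, dc_count: int) -> bool:
        """Check an observed setup against the policy."""
        return (restore_minutes <= self.restore_time_minutes
                and retention_years >= self.retention_years
                and dc_count >= self.active_active_datacenters)

# Values taken from the objectives above; 15 minutes is an assumed reading of
# "within a few minutes".
policy = BackupRecoveryPolicy(restore_time_minutes=15, retention_years=7,
                              active_active_datacenters=2)
print(policy.is_met_by(restore_minutes=10, retention_years=7, dc_count=2))  # True
```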

Success Factors
- A deep audit
- Dedicated project resources
- Use of portfolio applications if porting is not possible

Current Capacity

France (Region: EU)
- Compliance / IT security status: red
- Number of DCs: 3
- Number of FTEs overall: 390
- Number of IT FTEs: 26
- Total IT costs (Mio. €): 22.40
- Total earnings (incl. deposits): 4,638
- Number of assets: 85
- Number of IDP applications: 14
- Number of applications: 39
- Used server storage (TByte): 300


Budget Needs

Class A (operational risk: high; number of CM-FS DCs: 3; example: France)
- Availability: low
- Redundancy: N
- Description: one channel of energy supply, low-temperature distribution; no redundant components (servers and networks)

Class D (operational risk: very low; number of CM-FS DCs: 1; example: Berlin, starting 2019)
- Availability: very high
- Redundancy: 2N / 2N+1
- Description: in addition, several utility providers; multiple redundant components (no single point of failure)

Migration Approach (Recommended in Case)


- Applications: application management will be provided by the subsidiaries as before; only applications in the global solution portfolio are centrally provided
- Infrastructure services: all subsidiaries use the globally provided infrastructure services (e.g. e-mail, Active Directory, vulnerability scanner)

Intro
Since the mid-1990s, data consolidation and data integration have become critical to address the business need for lower cost, shorter time to market, and better business decisions and control (Chan 2009). Data consolidation in China calls for low-cost, high-performance database systems with high scalability, availability and reliability on very large database volumes.

Forrester research from 2007 has shown that in excess of 70 percent of enterprise IT budgets is devoted to maintaining existing infrastructure. Organizations need to accelerate innovation and reduce operating expenses to increase their competitiveness or maintain their current market position, and CIOs are under pressure to identify and adopt best practices to lower their IT operating expenses and redirect the savings in support of new investments. (Allaire, P. et al. 2010)

Data Center Consolidation Statistics

- In an average server environment, 30% of the servers are "dead", consuming energy without being properly utilized; their utilization ratio is only 5 to 10 percent.
- Server consolidation through virtualization increases the utilization ratio up to 50%, saving a huge amount of energy (a back-of-the-envelope sketch follows below).
- Energy costs are the most expensive part of a data center's operating budget, often representing more than 50% of the budget (Sulaiman, 2016).
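A back-of-the-envelope calculation shows what those utilization figures imply for server counts. The fleet size of 100 servers and the 8% starting utilization below are illustrative assumptions within the 5-10% range quoted above.

```python
# Rough consolidation estimate: how many servers are needed to carry the same
# aggregate load once average utilization rises from ~8% to ~50%.
import math

current_servers = 100          # assumed fleet size (illustrative)
current_utilization = 0.08     # within the 5-10% range quoted above
target_utilization = 0.50      # post-virtualization target from the statistics

aggregate_load = current_servers * current_utilization           # in "server equivalents"
target_servers = math.ceil(aggregate_load / target_utilization)  # servers needed afterwards

print(f"Aggregate load: {aggregate_load:.1f} server equivalents")
print(f"Servers needed at {target_utilization:.0%} utilization: {target_servers}")
print(f"Potential reduction: {current_servers - target_servers} servers")
# In this simplified model, 100 servers at 8% consolidate onto 16 servers at 50%.
```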

Components of Data Center


Data center IT equipment consists of many individual devices such as storage devices, servers, chillers, generators, cooling towers and many more. (Daim et al., 2009)

Data Center Sample Specifications:

- Data center space: 1 million square feet
- Number of racks: 25,000
- Number of servers: 120,000
- Data storage: 11 petabytes
- Regions: Europe, Asia
- As a result, Experis IDS successfully migrated more than 5,000 servers (mainframe systems, Wintel and Unix-based servers, and telecommunications equipment) spread across multiple sites to three data centers within a two-year timeframe.
- Planning: two days
- Installation and setup: two hours
- Data copy: two hours and 30 minutes for one server
- Production cutover: minimal, performed in offline mode
- Vendor onsite: eight hours
- Professional services: three days

Business Challenge/As-Is
- Aging data center assets across multiple global locations
- Goal of replacing 15 percent of data center assets annually across a very large data center infrastructure footprint
- Need to see detailed information on asset location, age, dependencies, power requirements and connections globally on one screen
- Use of an in-house, proprietary database to manage data center assets
- Technology refresh cycles missed due to an inability to accurately track assets
- Increased warranty and power costs as inefficient, aged assets with higher failure rates stayed in current data centers longer
- Data centers struggled to keep pace with a growing infrastructure's inherent demands. They lacked adequate power capacity, raised floor space, and the density reinforcement to accommodate new systems.
- Chances of system failure grew along with increased support responsibilities as each data center rapidly approached operating capacity.
- Lease expirations at smaller data centers
- An inability to rationalize real estate portfolios
- Retrofits/upgrades of existing data centers proving non-viable
- Cost estimates that exceeded build options
- A high risk of disruption to existing production operations
- Tactical hardening of environments possible to support the transition to new centers
- Space maximized by approximately 245,000 square feet of Tier 4 raised data center floor space, expandable to 290,000 square feet
- Improved foundation for control and flexibility of the client's infrastructure
- Addressed existing data center end-of-life and disaster recovery
- Consolidated environment with all systems in non-strategic centers migrated to new strategic data centers
- Fault-tolerant functionality with no impact on critical business systems, applications or customers
- Significantly improved system availability (less than six minutes of downtime per year)
- Ensured 100 percent technology refresh for mainframe systems
- An environment with redundant (mirror-image) systems
- Components that allow concurrent maintenance activity with no disruption to critical business systems and applications
- Enhanced information security to detect and prevent system outages and minimize risk to the business and customers

Consolidation Objectives
- Identify underutilized assets that could be consolidated
- Ability to view all data centers worldwide
- Map the best placement of new resources
- Capability to see usage statistics to consolidate under-utilized assets
- Reduce server and storage tech refresh time by 35 percent
- $6.1 million in annual savings due to a faster refresh cycle
- Reduction in maintenance, warranty, power and cooling costs by deploying newer, more efficient equipment more quickly
- Delayed construction of a new data center through asset consolidation
- Reduced asset failure, downtime and risk due to newer assets
- Power and cooling costs decreased by $2.45 million
- It is important to determine the type of servers, their current status (idle or busy), how much it will cost to implement server consolidation, the type of technology needed to achieve the required service levels and, finally, how to meet the security/privacy objectives. (Uddin, Rahman 2010)
- Improve the overall efficiency of the system

Source: Nlyte Software. 2015. Banking Industry Case Study.

- Attempt to reduce the operational cost of data centers (by at least 25%);
- Strive to increase server utilization into the OMB-provided target range of 60-70%;
- Increase (to 100%) the percentage of agency data centers independently metered or advanced-metered and monitored on a weekly basis;
- Increase the number of servers virtualized to 90%;
- Reduce the cost of data center hardware, software, and operations;
- Shift IT investments to more efficient computing platforms and technologies;
- Promote the use of green IT by reducing the overall energy and real estate footprint of government data centers; and
- Increase the overall IT security posture of the government.

Ref: DCCI Plan

Strategy Roadmap/Steps of Data Center Consolidation


- Identify costs:
o Calculate the infrastructure costs of the system from project reports and invoices and compare them to a benchmark.
- Stakeholder analysis:
o Conduct semi-structured interviews; record and produce a transcript of each interview. Each transcript is read by two researchers (one present at the interview and one not) and a number of issues are identified and agreed using a stakeholder impact analysis. Stakeholder impact analysis is a method of identifying potential sources of benefits and risks from the perspectives of multiple stakeholders, and is performed by analyzing interview transcripts. It comprises:
o Identifying key stakeholders;
o Identifying changes in what tasks they would be required to perform and how they were to perform them;
o Identifying the likely consequences of the changes with regard to stakeholders' time, resources, capabilities, values, status and satisfaction;
o Analyzing these changes within the wider context of relational factors, such as tense relationships between individuals or groups to which stakeholders belong;
o Determining whether the stakeholder will perceive the change as unjust (either procedurally or distributively) based on the changes and their relational context.

During our on-going physical surveys of data centers, it was discovered that several
regions use multiple rooms to house servers and related IT equipment. When the
servers were consolidated into a single room, a return of approximately 1200
square feet of space was realized. At another regional location, the server room
was filled with equipment, boxes, obsolete equipment, and cables sitting on the
vented, air conditioning raised floor. These were removed or otherwise excessed.
The vents that were then cleared of obstruction now permit the air conditioning
equipment to operate between 4 and 12 degrees cooler. GSA plans to survey
and apply these lessons learned throughout its data centers and regional
infrastructure centers (RICs). RICs house infrastructure switches as well as print
and file servers, but house no application servers. Similar planned strategies
based on Lessons Learned include:

- Raise data center temperatures from an average of 72 to 80 degrees. Based on industry best practices, it is estimated that GSA can save 4-5% in energy costs for every one (1) degree increase in server inlet temperature (a worked example follows after this list);
- Rearrange server racks in a hot/cold aisle configuration to better manage conditioned air movement. The design uses Computer Room Air Conditioners (CRACs), fans, and a raised floor as the cooling infrastructure and focuses on separating the inlet cold air from the exhaust hot air;
- Rearrange servers within existing racks to reduce heat concentration, provide better air exchange and increase the longevity of the equipment. Commercial studies indicate that re-racked equipment can reduce the air conditioning needed to cool the same equipment compared with equipment that is not arranged this way; and
- Rearrange ventilated floor tiles toward the center of the floor to optimize air flow to the server racks and achieve more efficient cooling.
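As a worked example of the temperature guideline above: going from 72 to 80 degrees is an 8-degree increase, and at 4-5% savings per degree the cumulative effect can be estimated as below. The $1 million baseline energy cost and the compounding assumption are illustrative, not GSA figures.

```python
# Estimate cumulative energy-cost savings from raising server inlet temperature,
# assuming 4-5% savings per 1-degree increase (per the lesson learned above).
baseline_annual_energy_cost = 1_000_000  # USD, assumed for illustration
degrees_raised = 80 - 72                 # from 72 F to 80 F

for savings_per_degree in (0.04, 0.05):
    # Compound the per-degree saving over each degree of increase.
    remaining_fraction = (1 - savings_per_degree) ** degrees_raised
    savings = baseline_annual_energy_cost * (1 - remaining_fraction)
    print(f"{savings_per_degree:.0%} per degree -> about ${savings:,.0f} saved per year")
# 4% per degree compounds to roughly a 28% reduction; 5% to roughly 34%.
```
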
GSA OCIO is actively pursuing virtualization through use of VMware and has secured
licensing to support hundreds of virtual servers. The VMware hypervisor also offers
flexibility to manage the diverse servers and operating systems present across GSA
OCIO responsibilities. FAS plans to pursue virtualization strategies in coordination with
their hardware refresh cycles, primarily via use of VMware to create multiple logical
devices on a single physical server. FAS plans to consolidate servers for all of its core
applications. By 2013, 50% of FAS’s contractor-operated data centers are expected to
be virtualized.
A long-term GSA goal is to acquire high-end blade servers to support GSA’s
centralization and consolidation efforts. PBS is pursuing virtualization for local and
regional applications using VMware and Egenera BladeFrame technology for
enterprise applications. PBS has purchased hardware that will enable
virtualization of the majority of its Windows (Egenera) and Solaris (M9000) servers
over the next three years.

GSA is standardizing and applying cross-agency green IT best practices and measuring
its progress against a capability maturity model performance plan, the Green IT Plan. For
the past three years, GSA has successfully implemented our enterprise Green Power
Settings for all infrastructure workstations. Approximate utility savings for this effort
exceeds $3M. In addition, through the use of a service desk contract, GSA has
consolidated the infrastructure helpdesk into an enterprise service desk meeting our initial
ITIL goals and saving initially over $43M in contract fees over a five-year period.

GSA has installed advanced energy meters in the Chantilly Data Center and will
install advanced energy meters in the two remaining agency-operated data centers
in CY 2011 and will begin monitoring on a minute-by-minute interval basis.
Advanced metering provides real-time energy consumption data that allows for
adjustment to reduce peak energy demand and helps to identify and correct sub-
optimal energy performance.
GSA will apply greener and more energy-efficient tactics when procuring,
operating, maintaining, and disposing of data center infrastructure, including:
microprocessors, servers, storage devices, network equipment, power distribution
and cooling equipment, and facilities. For example, GSA will procure energy
efficient hardware using the Energy Star EPEAT-registry. When assets are to be
retired, GSA will dispose of IT assets through GSAXcess, which promotes re-use
by marketing surplus or excess government property to other Federal agencies
and schools and other non-profits, through the Computers for Learning Program.
The OCIO will also enforce green IT SLAs with co-location, outsourcing and cloud
service providers. Furthermore, GSA will establish a policy and process to require
third-party recipients of excessed equipment to follow R2 guidelines when they are
finished with donated government equipment.

Decommission Highly Underutilized Servers

According to industry research, an average of up to 30% of servers in data centers are "dead" – operating at 3% average or peak CPU utilization – but still consuming significant
amounts of energy. Beyond energy costs, underutilized servers consume data center
space and incur expensive software licenses and hardware maintenance fees. With that
in mind, GSA is identifying and decommissioning inefficient, underutilized legacy servers
and IT equipment. GSA will replace these workloads, as necessary, on virtualized
infrastructure running on energy efficient Energy Star EPEAT-registered equipment.
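A minimal sketch of how such "dead" servers might be flagged from monitoring data. The 3% threshold comes from the definition above; the field names and sample records are invented for illustration, and a real exercise would pull this data from monitoring or DCIM tooling.

```python
# Flag candidate servers for decommissioning: average and peak CPU at or below ~3%.
DEAD_CPU_THRESHOLD = 0.03  # 3%, per the industry definition quoted above

inventory = [  # illustrative sample records
    {"name": "app-fr-01", "avg_cpu": 0.02, "peak_cpu": 0.03},
    {"name": "db-fr-02",  "avg_cpu": 0.35, "peak_cpu": 0.80},
    {"name": "web-fr-03", "avg_cpu": 0.01, "peak_cpu": 0.02},
]

decommission_candidates = [
    srv["name"] for srv in inventory
    if srv["avg_cpu"] <= DEAD_CPU_THRESHOLD and srv["peak_cpu"] <= DEAD_CPU_THRESHOLD
]
print("Decommission candidates:", decommission_candidates)
# -> ['app-fr-01', 'web-fr-03'] in this toy inventory
```
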
Server Virtualization

Virtualization is a technique that allows a single server to be used for multiple software
applications, users, or functions. Virtualization technology allows a single, multi-purpose
server to replace several single-purpose servers, which reduces capital spending on new
servers and operating spending on hardware license fees, space, power, and cooling
costs. GSA continues its server virtualization initiative to further reduce overall energy consumption and achieve significant energy cost savings. Additional benefits from virtualization in
combination with data center consolidation include reduced maintenance and operations
costs of servers and facilities as well as improved automation for server management and
provisioning.
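To illustrate the consolidation step that virtualization enables, the sketch below packs per-server CPU demands onto a smaller number of physical hosts with a simple first-fit-decreasing heuristic. The 70% host target (the upper end of the OMB range cited earlier) and the demand figures are assumptions; a real placement exercise would also consider memory, I/O, affinity and failover constraints.

```python
# First-fit-decreasing placement of virtualized workloads onto physical hosts.
# Demands are expressed as percent of one host's CPU capacity (assumed values).
def pack_workloads(demands_pct, host_capacity_pct=70):
    hosts = []  # each host is a list of workload demands
    for demand in sorted(demands_pct, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity_pct:
                host.append(demand)
                break
        else:
            hosts.append([demand])  # no existing host fits, open a new one
    return hosts

workloads = [5, 10, 2, 30, 8, 4, 12, 6, 3, 20]  # assumed CPU demands, % of a host
hosts = pack_workloads(workloads)
print(f"{len(workloads)} workloads packed onto {len(hosts)} hosts")
for i, host in enumerate(hosts, 1):
    print(f"  host {i}: {host} (utilization {sum(host)}%)")
```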

Physical Relocation Strategy


The success of any relocation event depends on having a complete and thorough inventory of your data center's assets. Working with your team, our engineers inventory applications, servers, storage, network and other hardware. Dependencies between assets are discovered, and a dependency map of your specific environment is developed to be used as the basis for planning the relocation events. As part of the planning process, all assets are tagged with barcode labels, and complete source and destination room layouts and rack elevation diagrams are maintained. During move events, every task is controlled and logged, providing the entire project team with real-time status for all assets. A project manager or lead move engineer performs audits on rail installation, server locations and cabling throughout the move, and a final audit is performed upon completion of the project to ensure move accuracy. (Infrastructure and Data Solutions, 2008)
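A minimal sketch of how a dependency map can be turned into move groups: assets that depend on each other, directly or transitively, are grouped so they can be relocated in the same event. The asset names and dependencies below are invented for illustration; real maps come from discovery tooling and interviews.

```python
# Group assets into move groups: connected components of the dependency graph.
from collections import defaultdict

dependencies = {  # asset -> assets it depends on (illustrative)
    "crm-app":   ["crm-db", "ad-01"],
    "crm-db":    ["san-01"],
    "mail-01":   ["ad-01"],
    "report-01": ["crm-db"],
    "intranet":  [],
}

# Build an undirected adjacency list, then walk its connected components.
graph = defaultdict(set)
for asset, deps in dependencies.items():
    graph[asset]  # make sure assets without dependencies still appear
    for dep in deps:
        graph[asset].add(dep)
        graph[dep].add(asset)

seen, move_groups = set(), []
for asset in list(graph):
    if asset in seen:
        continue
    group, stack = [], [asset]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        group.append(node)
        stack.extend(graph[node] - seen)
    move_groups.append(sorted(group))

for i, group in enumerate(move_groups, 1):
    print(f"Move group {i}: {group}")
# Tightly coupled assets (crm-app, crm-db, san-01, report-01, ad-01, mail-01)
# land in one group; the stand-alone intranet server can move independently.
```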

Virtual Relocation Strategy


Planning a “virtual” relocation begins in the same manner as planning for a physical relocation; we identify the
assets running in your virtual environment and their interdependencies. With the dependency maps and
information from current systems in hand, our engineers will analyze your current environment, design the new
one, and make recommendations on hardware and/or software purchases to run the new virtual environment. In
the case of a physical to virtual move, the next step is to install the hardware that will host the virtual servers, and
then perform test migrations. After a successful test migration, our engineers will do a complete conversion of
your physical servers to their virtual counterparts, as well as converting/migrating any existing virtual servers. For a
virtual to virtual migration, our engineers act as project managers for your staff to ensure a smooth transition from
your existing servers to their new environment. The final step for both processes is to test application performance
in the new environment.
Hybrid Strategy
Today, data centers aren't what they used to be. In recent years, data centers have transformed from large physical spaces filled with racks of gear and critical infrastructure into a broad variety of new forms of computing. Virtualization, SaaS and cloud computing are now key elements of most organizations' data center environments and absolutely must be considered and analyzed when deciding how to relocate your data center. It is worth considering moving data off individual physical servers and into more efficient, and often less expensive, computing environments. But how do you decide which applications to move and where to move them? And how do you ensure that business needs are met?

(Technology Support Products, Services and Solutions 2010)


(Ian Storkey 2011)
- The proposed strategy categorizes the servers into three resource pools (innovation, production and mission-critical servers) depending on their workload and usage. After categorization, server consolidation is applied to all categories depending on their utilization ratio in the data center. This process reduces the number of servers by consolidating the load of multiple servers onto one server. (Uddin, Rahman 2010)
- All data centers are predominantly occupied by low-cost, underutilized volume servers, also called x86 servers. (Uddin, Rahman 2010)
- Recognize underutilized volume servers and categorize them on the basis of the workloads they perform and the applications they execute. (Uddin, Rahman 2010)
- Typical composition of servers:
o Mission-critical servers: 15% (the most powerful servers, normally lower in number)
o Production servers: 35% (much more controlled and scalable servers, deployed at locations where there is less chance of new servers being added; service level requirements for different applications and servers are more important than speed and flexibility)
o Innovation servers: 50% (deployed at locations with a huge potential for inventing new products, modifying existing products, and developing and enhancing processes that are more competitive and productive; the focus is on speed and flexibility) (Uddin, Rahman 2010)
- The process of server consolidation always begins with the innovation servers because these servers are mostly underutilized and remain idle for long durations. The other most important reason for applying server consolidation to innovation servers first is that they are huge in number and require the fewest computing resources. Uddin and Rahman (2010) analyzed a data center consisting of a total of 500 servers, categorized according to the workloads and applications they execute. A categorization sketch follows below.
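A minimal sketch of that categorization and consolidation ordering, assuming each server already carries a pool tag from discovery. The pool names and ordering come from the composition above; the sample servers and the 20% utilization threshold are invented for illustration.

```python
# Categorize servers into the three resource pools and order the consolidation
# work, innovation servers first (numerous and mostly underutilized).
from collections import defaultdict

CONSOLIDATION_ORDER = ["innovation", "production", "mission_critical"]

servers = [  # illustrative inventory; 'pool' would normally come from discovery/tagging
    {"name": "dev-build-01", "pool": "innovation",       "avg_cpu": 0.04},
    {"name": "test-web-02",  "pool": "innovation",       "avg_cpu": 0.07},
    {"name": "erp-app-01",   "pool": "production",       "avg_cpu": 0.35},
    {"name": "core-db-01",   "pool": "mission_critical", "avg_cpu": 0.60},
]

pools = defaultdict(list)
for srv in servers:
    pools[srv["pool"]].append(srv)

for pool in CONSOLIDATION_ORDER:
    members = pools.get(pool, [])
    candidates = [s["name"] for s in members if s["avg_cpu"] < 0.20]  # assumed threshold
    print(f"{pool}: {len(members)} servers, consolidation candidates: {candidates}")
```
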
One of the important aspects that needs to be addressed and planned up front is the rollout strategy. There are three basic strategies to consider for a rollout plan (a small selection sketch follows after the list):

> Big Bang: migration is done in one single operation, usually undertaken over a weekend. This is preferred for low data volumes.
> Phased: data is moved to the target system in a phased manner. For new customers, records are created directly in the target system.
> Parallel run: transactions are posted on both the source and target systems until the migration is fully executed. Reconciliation is done at the end of each day until all the data is migrated.
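A small sketch of how the choice between these strategies could be encoded as a decision rule. The thresholds (data volume, cutover window, feasibility of daily reconciliation) are illustrative assumptions, not rules from the Cognizant source.

```python
# Pick a rollout strategy from a few coarse characteristics of the migration.
# Thresholds and inputs are illustrative assumptions.
def choose_rollout_strategy(data_volume_tb: float,
                            cutover_window_hours: float,
                            reconciliation_possible: bool) -> str:
    if data_volume_tb < 5 and cutover_window_hours >= 48:
        return "big_bang"      # small volume, a weekend window is enough
    if reconciliation_possible:
        return "parallel_run"  # post to both systems, reconcile daily
    return "phased"            # move data in waves, new records go to the target

print(choose_rollout_strategy(data_volume_tb=300, cutover_window_hours=8,
                              reconciliation_possible=True))   # -> parallel_run
print(choose_rollout_strategy(data_volume_tb=2, cutover_window_hours=60,
                              reconciliation_possible=False))  # -> big_bang
```
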
Energy costs have dramatically increased in recent years and have become a major factor in the total cost of ownership of data centers (Filani et al., 2008; Orgerie et al., 2014). Achieving energy efficiency in IT operations is therefore a crucial task for enterprises in order to save running costs. In some cases, 40-50% of the total data center operational budget is spent on energy costs for IT components (Filani et al., 2008). Although the energy efficiency of hardware has improved in recent years, the energy consumption of data centers has at the same time increased by 56% from 2005 to 2010 (Koomey, 2011; Splieth et al., 2015). This is mainly caused by low average utilization levels of hardware resources in enterprise application environments (Beloglazov and Buyya, 2010; Mi et al., 2010), since existing consolidation potential is not exploited for the continuously growing number of servers. This growth, in turn, is caused by low utilization rates due to a provisioning practice that is based on peak demands (Rolia et al., 2003). The average utilization of data center servers was estimated by the Gartner Group in 2005 to be less than 20% (Speitkamp and Bichler, 2010). According to a newer Gartner report from 2011, servers even run at average utilization levels of less than 15% (Gartner, 2011). An analysis of data collected from more than 5,000 production servers by Beloglazov and Buyya (2010) showed a capacity usage of usually 10-50% over a six-month measurement period. A preparatory case study (see Section 2 of the cited work) confirmed these observations and showed an average CPU utilization of 33% across four enterprise data centers, comprising 206 physical servers and 311 business applications. Such low utilization rates adversely affect energy consumption, since even idle servers, depending on their type and architecture, can consume up to 70% of their peak power (Beloglazov and Buyya, 2010). Thus, shutting down idle servers is beneficial for total energy efficiency, aiming at load concentration instead of load balancing (Petrucci et al., 2011). In addition, cooling infrastructure is more effective with fewer servers, because the effects of overheating can be avoided (Beloglazov and Buyya, 2010). The consolidation of workloads can therefore achieve higher energy savings (Xu and Fortes, 2010) by eliminating unused hardware resources, enabling IT service providers to produce services more efficiently. However, the performance of IT systems must not be degraded significantly (Beloglazov and Buyya, 2010) in order to support business processes effectively. Therefore, IT Service Management (ITSM) frameworks such as ITIL and ISO 20000 embed the task of balancing performance and operational costs into the capacity management process. (Müller, H., Bosse, S., Turowski, K. 2016)
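The observation that idle servers can draw up to 70% of their peak power corresponds to the common linear server power model; a minimal sketch with assumed wattages is shown below.

```python
# Linear server power model: P(u) = P_idle + (P_peak - P_idle) * u,
# where u is CPU utilization. The wattages are assumed for illustration.
P_IDLE = 140.0   # watts when idle (~70% of peak, per the observation above)
P_PEAK = 200.0   # watts at full utilization

def power_watts(utilization: float) -> float:
    return P_IDLE + (P_PEAK - P_IDLE) * utilization

for u in (0.0, 0.15, 0.33, 1.0):
    print(f"utilization {u:>4.0%}: {power_watts(u):6.1f} W")
# An idle server already draws 70% of peak power, which is why concentrating load
# on fewer servers and shutting down the rest saves energy.
```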

Stages
1. Business planning—Establish the objectives and planned end-state
2. Discovery mapping—Identify the numerous physical locations, technologies, organizations, services, people and
processes that will be impacted by the project
3. Dependency mapping—Chart the interdependencies and rippling impacts among these many elements
4. Execution planning—Acquire the different IT skills, hardware and other resources necessary to address the
issues and requirements identified in the mapping phase
5. Execution—Migrate servers and applications to the new data centers. (Newstrom 2004)

Critical to the success of this strategy is an organizational change management approach that helps guide
consolidation activities through every phase to ensure clear communication and stakeholder commitment.

1. Business planning—Establishing the business objectives and technology goals. This includes assessing the
agency’s or department’s business requirements for data processing, storage and back-up, continuity of
operations, future expansion, etc. Such planning also identifies the new organizational structures and processes
that will be needed to support the consolidated data centers.
2. Discovery mapping—Identifying the numerous facilities, technologies, organizations, services, people and
processes that will be impacted by the project. This includes understanding the business requirements for IT
services and the impact of the transition on mission-critical services.
3. Dependency mapping—Charting the interdependencies and rippling impacts among these many elements.
Some applications depend on other applications or infrastructure. Similarly, infrastructure may depend on certain
applications to function. All of the interdependencies must be mapped to create a transition schedule that
provides minimal disruption and ensures that mission-critical services are delivered as needed. Conducting due
diligence on service outages is a crucial but often overlooked planning activity.
4. Execution planning—Acquiring the different skills, hardware and other resources that will be necessary to
address all issues and requirements identified by the mapping phase. Expertise in virtualization technologies is
critical, as is expertise in capacity planning, performance management and data security. In addition, agencies and
departments may also need help with transferring data to the new data centers and, in some instances, physically
moving servers.
5. Execution—Carrying out the plan using effective organizational change management principles and practices to
gain buy-in and commitment throughout the organization. Often overlooked is the importance of scheduling dress
rehearsals or dry runs before the actual consolidation. These can provide valuable lessons and help agencies and
departments uncover and resolve unanticipated problems before embarking on the real data center consolidation.

- Study life-cycle costing and reduce the expenditures of data center operations;
- Shift IT investments to more efficient computing platforms and technologies;
- Increase the overall physical security posture of IT by moving servers to secure facilities;
- Achieve optimal virtualization and utilization levels (servers, storage, workstations);
- Establish and implement standardized data center processes and best practices;
- Plan for data center business resiliency (disaster recovery/COOP); and
- Promote the use of green IT to reduce overall energy consumption.

Change Management

1. Assess change—Identifying the key business issues regarding the agency’s or department’s readiness and
capability for change. Often cultural and bureaucratic processes supporting earlier practices must be relinquished;
thus, it is important to identify the new relationships and processes, so plans can be made to facilitate the
transition and reinforce new behaviors.
2. Align executives—Ensuring agreement among program, agency and/or department heads about the vision and
goals of the project. Often leaders from different agencies have differing—and even incompatible—ideas about
what they want to achieve. Aligning leaders requires agreement on the scope, nature and magnitude of change, on
how to define and measure success, and on how leadership will work together to achieve consolidation’s goals.
Data center consolidation projects cannot succeed without executive alignment across the enterprise.
3. Translate and communicate—Establishing the strategy for communicating the planned changes, including
training and strategies to keep both leaders and stakeholders aligned. Leaders must communicate a consistent
message—tailored to each stakeholder group and repeated as often as needed—regarding what the changes will
be, how they will impact each stakeholder group and how they will be carried out. The latter point is crucial
because organizations often become misaligned on how to carry out agreed-upon changes.

4. Execute plans—Coordinating the execution of strategies for consolidating data centers and transitioning to new
work processes while simultaneously performing day-to-day functions that support operations and mission
activities. Change management workshops can help managers handle resistance and lead their teams through the
transition stages.
5. Evaluate—Assessing the project’s progress and performance against the success model. Change is a dynamic
process. Agencies and departments must remain flexible and open to course correction throughout the transition,
using metrics, feedback from stakeholders and lessons learned to amend and improve the execution plan.
(Newstrom 2004)

Documentation Required

1. Enterprise & Network Assessment (The Migration Foundation)
- What is the current configuration of the enterprise and network?
- How does the enterprise operate today?
- What are the business and technology drivers that govern the enterprise?

2. The Desired Post-Migration State (The Final Enterprise Configuration)
- The DPMS document captures all planned improvements
- The DPMS document details the post-migration business and technology drivers

3. Design Documentation (The Roadmap)
- All changes included (virtualization, technology refresh, loaner gear, etc.)
- Changes in business rules/drivers (SLAs, DR impact, application updates)

4. Implementation Plans (Schedule)
- Schedule, budget, resources, expected results, test plans, contingency plans

Money Figures
- The composite organization analysis points to risk-adjusted total benefits of $1.62 million over three years versus costs of approximately $250,000 per year, adding up to a risk-adjusted net present value (NPV) of $737,944.
- Increased incremental profits and accelerated time to market due to the faster rollout of new technology.
- Decreased costs due to network outage avoidance: Cisco's strategic guidance stopped incidents that used to occur monthly prior to Cisco's engagement.
- Improved business and/or operational implementation success, reducing the costs of remediation.

This analysis translates to benefits of $523,625 in Year 1, $787,333 in Year 2, and repeat benefits of over $300,000 each year thereafter. Numerous qualitative benefits were also identified, such as transfer of knowledge and access to industry-specific best practices from Cisco to the internal IT team, resulting in improved customer confidence in their own strategic planning capabilities. (Tarbi 2013)
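As a reminder of how such a risk-adjusted NPV is computed, the sketch below discounts a yearly benefit and cost stream. The 10% discount rate and the assumed Year 3 benefit are illustrative, so the result only approximately matches Forrester's $737,944 figure.

```python
# Net present value of a migration business case: discounted benefits minus costs.
# Cash-flow split and the 10% discount rate are illustrative assumptions.
def npv(cash_flows, rate=0.10):
    """cash_flows[0] is Year 1, discounted by one full year, and so on."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

benefits = [523_625, 787_333, 310_000]   # Year 1-2 from the study; Year 3 assumed > $300k
costs    = [250_000, 250_000, 250_000]   # approximately $250,000 per year, per the study

net = [b - c for b, c in zip(benefits, costs)]
print(f"Risk-adjusted NPV over three years: ${npv(net):,.0f}")
```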

(Tarbi 2013)
Ref: Oracle
(IBM 2007)
(Server Central 2010)
References
Daim, T., Justice, J., Krampits, M., Letts, M., Subramanian, G., Thirumalai, M. (2009): "Data center metrics - An energy efficiency model for information technology managers", Management of Environmental Quality, Vol. 20, No. 6.

Cognizant 20-20 Insights (2016): A Path to Efficient Data Migration in Core Banking.

Oracle (2010): Put Your Data First or Your Migration Will Come Last. Retrieved from:
www.oracle.com/us/products/middleware/data-integration/enterprise-data-quality/overview/index.html

Sulaiman, I. (2016): Scoping Indonesia's Data Center Growth to Meet High Energy Demands and Off-set Emission Growth of New Digital Economy. Presentation to the Energy Efficiency Accelerator Project Update Meeting. Jakarta, 29 July.

Publication bibliography
Allaire, P. et al. (2010): Reducing Costs and Risks for Data Migrations. Data Migration Best Practices and Recommended Storage System-based Data Migration Techniques. Hitachi Data Systems.
Chan, Chee Yong (Ed.) (2009): ACM SIGMOD International Conference on Management of Data, SIGMOD 2007, Beijing, China, June 12-14, 2007. Association for Computing Machinery. Red Hook, NY: Curran.
Ian Storkey (2011): Operational Risk Management and Business Continuity Planning for Modern State Treasuries. IMF.
IBM (2007): Best Practices for Data Migration. IBM Global Technology Services.
Infrastructure and Data Solutions (2008): Data Center Migration/Consolidation. Top Retail Banking Institution. Experis IT. Available online at http://www.experis.us/Website-File-Pile/Case-Studies/Experis/IT_Data-Center-Retail-Banking_072712.pdf.
Kenneth, D. (2010): The Data Center Migration Methodology. David Kenneth Group.
Newstrom, S. (2004): A Comprehensive Strategy for Successful Data Center Consolidation. CGI Group Inc.
Server Central (2010): The Successful Data Center Migration.
Tarbi, F. (2013): The Total Economic Impact of Cisco Data Center Optimization Services. A Forrester Total Economic Impact Study commissioned by Cisco.
Technology Support Products, Services and Solutions (2010): Data Center Migration, Relocation & Consolidation Resources for Success.
Uddin, M.; Rahman, A. (2010): Server Consolidation - An Approach to Make Data Centers Energy Efficient & Green. International Journal of Scientific & Engineering Research 1 (1).
Frameworks
