
GREEN CLOUD COMPUTING
Presented by:
Nanditha
Nidhi
Nikhil
Nishmitha
Pawan
CONTENTS
 Introduction
 A green cloud computing scenario
 High-level system architectural framework for green cloud computing
 Energy-aware dynamic resource allocation
 InterClouds and integrated allocation of resources
 Disadvantages
 Conclusion
INTRODUCTION
 Modern datacenters that operate under the cloud computing model host a wide variety of applications.

 Datacenters are not only expensive to maintain, but also unfriendly to the environment.

 To address these concerns, leading IT vendors have recently formed a global consortium, called The Green Grid, to promote energy efficiency for datacenters and minimize their impact on the environment.
• Green cloud computing is envisioned to achieve not only efficient processing and utilization of the computing infrastructure, but also minimal energy consumption.

• A high-level architecture for supporting energy-efficient resource allocation in a green cloud computing infrastructure is shown in Figure 11.2. It consists of four main components:

1. Consumers/brokers: Cloud consumers or their brokers submit service requests from anywhere in the world to the cloud.
2. Green Resource Allocator: Acts as the interface between the
cloud infrastructure and consumers.

3. VMs: Multiple VMs can be dynamically started and stopped on a single physical machine to meet accepted requests, hence providing maximum flexibility to configure various partitions of resources.

4. Physical machines: The underlying physical computing servers provide the hardware infrastructure for creating virtualized resources to meet service demands.
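To make the roles of these four components concrete, the following is a minimal Python sketch of a Green Resource Allocator accepting a consumer request and placing a VM on a physical machine. All class and method names (Host, VM, GreenResourceAllocator, submit_request) are hypothetical illustrations, not part of any real cloud API.

```python
# Hypothetical sketch of the architecture above; all names are illustrative only.
from dataclasses import dataclass

@dataclass
class Host:                      # physical machine (component 4)
    cpu_capacity: float
    cpu_used: float = 0.0
    powered_on: bool = False

@dataclass
class VM:                        # virtual machine (component 3)
    cpu_demand: float

class GreenResourceAllocator:    # interface between consumers and infrastructure (component 2)
    def __init__(self, hosts):
        self.hosts = hosts

    def submit_request(self, cpu_demand):   # called by consumers/brokers (component 1)
        vm = VM(cpu_demand)
        # Prefer hosts that are already powered on so idle machines can stay off.
        for host in sorted(self.hosts, key=lambda h: not h.powered_on):
            if host.cpu_capacity - host.cpu_used >= vm.cpu_demand:
                host.powered_on = True
                host.cpu_used += vm.cpu_demand
                return vm
        return None              # no spare capacity: request rejected or queued
```

A real allocator would also weigh SLAs, pricing, and the energy profile of each machine, as discussed in the following sections.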
ENERGY-AWARE DYNAMIC
RESOURCE ALLOCATION
 Virtualization enables dynamic migration of VMs across
physical nodes.

 Underutilized VMs can be logically resized and consolidated onto a minimal number of physical nodes, while idle nodes can be turned off (or hibernated).

 Through consolidation of VMs, large numbers of users can share a single physical server, which increases utilization and in turn reduces the total number of servers required.
• Currently, resource allocation in a cloud datacenter aims at providing
high performance while meeting SLAs, with limited or no
consideration for energy consumption during VM allocations.

• The current approaches to dynamic VM consolidation are weak in terms of providing performance guarantees.

• One way to prove performance bounds is to divide the problem of energy-efficient dynamic VM consolidation into a few subproblems that can be analysed individually (one such subproblem, VM placement, is sketched below).

• It is important to analytically model the problem and derive optimal and near-optimal approximation algorithms that provide provable efficiency.
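As an illustration of the placement subproblem, the sketch below treats consolidation as one-dimensional bin packing and uses a first-fit-decreasing heuristic, a standard approximation algorithm with a known worst-case guarantee for bin packing. The function name, the single CPU dimension, and the example numbers are assumptions made for brevity; real consolidation must also pack memory and bandwidth and respect SLA headroom.

```python
# Hypothetical first-fit-decreasing placement for VM consolidation.
# Each VM is reduced to a single normalized CPU demand for simplicity.

def consolidate(vm_demands, host_capacity):
    """Pack VM demands onto as few hosts as possible; return per-host loads."""
    hosts = []                                        # load of each powered-on host
    for demand in sorted(vm_demands, reverse=True):   # place the largest VMs first
        for i, load in enumerate(hosts):
            if load + demand <= host_capacity:        # fits on an existing host
                hosts[i] += demand
                break
        else:
            hosts.append(demand)                      # power on one more host
    return hosts

# Example: eight VMs consolidated onto hosts with 1.0 normalized CPU each.
loads = consolidate([0.6, 0.5, 0.4, 0.3, 0.3, 0.2, 0.2, 0.1], host_capacity=1.0)
print(len(loads), "hosts needed; loads:", loads)      # remaining hosts can be hibernated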
INTERCLOUDS AND INTEGRATED
ALLOCATION OF RESOURCES
 An InterCloud is a network of clouds that are linked with each other.

 For example, Amazon EC2 Cloud services are available via Amazon datacenters located in the United States, Europe, and Singapore.

 These InterClouds provide a powerful means of reducing energy-related costs.
• One reason is that the local demand for electricity varies with
time of day and weather. This causes time-varying differences
in the price of electricity at each location.

• Moreover, each site has a different source of energy (such as coal, hydroelectric, or wind), with different environmental costs.

• This gives scope to adjust the load sent to each location, and
the number of servers powered on at each location, to improve
efficiency.
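The following is a minimal sketch of how a broker might weigh these differences when routing load across an InterCloud. The site names, prices, carbon intensities, and the linear scoring model are all hypothetical values chosen only to illustrate the idea.

```python
# Hypothetical routing of load to the cheapest or cleanest site at this hour.
# Prices ($/kWh) and carbon intensities (kg CO2/kWh) are made-up illustrative values.

sites = {
    "us-east":   {"price": 0.11, "carbon": 0.45},
    "eu-west":   {"price": 0.14, "carbon": 0.30},
    "singapore": {"price": 0.09, "carbon": 0.55},
}

def pick_site(energy_kwh, carbon_weight=0.5):
    """Score each site by a weighted mix of monetary and environmental cost."""
    def score(site):
        s = sites[site]
        return energy_kwh * ((1 - carbon_weight) * s["price"]
                             + carbon_weight * s["carbon"])
    return min(sites, key=score)

print(pick_site(energy_kwh=100, carbon_weight=0.0))  # cheapest electricity wins
print(pick_site(energy_kwh=100, carbon_weight=1.0))  # lowest-carbon site wins
```

In practice the weights, prices, and carbon data would change with time of day and weather, which is exactly what makes InterCloud load shifting attractive.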
DISADVANTAGES
 Implementation cost is high: The initial investment required to adopt green computing can seem costly for small and medium-sized enterprises, so green computing is not yet affordable for everyone.
 Advancing technology is challenging to adapt to: Green cloud computing technology keeps evolving, making it difficult for everyone to adapt to changes immediately.
 Green computers might be underpowered: Because the aim is to save energy, applications that require high computing power can be adversely affected by green computing.
CONCLUSION
• The management of power consumption in data centers
has led to a number of substantial improvements in energy
efficiency.

• Cloud computing infrastructure is housed in data centers and has benefited significantly from these advances.

• Techniques such as sleep scheduling and virtualization of computing resources in cloud data centers improve the energy efficiency of cloud computing.
THANK YOU
