
CLOUD COMPUTING TRAINING

• By Samir
What is Cloud Computing?
• Cloud Computing is a general term used to describe a new class
of network-based computing that takes place over the Internet,
– basically a step on from Utility Computing
– a collection/group of integrated and networked hardware,
software and Internet infrastructure (called a platform).
– Using the Internet for communication and transport, it provides
hardware, software and networking services to clients
• These platforms hide the complexity and details of the
underlying infrastructure from users and applications by
providing a very simple graphical interface or API (Application
Programming Interface).
What is Cloud Computing?
• In addition, the platform provides on-demand
services that are always on, anywhere,
anytime and any place.
• Pay per use and as needed, elastic
– scale up and down in capacity and functionality
• The hardware and software services are
available to
– the general public, enterprises, corporations and
business markets
Cloud Summary
• Cloud computing is an umbrella term used to refer to
Internet based development and services

• A number of characteristics define cloud data,
applications, services and infrastructure:
– Remotely hosted: Services or data are hosted on remote
infrastructure.
– Ubiquitous: Services or data are available from anywhere.
– Commodified: The result is a utility computing model
similar to that of traditional utilities, like gas
and electricity – you pay for what you use!
Cloud Architecture

What is Cloud Computing?

• Shared pool of configurable computing resources
• On-demand network access
• Provisioned by the Service Provider

Adapted from: “Effectively and Securely Using the Cloud Computing Paradigm” by Peter Mell, Tim Grance
Cloud Computing Characteristics
Common Characteristics:
– Massive Scale
– Resilient Computing
– Homogeneity
– Geographic Distribution
– Virtualization
– Service Orientation
– Low Cost Software
– Advanced Security

Essential Characteristics:
– On-Demand Self-Service
– Broad Network Access
– Rapid Elasticity
– Resource Pooling
– Measured Service

Adapted from: “Effectively and Securely Using the Cloud Computing Paradigm” by Peter Mell, Tim Grance
Cloud Service Models

• Software as a Service (SaaS) – e.g. SalesForce CRM, LotusLive
• Platform as a Service (PaaS) – e.g. Google App Engine
• Infrastructure as a Service (IaaS)

Adapted from: “Effectively and Securely Using the Cloud Computing Paradigm” by Peter Mell, Tim Grance
SaaS Maturity Model

• Level 1: Ad-Hoc/Custom – one instance per customer
• Level 2: Configurable per customer
• Level 3: Configurable & Multi-Tenant-Efficient
• Level 4: Scalable, Configurable & Multi-Tenant-Efficient

Source: Frederick Chong and Gianpaolo Carraro, “Architecture Strategies for Catching the Long Tail”
Different Cloud Computing Layers

• Application Service (SaaS) – MS Live/ExchangeLabs, IBM,
Google Apps, Salesforce.com, Quicken Online, Zoho, Cisco
• Application Platform – Google App Engine, Mosso,
Force.com, Engine Yard, Facebook, Heroku, AWS
• Server Platform – 3Tera, EC2, SliceHost,
GoGrid, RightScale, Linode
• Storage Platform – Amazon S3, Dell, Apple, ...
Cloud Computing Service Layers

Application Focused:
• Services – Complete business services such as
PayPal, OpenID, OAuth, Google Maps, Alexa
• Application – Cloud-based software that eliminates
the need for local installation, such as Google Apps,
Microsoft Online
• Development – Software development platforms used
to build custom cloud-based applications (PaaS &
SaaS) such as SalesForce

Infrastructure Focused:
• Platform – Cloud-based platforms, typically provided
using virtualization, such as Amazon EC2, Sun Grid
• Storage – Data storage or cloud-based NAS such
as CTERA, iDisk, CloudNAS
• Hosting – Physical data centers such as those run
by IBM, HP, NaviSite, etc.
Basic Cloud Characteristics
• The “no-need-to-know” characteristic: users need no knowledge
of the underlying infrastructure details; applications interface
with the infrastructure via the APIs.
• The “flexibility and elasticity” allows these systems
to scale up and down at will
– utilising resources of all kinds
• CPU, storage, server capacity, load balancing, and databases
• The “pay as much as used and needed” type of
utility computing and the “always on, anywhere
and any place” type of network-based computing.
Basic Cloud Characteristics
• Clouds are transparent to users and
applications; they can be built in multiple ways
– branded products, proprietary or open source,
hardware or software, or just off-the-shelf PCs.
• In general, they are built on clusters of PC
servers and off-the-shelf components plus
Open Source software combined with in-
house applications and/or system software.
Software as a Service (SaaS)
• SaaS is a model of software deployment where an
application is hosted as a service provided to
customers across the Internet.
• SaaS alleviates the burden of software
maintenance/support
– but users relinquish control over software versions and
requirements.
• Related terms used in this sphere include
– Platform as a Service (PaaS) and
– Infrastructure as a Service (IaaS)
Virtualization
• Virtual workspaces:
– An abstraction of an execution environment that can be made
dynamically available to authorized clients by using well-defined
protocols,
– Resource quota (e.g. CPU, memory share),
– Software configuration (e.g. O/S, provided services).
• Implemented on Virtual Machines (VMs):
– Abstraction of a physical host machine,
– Hypervisor intercepts and emulates instructions from VMs, and allows
management of VMs,
– VMWare, Xen, etc.
• Provide infrastructure API:
– Plug-ins to hardware/support structures

[Diagram: the virtualized stack – applications on guest OSes, running on a
hypervisor, running on the hardware]
Virtual Machines
• VM technology allows multiple virtual
machines to run on a single physical machine.

[Diagram: applications on guest OSes (Linux, NetBSD, Windows) in separate
VMs on a Virtual Machine Monitor (VMM) / Hypervisor – e.g. Xen, VMWare,
UML, Denali – on top of the hardware]

Performance: Para-virtualization (e.g. Xen) is very close to raw physical
performance!
What are the purpose and benefits?
• Cloud computing enables companies and applications
that depend on system infrastructure to become
infrastructure-free.
• By using the Cloud infrastructure on “pay as used and on
demand”, all of us can save in capital and operational
investment!
• Clients can:
– Put their data on the platform instead of on their own desktop
PCs and/or on their own servers.
– They can put their applications on the cloud and use the
servers within the cloud to do processing and data
manipulation, etc.
Cloud-Sourcing
• Why is it becoming a Big Deal:
– Using high-scale/low-cost providers,
– Any time/place access via web browser,
– Rapid scalability; incremental cost and load sharing,
– Removes the need to focus on local IT.
• Concerns:
– Performance, reliability, and SLAs,
– Control of data, and service parameters,
– Application features and choices,
– Interaction between Cloud providers,
– No standard API – mix of SOAP and REST!
– Privacy, security, compliance, trust…
Some Commercial Cloud Offerings

Cloud Taxonomy

Cloud Storage
• Several large Web companies are now exploiting the
fact that they have data storage capacity that can be
hired out to others.
– allows data stored remotely to be temporarily cached on
desktop computers, mobile phones or other Internet-
linked devices.

• Amazon’s Elastic Compute Cloud (EC2) and Simple


Storage Solution (S3) are well known examples
– Mechanical Turk

22
Amazon Simple Storage Service (S3)
• Unlimited storage.
• Pay for what you use:
– $0.20 per GByte of data transferred,
– $0.15 per GByte-month for storage used,
– Second Life update:
• 1 TByte, 40,000 downloads in 24 hours – $200
Utility Computing – EC2
• Amazon Elastic Compute Cloud (EC2):
– Elastic, marshal 1 to 100+ PCs via WS,
– Machine specs…,
– Fairly cheap!
• Powered by Xen – a Virtual Machine:
– Different from VMware and VPC as it uses “para-virtualization”, where
the guest OS is modified to use special hyper-calls:
– Hardware contributions by Intel (VT-x/Vanderpool) and AMD (AMD-V).
– Supports “Live Migration” of a virtual machine between hosts.
• Linux, Windows, OpenSolaris
• Management Console/API
EC2 – The Basics
• Load your image onto S3 and register it.
• Boot your image from the Web Service.
• Open up required ports for your image.
• Connect to your image through SSH.
• Execute your application…
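
A minimal AWS CLI sketch of the same workflow; the AMI ID, key pair
name and instance type below are placeholders, not real resources:

    aws ec2 run-instances --image-id ami-xxxxx --count 1 \
        --instance-type t2.micro --key-name my-key   # boot the registered image
    ssh -i my-key.pem ec2-user@<public-dns>          # connect and run your application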

Opportunities and Challenges
• The use of the cloud provides a number of
opportunities:
– It enables services to be used without any understanding
of their infrastructure.
– Cloud computing works using economies of scale:
• It potentially lowers the outlay expense for start up companies, as
they would no longer need to buy their own software or servers.
• Costs would be based on on-demand pricing.
• Vendors and service providers recoup costs by establishing an
ongoing revenue stream.
– Data and services are stored remotely but accessible from
“anywhere”.
Opportunities and Challenges
• In parallel there has been backlash against cloud computing:
– Use of cloud computing means dependence on others and that could
possibly limit flexibility and innovation:
• The others are likely to become the bigger Internet companies, like Google and
IBM, who may monopolise the market.
• Some argue that this use of supercomputers is a return to the time of
mainframe computing that the PC was a reaction against.
– Security could prove to be a big issue:
• It is still unclear how safe out-sourced data is and when using these services
ownership of data is not always clear.
– There are also issues relating to policy and access:
• If your data is stored abroad whose policy do you adhere to?
• What happens if the remote server goes down?
• How will you then access files?
• There have been cases of users being locked out of accounts and losing access
to data.

Advantages of Cloud Computing
• Lower computer costs:
– You do not need a high-powered and high-priced computer
to run cloud computing's web-based applications.
– Since applications run in the cloud, not on the desktop PC,
your desktop PC does not need the processing power or hard
disk space demanded by traditional desktop software.
– When you are using web-based applications, your PC can be
less expensive, with a smaller hard disk, less memory, more
efficient processor...
– In fact, your PC in this scenario does not even need a CD or
DVD drive, as no software programs have to be loaded and
no document files need to be saved.
Advantages of Cloud Computing
• Improved performance:
– With few large programs hogging your computer's
memory, you will see better performance from your PC.
– Computers in a cloud computing system boot and run
faster because they have fewer programs and processes
loaded into memory…
• Reduced software costs:
– Instead of purchasing expensive software applications, you
can get most of what you need for free-ish!
• most cloud computing applications today, such as the Google Docs
suite, are free
– better than paying for similar commercial software
• which alone may be justification for switching to cloud applications.
Advantages of Cloud Computing
• Instant software updates:
– Another advantage to cloud computing is that you are no longer faced
with choosing between obsolete software and high upgrade costs.
– When the application is web-based, updates happen automatically
• available the next time you log into the cloud.
– When you access a web-based application, you get the latest version
• without needing to pay for or download an upgrade.

• Improved document format compatibility:
– You do not have to worry about the documents you create on your
machine being compatible with other users' applications or OSes
– There are potentially no format incompatibilities when everyone is
sharing documents and applications in the cloud.
Advantages of Cloud Computing
• Unlimited storage capacity:
– Cloud computing offers virtually limitless storage.
– Your computer's current 1 TByte hard drive is small compared
to the hundreds of PBytes available in the cloud.
• Increased data reliability:
– Unlike desktop computing, where a hard disk crash can
destroy all your valuable data, a computer crashing in the cloud
should not affect the storage of your data.
• if your personal computer crashes, all your data is still out there in the
cloud, still accessible
– In a world where few individual desktop PC users back up their
data on a regular basis, cloud computing is a data-safe
computing platform!
Advantages of Cloud Computing
• Universal document access:
– Forgetting to take your documents with you is not a problem
with cloud computing, because you do not take your
documents with you.
– Instead, they stay in the cloud, and you can access them
whenever you have a computer and an Internet connection
– Documents are instantly available from wherever you are
• Latest version availability:
– When you edit a document at home, that edited version is
what you see when you access the document at work.
– The cloud always hosts the latest version of your documents
• as long as you are connected, you are not in danger of having an outdated
version
Advantages of Cloud Computing
• Easier group collaboration:
– Sharing documents leads directly to better collaboration.
– Many users do this, as it is an important advantage of cloud
computing
• multiple users can collaborate easily on documents and projects
• Device independence:
– You are no longer tethered to a single computer or network.
– Changes to computers, applications and documents follow
you through the cloud.
– Move to a portable device, and your applications and
documents are still available.
Disadvantages of Cloud Computing
• Requires a constant Internet connection:
– Cloud computing is impossible if you cannot connect to the
Internet.
– Since you use the Internet to connect to both your
applications and documents, if you do not have an Internet
connection you cannot access anything, even your own
documents.
– A dead Internet connection means no work and in areas
where Internet connections are few or inherently
unreliable, this could be a deal-breaker.

Disadvantages of Cloud Computing
• Does not work well with low-speed connections:
– Similarly, a low-speed Internet connection, such as that
found with dial-up services, makes cloud computing
painful at best and often impossible.
– Web-based applications require a lot of bandwidth to
download, as do large documents.
• Features might be limited:
– This situation is bound to change, but today many web-
based applications simply are not as full-featured as their
desktop-based counterparts.
• For example, you can do a lot more with Microsoft PowerPoint
than with Google Presentation's web-based offering
Disadvantages of Cloud Computing
• Can be slow:
– Even with a fast connection, web-based applications can
sometimes be slower than accessing a similar software
program on your desktop PC.
– Everything about the program, from the interface to the
current document, has to be sent back and forth from your
computer to the computers in the cloud.
– If the cloud servers happen to be backed up at that
moment, or if the Internet is having a slow day, you would
not get the instantaneous access you might expect from
desktop applications.

Disadvantages of Cloud Computing
• Stored data might not be secure:
– With cloud computing, all your data is stored in the cloud.
• The question is: how secure is the cloud?
– Can unauthorised users gain access to your confidential data?
• Stored data can be lost:
– Theoretically, data stored in the cloud is safe, replicated
across multiple machines.
– But on the off chance that your data goes missing, you have
no physical or local backup.
• Put simply, relying on the cloud puts you at risk if the cloud lets you
down.
Disadvantages of Cloud Computing
• HPC Systems:
– Not clear that you can run compute-intensive HPC applications
that use MPI/OpenMP!
– Scheduling is important with this type of application
• as you want all the VMs to be co-located to minimize communication
latency!
• General Concerns:
– Each cloud system uses different protocols and different APIs
• may not be possible to run applications between cloud-based systems
– Amazon has created its own DB system (not SQL 92) and its own
workflow system (while many popular workflow systems exist)
• so your normal applications will have to be adapted to execute on these
platforms.
The Future
• Many of the activities loosely grouped together under cloud
computing have already been happening, and centralised
computing activity is not a new phenomenon
• Grid Computing was the last research-led centralised
approach
• However, there are concerns that the mainstream adoption of
cloud computing could cause many problems for users
• Many new open source systems are appearing that you can install
and run on your local cluster
– should be able to run a variety of applications on these systems
Amazon EC2
• Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable
compute capacity in the cloud. It is designed to make web-scale computing easier for
developers. The Amazon EC2 simple web service interface allows you to obtain and configure
capacity with minimal friction. It provides you with complete control of your computing
resources and lets you run on Amazon’s proven computing environment. Amazon EC2
reduces the time required to obtain and boot new server instances (called Amazon EC2
instances) to minutes, allowing you to quickly scale capacity, both up and down, as your
computing requirements change. Amazon EC2 changes the economics of computing by
allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers
and system administrators the tools to build failure resilient applications and isolate
themselves from common failure scenarios.
Benefits
• Elastic Web-Scale Computing: Amazon EC2 enables you to increase or decrease capacity
within minutes, not hours or days. You can commission one, hundreds, or even thousands of
server instances simultaneously. Because this is all controlled with web service APIs, your
application can automatically scale itself up and down depending on its needs.
• Completely Controlled: You have complete control of your Amazon EC2 instances. You have
root access to each one, and you can interact with them as you would any machine. You can
stop your Amazon EC2 instance while retaining the data on your boot partition, and then
subsequently restart the same instance using web service APIs. Instances can be rebooted
remotely using web service APIs.
• Flexible Cloud Hosting Services: You can choose among multiple instance types, operating
systems, and software packages. Amazon EC2 allows you to select the memory configuration,
CPU, instance storage, and boot partition size that are optimal for your choice of operating
system and application. For example, your choice of operating systems includes numerous
Linux distributions and Microsoft Windows Server.
• Integrated: Amazon EC2 is integrated with most AWS services, such as Amazon Simple
Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), and
Amazon Virtual Private Cloud (Amazon VPC) to provide a complete, secure solution for
computing, query processing, and cloud storage across a wide range of applications.
Cont…
• Reliable: Amazon EC2 offers a highly reliable environment where replacement instances can
be rapidly and predictably commissioned. The service runs within Amazon’s proven network
infrastructure and data centers. The Amazon EC2 Service Level Agreement (SLA) commitment
is 99.95% availability for each Region.
• Secure: Amazon EC2 works in conjunction with Amazon VPC to provide security and robust
networking functionality for your compute resources.
– Your compute instances are located in a VPC with an IP address range that you specify. You
decide which instances are exposed to the Internet and which remain private.
– Security groups and network access control lists (ACLs) allow you to control inbound and
outbound network access to and from your instances.
– You can connect your existing IT infrastructure to resources in your VPC using
industry-standard encrypted IPsec virtual private network (VPN) connections.
– You can provision your Amazon EC2 resources as Dedicated Instances. Dedicated Instances
are Amazon EC2 instances that run on hardware dedicated to a single customer for
additional isolation.
– You can provision your Amazon EC2 resources on Dedicated Hosts, which are physical
servers with EC2 instance capacity fully dedicated to your use. Dedicated Hosts can help
you address compliance requirements and reduce costs by allowing you to use your
existing server-bound software licenses.
• Inexpensive: Amazon EC2 passes on to you the financial benefits of Amazon’s scale. You pay
a very low rate for the compute capacity you actually consume. See Amazon EC2 Instance
Purchasing Options for a more detailed description.
• On-Demand Instances – With On-Demand instances, you pay for compute capacity by the
hour with no long-term commitments. You can increase or decrease your compute capacity
depending on the demands of your application and only pay the specified hourly rate for the
instances you use. The use of On-Demand instances frees you from the costs and
complexities of planning, purchasing, and maintaining hardware and transforms what are
commonly large fixed costs into much smaller variable costs. On-Demand instances also
remove the need to buy “safety net” capacity to handle periodic traffic spikes.
• Reserved Instances – Reserved Instances provide you with a significant discount (up to
75%) compared to On-Demand instance pricing. You have the flexibility to change families,
operating system types, and tenancies while benefitting from Reserved Instance pricing when
you use Convertible Reserved Instances.
• Spot Instances – Spot Instances allow you to bid on spare Amazon EC2 computing capacity.
Since Spot instances are often available at a discount compared to On-Demand pricing, you
can significantly reduce the cost of running your applications, grow your application’s
compute capacity and throughput for the same budget, and enable new types of cloud
computing applications.
AWS Regions and Availability Zones

• Region: a geographic area where AWS services are available
• Customers choose region(s) for their AWS resources
• Eleven regions worldwide

[Diagram: a region made up of several Availability Zones linked through
AZ transit centers]
Availability Zone (AZ)

• Each region has multiple, isolated locations known as Availability Zones
• Low-latency links between AZs in a region: <2 ms, usually <1 ms
• When launching an EC2 instance, a customer chooses an AZ
• Private AWS fiber links interconnect all major regions

[Diagram: a region with EC2 instances spread across Availability Zones 1, 2 and 3]
Example AWS Availability Zone

[Diagram: Availability Zones connected through redundant AZ transit centers]
AMI

• Amazon Machine Images (AMIs) are the basic building blocks
of Amazon EC2
• An AMI is a template that contains a software configuration
(operating system, application server and applications) that
can run on Amazon’s computing environment
• AMIs can be used to launch an instance, which is a copy of
the AMI running as a virtual server in the cloud.
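
As a hedged sketch, an AMI can be captured from a running instance and
relaunched with the AWS CLI; the IDs below are placeholders:

    aws ec2 create-image --instance-id i-0abc1234 --name "my-template"   # capture an instance as an AMI
    aws ec2 run-instances --image-id ami-0def5678 --count 1 \
        --instance-type t2.micro                                         # launch a copy as a new virtual server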

Compute & Networking

[AWS Simple Icons legend – Amazon EC2: instances, AMIs, DB on instance,
Elastic IPs, optimized instances, CloudWatch, Lambda; Amazon VPC: router,
Internet gateway, customer gateway, virtual private gateway, VPN connection,
VPC peering; Amazon Route 53: hosted zone, route table; Elastic Load
Balancing; AWS Direct Connect; Auto Scaling; elastic network instance]
Storage & Content Delivery

[AWS Simple Icons legend – Amazon S3: bucket, bucket with objects, object;
AWS Import/Export; Amazon EBS: volume, snapshot; AWS Storage Gateway:
cached volume, non-cached volume, virtual tape library; Amazon Glacier:
archive, vault; Amazon CloudFront: download distribution, streaming
distribution, edge location]

Database

[AWS Simple Icons legend – Amazon DynamoDB: table, items, attributes,
global secondary index; Amazon RDS: DB instance, standby (Multi-AZ), read
replica, MySQL, Oracle, MS SQL and PostgreSQL instances, SQL master/slave,
PIOP; Amazon ElastiCache: cache node, Redis, Memcached; Amazon SimpleDB:
domain; Amazon Redshift: solid state disks, DW1/DW2 Dense Compute]
EC2 Pricing Model
• Free Usage Tier
• On-Demand Instances
– Start and stop instances whenever you like, costs are rounded up to
the nearest hour. (Worst price)
• Reserved Instances
– Pay up front for one/three years in advance. (Best price)
– Unused instances can be sold on a secondary market.
• Spot Instances
– Specify the price you are willing to pay, and instances get started and
stopped without any warning as the market changes. (Kind of like
Condor!)
http://aws.amazon.com/ec2/pricing/
Surprisingly, you cannot scale up that large.
Getting Started with Amazon EC2
• Step 1: Sign up for Amazon EC2
• Step 2: Create a key pair
• Step 3: Launch an Amazon EC2 instance
• Step 4: Connect to the instance
• Step 5: Customize the instance
• Step 6: Terminate instance and delete the
volume created
Creating a key pair
• AWS uses public-key cryptography to encrypt
and decrypt login information.
• AWS only stores the public key, and the user
stores the private key.
• There are two options for creating a key pair:
– Have Amazon EC2 generate it for you
– Generate it yourself using a third-party tool such
as OpenSSH, then import the public key to
Amazon EC2
Generating a key pair with Amazon EC2
1. Open the Amazon EC2 console at
http://console.aws.amazon.com/ec2/
2. On the navigation bar select region for the key pair
3. Click Key Pairs in the navigation pane to display the
list of key pairs associated with the account
Generating a key pair with EC2 (cont.)
4. Click Create Key Pair
5. Enter a name for the key pair in the Key Pair
Name field of the dialog box and click Create
6. The private key file, with .pem extension, will
automatically be downloaded by the browser.
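
Equivalently, a hedged CLI sketch (my-key is a placeholder name):

    aws ec2 create-key-pair --key-name my-key \
        --query 'KeyMaterial' --output text > my-key.pem   # save the private key locally
    chmod 400 my-key.pem                                   # restrict permissions so SSH accepts it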
Launching an Amazon EC2 instance
1. Sign in to AWS Management Console and
open the Amazon EC2 console at
http://console.aws.amazon.com/ec2/
2. From the navigation bar select the region for
the instance
Launching an Amazon EC2 instance (cont.)
3. From the Amazon EC2 console dashboard, click
Launch Instance
Launching an Amazon EC2 instance (cont.)
4. On the Create a New Instance page, click Quick
Launch Wizard
5. In Name Your Instance, enter a name for the
instance
6. In Choose a Key Pair, choose an existing key pair, or
create a new one
7. In Choose a Launch Configuration, a list of basic
machine configurations is displayed, from which
an instance can be launched
8. Click continue to view and customize the settings
for the instance
Launching an Amazon EC2 instance (cont.)
9. Select a security group for the instance. A
Security Group defines the firewall rules
specifying the incoming network traffic delivered
to the instance. Security groups can be defined
on the Amazon EC2 console, in Security Groups
under Network and Security
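
A hedged CLI sketch of the same step; my-sg is a placeholder group name:

    aws ec2 create-security-group --group-name my-sg \
        --description "SSH access"                        # define the firewall group
    aws ec2 authorize-security-group-ingress --group-name my-sg \
        --protocol tcp --port 22 --cidr 0.0.0.0/0         # allow inbound SSH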
Launching an Amazon EC2 instance (cont.)
10.Review settings and click Launch to launch the
instance
11.Close the confirmation page to return to EC2
console
12.Click Instances in the navigation pane to view
the status of the instance. The status is pending
while the instance is launching

After the instance is launched, its status changes to running.
Connecting to an Amazon EC2 instance
• There are several ways to connect to an EC2
instance once it’s launched.
• Remote Desktop Connection is the standard
way to connect to Windows instances.
• An SSH client (standalone or web-based) is
used to connect to Linux instances.
Connecting to Linux/UNIX Instances
from Linux/UNIX with SSH
Prerequisites:
- Most Linux/UNIX computers include an SSH client by
default; if not, it can be downloaded from openssh.org
- Enable SSH traffic on the instance (using security groups)
- Get the path to the private key used when launching the
instance
1. In a command line shell, change directory to the path of
the private key file
2. Use the chmod command to make sure the private key
file isn’t publicly viewable
Connecting to Linux/UNIX Instances(cont.)
3. Right click on the instance to connect to on the
AWS console, and click Connect.
4. Click Connect using a standalone SSH client.
5. Enter the example command provided in the
Amazon EC2 console at the command line shell
Transferring files to Linux/UNIX
instances from Linux/UNIX with SCP
Prerequisites:
- Enable SSH traffic on the instance
- Install an SCP client (usually included by default)
- Get the ID of the Amazon EC2 instance, the public DNS of the
instance, and the path to the private key
If the key file is My_Keypair.pem, the file to transfer is
samplefile.txt, and the instance’s DNS name is ec2-184-72-
204-112.compute-1.amazonaws.com, the command below
copies the file to the ec2-user home
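
    scp -i My_Keypair.pem samplefile.txt \
        ec2-user@ec2-184-72-204-112.compute-1.amazonaws.com:~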
Terminating Instances
- If the instance launched is not in the free
usage tier, as soon as the instance starts to
boot, the user is billed for each hour the
instance keeps running.
- A terminated instance cannot be restarted.
- To terminate an instance:
1. Open the Amazon EC2 console
2. In the navigation pane, click Instances
3. Right-click the instance, then click Terminate
4. Click Yes, Terminate when prompted for
confirmation
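
The same can be done from the CLI; the instance ID is a placeholder:

    aws ec2 terminate-instances --instance-ids i-0abc1234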
Auto Scaling: Types of Scaling
• Scaling by Schedule
– Use Scheduled Actions in Auto Scaling Service
• Date
• Time
• Min and Max of Auto Scaling Group Size
– You can create up to 125 actions, scheduled up to 31 days
into the future, for each of your auto scaling groups. This
gives you the ability to scale up to four times a day for a
month.

• Scaling by Policy
– Scaling up Policy - Double the group size
– Scaling down Policy - Decrement by 1

• Scale By Hand
– Not so auto, but still better than nothing!
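
A hedged sketch of a scheduled action with the CLI; the group name,
action name and times are placeholders:

    aws autoscaling put-scheduled-update-group-action \
        --auto-scaling-group-name my-asg \
        --scheduled-action-name scale-up-morning \
        --start-time 2024-06-01T08:00:00Z \
        --min-size 2 --max-size 10 --desired-capacity 6   # resize the group on schedule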
[Chart: scaling capacity to the weekly CPU load pattern over a year –
about 50% savings]
Optimize by choosing the Right Instance Type
• Choose the EC2 instance type that best matches the
resources required by the application
– Start with memory requirements and architecture type
(32-bit or 64-bit)
– Then choose the closest number of virtual cores required
– Then iterate based on actual performance!!
• Scaling across AZs
– Smaller sizes give more granularity for deploying to
multiple AZs
Your Best Option: Reserved + On-Demand
Save more when you reserve

[Chart: m2.xlarge running Linux in the US-East Region over a 3-year period –
that’s half a cent an hour; the break-even point depends on utilization]

Utilization   Sweet Spot              Feature                       Savings over On-Demand
<10%          On-Demand               No Upfront Commitment         –
10% - 40%     Light Utilization RI    Ideal for Disaster Recovery   Up to 56% (3-Year)
40% - 75%     Medium Utilization RI   Standard Reserved Capacity    Up to 66% (3-Year)
>75%          Heavy Utilization RI    Lowest Total Cost,            Up to 71% (3-Year)
                                      Ideal for Baseline Servers
Recommendations
• Steady State Usage Pattern
– For 100% utilization
• If you plan on running for at least 6 months, invest in RI for 1-year term
• If you plan on running for at least 8.7 months, invest in RI for 3-year term
• Spiky Predictable Usage Pattern
– Baseline
• 3-Year Heavy RI (for maximum savings over on-demand)
• 1-Year Light RI (for lowest upfront commitment) + savings over on-demand
– Peak: On-Demand
• Uncertain and unpredictable Usage Pattern
– Baseline: 3-Year Heavy RIs
– Median: 1-Year or 3-Year Light RIs
– Peak: On-Demand
What is Amazon Route 53?
• Amazon Route 53 is AWS’s authoritative
Domain Name System service.
• DNS is a Tier-0 service – availability is most
important.
• No pre-warm up required – handles
unpredictable traffic. Pay as you go pricing –
only pay for the resources you use.

Amazon Route 53 Design Principles

DNS Failover
• Can improve the availability of your applications running on
AWS.
• Allows you to configure backup and failover scenarios for your
own applications. Enables highly available multi-region
architectures on AWS. Helps add redundancy to your
application and maintain high availability for your end users.
• Enables customers to run primary applications simultaneously
in multiple AWS regions, with Amazon Route 53 automatically
removing from service any region where your application is
unavailable.
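
A hedged sketch of a failover record with the CLI; the hosted zone ID,
health check ID and IP address are placeholders:

    aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
        --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
          "Name":"www.example.com","Type":"A","SetIdentifier":"primary",
          "Failover":"PRIMARY","TTL":60,"HealthCheckId":"hc-placeholder",
          "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'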

Simple Failover Configuration

VPC: Virtual Private Cloud

Choosing an IP address range

ELB
 Supports load balancing of HTTP, HTTPS and TCP traffic
to EC2 instances
 Detects and removes failing instances
 Dynamically grows and shrinks based on traffic
 Integrates with Auto Scaling
 Elastic Load Balancing allows the incoming traffic to be
distributed automatically across multiple healthy EC2
instances.
 ELB serves as a single point of contact to the client.
 ELB remains transparent and increases application
availability by allowing addition or removal of multiple
EC2 instances across one or more availability zones,
without disrupting the overall flow of information.
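
A hedged sketch using the classic ELB CLI; names and the instance ID are
placeholders:

    aws elb create-load-balancer --load-balancer-name my-elb \
        --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
        --availability-zones us-east-1a us-east-1b           # create the load balancer
    aws elb register-instances-with-load-balancer \
        --load-balancer-name my-elb --instances i-0abc1234   # attach an instance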

ELB Benefits
• ELB is itself a distributed system that is fault tolerant and actively
monitored
• abstracts out the complexity of managing, maintaining, and scaling
load balancers
• can also serve as the first line of defence against attacks on your
network
• can offload the work of encryption and decryption (SSL
termination) so that the EC2 instances can focus on their main work
• offers integration with Auto Scaling, which ensures enough back-
end capacity is available to meet varying traffic levels
• is engineered to not be a single point of failure
Classic Load Balancer Types
• Internet-facing Load Balancer
– An Internet-facing load balancer takes requests from clients over
the Internet and distributes them across the EC2 instances that are
registered with the load balancer
• Internal Load Balancer
– An internal load balancer routes traffic to EC2 instances in private
subnets
Application Load Balancer
An Application Load Balancer functions at the application layer, the seventh layer
of the Open Systems Interconnection (OSI) model.
After the load balancer receives a request, it evaluates the listener rules in priority
order to determine which rule to apply, and then selects a target from the target
group for the rule action using the round robin routing algorithm. Note that you can
configure listener rules to route requests to different target groups based on the
content of the application traffic. Routing is performed independently for each
target group, even when a target is registered with multiple target groups.
The AWS storage portfolio

Amazon S3
• Object storage: data presented as buckets of objects
• Data access via APIs over the Internet

Amazon Elastic Block Store
• Block storage (analogous to SAN): data presented as disk volumes
• Lowest-latency access from single Amazon EC2 instances

Amazon Glacier
• Archival storage: data presented as vaults/archives of objects
• Lowest-cost storage, infrequent access via APIs over the Internet

Amazon EFS
• File storage (analogous to NAS): data presented as a file system
• Shared low-latency access from multiple EC2 instances
Simple Storage Service (S3)
• A bucket is a container for objects and describes location,
logging, accounting, and access control. A bucket can hold
any number of objects, which are files of up to 5TB. A
bucket has a name that must be globally unique.
• Fundamental operations corresponding to HTTP actions:
– http://bucket.s3.amazonaws.com/object
– POST a new object or update an existing object.
– GET an existing object from a bucket.
– DELETE an object from the bucket
– LIST keys present in a bucket, with a filter.
• A bucket has a flat directory structure (despite the
appearance given by the interactive web interface.)
Bucket Properties
• Versioning – If enabled, POST/DELETE result in the creation of
new versions without destroying the old.
• Lifecycle – Delete or archive objects in a bucket a certain time
after creation or last access or number of versions.
• Access Policy – Control when and where objects can be
accessed.
• Access Control – Control who may access objects in this
bucket.
• Logging – Keep track of how objects are accessed.
• Notification – Be notified when failures occur.
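
For example, a hedged sketch of a lifecycle rule that archives objects to
Glacier after 30 days; the bucket name and file are placeholders:

    # lifecycle.json: {"Rules":[{"ID":"archive","Status":"Enabled","Filter":{},
    #   "Transitions":[{"Days":30,"StorageClass":"GLACIER"}]}]}
    aws s3api put-bucket-lifecycle-configuration \
        --bucket my-bucket --lifecycle-configuration file://lifecycle.json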
S3 Weak Consistency Model
Direct quote from the Amazon developer API:
“Updates to a single key are atomic….”
“Amazon S3 achieves high availability by replicating data across multiple servers within
Amazon's data centers. If a PUT request is successful, your data is safely stored. However,
information about the changes must replicate across Amazon S3, which can take some time, and
so you might observe the following behaviors:
– A process writes a new object to Amazon S3 and immediately attempts to read it. Until the change is fully
propagated, Amazon S3 might report "key does not exist."
– A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the change
is fully propagated, the object might not appear in the list.
– A process replaces an existing object and immediately attempts to read it. Until the change is fully
propagated, Amazon S3 might return the prior data.
– A process deletes an existing object and immediately attempts to read it. Until the deletion is fully
propagated, Amazon S3 might return the deleted data.”
Elastic Block Store
• An EBS volume is a virtual disk of a fixed size with a block
read/write interface. It can be mounted as a filesystem on a
running EC2 instance where it can be updated incrementally.
Unlike an instance store, an EBS volume is persistent.
• (Compare to an S3 object, which is essentially a file that must be
accessed in its entirety.)
• Fundamental operations:
– CREATE a new volume (1GB-1TB)
– COPY a volume from an existing EBS volume or S3 object.
– MOUNT on one instance at a time.
– SNAPSHOT current state to an S3 object.
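
A hedged CLI sketch of these operations; the AZ, size, IDs and device
name are placeholders:

    aws ec2 create-volume --availability-zone us-east-1a --size 100   # CREATE
    aws ec2 attach-volume --volume-id vol-0abc --instance-id i-0abc \
        --device /dev/sdf                                             # MOUNT on one instance
    aws ec2 create-snapshot --volume-id vol-0abc                      # SNAPSHOT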
Amazon Elastic File System
• Fully managed file system for EC2 instances
• Provides standard file system semantics
• Works with standard operating system APIs
• Sharable across thousands of instances
• Elastically grows to petabyte scale
• Delivers performance for a wide variety of workloads
• Highly available and durable
• NFS v4–based

EFS is designed for a broad range
of use cases, such as…
• Content repositories
• Development environments
• Home directories
• Big data

Use Glacier for Cold Data
• Glacier is structured like S3: a vault is a container for an arbitrary
number of archives. Policies, accounting, and access control are
associated with vaults, while an archive is a single object.
• However:
– All operations are asynchronous and notified via SNS.
– Vault listings are updated once per day.
– Archive downloads may take up to four hours.
– Only 5% of total data can be accessed in a given month.
• Pricing:
– Storage: $0.01 per GB-month
– Operations: $0.05 per 1000 requests
– Data Transfer: Like S3, free within AWS.
• S3 Policies can be set up to automatically move data into Glacier.
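
A hedged sketch of basic vault operations with the CLI; “-” means the
current account and my-vault is a placeholder:

    aws glacier create-vault --account-id - --vault-name my-vault
    aws glacier upload-archive --account-id - --vault-name my-vault \
        --body backup.tar.gz    # asynchronous; retrieval runs as a separate job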
Durability
• Amazon claims about S3:
– Amazon S3 is designed to sustain the concurrent loss of data in two facilities, e.g. 3+ copies across
multiple availability zones.
– 99.999999999% durability of objects over a given year.
• Amazon claims about EBS:
– Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss
of data from the failure of any single component.
– Volumes with <20 GB of modified data since the last snapshot have an annual failure rate of 0.1% - 0.5%,
resulting in complete loss of the volume.
– Commodity hard disks have an AFR of about 4%.
• Amazon’s claims about Glacier are the same as for S3:
– Amazon S3 is designed to sustain the concurrent loss of data in two facilities, e.g. 3+ copies across
multiple availability zones PLUS periodic internal integrity checks.
– 99.999999999% durability of objects over a given year.
• Beware of oversimplified arguments about low-probability events!
Architecture Center
• Ideas for constructing large scale
infrastructures using AWS:
http://aws.amazon.com/architecture/
Command Line Setup
• Go to your profile menu (your name) in the upper right hand
corner, select “Security Credentials” and “Continue to Security
Credentials”
• Select “Access Keys”
• Select “New Access Key” and save the generated keys
somewhere.
• Edit ~/.aws/config and set it up like this:

[default]
output = json
region = us-west-2
aws_access_key_id = XXXXXX
aws_secret_access_key = YYYYYYYYYYYY

Note the syntax here is different from how it was given in the web
console (AWSAccessKey=XXXXXX, AWSSecretAccessKey=YYYYYYYYY)!

• Now test it: aws ec2 describe-instances
S3 Command Line Examples
aws s3 mb s3://bucket                          # make a bucket
aws s3 cp localfile s3://bucket/key            # upload a file
aws s3 mv s3://bucket/key s3://bucket/newname  # rename an object
aws s3 ls s3://bucket                          # list the bucket
aws s3 rm s3://bucket/key                      # delete an object
aws s3 rb s3://bucket                          # remove the bucket

aws s3 help
aws s3 ls help
EC2 Command Line Examples
aws ec2 describe-instances
aws ec2 run-instances --image-id ami-xxxxx --count 1 \
    --instance-type t1.micro --key-name keyfile
aws ec2 stop-instances --instance-ids i-xxxxxx

aws ec2 help
aws ec2 start-instances help
Security Impact
• Security directives are more important, but more difficult to achieve
• Traditional methods of managing security aren’t scaling to the growth of
the threat landscape
• There is more at stake
• Security cannot be a blocker of innovative business

[Chart: Pace of Innovation – Security vs. All]
Security & Cloud
• Who manages which parts?

AWS Shared Responsibility Model

Customers are responsible for their security and compliance IN the Cloud:
– Customer content
– Platform, Applications, Identity & Access Management
– Operating System, Network & Firewall Configuration
– Client-side Data Encryption, Server-side Data Encryption, Network
Traffic Protection

AWS is responsible for the security OF the Cloud:
– AWS Foundation Services: Compute, Storage, Database, Networking
– Infrastructure: Regions, Availability Zones, AWS Global Edge Locations
AWS Shared Responsibility Model: for Infrastructure Services

Managed by customers:
– Customer content
– Platform & Applications, IAM Management
– Operating System, Network & Firewall Configuration
– Client-Side Data Encryption & Data Integrity Authentication
– Server-Side Encryption (File System and/or Data)
– Network Traffic Protection (Encryption / Integrity / Identity)
– Optional – Opaque data: 1’s and 0’s (in transit / at rest)

Managed by AWS:
– AWS Foundation Services: Compute, Storage, Database, Networking
(AWS IAM)
– Infrastructure: Regions, Availability Zones, AWS Global Edge Locations
Identity Access Management (IAM)
With AWS IAM you get to control who can do what in
your AWS environment and from where

• Root in AWS is the same as Root in Windows/Linux
• Password Policies
• IAM Credentials Reports
• Manage Access Keys
• Fine-grained control of users, groups, roles, and permissions to
resources
• Integrate with your existing corporate directory using SAML 2.0 and
single sign-on
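
A hedged sketch of user/group management with the CLI; the names are
placeholders, and the policy ARN is an AWS managed policy:

    aws iam create-user --user-name alice
    aws iam create-group --group-name developers
    aws iam add-user-to-group --user-name alice --group-name developers
    aws iam attach-group-policy --group-name developers \
        --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess   # grant permissions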
AWS Config

Fully managed service which provides:
• An inventory of your AWS resources
• Lets you audit the resource configuration history
• Notifies you of resource configuration changes
Use cases enabled by Config
• Security Analysis: Am I safe?
– Config allows you to continuously monitor and evaluate the
configuration of workloads
• Audit Compliance: Where is the evidence?
– Complete inventory of all resources and their configuration attributes
at any point in time
• Change Management: What will this change affect?
– All resource changes (create, update, delete) streamed to SNS
• Troubleshooting: What has changed?
– Identify changes in resource-to-resource relationships
Amazon Elastic Container Service

Key Components
• Docker Daemon
• Task Definitions
• Containers
• Clusters
• Container Instances
Typical User Workflow

I have a Docker image, and I want to run the image on a cluster:

1. Push Image(s) – push the Docker image(s) to a registry
2. Create Task Definition – declare the resource requirements (Amazon ECS)
3. Run Instances (EC2) – use a custom AMI with Docker support and the ECS
Agent; instances will register with the default cluster
4. Describe Cluster – get information about cluster state and available
resources (Amazon ECS)
5. Run Task – using the task definition created above (Amazon ECS)
6. Describe Cluster – get information about cluster state and running
containers (Amazon ECS)
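
A hedged CLI sketch of this workflow; the cluster name, task family and
container definition are placeholders:

    aws ecs create-cluster --cluster-name my-cluster
    aws ecs register-task-definition --family my-task \
        --container-definitions '[{"name":"web","image":"my-image","memory":256}]'
    aws ecs describe-clusters --clusters my-cluster          # inspect cluster state
    aws ecs run-task --cluster my-cluster --task-definition my-task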
AWS Elastic Beanstalk

• Easy to deploy and manage applications in AWS
• Application Container / PaaS (Platform-as-a-Service)
• Get started at no charge (free usage tier)

Elastic Beanstalk handles:
• Capacity provisioning (EC2)
• Load balancing (ELB)
• Auto-scaling (Auto Scaling)
• Application health monitoring (CloudWatch)
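
A hedged sketch using the separate EB CLI tool; the application name,
platform and environment name are placeholders:

    eb init my-app --platform php --region us-east-1   # configure the application
    eb create my-env                                   # provision EC2, ELB, Auto Scaling, CloudWatch
    eb deploy                                          # push a new application version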

AWS Elastic Beanstalk - Environments

.NET: AWS SDK for .NET; Visual Studio Toolkit; Microsoft Web Deploy /
upload ZIP; Microsoft Windows + IIS
PHP: AWS SDK for PHP; existing dev tools; Git-based deployment (git push);
Linux + Apache
Java: AWS SDK for Java; Eclipse Toolkit; upload WAR file; Linux + Tomcat +
Apache
CloudFormation
• Why CloudFormation?
• It’s a service that lets you model and provision AWS
resources using templates.
JSON Template Structure
• AWSTemplateFormatVersion
• Description
• Metadata
• Parameters
• Mappings
• Conditions
• Outputs
• Resources
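
A hedged minimal template sketch; the single S3 bucket resource and the
stack name are placeholder examples:

    template.json:
    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Description": "Minimal example: one S3 bucket",
      "Resources": {
        "DemoBucket": { "Type": "AWS::S3::Bucket" }
      }
    }

    aws cloudformation create-stack --stack-name demo-stack \
        --template-body file://template.json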
Demo
• EC2 creation, deletion
• EC2 deployment
• Auto Scaling
• AMI
• VPC
• Subnet
• Routing Table
• IG (Internet Gateway)
• Elastic Beanstalk
• ELB – Elastic Load Balancer
• Lambda
• CloudFormation
• S3 Bucket
• Systems Manager – for patching
• IAM Role creation
