Introduction

1. Region: A physical location in the world that consists of two or more Availability Zones
(AZs).
2. Availability Zone: One or more discrete data centers – each with redundant power,
networking, and connectivity – housed in separate facilities.
3. Edge Location: An endpoint for AWS used for caching content. Typically, this
consists of CloudFront, Amazon's CDN.

1. Storage
a. S3
b. EBS
c. EFS
d. FSx
e. Storage Gateway
2. Databases
a. RDS
b. DynamoDB
c. Redshift
3. Networking
a. VPCs
b. Direct Connect
c. Route 53
d. API Gateway
e. AWS Global Accelerator

AWS Whitepapers

Read through

AWS Well-Architected Framework:

1. Operational Excellence: Running and monitoring systems to deliver business value, and
continually improving processes and procedures
2. Performance Efficiency: Using IT and computing resources efficiently
3. Security: Protecting information and systems
4. Cost Optimization: Avoiding unnecessary costs
5. Reliability: Ensuring a workload performs its intended functions correctly and consistently
when it’s expected to.
6. Sustainability: Minimizing the environmental impacts of running cloud workloads.

Identity and Access Management:


IAM allows you to manage users and their level of access to the AWS console

1. Create users and grant permissions to those users


2. Create groups and roles
3. Control access to AWS resources

The root account is the email address you used to sign up for AWS. The root account has full
administrative access.

Exam Tips

1. Turn on MFA on the root account


2. Create an admin group for administrators and assign the appropriate permissions to this
group.
3. Create user accounts for your admins
4. Add users to your admin groups

Controlling users’ actions with IAM policy documents

JavaScript Object Notation = JSON

Policy documents are written in JSON.

IAM Policy Documents can be assigned to Groups, Users and Roles.

Exam Tips

1. Assign Permissions Using IAM Policy Documents Consisting of JSON
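As a sketch, a policy document is just a JSON object with a Version and a list of Statements. The bucket name and actions below are illustrative, not from the course:

```python
import json

# Hypothetical read-only policy for a single S3 bucket.
# "example-bucket" is an illustrative name, not a real resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

# Policy documents are plain JSON, so they serialize directly.
policy_json = json.dumps(policy, indent=2)
```

The same JSON shape is what you attach to groups, users, and roles.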

Permanent Identity and Access Management Credentials

The building blocks:

1. Users: A physical person. One user equals one person. Never share user accounts across
different people.
2. Groups: Functions, such as admins, developers etc. Contains users.
3. Roles: Internal usage within AWS.

It’s best practice for users to inherit permissions from groups

The Principle of Least Privilege

Only assign a user the minimum number of privileges they need to do their job.
Exam Tips

1. IAM is Universal: It does not apply to specific regions; it is global


2. The root account: the account created when you first set up your AWS account and which
has complete admin access. Secure it as soon as possible and do not use it to log in day to
day.
3. New Users: No permissions when first created.
4. Access key IDs and secret access keys are not the same as usernames and passwords
5. You only get to view these once: If you lose them, you need to regenerate them. So, save them in
a secure location.
6. Always set up password rotations: You can create and customize your own password
rotation policies.
7. IAM Federation: You can combine your existing user account with AWS. For example, when
you log on to your PC (usually using Microsoft Active Directory), you can use the same
credentials to log in to AWS if you set up federation.
8. Identity Federation: Uses the SAML standard; federating with Active Directory is the most common example.

S3
S3 Overview

1. Object Storage: S3 provides secure, durable, highly scalable object storage.


2. Scalable: S3 allows you to store and retrieve any amount of data from anywhere on the web
at a very low cost
3. Simple: Amazon S3 is easy to use, with a simple web service interface.

S3 Basics

Manages data as objects rather than in file systems or data blocks.

1. Unlimited storage
2. Objects up to 5 TB in size: from 0 bytes to 5 TB
3. S3 Buckets: Files are stored in buckets, which are similar to folders

Working with S3 Buckets:

1. Universal Namespace: All AWS accounts share the S3 namespace. Each S3 bucket name is
globally unique
2. Example S3 URL: https://bucket-name.s3.Region.amazonaws.com/key-name
3. Uploading files: When you upload a file to S3, you receive an HTTP 200 code on a
successful upload
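The URL pattern above can be sketched as a small helper; the bucket, region, and key values below are made up for illustration:

```python
def object_url(bucket: str, region: str, key: str) -> str:
    """Build a virtual-hosted-style S3 object URL:
    https://bucket-name.s3.Region.amazonaws.com/key-name"""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

# Illustrative values only.
url = object_url("my-bucket", "us-east-1", "photos/cat.jpg")
# -> https://my-bucket.s3.us-east-1.amazonaws.com/photos/cat.jpg
```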

S3 file features

1. Key: name of the object


2. Version Id: For multiple versions of the same object
3. Value: the data itself
4. Metadata: The data about the data.

S3 data is stored redundantly across multiple devices and facilities to deliver high availability and high durability.

1. Built for availability


2. Designed for durability

S3 Standard

1. Default version of S3 when you store a file.


2. Designed for Frequent access to data
3. Suitable for Most Workloads

Tiered Storage

S3 offers a range of storage classes designed for different use cases

Lifecycle Management

Defines rules to automatically transition objects to a cheaper storage tier or delete objects that are
no longer required after a set period of time

Versioning

With versioning, all versions of an object are stored and can be retrieved including deleted objects.

Securing your data

1. Server-side Encryption
a. You can set default encryption on a bucket to encrypt all new objects when they are stored
2. Access Control Lists
a. Define which AWS accounts or groups are granted access and the type of access. You
can attach S3 ACLs to individual objects within a bucket
3. Bucket Policies
a. S3 bucket policies specify what actions are allowed or denied. They are JSON policies

Strong Read After Write Consistency

- After a successful write of a new object (PUT) or an overwrite of an existing object, any
subsequent read request immediately receives the latest version of the object
- Strong consistency for list operations, so after a write, you can immediately perform a list
operation and your file will be there.

Securing your S3 Bucket.

Object ACLS vs Bucket Policies

- Object ACLS work on an individual object level.


- Bucket Policy work on an entire bucket level.

S3 uses a global namespace, so every bucket name must be unique.


- Buckets are private by default: When you create an S3 bucket, it is private by default
(including all objects within it). You have to allow public access on both the bucket and its
objects in order to make the bucket public.
- Object ACLs: You can make individual objects public by using ACLs.
- Bucket policies: You can make an entire bucket public using bucket policies.
- HTTP status code: When you upload an object to S3 and it's successful, you will receive an
HTTP 200 code.

Hosting a Static Website on S3

- You can use S3 to host static websites, such as .html sites.


- Dynamic websites, such as those that require database connections, cannot be hosted on
S3.
- S3 Scales Automatically to meet demand.
- Bucket policies: Make entire bucket public
- Static Content: Use S3 to host static content only. NOT DYNAMIC

What is Versioning?

You can enable versioning in S3 so you can have multiple versions of an object within S3.

Advantages of Versioning

1. All Versions: All versions of an object are stored in S3. This includes all writes and even if
you delete an object.
2. Backup: Can be a great backup tool
3. Cannot be Disabled: Once enabled, versioning cannot be disabled – only suspended.
4. Lifecycle Rules: Can be integrated with lifecycle rules.
5. Supports MFA Delete

S3 Storage Classes

1. S3 Standard
a. High Availability and Durability: Data is stored redundantly across multiple devices
in multiple facilities (>= 3 AZs):
i. 99.99% availability
ii. 99.999999999% durability (11 9's)
b. Designed for Frequent Access: Perfect for frequently accessed data.
c. Suitable for Most Workloads
i. The default storage class.
ii. Use cases include websites, content distribution, mobile and gaming
applications, and big data analytics.
2. S3 Standard-Infrequent Access (S3 Standard-IA)
a. Rapid Access: Used for data that is accessed less frequently but requires rapid
access when needed.
b. You pay to access the data. There is a low per-GB storage price and per-GB retrieval
fee
c. Use Cases: Great for long-term storage, backups, and as a data store for disaster
recovery files.
3. S3 One Zone-Infrequent Access
a. Like S3 Standard-IA, but data is stored redundantly within a single AZ.
b. Costs 20% less than regular S3 Standard-IA
c. Great for long-lived, infrequently accessed, non-critical data.
4. S3 Intelligent Tiering
a. Frequent and Infrequent Access: Automatically moves your data to the most cost-
effective tier based on how frequently you access each object.
b. Optimize Costs: Small monthly monitoring and automation charge (around $0.0025 per 1,000 objects)

Glacier Options

- You pay each time you access your data


- Use only for archiving data
- Glacier is cheap storage
- Optimized for data that is very infrequently accessed.
1. Glacier Instant Retrieval: Provides long-term data archiving with instant retrieval time for
your data. Suited to data accessed roughly once a quarter.
2. Glacier Flexible Retrieval: Ideal storage class for archive data that does not require
immediate access but needs the flexibility to retrieve large sets of data at no cost, such as
backup or disaster recovery use cases. Can be minutes or up to 12 hours.
3. Glacier Deep Archive (Cheapest Storage Class): Cheapest storage class and designed for
customers that retain data sets for 7-10 years or longer to meet customers’ needs and
regulatory compliance requirements. The standard retrieval time is 12hrs, and the bulk
retrieval time is 48hrs.
Exam Tips

- If we’re working on any type of archival data, then we choose S3 Glacier


- Look at how fast you want to retrieve the data and what’s the cheapest option.
- In terms of the cost, understand how it is priced both S3 and Glacier
- S3 standard is the most expensive one
- If you want to cost optimized, then use S3 Intelligent Tiering
- Infrequently accessed is Glacier for retrieval fee
- S3 Glacier is the most expensive in Glacier

Lifecycle Management with S3

- Lifecycle Management automates moving your objects between different storage tiers,
thereby maximizing cost effectiveness.
- Example: keep objects in S3 Standard for 30 days, transition to S3 IA after 30 days, then to
Glacier after 90 days.

Combining Lifecycle Management with Versioning: You can use lifecycle management to move
different versions of objects to different storage tiers.
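As a sketch, the 30/90-day example above maps to a lifecycle configuration shaped like the following (the rule ID and empty prefix are illustrative choices; treat the exact field names as an assumption to verify against the S3 docs):

```python
# Hypothetical lifecycle configuration: objects stay in S3 Standard,
# transition to Standard-IA after 30 days, then to Glacier after 90 days.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-old-objects",   # illustrative rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},      # empty prefix = whole bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}
```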

Exam Tips:

- Automates moving objects between storage tiers


- Can be used in conjunction with versioning
- Can be applied to current and non-current versions

S3 Object Lock and Glacier Vault

1. S3 Object Lock: You can use S3 Object Lock to store objects using a write once, read many
(WORM) model. It can help prevent objects from being deleted or modified for a fixed
amount of time or indefinitely. You can use S3 Object Lock to meet regulatory requirements
that require WORM storage or add an extra layer of protection against changes and
deletion. S3 Object Modes:
a. Governance Mode: Users can’t overwrite or delete an object version or alter its
lock settings unless they have special permissions. With governance mode, you
protect objects against being deleted by most users, but you can still grant some
users permission to alter the retention settings or delete the object if necessary
b. Compliance Mode: a protected object version can’t be overwritten or deleted by
any user, including the root user in your AWS account. When an object is locked in
compliance mode, its retention mode can’t be changed, and its retention period
can’t be shortened. Compliance mode ensures an object version can’t be
overwritten or deleted for the duration of the retention period.
2. Retention Period: Protects an object version for a fixed amount of time. When you place a
retention period on an object version S3 stores a timestamp in the object version’s metadata
to indicate when the retention period expires. After the retention period expires, the object
version can be overwritten or deleted unless you also placed a legal hold on the object.
3. Legal Hold: S3 Object Lock also enables you to place a legal hold on an object version. Like a
retention period, a legal hold prevents an object version from being overwritten or deleted.
However, a legal hold doesn't have an associated retention period and remains in effect until
removed. Legal holds can be freely placed and removed by a user who has the
s3:PutObjectLegalHold permission.
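The retention-period timestamp described above can be sketched in a few lines; the dates used are illustrative:

```python
from datetime import datetime, timedelta, timezone

def retain_until(placed_at: datetime, retention_days: int) -> datetime:
    """The timestamp stored in the object version's metadata to indicate
    when the retention period expires; until then the version cannot be
    overwritten or deleted (and afterwards it can be, absent a legal hold)."""
    return placed_at + timedelta(days=retention_days)

placed = datetime(2024, 1, 1, tzinfo=timezone.utc)
expiry = retain_until(placed, 365)  # 2024-12-31, since 2024 is a leap year
```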

Glacier Vault Lock: allows you to easily deploy and enforce compliance controls for individual
S3 Glacier vaults with a vault lock policy. You can specify controls, such as WORM, in a vault
lock policy and lock the policy from future edits. Once locked, the policy can no longer be
changed.

Encrypting S3 Objects

Types of Encryption

1. Encryption in Transit: When sending objects to and from buckets


a. SSL/TLS
b. HTTPS
2. Encryption at Rest: Server-Side Encryption
a. SSE-S3: S3 managed keys, using AES 256-bit encryption
b. SSE-KMS: AWS Key Management service-managed keys
c. SSE-C: Customer provided keys
3. Encryption at Rest: Client-Side Encryption: you can encrypt the files yourself before you
upload them to S3.

Enforcing Server-Side Encryption

1. Console: Select the encryption setting on your S3 bucket. The easiest way is just a checkbox
in the console.
2. Bucket Policy: You can enforce encryption using a bucket policy. This method sometimes
comes up in the exam. You can create a bucket policy that denies any S3 PUT request that
doesn't include the x-amz-server-side-encryption parameter in the request header.

Optimizing S3 Performance.

S3 Prefixes

- mybucketname/folder1/subfolder1/myfile.jpg > prefix: /folder1/subfolder1


- mybucketname/folder2/subfolder1/myfile.jpg > prefix: /folder2/subfolder1
- mybucketname/folder3/myfile.jpg > prefix: /folder3
- mybucketname/folder4/subfolder4/myfile.jpg > prefix: /folder4/subfolder4

S3 Performance

- S3 has extremely low latency. You can get the first byte out of S3 within 100-200
milliseconds.
- You can also achieve a high number of requests: 3,500 PUT/COPY/POST/DELETE and 5,500
GET/HEAD requests per second, per prefix

The more prefixes you use, the higher the performance you can achieve.

- You can get better performance by spreading your reads across different prefixes. For
example, if you’re using 2 prefixes you will achieve 11,000 requests per second
- If we used all 4 prefixes in the last example, then we’d get 22,000 requests per second
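The per-prefix numbers above multiply out as follows:

```python
GET_RPS_PER_PREFIX = 5_500  # GET/HEAD requests per second, per prefix
PUT_RPS_PER_PREFIX = 3_500  # PUT/COPY/POST/DELETE requests per second, per prefix

def max_get_rps(prefix_count: int) -> int:
    """Aggregate GET/HEAD throughput when reads are spread evenly
    across this many prefixes."""
    return prefix_count * GET_RPS_PER_PREFIX

two_prefixes = max_get_rps(2)   # 11,000 requests per second
four_prefixes = max_get_rps(4)  # 22,000 requests per second
```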

Limitations with KMS

- If you are using SSE-KMS to encrypt your objects in S3, you must keep the KMS limits in mind
- When you upload a file, you will call GenerateDataKey in the KMS API
- When you download a file, you will call Decrypt in the KMS API

Multipart Upload for Uploads

- Recommended for files over 100MB


- Required for files over 5 GB
- Parallelize uploads (increase efficiency)
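A quick sketch of the part math (the 100 MiB part size below is an arbitrary illustrative choice, not an S3 requirement):

```python
import math

MiB = 1024 ** 2
GiB = 1024 ** 3

def part_count(object_size: int, part_size: int) -> int:
    """How many parts a multipart upload needs at a given part size."""
    return math.ceil(object_size / part_size)

# A 5 GiB object (the single-PUT limit) split into 100 MiB parts:
parts = part_count(5 * GiB, 100 * MiB)  # 52 parts (51 full + 1 partial)
```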

S3 Byte-Range Fetches for downloads

- Parallelize downloads by specifying byte ranges


- If there’s a failure in the download, its only for a specific byte range.
- Can be used to speed-up downloads
- Can be used for partial downloads
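Byte-range fetches can be sketched as generating Range headers that together cover the object, so each chunk downloads (and retries) independently:

```python
def byte_ranges(object_size: int, chunk_size: int) -> list[str]:
    """HTTP Range header values covering the whole object; a failed
    download only has to retry its own byte range."""
    return [
        f"bytes={start}-{min(start + chunk_size, object_size) - 1}"
        for start in range(0, object_size, chunk_size)
    ]

ranges = byte_ranges(10, 4)  # ['bytes=0-3', 'bytes=4-7', 'bytes=8-9']
```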
Backing up Data with S3 Replication

S3 Replication:

1. You can replicate objects from one bucket to another.


a. Versioning must be enabled on both the source and destination buckets
2. Objects in an existing bucket are not replicated automatically.
a. Once replication is turned on, all subsequent updated objects will be replicated
automatically.

Chapter 3 EC2
EC2 = Elastic Compute Cloud: Secure, resizable compute capacity in the cloud.

- Like a VM, only hosted in AWS instead of your own data center
- Designed to make web-scale cloud computing easier for developers
- The capacity you want when you need it
- You are in complete control of your own instances

Pricing Options

1. On Demand: Pay by the hour or the second, depending on the type of instances you run.
a. Flexible: Low cost and flexibility of Amazon EC2 without any upfront payment or
long-term commitment
b. Short-Term: Applications with short-term, spiky, or unpredictable workloads that
cannot be interrupted
c. Testing the Water: Applications being developed or tested on Amazon EC2 for the
first time
2. Reserved: Reserved Capacity for 1 to 3 years. Up to 72% discount on the hourly charge
a. Predictable Usage: Application with steady state or predictable usage
b. Specific Capacity Requirements: Applications that require reserved capacity
c. Pay up Front: You can make upfront payments to reduce the total computing costs
even further
d. Standard RIs: Up to 72% off the on-demand price when paying all upfront for a 3-year term
e. Convertible RIs: Up to 54% off the on-demand price. Has the option to change to a
different RI type of equal or greater value
f. Scheduled RIs: Launch within the time window you define. Match your capacity
reservation to a predictable recurring schedule that only requires a fraction of a day,
week, or month.
g. Reserved instances operate at a regional level.
h. Savings Plans with Reserved Instances
i. Save up to 72%: All AWS compute usage regardless of instance type or
Region
ii. Commit to 1 to 3 years: Commit to use a specific amount of compute power
(measured by the hour) for a 1-year or 3-year period
iii. Super Flexible: Not only EC2; this also includes serverless technologies
like Lambda and Fargate
3. Spot: Purchase unused capacity at a discount of up to 90%. Prices fluctuate with supply and
demand.
a. Applications that have flexible start and end times. Don't use Spot for websites.
b. Applications that are only feasible at a very low compute price
c. Users with an urgent need for large amounts of additional computing capacity
d. Examples of usage:
i. Image Rendering
ii. Genomic sequencing
iii. Algorithmic trading engines
4. Dedicated: A physical EC2 server dedicated for your use. The most expensive option. Great if
you have server-bound licenses to reuse or compliance requirements.
a. Compliance: Regulatory requirements that may not support multi-tenant
virtualization.
b. Licensing: Great for licensing that does not support multi-tenancy or cloud
deployments
c. On-Demand: Can be purchased on Demand
d. Reserved: Can be purchased as a reservation for up to 70% off.

Exam Tips:

- What is EC2?
- What are the pricing options?

Demo: Launching an EC2 Instance.

- Step 1: Choose an Amazon Machine Image


- Step 2: Choose Instance Type
- Step 3: Configure Instance Details
- Step 4: Add Storage
- Step 5: Add Tags
- Step 6: Configure Security Group

Subnet: A subnet resides within a single Availability Zone.

- Key pair: created or selected at the end of launching an EC2 instance; used to securely connect to the instance (e.g., via SSH).

AWS Command Line

AWS Management Console

The AWS command line allows you to interact with AWS simply by typing commands.

IAM – Create New User and New Group and give it access to the S3

Least Privilege – Always give your users the minimum amount of access required to do their job

Use Groups – Create IAM groups and assign your users to groups. Group permissions are assigned
using IAM policy documents. Your users automatically inherit the permissions of their groups.
AWS CLI Exam Tips

1. Secret Access Key – You will only see this once! If you lose it, you can delete the access key
ID and secret access key and regenerate them. You will need to run aws configure again.
2. Don't Share Key Pairs – Each developer should have their own access key ID and secret access
key. Just like passwords, they should not be shared.
3. Supports Linux, Windows, Mac OS

Using Roles

What is an IAM Role?

- A role is an identity you can create in IAM that has specific permissions. A role is similar to a
user, as it is an AWS identity with permission policies that determine what the identity can
and cannot do in AWS.
- However, instead of being uniquely associated with one person, a role is intended to be
assumed by anyone who needs access to it

Roles are Temporary

- A role does not have standard long-term credentials the same way passwords or access keys
do. Instead, when you assume a role, it provides you with temporary security credentials for
your role session.

What else can Roles Do?

- Roles can be assumed by people, AWS architecture, or other system-level accounts


- Roles can allow cross-account access. This allows one AWS account the ability to interact
with resources in other AWS accounts.

Console Demo

Exam Tips

1. Preferred Options: Roles are the preferred option from a security perspective
2. Avoid Hard Coding your credentials: Roles allow you to provide access without the use of
access key IDs and secret access keys.
3. Policies: Policies control a role’s permissions.
4. Updates: you can update a policy attached to a role and it will take immediate effect.
5. Attaching and Detaching: you can attach and detach roles to running EC2 instances without
having to stop or terminate those instances.

Always choose roles over hardcoding your credentials.


Security Groups and Bootstrap Scripts

How computers communicate

- Linux: SSH Port 22


- Windows: RDP Port 3389
- HTTP: Web Browsing Port 80
- HTTPS: encrypted web browsing (SSL) Port 443

Security Groups

- Security groups are virtual firewalls for your EC2 instances. By default, everything is blocked.
- To let everything in, allow 0.0.0.0/0
- To be able to communicate to your EC2 instances via SSH/RDP/HTTP, you will need to open
up the correct ports

Bootstrap Scripts

- A script that runs when the instance first runs

Exam Tips

1. Changes to security groups take effect immediately


2. You can have any number of EC2 instances within a security group
3. You can have multiple security groups attached to EC2 instances
4. All inbound traffic is blocked by default
5. All outbound traffic allowed
EC2 Metadata and User Data

What is EC2 Metadata?

- EC2 metadata is simply data about your EC2 instance. This can include information such as
private IP address, public IP address, hostname, and security groups

Retrieving Metadata

- Using the curl command, we can query metadata about our EC2 instance.

Using User Data to Save Metadata

- In a simple bootstrap script, we can use curl to query the metadata service and save the result to a file.

Exam Tips

- User data is simply bootstrap scripts


- Metadata is data about your EC2 instances.
- You can use bootstrap scripts (user data) to access metadata.
Networking with EC2

Different Virtual Networking Options

You can attach 3 different types of virtual networking cards to your EC2 instances.

1. ENI: Elastic Network Interface: For basic, day-to-day networking


a. Private IPv4 Addresses
b. Public IPv4 Address
c. Many IPv6 Addresses
d. MAC address
e. 1 or More Security Groups
f. Common ENI Use Cases
i. Create a management network
ii. Use network and security applications in your VPC
iii. Create dual-homed instances with workloads/roles on distinct subnets
iv. Create a low-budget, high-availability solution
2. EN: Enhanced Networking: Uses single root I/O virtualization (SR-IOV) to provide high
performance (10 Gbps to 100 Gbps).
a. SR-IOV provides higher I/O performances and lower CPU utilization
b. Provides higher bandwidth, higher packet per second (PPS) performance, and
consistently lower inter-instance latencies
c. Elastic Network Adapter (ENA): Supports network speeds of up to 100 Gbps for
supported instance types. Always preferred
d. Intel 82599 Virtual Function (VF) Interface: Supports network speeds of up to
10Gbps for supported instance types. Typically used on older instances.
3. EFA: Elastic Fabric Adapter: Accelerated High-performance computing (HPC) and machine
learning application
a. A network device you can attach to your Amazon EC2 instance to accelerate High-
performance computing (HPC) and machine learning applications.
b. Provides lower and more consistent latency and higher throughput than the TCP
transport traditionally used in cloud based HPC systems.
c. EFA can use OS-Bypass: It makes it a lot faster with much lower latency.

Exam Tips:

1. ENI: for basic networking. Perhaps you need a separate management network from your
production network or a separate logging network, and you need to do this at a low cost. In
this scenario, use multiple ENIs for each network.
2. Enhanced Networking: For when you need speeds between 10 Gbps and 100 Gbps.
Anywhere you need reliable, high throughput
3. EFA: for when you need to accelerate high performance computing (HPC) and machine
learning applications or if you need to do an OS-bypass. If you see a scenario question
mentioning HPC or ML and asking what network adapter you want, choose EFA.
Optimizing with EC2 Placement

3 types of placement groups

1. Cluster: Grouping of instances within a single AZ. Recommended for applications that need
low network latency, high network throughput, or both. Fact: Only certain instance types
can be launched into a cluster placement group.
2. Spread: A spread placement group is a group of instances that are each placed on
distinct underlying hardware. Spread placement groups are recommended for applications
that have a small number of critical instances that should be kept separate from each other.
Used for individual instances.
3. Partition: Each partition placement group has its own set of racks. Each rack has its own
network and power source. No two partitions within a placement group share the same
racks, allowing you to isolate the impact of a hardware failure within your application.
Multiple EC2 instances: HDFS

Exam Tips:

1. A cluster placement group can’t span multiple AZs, whereas a spread placement group and
partition placement group can.
2. Only certain types of instances can be launched in a placement group (compute optimized,
GPU, memory optimized, storage optimized)
3. AWS recommends homogenous instances within cluster placement groups
4. You can’t merge placement groups
5. You can move an existing instance into a placement group. Before you move the instance,
the instance must be in the stopped state. You can move or remove an instance using the
AWS CLI or an AWS SDK, but you can’t do it via the console yet.

Solving Licensing Issues with Dedicated Hosts.

- Any question that talks about special licensing requirements.


- An AWS EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to
your use. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM
software licenses, including Windows Server, Microsoft SQL Server, and SUSE Linux Enterprise
Server.
Timing Workloads with Spot

What are EC2 Spot Instances?

- AWS EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS Cloud.
Spot Instances are available at up to 90% discount compared to On-Demand Prices
- WHEN TO USE SPOT INSTANCES?
o Stateless, fault-tolerant, or flexible applications
 Applications such as big data, containerized workloads, CI/CD, high-
performance computing (HPC), and other test and development workloads

Spot Prices

- To use Spot Instances, you must first decide on your maximum spot price; the instance will
be provisioned so long as the spot price is below your maximum spot price
- The hourly spot price varies depending on capacity and region
- If the spot price goes above your maximum, you have 2 minutes to choose whether to stop or
terminate your instance
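The provisioning rule above reduces to a simple comparison (the prices used are illustrative):

```python
def spot_instance_runs(current_spot_price: float, max_price: float) -> bool:
    """A Spot Instance stays provisioned only while the current spot price
    does not exceed your maximum price; otherwise you get a 2-minute
    warning to stop or terminate the instance."""
    return current_spot_price <= max_price

keeps_running = spot_instance_runs(0.03, 0.05)      # price under your max
gets_warned = not spot_instance_runs(0.07, 0.05)    # price over your max
```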

Spot Blocks

- You may also use a Spot Block to stop your Spot Instances from being terminated even if the
spot price goes over your max spot price. You can currently set Spot Blocks for between 1
and 6 hours.

Use Cases

- Big data and analytics


- Containerized Workloads
- CI/CD and testing
- Image and media rendering
- High performance computing

NOT USED FOR:

1. Persistent Workloads
2. Critical Jobs
3. Databases
Terminating Spot Instances

What are Spot Fleets?

- A spot fleet is a collection of spot instances and (optionally) On-Demand instances.


- The spot fleet attempts to launch the number of spot instances and On-Demand instances to
meet the target capacity you specified in the Spot Fleet request. The request for spot
instances is fulfilled if there is available capacity and the maximum price you specified in the
request exceeds the current spot price. The Spot Fleet also attempts to maintain its target
capacity if your Spot Instances are interrupted.
- Spot Fleets will try and match the target capacity with your price restraints.

Launch Pools

1. Setup different launch pools. Define things like EC2 instance type, operating system, and AZ.
2. You can have multiple pools, and the fleet will choose the best way to implement depending
on the strategy you define.
3. Spot Fleets will stop launching instances once you reach your price threshold or your
desired capacity.

Strategies

- capacityOptimized: The Spot Instances come from the pool with optimal capacity for the number
of instances launching
- lowestPrice: The Spot Instances come from the pool with the lowest price. This is the default
strategy
- diversified: The Spot Instances are distributed across all pools.
- instancePoolsToUseCount: The Spot Instances are distributed across the number of Spot
Instance pools you specify. This parameter is valid only when used in combination with
lowestPrice.
Exam Tips

- Spot Instances save up to 90% of the cost of On-Demand Instances


- Useful for any type of computing where you don’t need persistent storage
- You can block Spot Instances from terminating by using a Spot Block
- A Spot Fleet is a collection of Spot Instances and (optionally) On-Demand instances.
Deploying vCenter in AWS with VMware Cloud on AWS

Why use VMware on AWS?

- VMware is used by organizations around the world for private cloud deployments. Some
organizations opt for a hybrid cloud strategy and would like to leverage AWS services.

Use Cases

1. Hybrid Cloud: Connect your on-premises cloud to the AWS public cloud, and manage a
hybrid workload
2. Cloud Migration: Migrate your existing cloud environment to AWS using VMware’s built-in
tools
3. Disaster Recovery: VMware is famous for its disaster recovery technology. Using hybrid
cloud, you can have an inexpensive disaster recovery environment on AWS.
4. Leverage AWS: Use over 200 services to update your applications or to create new ones

Deployment

1. It runs on dedicated hardware hosted in AWS using a single AWS account


2. Each host has two sockets with 18 cores per socket, 512 GiB RAM, and 15.2 TB Raw SSD
Storage.
3. Each host can run multiple VMware instances (up to the hundreds)
4. Clusters can start with two hosts and scale up to a maximum of 16 hosts per cluster

Exam Tips

1. You can deploy vCenter on the AWS Cloud using VMware Cloud on AWS


2. Perfect solution for extending your private VMware Cloud into the AWS Public Cloud.

Extending AWS Beyond the Cloud with AWS Outposts

What are Outposts?

- Outposts brings the AWS data center directly to you, on-premises. Outposts allows you to
have the large variety of AWS services in your data center. You can have Outposts in sizes
such as 1U and 2U servers all the way up to 42U racks and multiple-rack deployments.

Benefits

1. Hybrid Cloud
2. Fully Managed Infrastructure
3. Consistency

Family Members

1. Outposts Rack: for large deployments


a. Hardware: Available starting with a single 42U rack and scale up to 96 racks
b. Services: Provides AWS compute, storage, database, and other services locally
c. Results: Gives the same AWS infrastructure, services, and APIs in your own data
center
2. Outposts Servers: for small deployments
a. Hardware: Individual servers in 1U or 2U form factors
b. Use Cases: Useful for small space requirements, such as retail stores, branch offices,
health care provider locations or factory floors.
c. Results: Provides local compute and networking services.
EC2 Exam Tips

- EC2 is like a VM, hosted in AWS instead of your own data center
- Select the capacity you need right now, grow and shrink when you need, Pay for what you
use
- EC2 Instance Pricing Option
o On-Demand
o Spot
o Reserved
o Dedicated
- AWS Command Line Interface
o Least Privilege
o Use Groups
- Roles are the Preferred Option
- Avoid Hard Coding Your Credentials
- Policies control a role’s permissions
Elastic Block Storage (EBS) and Elastic File System (EFS)

EBS (Elastic Block Storage) Overview

Understanding EBS Volumes

- Storage volumes you can attach to your EC2 instances.


- Use them the same way you would use any system disk
o Create a file system
o Run a database
o Run an operating system
o Store data
o Install applications

Mission Critical

1. Production Workloads: Designed for mission-critical workloads


2. Highly Available: Automatically replicated within a single AZ to protect against hardware
failures
3. Scalable: Dynamically increase capacity and change volume type with no downtime or
performance impact to your live systems.

EBS Volume Types: Solid-State Drive (SSD) (used for boot volumes and installing applications)

1. General Purpose SSD (gp2): a balance of price and performance


a. 3 IOPS per GiB, up to a maximum of 16,000 IOPS per volume
b. gp2 volumes smaller than 1 TB can burst up to 3,000 IOPS
c. Good for boot volumes or development and test applications that are not latency
sensitive
2. General Purpose SSD (gp3): A balance of price and performance (also suitable as a boot
volume)
a. Predictable 3,000 IOPS baseline performance and 125 MiB/s throughput regardless of
volume size
b. Ideal for applications that require high performance at a low cost, such as MySQL,
Cassandra, virtual desktops, and Hadoop analytics
c. Customers looking for higher performance can scale up to 16,000 IOPS and 1,000
MiB/s for an additional fee
3. Provisioned IOPS SSD (io1): The high-performance option and the most expensive
a. Up to 64,000 IOPS per volume; 50 IOPS per GiB
b. Use if you need more than 16,000 IOPS
c. Designed for I/O-intensive applications, large databases, and latency-sensitive
workloads
4. Provisioned IOPS SSD (io2): Latest generation, with higher durability and more IOPS
a. Same price as io1
b. 500 IOPS per GiB, up to 64,000 IOPS
c. 99.999% durability
d. I/O-intensive apps, large databases, and latency-sensitive workloads; applications
that need high levels of durability
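The baseline-IOPS rules above reduce to a quick calculation. A simplified sketch (real volumes also have burst credits and other limits not modeled here; the 100-IOPS gp2 floor is included for completeness):

```python
def gp2_baseline_iops(size_gib: int) -> int:
    # gp2 scales at 3 IOPS per GiB, capped at 16,000 IOPS per volume,
    # with a minimum baseline of 100 IOPS for very small volumes.
    return max(100, min(3 * size_gib, 16_000))

# gp3 delivers a flat 3,000 IOPS baseline regardless of volume size.
GP3_BASELINE_IOPS = 3_000

# e.g. a 6,000 GiB gp2 volume hits the 16,000 IOPS per-volume cap:
# 3 * 6,000 = 18,000 -> capped at 16,000
print(gp2_baseline_iops(6_000))
```

Note the crossover: a gp2 volume needs 1,000 GiB (3 x 1,000 = 3,000 IOPS) just to match the gp3 baseline.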
EBS Volume Types: Hard Disk Drive (MB/s-intensive)

1. Throughput Optimized HDD (st1): Low-cost HDD Volume


a. Baseline throughput of 40 MB/s per TB
b. Ability to burst up to 250 MB/s per TB
c. Maximum throughput of 500 MB/s per volume
d. Frequently accessed, throughput-intensive workloads
e. Big data, data warehouses, ETL, and log processing
f. A cost-effective way to store mountains of data
g. Cannot be a boot volume
2. Cold HDD (sc1): Lowest-cost option
a. Baseline throughput of 12 MB/s per TB
b. Ability to burst up to 80 MB/s per TB
c. Maximum throughput of 250 MB/s per volume
d. A good choice for colder data requiring fewer scans per day
e. Good for applications that need the lowest cost and performance is not a factor
f. Cannot be a boot volume
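The per-TB baselines and per-volume caps above can be sketched as one formula (a simplified model; burst behavior is ignored):

```python
def hdd_baseline_mbps(size_tib: float, volume_type: str) -> float:
    # Baseline throughput scales per TiB and is capped per volume:
    #   st1: 40 MB/s per TiB, max 500 MB/s
    #   sc1: 12 MB/s per TiB, max 250 MB/s
    per_tib, cap = {"st1": (40, 500), "sc1": (12, 250)}[volume_type]
    return min(size_tib * per_tib, cap)

# A 16 TiB st1 volume: 16 * 40 = 640 -> capped at 500 MB/s
print(hdd_baseline_mbps(16, "st1"))
```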

IOPS vs. Throughput

- IOPS measures the number of read/write operations per second; important for quick
transactions and latency-sensitive applications (choose io1/io2)
- Throughput measures the amount of data read/written per second (MB/s); important for
large, sequential workloads such as big data (choose st1)

Exam Tips
Volumes and Snapshots

What are Volumes?

- Volumes exist on EBS


- Volumes are simply virtual hard disks. You need a minimum of one volume per EC2 instance;
this is called the root device volume

What are Snapshots?

- Snapshots exist on S3: Think of snapshots as a photograph of the virtual disk/volume.


- Snapshots are point in time: When you take a snapshot, it is a point-in-time copy of a
volume
- Snapshots are incremental: only the data that has changed since your last snapshot is
moved to S3. This saves dramatically on space and the time it takes to take a
snapshot
- The first snapshot: If it is your first snapshot, it may take some time to create as there is no
previous point-in-time copy.
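A back-of-the-envelope illustration of why incremental snapshots save space (the volume size and per-snapshot change rates below are made-up numbers):

```python
volume_gib = 100

# GiB moved to S3 per snapshot: the first is a full copy of the volume,
# later snapshots store only the blocks changed since the previous one.
deltas = [volume_gib, 5, 2, 8]

stored = sum(deltas)  # total S3 storage consumed by all four snapshots
print(stored)  # 115 GiB, versus 4 * 100 = 400 GiB for four full copies
```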

3 Tips for Snapshots

- Consistent Snapshots: Snapshots only capture data that has been written to your EBS
volume, which might exclude any data that has been locally cached by your application or
OS. For a consistent snapshot, it is recommended you stop the instance before taking the snapshot
- Encrypted Snapshots: If you take a snapshot of an encrypted EBS volume, the snapshot will
be encrypted automatically
- Sharing Snapshots: You can share snapshots, but only in the region in which they were
created. To share to other regions, you will need to copy them to the destination region first

What to Know about EBS Volumes

1. Location: EBS volumes will always be in the same AZ as the EC2 instance


a. Your EBS volume will always be in the same AZ as the EC2 instance to which it is
attached.
2. Resizing: Resize on the fly
a. You can resize the EBS Volumes on the fly. You do not need to stop or restart the
instance. However, you will need to extend the filesystem in the OS so the OS can
see the resized volume.
3. Volume Types: Switch volume types on the fly
a. You can change volume types on the fly (e.g., go from gp2 to io2); you do not need
to stop or restart the instance.

Console Demo

Exam Tips

1. Volumes exist on EBS, whereas snapshots exist on S3


2. Snapshots are point-in-time photographs of volumes and are incremental in nature
3. The first snapshot will take some time to create. For consistent snapshots, stop the instance
and detach the volume.
4. You can share snapshots between AWS accounts as well as between regions, but you need
to copy that snapshot to the target region
5. You can resize EBS volumes on the fly, as well as change the volume type
Protecting EBS Volumes with Encryption

EBS Encryption

- EBS encrypts your volume with a data key using the industry-standard AES-256 algorithm.
Amazon EBS encryption uses AWS Key Management Service (AWS KMS) customer master keys
(CMKs) when creating encrypted volumes and snapshots.
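Conceptually this is envelope encryption: each volume is encrypted with its own data key, and the data key is itself stored encrypted under the KMS key. A toy sketch of the flow (the XOR "cipher" here is a stand-in for AES-256 and is not real cryptography; key values are random placeholders):

```python
import hashlib
import os

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # Stand-in for AES-256: XOR against a key-derived keystream.
    # XOR is its own inverse, so the same call encrypts and decrypts.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

cmk = os.urandom(32)       # 1. KMS holds the CMK; it never leaves KMS
data_key = os.urandom(32)  # 2. EBS encrypts the volume with a per-volume data key

# 3. The data key itself is stored encrypted under the CMK
encrypted_data_key = toy_cipher(cmk, data_key)

block = b"volume data"
ciphertext = toy_cipher(data_key, block)

# To read the volume: decrypt the data key with the CMK, then the data
recovered_key = toy_cipher(cmk, encrypted_data_key)
assert toy_cipher(recovered_key, ciphertext) == block
```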

What happens when you encrypt an EBS volume?


EC2 Hibernation
EFS Overview
FSx Overview
Amazon Machine Images: EBS
AWS Backup
EBS Exam Tips
