hackettjoe / Amazon-AWS-SAA-C02
My study material and notes for taking the AWS SAA-C02 exam
My SAA-C02 Notes
These are my personal notes from the AWS documentation and a Udemy course. I would
highly recommend this course by Stephane: aws-sa-associate-saac02.
Table of Contents
https://github.com/hackettjoe/Amazon-AWS-SAA-C02 1/64
8/20/2020 hackettjoe/Amazon-AWS-SAA-C02: My study material and notes for taking the AWS SAA-C02 exam
AWS-Infrastructure
IAM-Accounts-Organizations
Amazon-S3
VPC
EC2
Containers
Additional-EC2-Info
Route53
Amazon-RDS
Network-Storage-EFS
HA-Scale
Serverless-Services
CDN-Network-Optimization
Other-VPC-Topics
Migration
Security-Operations
DynamoDB
AWS-Infrastructure
Global Infrastructure
Regions
AWS has the concept of a Region, which is a physical location around the world where we
cluster data centers. We call each group of logical data centers an Availability Zone. Each
AWS Region consists of multiple, isolated, and physically separate AZ's within a geographic
area. Each AZ has independent power, cooling, and physical security and is connected via
redundant, ultra-low-latency networks. Note, a region is simply a cluster of datacenters
where most AWS services are region-scoped.
Region names look like us-east-1 or eu-west-3 and correspond to geographic areas such as countries or states:
North America
California
Singapore
Beijing
London
Paris
Tokyo
AWS maintains multiple geographic Regions, including Regions in North America, South
America, Europe, China, Asia Pacific, South Africa, and the Middle East. To view a region
near you or your users click here.
Availability Zones
An Availability Zone (AZ) is one or more discrete data centers with redundant power,
networking, and connectivity in an AWS Region. All AZ’s in an AWS Region are
interconnected with high-bandwidth, low-latency networking, over fully redundant,
dedicated metro fiber providing high-throughput, low-latency networking between AZ’s.
All traffic between AZ’s is encrypted. AZ’s are physically separated by a meaningful
distance, many kilometers, from any other AZ, although all are within 100 km (60 miles) of
each other.
Each Region has multiple AZ's, usually 3 (minimum of 2, maximum of 6).
Benefits of Regions
Security
Availability
Performance
Global Footprint
Scalability
Flexibility
Local Zones
AWS Local Zones are a new type of AWS infrastructure deployment that places AWS
compute, storage, database, and other select services closer to large population, industry,
and IT centers where no AWS Region exists today. You can easily run latency-sensitive
portions of applications local to end-users and resources in a specific geography,
delivering single-digit millisecond latency. Each AWS Local Zone location is an extension of
an AWS Region where you can run your latency-sensitive applications. Local Zones are
managed and supported by AWS.
Use Cases: Media & Entertainment Content Creation, Real-time Multiplayer Gaming,
Reservoir Simulations, Electronic Design Automation, and Machine Learning.
IAM-Accounts-Organizations
AWS IAM
AWS Identity and Access Management (IAM) is a web service that helps you securely
control access to AWS resources. You use IAM to control who is authenticated (signed in)
and authorized (has permissions) to use resources.
User
An AWS Identity and Access Management (IAM) user is an entity that you create in
AWS to represent the person or application that uses it to interact with AWS. A user in
AWS consists of a name and credentials.
Principals
A person or application that uses the AWS account root user, an IAM user, or an IAM
role to sign in and make requests to AWS.
Role
An IAM identity that you can create in your account that has specific permissions. An
IAM role has some similarities to an IAM user. Roles and users are both AWS identities
with permissions policies that determine what the identity can and cannot do in AWS.
Use IAM roles to supply credentials when using the CLI from within AWS
Roles can be attached to EC2 instances
For example, when making secure calls from EC2 to S3 buckets, create an IAM role
granting least privilege and assign it to the EC2 instance
Federation
The creation of a trust relationship between an external identity provider and AWS.
Users can sign in to a web identity provider, such as Login with Amazon, Facebook,
Google, or any IdP that is compatible with OpenID Connect (OIDC). Users can also
sign in to an enterprise identity system that is compatible with Security Assertion
Markup Language (SAML) 2.0, such as Microsoft Active Directory Federation Services.
Permissions boundary
An advanced feature in which you use policies to limit the maximum permissions that
an identity-based policy can grant to a role. You cannot apply a permissions boundary
to a service-linked role.
Use a managed policy as the permissions boundary for an IAM entity (user or role).
That policy defines the maximum permissions that the identity-based policies can
grant to an entity, but does not grant permissions. Permissions boundaries do not
define the maximum permissions that a resource-based policy can grant to an entity.
When you use a policy to set the permissions boundary for a user, it limits the user's
permissions but does not provide permissions on its own.
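The interaction can be sketched as a set intersection. This is a simplified illustrative model (it ignores explicit denies and resource-based policies), and the action names are just examples:

```python
def effective_permissions(identity_allows, boundary_allows):
    # A permissions boundary never grants anything on its own: the
    # effective permissions are the intersection of what the
    # identity-based policy allows and what the boundary allows
    # (explicit denies and resource-based policies are ignored here).
    return identity_allows & boundary_allows

identity = {"s3:GetObject", "s3:PutObject", "ec2:StartInstances"}
boundary = {"s3:GetObject", "s3:PutObject"}  # boundary caps the user to S3

print(sorted(effective_permissions(identity, boundary)))
# → ['s3:GetObject', 's3:PutObject']
```

Note that ec2:StartInstances is allowed by the identity policy but dropped, because the boundary does not also allow it.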
Organizations SCP
Use an AWS Organizations service control policy (SCP) to define the maximum
permissions for account members of an organization or organizational unit (OU). SCPs
limit permissions that identity-based policies or resource-based policies grant to
entities (users or roles) within the account, but do not grant permissions.
SCPs are JSON policies that specify the maximum permissions for an organization or
organizational unit (OU). The SCP limits permissions for entities in member accounts,
including each AWS account root user. An explicit deny in any of these policies
overrides the allow.
As an administrator of the master account of an organization, you can use service
control policies (SCPs) to specify the maximum permissions for member accounts in
the organization.
SCPs provide a single point of maintenance and organization for root OUs and AWS
Organizations
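As an illustration, a minimal SCP (built here as a Python dict and serialized to JSON) might deny every action outside two approved Regions. The statement name and Region list are hypothetical; aws:RequestedRegion is a real global condition key:

```python
import json

# Hypothetical SCP: deny any action (except a few global services) when
# the request targets a Region outside the approved list. Remember that
# an SCP never grants permissions; it only caps what member accounts can do.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "sts:*"],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}
        },
    }],
}

print(json.dumps(scp, indent=2))
```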
Inline Policy: Policies that you create and manage and that are embedded directly into
a single user, group, or role. In most cases, we don't recommend using inline policies.
Managed Policy: Standalone identity-based policies that you can attach to multiple
users, groups, and roles in your AWS account. You can use two types of managed
policies: AWS managed policies and customer managed policies.
Amazon Resource Names (ARNs) uniquely identify AWS resources. We require an ARN
when you need to specify a resource unambiguously across all of AWS, such as in IAM
policies, Amazon Relational Database Service (Amazon RDS) tags, and API calls.
ARN format:
arn:partition:service:region:account-id:resource-id
arn:partition:service:region:account-id:resource-type/resource-id
arn:partition:service:region:account-id:resource-type:resource-id
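Because the resource portion may itself contain `:` or `/`, an ARN should be split on at most five colons. A small hypothetical parser:

```python
def parse_arn(arn):
    # Split on ':' at most 5 times so the resource portion may itself
    # contain ':' or '/' (e.g. "resource-type/resource-id").
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError("not a valid ARN: " + arn)
    keys = ("prefix", "partition", "service", "region", "account_id", "resource")
    return dict(zip(keys, parts))

# S3 ARNs omit the region and account ID, so those fields come back empty.
print(parse_arn("arn:aws:s3:::my-bucket/photos/cat.jpg"))
```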
AWS Organizations
AWS Organizations is an account management service that enables you to consolidate
multiple AWS accounts into an organization that you create and centrally manage. AWS
Organizations includes account management and consolidated billing capabilities that
enable you to better meet the budgetary, security, and compliance needs of your business.
As an administrator of an organization, you can create accounts in your organization and
invite existing accounts to join the organization.
Organization Root
The parent container for all the accounts for your organization. If you apply a policy to the
root, it applies to all organizational units (OUs) and accounts in the organization.
SCPs are similar to AWS Identity and Access Management (IAM) permission policies and
use almost the same syntax. However, an SCP never grants permissions. Instead, SCPs are
JSON policies that specify the maximum permissions for an organization or organizational
unit (OU).
SCPs affect only principals that are managed by accounts that are part of the
organization.
An SCP restricts permissions for principals in member accounts, including each AWS
account root user.
SCPs do not affect any service-linked role.
Amazon-S3
S3 Overview
Amazon S3 has a simple web services interface that you can use to store and retrieve any
amount of data, at any time, from anywhere on the web.
Buckets
A bucket is a container for objects stored in Amazon S3. Every object is contained in a
bucket.
Objects
Objects are the fundamental entities stored in Amazon S3. Objects consist of object
data and metadata.
Keys
A key is the unique identifier for an object within a bucket. Every object in a bucket
has exactly one key. The combination of a bucket, key, and version ID uniquely identify
each object.
Bucket Policies
Bucket policies provide centralized access control to buckets and objects based on a
variety of conditions, including Amazon S3 operations, requesters, resources, and aspects
of the request (for example, IP address). The policies are expressed in the access policy
language and enable centralized management of permissions. The permissions attached to
a bucket apply to all of the objects in that bucket.
Unlike access control lists (described later), which can add (grant) permissions only on
individual objects, policies can either add or deny permissions across all (or a subset)
of objects within a bucket.
To host a static website on Amazon S3, you configure an Amazon S3 bucket for
website hosting and then upload your website content to the bucket. When you
configure a bucket as a static website, you must enable website hosting, set
permissions, and create and add an index document.
You can then access the bucket through the AWS Region-specific Amazon S3 website
endpoints for your bucket
S3 doesn't support HTTPS access for website endpoints (use CloudFront instead)
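As a concrete example of the IP-address condition mentioned above, a bucket policy might allow reads only from one CIDR range. The bucket name and range are placeholders; `aws:SourceIp` is a real condition key:

```python
import json

# Hypothetical bucket policy: allow anonymous reads of objects in
# "example-bucket", but only from one corporate CIDR range.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetFromCorpRange",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

print(json.dumps(bucket_policy, indent=2))
```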
MFA Delete
You can optionally add another layer of security by configuring a bucket to enable MFA
(multi-factor authentication) Delete, which requires additional authentication for either of
the following operations:
Changing the versioning state of your bucket
Permanently deleting an object version
S3 Transfer Acceleration
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long
distances between your client and an S3 bucket. Transfer Acceleration takes advantage of
Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge
location, data is routed to Amazon S3 over an optimized network path.
Use Cases
You have customers that upload to a centralized bucket from all over the world
You transfer gigabytes to terabytes of data on a regular basis across continents
You are unable to utilize all of your available bandwidth over the Internet when
uploading to Amazon S3
S3 Encryption
Types
When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each
object is encrypted with a unique key. As an additional safeguard, it encrypts the key
itself with a master key that it regularly rotates. Amazon S3 server-side encryption
uses one of the strongest block ciphers available, 256-bit Advanced Encryption
Standard (AES-256), to encrypt your data.
AWS owned and managed keys (AWS manages the encryption and decryption process
automatically)
Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key
Management Service (SSE-KMS)
Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key
Management Service (SSE-KMS) is similar to SSE-S3, but with some additional benefits
and charges for using this service. There are separate permissions for the use of a
CMK that provides added protection against unauthorized access of your objects in
Amazon S3. SSE-KMS also provides you with an audit trail that shows when your CMK
was used and by whom. Additionally, you can create and manage customer managed
CMKs or use AWS managed CMKs that are unique to you, your service, and your
Region.
With Server-Side Encryption with Customer-Provided Keys (SSE-C), you manage the
encryption keys and Amazon S3 manages the encryption, as it writes to disks, and
decryption, when you access your objects.
the encryption keys are stored and managed by your organization, not by AWS
Client-Side Encryption
Encrypt data client-side and upload the encrypted data to Amazon S3. In this case,
you manage the encryption process, the encryption keys, and related tools.
Objects are encrypted by client before they are sent to AWS
S3 Standard
The default storage class. If you don't specify the storage class when you upload an
object, Amazon S3 assigns the S3 Standard storage class.
instantaneous retrieval time
no minimum storage duration
objects can be stored for any length of time, from hours to years
use cases: big data, mobile, gaming data
S3 Intelligent-Tiering
designed to optimize storage costs by automatically moving data to the most cost-
effective storage access tier, without performance impact or operational overhead
Intelligent-Tiering storage class is ideal if you want to optimize storage costs
automatically for long-lived data when access patterns are unknown or unpredictable
suitable for objects larger than 128 KB that you plan to store for at least 30 days
S3 Standard-IA and One Zone-IA
The S3 Standard-IA and S3 One Zone-IA storage classes are designed for long-lived and
infrequently accessed data: long-term storage, backups, and disaster recovery files,
where keeping the data safe is the most important requirement.
S3 Glacier
Use for archives where portions of the data might need to be retrieved in minutes
Data stored in the S3 Glacier storage class has a minimum storage duration period of
90 days and can be accessed in as little as 1-5 minutes using expedited retrieval
Retrieval options: Expedited (1-5 minutes), Standard (3-5 hours), and Bulk (5-12 hours)
S3 Lifecycle Management
To manage your objects so that they are stored cost effectively throughout their lifecycle,
configure their Amazon S3 Lifecycle.
Transition actions — Define when objects transition to another storage class. For
example, you might choose to transition objects to the S3 Standard-IA storage class
30 days after you created them, or archive objects to the S3 Glacier storage class one
year after creating them.
Expiration actions — Define when objects expire. Amazon S3 deletes expired objects
on your behalf.
Use Cases:
If you upload periodic logs to a bucket, your application might need them for a week
or a month. After that, you might want to delete them.
Some documents are frequently accessed for a limited period of time. After that, they
are infrequently accessed. At some point, you might not need real-time access to
them, but your organization or regulations might require you to archive them for a
specific period. After that, you can delete them.
You might upload some types of data to Amazon S3 primarily for archival purposes.
For example, you might archive digital media, financial and healthcare records, raw
genomics sequence data, long-term database backups, and data that must be
retained for regulatory compliance.
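The log-retention use case above could be expressed as a single lifecycle rule. This is shown as a Python dict roughly in the shape the S3 lifecycle configuration API expects; the rule ID, prefix, and day counts are examples:

```python
# Hypothetical rule: move logs to Standard-IA after 30 days, archive them
# to Glacier after a year, and delete them after ~7 years.
lifecycle_rule = {
    "ID": "log-retention",
    "Filter": {"Prefix": "logs/"},
    "Status": "Enabled",
    "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 365, "StorageClass": "GLACIER"},
    ],
    "Expiration": {"Days": 2555},
}

# Sanity check: every transition happens before the object expires.
assert all(t["Days"] < lifecycle_rule["Expiration"]["Days"]
           for t in lifecycle_rule["Transitions"])
```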
S3 Replication
Replication enables automatic, asynchronous copying of objects across Amazon S3
buckets. Buckets that are configured for object replication can be owned by the same AWS
account or by different accounts. You can copy objects between different AWS Regions or
within the same Region.
Use Cases: meet compliance requirements that mandate storing copies at a greater geographic distance, minimize latency by keeping object copies in Regions closer to users, and maintain copies under different ownership or in different storage classes.
S3 Presigned URL
All objects by default are private. Only the object owner has permission to access
these objects. However, the object owner can optionally share objects with others by
creating a presigned URL, using their own security credentials, to grant time-limited
permission to download the objects.
When you create a presigned URL for your object, you must provide your security
credentials, specify a bucket name, an object key, specify the HTTP method (GET to
download the object) and expiration date and time. The presigned URLs are valid only
for the specified duration.
Anyone who receives the presigned URL can then access the object. For example, if
you have a video in your bucket and both the bucket and the object are private, you
can share the video with others by generating a presigned URL.
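The mechanism can be sketched with a simplified HMAC scheme. Real S3 presigned URLs use AWS Signature Version 4, which signs far more of the request; this toy version only illustrates the idea of a time-limited, tamper-evident link, and every name in it is hypothetical:

```python
import hashlib
import hmac
import time

def presign(secret_key, bucket, key, expires_in):
    # Embed an expiry timestamp plus an HMAC over (method, bucket, key,
    # expiry). Anyone holding the URL can use it until it expires; anyone
    # who tampers with it invalidates the signature.
    expires = int(time.time()) + expires_in
    msg = "GET\n{}\n{}\n{}".format(bucket, key, expires).encode()
    sig = hmac.new(secret_key.encode(), msg, hashlib.sha256).hexdigest()
    return "https://{}.s3.amazonaws.com/{}?Expires={}&Signature={}".format(
        bucket, key, expires, sig)

url = presign("my-secret", "example-bucket", "video.mp4", 3600)
print(url)
```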
VPC
Private IP Addressing
Amazon VPC supports IPv4 and IPv6 addressing, and has different CIDR block size quotas
for each. By default, all VPCs and subnets must have IPv4 CIDR blocks—you can't change
this behavior. You can optionally associate an IPv6 CIDR block with your VPC.
Internet Gateway
Your default VPC includes an internet gateway, and each default subnet is a public
subnet.
Each instance that you launch into a default subnet has a private IPv4 address and a
public IPv4 address. These instances can communicate with the internet through the
internet gateway. An internet gateway enables your instances to connect to the
internet through the Amazon EC2 network edge.
You can enable internet access for an instance launched into a nondefault subnet by
attaching an internet gateway to its VPC (if its VPC is not a default VPC) and
associating an Elastic IP address with the instance.
Bastion Host
Including bastion hosts in your VPC environment enables you to securely connect to your
Linux instances without exposing your environment to the Internet. After you set up your
bastion hosts, you can access the other instances in your VPC through Secure Shell (SSH)
connections on Linux. Bastion hosts are also configured with security groups to provide
fine-grained ingress control.
When you add new instances to the VPC that require management access from the
bastion host, make sure to associate a security group ingress rule, which references
the bastion security group as the source, with each instance. It is also important to
limit this access to the required ports for administration.
During deployment, the public key from the selected Amazon EC2 key pair is
associated with the user ec2-user in the Linux instance. For additional users, you
should create users with the required permissions and associate them with their
individual authorized public keys for SSH connectivity.
For the bastion host instances, you should select the number and type of instances
according to the number of users and operations to be performed. The Quick Start
creates one bastion host instance and uses the t2.micro instance type by default, but
you can change these settings during deployment.
Set your desired expiration time directly in the CloudWatch Logs log group for the
logs collected from each bastion instance. This ensures that bastion log history is
retained only for the amount of time you need.
Keep your CloudWatch log files separated for each bastion host instance so that you
can filter and isolate log messages from individual bastion hosts more easily. Every
instance that is launched by the bastion Auto Scaling group creates its own log
stream based on the instance ID.
NACL basics:
Your VPC automatically comes with a modifiable default network ACL. By default, it
allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic.
You can create a custom network ACL and associate it with a subnet. By default, each
custom network ACL denies all inbound and outbound traffic until you add rules.
Each subnet in your VPC must be associated with a network ACL. If you don't explicitly
associate a subnet with a network ACL, the subnet is automatically associated with the
default network ACL.
You can associate a network ACL with multiple subnets. However, a subnet can be
associated with only one network ACL at a time. When you associate a network ACL
with a subnet, the previous association is removed.
A network ACL contains a numbered list of rules. We evaluate the rules in order,
starting with the lowest numbered rule, to determine whether traffic is allowed in or
out of any subnet associated with the network ACL. The highest number that you can
use for a rule is 32766. We recommend that you start by creating rules in increments
(for example, increments of 10 or 100) so that you can insert new rules where you
need to later on.
A network ACL has separate inbound and outbound rules, and each rule can either
allow or deny traffic.
Network ACLs are stateless, which means that responses to allowed inbound traffic are
subject to the rules for outbound traffic (and vice versa).
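The first-match, lowest-number-first evaluation order can be sketched as follows; the rule numbers and port ranges are made up:

```python
def evaluate_nacl(rules, port):
    # Rules are checked in ascending rule-number order; the first rule
    # whose port range matches decides the outcome. The implicit final
    # "*" rule denies anything left unmatched.
    for number, (low, high), action in sorted(rules):
        if low <= port <= high:
            return action
    return "deny"

rules = [
    (100, (80, 80), "allow"),     # allow HTTP
    (110, (443, 443), "allow"),   # allow HTTPS
    (120, (0, 65535), "deny"),    # explicitly deny everything else
]

print(evaluate_nacl(rules, 443))  # → allow
print(evaluate_nacl(rules, 22))   # → deny
```

Numbering rules in increments of 10 or 100, as recommended above, leaves room to insert new rules between existing ones later.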
Security Groups
A security group acts as a virtual firewall for your instance to control inbound and
outbound traffic. When you launch an instance in a VPC, you can assign up to five
security groups to the instance. Security groups act at the instance level, not the
subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a
different set of security groups.
If you launch an instance using the Amazon EC2 API or a command line tool and you
don't specify a security group, the instance is automatically assigned to the default
security group for the VPC. If you launch an instance using the Amazon EC2 console,
you have an option to create a new security group for the instance.
For each security group, you add rules that control the inbound traffic to instances,
and a separate set of rules that control the outbound traffic. This section describes the
basic things that you need to know about security groups for your VPC and their rules.
Security groups are stateful: responses to allowed inbound traffic are automatically
allowed out (and vice versa), regardless of the outbound rules
NACLs are stateless: responses to allowed inbound traffic must be explicitly
permitted by the outbound rules (and vice versa)
Security groups support allow rules only (traffic is denied unless explicitly allowed)
NACLs support both allow and deny rules
All rules in a security group are evaluated before a decision is made, whereas NACL
rules are evaluated in order, lowest rule number first, and the first matching rule applies
A NACL is the first layer of defense at the subnet boundary, and the security group is
the second layer at the instance level
NAT Gateway
enables instances in a private subnet to connect to the internet or other AWS services,
but prevent the internet from initiating a connection with those instances
You are charged for creating and using a NAT gateway in your account. NAT gateway
hourly usage and data processing rates apply.
do not support IPv6 (use egress-only gateway instead)
Each NAT gateway is created in a specific Availability Zone and implemented with
redundancy in that zone.
supports 5 Gbps of bandwidth and automatically scales up to 45 Gbps
supports TCP, UDP, and ICMP
cannot associate a security group with a NAT gateway
You can use a network ACL to control the traffic to and from the subnet in which the
NAT gateway is located.
use NAT Gateway over NAT Instance
VPC Peering
A VPC peering connection is a networking connection between two VPCs that enables you
to route traffic between them privately. Instances in either VPC can communicate with each
other as if they are within the same network. You can create a VPC peering connection
between your own VPCs, with a VPC in another AWS account, or with a VPC in a different
AWS Region.
You can establish peering relationships between VPCs across different AWS Regions (also
called Inter-Region VPC Peering). This allows VPC resources including EC2 instances,
Amazon RDS databases and Lambda functions that run in different AWS Regions to
communicate with each other using private IP addresses, without requiring gateways, VPN
connections, or separate network appliances. The traffic remains in the private IP space. All
inter-region traffic is encrypted, with no single point of failure or bandwidth bottleneck.
Traffic always stays on the global AWS backbone and never traverses the public internet,
which reduces threats such as common exploits and DDoS attacks. Inter-Region VPC
Peering provides a simple and cost-effective way to share resources between regions or
replicate data for geographic redundancy.
EC2
Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the
AWS cloud. Simply put, it's IaaS.
| Instance ID | Purpose |
| --- | --- |
| A1 | General purpose |
| C4 | Compute optimized |
| C5 | Compute optimized |
| D2 | Storage optimized |
| F1 | Accelerated computing |
| G3 | Accelerated computing |
| G4 | Accelerated computing |
| H1 | Storage optimized |
| I3 | Storage optimized |
| M4 | General purpose |
| M5 | General purpose |
| P2 | Accelerated computing |
| P3 | Accelerated computing |
| R4 | Memory optimized |
| R5 | Memory optimized |
| T2 | General purpose |
| T3 | General purpose |
| X1 | Memory optimized |
EC2 Storage
EBS volumes are created in a specific Availability Zone, and can then be attached to
any instances in that same Availability Zone. To make a volume available outside of
the Availability Zone, you can create a snapshot and restore that snapshot to a new
volume anywhere in that Region. You can copy snapshots to other Regions and then
restore them to new volumes there, making it easier to leverage multiple AWS Regions
for geographical expansion, data center migration, and disaster recovery.
General Purpose SSD volumes offer a base performance of 3 IOPS/GiB, with the ability to
burst to 3,000 IOPS for extended periods of time. These volumes are ideal for a broad
range of use cases such as boot volumes, small and medium-size databases, and
development and test environments.
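That baseline can be computed directly. Note that the 100 IOPS floor and 16,000 IOPS cap below come from the EBS gp2 documentation, not from the paragraph above:

```python
def gp2_baseline_iops(size_gib):
    # gp2 baseline: 3 IOPS per GiB, floored at 100 IOPS and capped at
    # 16,000 IOPS (floor/cap figures per the EBS gp2 docs).
    return min(max(100, 3 * size_gib), 16000)

print(gp2_baseline_iops(10))    # → 100 (small volumes get the floor)
print(gp2_baseline_iops(334))   # → 1002
print(gp2_baseline_iops(6000))  # → 16000 (cap)
```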
Provisioned IOPS SSD volumes support up to 64,000 IOPS and 1,000 MiB/s of throughput.
This allows you to predictably scale to tens of thousands of IOPS per EC2 instance.
Throughput Optimized HDD volumes provide low-cost magnetic storage that defines
performance in terms of throughput rather than IOPS. These volumes are ideal for large,
sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing.
Cold HDD volumes provide low-cost magnetic storage that defines performance in terms
of throughput rather than IOPS. These volumes are ideal for large, sequential, cold-data
workloads. If you require infrequent access to your data and are looking to save costs,
these volumes provide inexpensive block storage.
You can specify instance store volumes for an instance only when you launch it. You
can't detach an instance store volume from one instance and attach it to a different
instance.
The data in an instance store persists only during the lifetime of its associated
instance. If an instance reboots (intentionally or unintentionally), data in the instance
store persists. However, data in the instance store is lost under any of the following
circumstances:
The underlying disk drive fails
The instance stops
The instance terminates
Therefore, do not rely on instance store for valuable, long-term data. Instead, use
more durable data storage, such as Amazon S3, Amazon EBS, or Amazon EFS.
When you stop or terminate an instance, every block of storage in the instance store is
reset. Therefore, your data cannot be accessed through the instance store of another
instance.
Instance store offers very high performance, with millions of IOPS (read and write)
EBS Snapshots
You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-
time snapshots. Snapshots are incremental backups, which means that only the blocks on
the device that have changed after your most recent snapshot are saved. This minimizes
the time required to create the snapshot and saves on storage costs by not duplicating
data. When you delete a snapshot, only the data unique to that snapshot is removed. Each
snapshot contains all of the information that is needed to restore your data (from the
moment when the snapshot was taken) to a new EBS volume.
The first snapshot you take of an encrypted volume that has been created from an
unencrypted snapshot is always a full snapshot.
The first snapshot you take of a reencrypted volume, which has a different CMK
compared to the source snapshot, is always a full snapshot.
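The incremental behaviour can be modelled with simple block maps. This is a toy model of the idea, not the EBS implementation, and for simplicity it only compares against the immediately preceding snapshot:

```python
def take_snapshot(volume_blocks, previous=None):
    # The first snapshot stores every block; later snapshots store only
    # the blocks that changed since the previous one (incremental).
    if previous is None:
        return dict(volume_blocks)
    return {i: data for i, data in volume_blocks.items()
            if previous.get(i) != data}

volume = {0: "aaa", 1: "bbb", 2: "ccc"}
snap1 = take_snapshot(volume)           # full: stores all 3 blocks
volume[1] = "BBB"                       # one block changes
snap2 = take_snapshot(volume, snap1)    # incremental: stores only block 1

print(len(snap1), len(snap2))  # → 3 1
```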
EBS Encryption
Amazon EBS encryption is an encryption solution for your EBS volumes and snapshots. It
uses AWS Key Management Service (AWS KMS) customer master keys (CMK).
All AMIs are categorized as either backed by Amazon EBS, which means that the root
device for an instance launched from the AMI is an Amazon EBS volume, or backed by
instance store, which means that the root device for an instance launched from the
AMI is an instance store volume created from a template stored in Amazon S3.
On-Demand Instances – Pay, by the second, for the instances that you launch. No
long-term commitment required.
Savings Plans – Reduce your Amazon EC2 costs by making a commitment to a
consistent amount of usage, in USD per hour, for a term of 1 or 3 years.
Reserved Instances – Reduce your Amazon EC2 costs by making a commitment to a
consistent instance configuration, including instance type and Region, for a term of 1
or 3 years.
Standard: These provide the most significant discount, but can only be modified, not exchanged.
Convertible: These provide a lower discount than Standard Reserved Instances,
but can be exchanged for another Convertible Reserved Instance with different
instance attributes. Convertible Reserved Instances can also be modified.
Scheduled Instances – Purchase instances that are always available on the specified
recurring schedule, for a one-year term.
Instance Metadata
Instance metadata is data about your instance that you can use to configure or manage
the running instance. Instance metadata is divided into categories, for example, host name,
events, and security groups.
Instance Profile: Amazon EC2 uses an instance profile as a container for an IAM role. When
you create an IAM role using the IAM console, the console creates an instance profile
automatically and gives it the same name as the role to which it corresponds. If you use
the Amazon EC2 console to launch an instance with an IAM role or to attach an IAM role to
an instance, you choose the role based on a list of instance profile names.
If you use the AWS CLI, API, or an AWS SDK to create a role, you create the role and
instance profile as separate actions, with potentially different names. If you then use
the AWS CLI, API, or an AWS SDK to launch an instance with an IAM role or to attach
an IAM role to an instance, specify the instance profile name.
An instance profile can contain only one IAM role. This limit cannot be increased.
Containers
Amazon Elastic Container Service (ECS) is a regional service.
To deploy applications on Amazon ECS, your application components must be
architected to run in containers using Docker.
Amazon Elastic Container Registry (ECR) is a managed AWS Docker registry service.
Customers can use the Docker CLI to push, pull, and manage images.
Clusters are Region-specific. A cluster may contain a mix of both Auto Scaling group
capacity providers and Fargate capacity providers.
Additional-EC2-Info
You can reference AWS Systems Manager Parameter Store parameters in a number of
other AWS services, including the following:
AWS CloudFormation
AWS CodeBuild
AWS CodePipeline
AWS CodeDeploy
EC2 Logging
You can monitor your instances using Amazon CloudWatch, which collects and processes
raw data from Amazon EC2 into readable, near real-time metrics. These statistics are
recorded for a period of 15 months, so that you can access historical information and gain
a better perspective on how your web application or service is performing.
By default, Amazon EC2 sends metric data to CloudWatch in 5-minute periods. To send
metric data for your instance to CloudWatch in 1-minute periods, you can enable detailed
monitoring on the instance.
By default, your instance is enabled for basic monitoring. You can optionally enable
detailed monitoring. After you enable detailed monitoring, the Amazon EC2 console
displays monitoring graphs with a 1-minute period for the instance.
Basic monitoring: data is available in 5-minute periods at no charge.
Detailed monitoring: data is available in 1-minute periods for an additional charge.
Cluster Placement
When you launch a new EC2 instance, the EC2 service attempts to place the instance in
such a way that all of your instances are spread out across underlying hardware to
minimize correlated failures. You can use placement groups to influence the placement of
a group of interdependent instances to meet the needs of your workload. Depending on
the type of workload, you can create a placement group using one of the following
placement strategies:
1. Cluster – packs instances close together inside an Availability Zone. This strategy
enables workloads to achieve the low-latency network performance necessary for
tightly-coupled node-to-node communication that is typical of HPC applications.
Cluster placement groups are recommended for applications that benefit from low
network latency, high network throughput, or both. They are also recommended when
the majority of the network traffic is between the instances in the group.
2. Partition – spreads your instances across logical partitions such that groups of
instances in one partition do not share the underlying hardware with groups of
instances in different partitions. This strategy is typically used by large distributed and
replicated workloads, such as Hadoop, Cassandra, and Kafka.
Partition placement groups can be used to deploy large distributed and replicated
workloads, such as HDFS, HBase, and Cassandra, across distinct racks.
3. Spread – strictly places a small group of instances across distinct underlying hardware
to reduce correlated failures.
Spread placement groups are recommended for applications that have a small
number of critical instances that should be kept separate from each other.
Enhanced Networking
Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-
performance networking capabilities on supported instance types. SR-IOV is a method of
device virtualization that provides higher I/O performance and lower CPU utilization when
compared to traditional virtualized network interfaces. Enhanced networking provides
higher bandwidth, higher packet per second (PPS) performance, and consistently lower
inter-instance latencies. There is no additional charge for using enhanced networking.
Route53
Overview
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web
service. You can use Route 53 to perform three main functions in any combination: domain
registration, DNS routing, and health checking. If you use Route 53 for all three
functions, perform them in that order.
DNS failover
A method for routing traffic away from unhealthy resources and to healthy
resources. When you have more than one resource performing the same function
—for example, more than one web server or mail server—you can configure
Route 53 health checks to check the health of your resources and configure
records in your hosted zone to route traffic only to healthy resources.
Endpoint
The resource, such as a web server or an email server, that you configure a health
check to monitor the health of. You can specify an endpoint by IPv4 address
(192.0.2.243), by IPv6 address (2001:0db8:85a3:0000:0000:abcd:0001:2345), or by
domain name (example.com).
Health Checks
1. Health checks that monitor an endpoint
You can configure a health check that monitors an endpoint that you specify either by
IP address or by domain name. At regular intervals that you specify, Route 53 submits
automated requests over the internet to your application, server, or other resource to
verify that it's reachable, available, and functional. Optionally, you can configure the
health check to make requests similar to those that your users make, such as
requesting a web page from a specific URL.
2. Health checks that monitor other health checks (calculated health checks)
You can create a health check that monitors whether Route 53 considers other health
checks healthy or unhealthy. One situation where this might be useful is when you
have multiple resources that perform the same function, such as multiple web servers,
and your chief concern is whether some minimum number of your resources are
healthy. You can create a health check for each resource without configuring
notification for those health checks. Then you can create a health check that monitors
the status of the other health checks and that notifies you only when the number of
available web resources drops below a specified threshold.
3. Health checks that monitor CloudWatch alarms
You can create CloudWatch alarms that monitor the status of CloudWatch metrics,
such as the number of throttled read events for an Amazon DynamoDB database or
the number of Elastic Load Balancing hosts that are considered healthy. After you
create an alarm, you can create a health check that monitors the same data stream
that CloudWatch monitors for the alarm.
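The calculated health check idea (healthy only when a minimum number of child checks are healthy) reduces to a threshold count, sketched here as a toy function rather than the Route 53 API:

```python
# Toy model of a Route 53 "calculated" health check: it reports healthy only
# when at least `threshold` of its child health checks are currently healthy.

def calculated_health(child_statuses, threshold):
    """child_statuses: iterable of booleans, one per monitored health check."""
    healthy = sum(1 for ok in child_statuses if ok)
    return healthy >= threshold
```

For example, with three web servers and a threshold of 2, losing one server keeps the calculated check healthy; losing two trips it.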
Amazon-RDS
DB Instances
The basic building block of Amazon RDS is the DB instance. A DB instance is an isolated
database environment in the AWS Cloud. Your DB instance can contain multiple user-
created databases. You can access your DB instance by using the same tools and
applications that you use with a standalone database instance. You can create and modify
a DB instance by using the AWS Command Line Interface, the Amazon RDS API, or the
AWS Management Console.
Each DB instance runs a DB engine. Amazon RDS currently supports the MySQL, MariaDB,
PostgreSQL, Oracle, and Microsoft SQL Server DB engines. Each DB engine has its own
supported features, and each version of a DB engine may include specific features.
Additionally, each DB engine has a set of parameters in a DB parameter group that control
the behavior of the databases that it manages.
Multi-AZ deployments for MariaDB, MySQL, Oracle, and PostgreSQL DB instances use
Amazon's failover technology. SQL Server DB instances use SQL Server Database
Mirroring (DBM) or Always On Availability Groups (AGs).
It is recommended to use Provisioned IOPS storage and DB instance classes that are
optimized for Provisioned IOPS for fast, consistent performance.
RDS Restore
Amazon RDS creates a storage volume snapshot of your DB instance, backing
up the entire DB instance and not just individual databases. You can create a DB instance
by restoring from this DB snapshot. When you restore the DB instance, you provide the
name of the DB snapshot to restore from, and then provide a name for the new DB
instance that is created from the restore. You can't restore from a DB snapshot to an
existing DB instance; a new DB instance is created when you restore.
You can't restore a DB instance from a DB snapshot that is both shared and encrypted.
Instead, you can make a copy of the DB snapshot and restore the DB instance from
the copy.
Amazon Aurora
Amazon Aurora (Aurora) is a fully managed relational database engine that's compatible
with MySQL and PostgreSQL. You already know how MySQL and PostgreSQL combine the
speed and reliability of high-end commercial databases with the simplicity and cost-
effectiveness of open-source databases. The code, tools, and applications you use today
with your existing MySQL and PostgreSQL databases can be used with Aurora. With some
workloads, Aurora can deliver up to five times the throughput of MySQL and up to three
times the throughput of PostgreSQL without requiring changes to most of your existing
applications.
Aurora Endpoints
Cluster endpoint
A cluster endpoint (or writer endpoint) for an Aurora DB cluster connects to the
current primary DB instance for that DB cluster. This endpoint is the only one that can
perform write operations such as DDL statements. Because of this, the cluster
endpoint is the one that you connect to when you first set up a cluster or when your
cluster only contains a single DB instance.
Reader endpoint
A reader endpoint for an Aurora DB cluster provides load-balancing support for read-
only connections to the DB cluster. Use the reader endpoint for read operations, such
as queries. By processing those statements on the read-only Aurora Replicas, this
endpoint reduces the overhead on the primary instance. It also helps the cluster to
scale the capacity to handle simultaneous SELECT queries, proportional to the number
of Aurora Replicas in the cluster. Each Aurora DB cluster has one reader endpoint.
Custom endpoint
A custom endpoint for an Aurora cluster represents a set of DB instances that you
choose. When you connect to the endpoint, Aurora performs load balancing and
chooses one of the instances in the group to handle the connection. You define which
instances this endpoint refers to, and you decide what purpose the endpoint serves.
Instance endpoint
An instance endpoint connects to a specific DB instance within an Aurora DB cluster.
Each DB instance in a DB cluster has its own unique instance endpoint.
Aurora Global Database
An Aurora global database consists of one primary AWS Region where your data is
mastered, and up to five read-only, secondary AWS Regions. Aurora replicates data to the
secondary AWS Regions with typical latency of under a second. You issue write operations
directly to the primary DB instance in the primary AWS Region.
An Aurora global database uses dedicated infrastructure to replicate your data, leaving
database resources available entirely to serve application workloads. Applications with a
worldwide footprint can use reader instances in the secondary AWS Regions for low-
latency reads. In some unlikely cases, your database might become degraded or isolated in
an AWS Region. If this happens, with a global database you can promote one of the
secondary AWS Regions to take full read/write workloads in under a minute.
The Aurora cluster in the primary AWS Region where your data is mastered performs both
read and write operations. The clusters in the secondary Regions enable low-latency reads.
You can scale up the secondary clusters independently by adding one or more DB
instances (Aurora Replicas) to serve read-only workloads. For disaster recovery, you can
remove a secondary cluster from the Aurora global database and promote it to allow full
read and write operations.
Network-Storage-EFS
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic
NFS file system for use with AWS Cloud services and on-premises resources. Amazon EFS
supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol. Amazon EFS
offers two storage classes, Standard and Infrequent Access. The Standard storage class is
used to store frequently accessed files. The Infrequent Access (IA) storage class is a lower-
cost storage class that's designed for storing long-lived, infrequently accessed files
cost-effectively.
control access to your file systems through Portable Operating System Interface
(POSIX) permissions
can access your Amazon EFS file system concurrently from multiple NFS clients
primarily used with EC2 Linux instances
work with the files and directories just as you do with a local file system
HA-Scale
A load balancer serves as the single point of contact for clients. The load balancer
distributes incoming application traffic across multiple targets, such as EC2 instances, in
multiple Availability Zones. This increases the availability of your application. You add one
or more listeners to your load balancer.
A listener checks for connection requests from clients, using the protocol and port that you
configure. The rules that you define for a listener determine how the load balancer routes
requests to its registered targets. Each rule consists of a priority, one or more actions, and
one or more conditions. When the conditions for a rule are met, then its actions are
performed. You must define a default rule for each listener, and you can optionally define
additional rules.
Each target group routes requests to one or more registered targets, such as EC2
instances, using the protocol and port number that you specify. You can register a target
with multiple target groups. You can configure health checks on a per target group basis.
Health checks are performed on all targets registered to a target group that is specified in
a listener rule for your load balancer.
Elastic Load Balancing supports three types of load balancers: Application Load Balancers,
Network Load Balancers, and Classic Load Balancers.
Application Load Balancer use cases:
Support for path-based routing. You can configure rules for your listener that forward
requests based on the URL in the request.
Support for host-based routing. You can configure rules for your listener that forward
requests based on the host field in the HTTP header.
Support for routing requests to multiple applications on a single EC2 instance
Support for registering Lambda functions as targets
Support for containerized applications - Amazon ECS
you can use a single launch configuration with multiple Auto Scaling groups, but an
Auto Scaling group can have only one launch configuration at a time
Autoscaling Groups
Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon
EC2 instances available to handle the load for your application. You create collections of
EC2 instances, called Auto Scaling groups. You can specify the minimum number of
instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your
group never goes below this size.
Key components:
Groups - EC2 instances are organized into groups so that they can be treated as a
logical unit for the purposes of scaling and management
Configuration Templates - can specify information such as the AMI ID, instance type,
key pair, security groups, and block device mapping for your instances
Scaling Options - several ways to scale instances
Maintain current instance levels at all times
Scale manually
Scale based on a schedule
Scale based on demand
Predictive scaling
Scaling Policies
A scaling policy instructs Amazon EC2 Auto Scaling to track a specific CloudWatch metric,
and it defines what action to take when the associated CloudWatch alarm is in ALARM.
When a scaling policy is executed, if the capacity calculation produces a number outside of
the minimum and maximum size range of the group, Amazon EC2 Auto Scaling ensures
that the new capacity never goes outside of the minimum and maximum size limits.
Capacity is measured in one of two ways: using the same units that you chose when you
set the desired capacity in terms of instances, or using capacity units.
Target tracking scaling — Increase or decrease the current capacity of the group
based on a target value for a specific metric. This is similar to the way that your
thermostat maintains the temperature of your home—you select a temperature and
the thermostat does the rest.
Step scaling — Increase or decrease the current capacity of the group based on a set
of scaling adjustments, known as step adjustments, that vary based on the size of the
alarm breach.
Simple scaling — Increase or decrease the current capacity of the group based on a
single scaling adjustment.
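The target tracking idea, combined with the min/max clamping described above, can be modeled in a few lines (the proportional formula is a simplification of what EC2 Auto Scaling actually computes, used here only to show the clamping behavior):

```python
import math

def target_tracking_capacity(current, metric_value, target_value, min_size, max_size):
    """Simplified target-tracking calculation: scale capacity proportionally to
    the metric, then clamp to the group's minimum and maximum size, as
    Amazon EC2 Auto Scaling ensures the capacity never leaves those limits."""
    desired = math.ceil(current * metric_value / target_value)
    return max(min_size, min(max_size, desired))
```

With 4 instances at 80% CPU and a 40% target, the model doubles capacity to 8; if the group's max is 6, the result is capped at 6.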
Network Load Balancer Routing
For TCP traffic, the load balancer selects a target using a flow hash algorithm based on the
protocol, source IP address, source port, destination IP address, destination port, and TCP
sequence number. The TCP connections from a client have different source ports and
sequence numbers, and can be routed to different targets. Each individual TCP connection
is routed to a single target for the life of the connection.
For UDP traffic, the load balancer selects a target using a flow hash algorithm based on the
protocol, source IP address, source port, destination IP address, and destination port. A
UDP flow has the same source and destination, so it is consistently routed to a single
target throughout its lifetime. Different UDP flows have different source IP addresses and
ports, so they can be routed to different targets.
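The flow-hash selection can be illustrated with a toy Python model (the actual NLB hash function is not public; SHA-256 over the tuple is just a stand-in to show that the same flow always maps to the same target):

```python
import hashlib

def flow_hash_target(targets, proto, src_ip, src_port, dst_ip, dst_port, tcp_seq=0):
    """Pick a target from the tuple the load balancer hashes. For TCP the
    sequence number is part of the tuple, so distinct connections (even from
    one client) may land on different targets, while any single connection
    always maps to the same target for its lifetime."""
    key = f"{proto}|{src_ip}|{src_port}|{dst_ip}|{dst_port}|{tcp_seq}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return targets[digest % len(targets)]
```

Calling it twice with identical tuple values returns the same target, which is the property the paragraph above describes.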
Sticky Sessions
By default, a Classic Load Balancer routes each request independently to the registered
instance with the smallest load. However, you can use the sticky session feature (also
known as session affinity), which enables the load balancer to bind a user's session to a
specific instance. This ensures that all requests from the user during the session are sent to
the same instance.
Requirements
The load balancer uses a special cookie, AWSELB, to track the instance for each request to
each listener. When the load balancer receives a request, it first checks to see if this cookie
is present in the request. If so, the request is sent to the instance specified in the cookie. If
there is no cookie, the load balancer chooses an instance based on the existing load
balancing algorithm. If an instance fails or becomes unhealthy, the load balancer stops
routing requests to that instance, and chooses a new healthy instance based on the
existing load balancing algorithm. The request is routed to the new instance as if there is
no cookie and the session is no longer sticky.
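The AWSELB cookie behavior can be sketched as a minimal simulation (round-robin stands in for the real "smallest load" algorithm, and the instance IDs are hypothetical):

```python
import itertools

class StickyLoadBalancer:
    """Minimal sketch of Classic Load Balancer sticky sessions: an AWSELB-style
    cookie pins a client to one instance; without a cookie, or if the pinned
    instance is no longer healthy, a new instance is chosen and a fresh
    cookie is issued."""

    def __init__(self, instances):
        self.healthy = list(instances)
        self._rr = itertools.cycle(self.healthy)  # stand-in load balancing

    def route(self, cookies):
        pinned = cookies.get("AWSELB")
        if pinned in self.healthy:
            return pinned, cookies                 # sticky: reuse pinned instance
        chosen = next(self._rr)                    # pick via the normal algorithm
        return chosen, {**cookies, "AWSELB": chosen}
```

A first request gets a cookie; replaying that cookie routes every later request to the same instance.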
Serverless-Services
AWS Lambda
Lambda is a compute service that lets you run code without provisioning or managing
servers. Lambda executes your code only when needed and scales automatically, from a
few requests per day to thousands per second. You can use AWS Lambda to run your code
in response to events, such as changes to data in an Amazon S3 bucket or an Amazon
DynamoDB table; to run your code in response to HTTP requests using Amazon API
Gateway; or invoke your code using API calls made using AWS SDKs.
charged for every 100ms your code executes and the number of times your code is
triggered
Use Cases:
data processing
real-time file processing
real-time stream processing
machine learning
IoT and mobile backend applications
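A minimal Python handler for the S3-event case mentioned above looks like this (the event shape follows the S3 notification structure; the bucket and key in the sample are hypothetical, and locally you can invoke the handler directly):

```python
# Minimal AWS Lambda handler for an S3 event notification. Lambda invokes
# handler(event, context) with the notification payload; here we just collect
# the object URIs that triggered the invocation.

def handler(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}
```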
CloudWatch
Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the
applications you run on AWS in real time. You can use CloudWatch to collect and track
metrics, which are variables you can measure for your resources and applications.
Namespaces
A namespace is a container for CloudWatch metrics. Metrics in different namespaces are
isolated from each other, so that metrics from different applications are not mistakenly
aggregated into the same statistics.
Metrics
Metrics are the fundamental concept in CloudWatch. A metric represents a time-ordered
set of data points that are published to CloudWatch.
Dimensions
A dimension is a name/value pair that is part of the identity of a metric. You can assign up
to 10 dimensions to a metric.
Statistics
Statistics are metric data aggregations over specified periods of time. CloudWatch provides
statistics based on the metric data points provided by your custom data or provided by
other AWS services to CloudWatch.
Percentiles
A percentile indicates the relative standing of a value in a dataset. For example, the 95th
percentile means that 95 percent of the data is lower than this value and 5 percent of the
data is higher than this value. Percentiles help you get a better understanding of the
distribution of your metric data.
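The nearest-rank form of that calculation can be written as follows (CloudWatch's exact interpolation may differ; this shows the underlying idea):

```python
import math

def percentile(data, p):
    """Nearest-rank percentile: the smallest value such that at least p percent
    of the data points are at or below it."""
    ordered = sorted(data)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]
```

On the values 1 through 100, the 95th percentile is 95: 95 percent of the data is at or below it.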
Alarms
You can use an alarm to automatically initiate actions on your behalf. An alarm watches a
single metric over a specified time period, and performs one or more specified actions,
based on the value of the metric relative to a threshold over time. The action is a
notification sent to an Amazon SNS topic or an Auto Scaling policy. You can also add
alarms to dashboards.
REST APIs (Amazon API Gateway):
Are HTTP-based.
Enable stateless client-server communication.
Implement standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE.
SNS
Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and
manages the delivery or sending of messages to subscribing endpoints or clients. In
Amazon SNS, there are two types of clients—publishers and subscribers—also referred to
as producers and consumers.
Common scenarios:
1. Fanout - The "fanout" scenario is when an Amazon SNS message is sent to a topic and
then replicated and pushed to multiple Amazon SQS queues, HTTP endpoints, or
email addresses.
2. Application and system alerts - Application and system alerts are notifications that are
triggered by predefined thresholds and sent to specified users by SMS and/or email.
3. Push email and text messaging - Push email and text messaging are two ways to
transmit messages to individuals or groups via email and/or SMS.
4. Mobile push notifications - Mobile push notifications enable you to send messages
directly to mobile apps.
5. Message durability - Amazon SNS provides durable storage of all messages that it
receives. When Amazon SNS receives your Publish request, it stores multiple copies of
your message to disk.
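The fanout scenario (one publish, a copy delivered to every subscriber) can be sketched with an in-memory stand-in for a topic and its queues (not the SNS API):

```python
# Toy SNS fanout: one publish call delivers a copy of the message to every
# subscribed endpoint; plain lists stand in for SQS queues here.

class Topic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, queue):
        self.subscribers.append(queue)

    def publish(self, message):
        for queue in self.subscribers:
            queue.append(message)   # each subscriber gets its own copy
```

Publishing one message to a topic with two subscribed queues leaves the message in both queues, which is exactly the fanout pattern.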
SQS Queue Types
Standard Queues
Unlimited throughput
At-least-once Delivery
Best-effort Ordering
FIFO Queues
High throughput (up to 300 messages per second without batching)
Exactly-once processing
First-in-first-out delivery
Kinesis
Amazon Kinesis makes it easy to collect, process, and analyze video and data streams in
real time.
You can use Amazon Kinesis Data Streams to collect and process large streams of data
records in real time. You can create data-processing applications, known as Kinesis Data
Streams applications. A typical Kinesis Data Streams application reads data from a data
stream as data records.
Amazon Kinesis Video Streams is a fully managed AWS service that you can use to stream
live video from devices to the AWS Cloud, or build applications for real-time video
processing or batch-oriented video analytics. Kinesis Video Streams isn't just storage for
video data. You can use it to watch your video streams in real time as they are received in
the cloud.
scales well
can handle large amounts of data (think volume)
connects multiple consumers to a stream concurrently
maintains the order of records (per shard)
SQS
very reliable
serverless architecture
batch sending of messages
message delay up to 15 mins
CDN-Network-Optimization
CloudFront
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic
web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your
content through a worldwide network of data centers called edge locations. When a user
requests content that you're serving with CloudFront, the user is routed to the edge
location that provides the lowest latency (time delay), so that content is delivered with the
best possible performance.
Use cases:
Accelerate static website content delivery: Using S3 together with CloudFront has a
number of advantages, including the option to use Origin Access Identity (OAI) to
easily restrict access to your S3 content.
Serve video on demand or live streaming video: For broadcasting a live stream, you
can cache media fragments at the edge, so that multiple requests for the manifest file
that delivers the fragments in the right order can be combined, to reduce the load on
your origin server.
Encrypt specific fields throughout system processing: you can add field-level
encryption to protect specific data so that only certain applications at your
origin can see it
Customize at the edge: For example, you can return a custom error message when
your origin server is down for maintenance, so viewers don't get a generic HTTP error
message.
Serve private content by using Lambda@Edge customizations: configure your
CloudFront distribution to serve private content from your own custom origin
To restrict access to content that you serve from Amazon S3 buckets, follow these steps:
1. Create a special CloudFront user called an origin access identity (OAI) and associate it
with your distribution.
2. Configure your S3 bucket permissions so that CloudFront can use the OAI to access
the files in your bucket and serve them to your users. Make sure that users can’t use a
direct URL to the S3 bucket to access a file there.
AWS Global Accelerator
Main Components
Accelerator: An accelerator directs traffic to optimal endpoints over the AWS global
network to improve the availability and performance of your internet applications. Each
accelerator includes one or more listeners.
Endpoint: Endpoints can be Network Load Balancers, Application Load Balancers, EC2
instances, or Elastic IP addresses. An Application Load Balancer endpoint can be
internet-facing or internal. Traffic is routed to endpoints based on configuration options
that you choose, such as endpoint weights. For each endpoint, you can configure weights,
which are numbers that you can use to specify the proportion of traffic to route to each
one. This can be useful, for example, to do performance testing within a Region.
Other-VPC-Topics
You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow
log for a subnet or VPC, each network interface in that subnet or VPC is monitored. By
default, the log line format for a flow log record is a space-separated string with a
default set of fields in a fixed order.
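A default-format record can be split back into named fields like this (the field order below is the documented default, version through log-status; the sample log line in the test is made up):

```python
# Parse a VPC flow log record in the default format into a dict keyed by the
# documented default field names.

FLOW_LOG_FIELDS = [
    "version", "account-id", "interface-id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log-status",
]

def parse_flow_log(line):
    """Map the whitespace-separated values onto the default field names."""
    return dict(zip(FLOW_LOG_FIELDS, line.split()))
```

This makes it easy to filter records, e.g. keeping only REJECT entries or traffic to port 22.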
An egress-only internet gateway allows outbound IPv6 communication from instances
in your VPC to the internet and prevents the internet from initiating IPv6
connections with your instances.
You cannot associate a security group with an egress-only internet gateway. You can
use security groups for your instances in the private subnet to control the traffic to
and from those instances.
You can use a network ACL to control the traffic to and from the subnet for which the
egress-only internet gateway routes traffic.
VPC Endpoints
A VPC endpoint enables you to privately connect your VPC to supported AWS services and
VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway,
NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do
not require public IP addresses to communicate with resources in the service. Traffic
between your VPC and the other service does not leave the Amazon network.
Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available
VPC components. They allow communication between instances in your VPC and services
without imposing availability risks or bandwidth constraints on your network traffic.
Key concepts:
Endpoint service — Your own application in your VPC. Other AWS principals can create
a connection from their VPC to your endpoint service.
Gateway endpoint — A gateway endpoint is a gateway that you specify as a target for
a route in your route table for traffic destined to a supported AWS service.
S3
DynamoDB
Interface endpoint — An interface endpoint is an elastic network interface (ENI) with a
private IP address from the IP address range of your subnet that serves as an entry
point for traffic destined to a supported service.
powered by AWS PrivateLink (restricts all network traffic between your VPC and
services to the Amazon network)
LOTS of AWS services are supported
VPC Peering
A VPC peering connection is a networking connection between two VPCs that enables you
to route traffic between them privately. Instances in either VPC can communicate with each
other as if they are within the same network. You can create a VPC peering connection
between your own VPCs, with a VPC in another AWS account, or with a VPC in a different
AWS Region.
AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is
neither a gateway nor an AWS Site-to-Site VPN connection, and does not rely on a
separate piece of physical hardware. There is no single point of failure for communication
or a bandwidth bottleneck. A VPC peering connection helps you to facilitate the transfer of
data.
You can establish peering relationships between VPCs across different AWS Regions
(also called Inter-Region VPC Peering)
traffic remains in the private IP space
Inter-Region VPC Peering provides a simple and cost-effective way to share resources
between regions or replicate data for geographic redundancy
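One practical gotcha: a peering connection can't be established between VPCs whose IPv4 CIDR blocks overlap. A minimal sketch of that pre-check using the standard library:

```python
import ipaddress

def can_peer(cidr_a, cidr_b):
    """VPC peering requires the two VPCs' CIDR blocks not to overlap."""
    return not ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True
print(can_peer("10.0.0.0/16", "10.0.1.0/24"))  # False (the /24 sits inside the /16)
```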
Migration
Although the term VPN connection is a general term, in this documentation, a VPN
connection refers to the connection between your VPC and your own on-premises
network. Site-to-Site VPN supports Internet Protocol security (IPsec) VPN connections. A
Site-to-Site VPN connection offers two VPN tunnels between a virtual private gateway or a
transit gateway on the AWS side, and a customer gateway (which represents a VPN device)
on the remote (on-premises) side.
Concepts
Virtual private gateway: A virtual private gateway is the VPN concentrator on the
Amazon side of the Site-to-Site VPN connection. You create a virtual private gateway
and attach it to the VPC from which you want to create the Site-to-Site VPN
connection.
Transit gateway: A transit gateway is a transit hub that you can use to interconnect
your virtual private clouds (VPC) and on-premises networks. For more information, see
Amazon VPC Transit Gateways. You can create a Site-to-Site VPN connection as an
attachment on a transit gateway.
Customer gateway: A customer gateway is a resource that you create in AWS that
represents the customer gateway device in your on-premises network.
Storage Gateway
AWS Storage Gateway connects an on-premises software appliance with cloud-based
storage to provide seamless integration with data security features between your on-
premises IT environment and the AWS storage infrastructure. You can use the service to
store data in the AWS Cloud for scalable and cost-effective storage that helps maintain
data security.
File Gateway – A file gateway supports a file interface into Amazon Simple Storage Service
(Amazon S3) and combines a service and a virtual software appliance.
Volume Gateway – A volume gateway provides cloud-backed storage volumes that you
can mount as Internet Small Computer System Interface (iSCSI) devices from your on-
premises application servers.
Tape Gateway – A tape gateway provides cloud-backed virtual tape storage. The tape
gateway is deployed into your on-premises environment as a VM running on VMware ESXi,
KVM, or Microsoft Hyper-V hypervisor.
Snowball Options
Snowball
The AWS Snowball service uses physical storage devices to transfer large amounts of data
between Amazon Simple Storage Service (Amazon S3) and your onsite data storage
location at faster-than-internet speeds.
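"Faster-than-internet" comes down to arithmetic: shipping a device often beats a WAN link once the dataset is large enough. A rough back-of-the-envelope sketch (the 80% utilization figure is an assumption, not an AWS number):

```python
def transfer_days(data_tb, link_gbps, utilization=0.8):
    """Rough days to push a dataset over a network link at the given utilization."""
    bits = data_tb * 1e12 * 8
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 86400

# ~100 TB over a dedicated 1 Gbps link takes well over a week of pure transfer,
# which is why a shipped Snowball can win.
print(round(transfer_days(100, 1), 1))  # 11.6
```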
Snowball Edge
The AWS Snowball Edge is a type of Snowball device with on-board storage and compute
power for select AWS capabilities. Snowball Edge can undertake local processing and
edge-computing workloads in addition to transferring data between your local
environment and the AWS Cloud. Snowball Edge devices have three options for device
configurations – storage optimized, compute optimized, and with GPU.
Device options:
Snowball Edge Storage Optimized (for data transfer) – This Snowball Edge device
option has a 100 TB (80 TB usable) storage capacity.
Snowball Edge Storage Optimized (with EC2 compute functionality) – This Snowball
Edge device option has up to 80 TB of usable storage space, 24 vCPUs, and 32 GiB of
memory for compute functionality. It also comes with 1 TB of additional SSD storage
space for block volumes attached to Amazon EC2 AMIs.
Snowball Edge Compute Optimized – This Snowball Edge device option has the most
compute functionality, with 52 vCPUs, 208 GiB of memory, 7.68 TB of dedicated NVMe
SSD for block storage volumes attached to EC2 compute instances, and 42 TB (39.5 TB
usable) of HDD capacity for either object storage or block storage volumes.
Snowball Edge Compute Optimized with GPU – This Snowball Edge device option is
identical to the compute optimized option, save for an installed GPU, equivalent to the
one available in the P3 Amazon EC2 instance type. It has a storage capacity of 42 TB
(39.5 TB of HDD storage that can be used for a combination of Amazon S3 compatible
object storage and Amazon EBS compatible block storage volumes) plus 7.68 TB of
dedicated NVMe SSD for compute instances.
Snowmobile
AWS Snowmobile is an exabyte-scale data transfer service used to move extremely large
amounts of data to AWS. You can transfer up to 100 PB per Snowmobile, a 45-foot long
ruggedized shipping container pulled by a semi-trailer truck. (WOW!)
Amazon Cognito — a user directory that adds sign-up and sign-in to your mobile app
or web application using Amazon Cognito User Pools.
AWS DataSync
AWS DataSync is an online data transfer service that simplifies, automates, and accelerates
copying large amounts of data to and from AWS storage services over the internet or AWS
Direct Connect. DataSync can copy data between Network File System (NFS), Server
Message Block (SMB) file servers, or AWS Snowcone, and Amazon Simple Storage Service
(Amazon S3) buckets, Amazon EFS file systems, and Amazon FSx for Windows File Server
file systems.
rates up to 10 Gbps
can transfer data between SMB file shares and Amazon S3, Amazon EFS or Amazon
FSx for Windows File Server
supports Network File System (NFS), Server Message Block (SMB), Amazon EFS,
Amazon FSx for Windows File Server, and Amazon S3 as location types
Data migration – Move active datasets rapidly over the network into Amazon S3,
Amazon EFS, or Amazon FSx for Windows File Server. DataSync includes automatic
encryption and data integrity validation to help make sure that your data arrives
securely, intact, and ready to use.
Data movement for timely in-cloud processing – Move data into or out of AWS for
processing when working with systems that generate data on-premises. This approach
can speed up critical hybrid cloud workflows across many industries. These include
video production in media and entertainment, seismic research in oil and gas, machine
learning in life science, and big data analytics in finance.
Data archiving – Move cold data from expensive on-premises storage systems directly
to durable and secure long-term storage such as Amazon S3 Glacier or S3 Glacier
Deep Archive. By doing this, you can free up on-premises storage capacity and shut
down legacy storage systems.
Data protection – Move data into all Amazon S3 storage classes, and choose the most
cost-effective storage class for your needs. You can also send the data to Amazon EFS
or Amazon FSx for Windows File Server for a standby file system.
Security-Operations
AWS Shield
AWS provides AWS Shield Standard and AWS Shield Advanced for protection against DDoS
attacks. AWS Shield Standard is automatically included at no extra cost beyond what you
already pay for AWS WAF and your other AWS services. For added protection against DDoS
attacks, AWS offers AWS Shield Advanced. AWS Shield Advanced provides expanded DDoS
attack protection for your Amazon Elastic Compute Cloud instances, Elastic Load Balancing
load balancers, Amazon CloudFront distributions, Amazon Route 53 hosted zones, and
your AWS Global Accelerator accelerators.
Shield Standard
All AWS customers benefit from the automatic protections of AWS Shield Standard, at
no additional charge. AWS Shield Standard defends against most common, frequently
occurring network and transport layer DDoS attacks that target your website or
applications. While AWS Shield Standard helps protect all AWS customers, you get
particular benefit if you are using Amazon CloudFront and Amazon Route 53. These
services receive comprehensive availability protection against all known infrastructure
(Layer 3 and 4) attacks.
Shield Advanced
Adds protection for ELBs, EC2 instance Elastic IPs (EIPs), CloudFront distributions,
Route 53 hosted zones, and AWS Global Accelerator accelerators
access to the 24x7 DDoS Response Team (DRT)
protects against Layer 7 attacks
AWS WAF
AWS WAF is a web application firewall that lets you monitor the HTTP(S) requests that are
forwarded to an Amazon CloudFront distribution, an Amazon API Gateway REST API, or an
Application Load Balancer. You can also use AWS WAF to protect your applications that are
hosted in Amazon Elastic Container Service (Amazon ECS) containers.
You use AWS WAF to control how an Amazon CloudFront distribution, an Amazon API
Gateway REST API, or an Application Load Balancer responds to web requests.
Web ACLs – You use a web access control list (ACL) to protect a set of AWS resources.
You create a web ACL and define its protection strategy by adding rules. Rules define
criteria for inspecting web requests and specify how to handle requests that match the
criteria. You set a default action for the web ACL that indicates whether to block or
allow through those requests that pass the rules inspections.
Rules – Each rule contains a statement that defines the inspection criteria, and an
action to take if a web request meets the criteria. When a web request meets the
criteria, that's a match. You can use rules to block matching requests or to allow
matching requests through. You can also use rules just to count matching requests.
Rule groups – You can use rules individually or in reusable rule groups. AWS
Managed Rules and AWS Marketplace sellers provide managed rule groups for your
use. You can also define your own rule groups.
A web access control list (web ACL) gives you fine-grained control over the web requests
that your Amazon CloudFront distribution, Amazon API Gateway REST API, or Application
Load Balancer responds to.
You can use criteria like the following to allow or block requests: the IP address a request
originates from, the country of origin, string or regular-expression matches in part of the
request, the size of a particular part of the request, and the presence of malicious SQL
code or scripting.
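Conceptually, a web ACL is an ordered rule list with a default action. This hypothetical, heavily simplified model (not the real AWS WAF API — rule evaluation in AWS WAF is configured declaratively, not in code) shows how rules and the default action interact:

```python
def evaluate(web_acl, request):
    """First matching rule's action wins; otherwise the default action applies."""
    for rule in web_acl["rules"]:
        if rule["matches"](request):
            return rule["action"]
    return web_acl["default_action"]

acl = {
    "default_action": "ALLOW",
    "rules": [
        {"name": "block-bad-ip", "action": "BLOCK",
         "matches": lambda r: r["ip"] == "198.51.100.7"},
        {"name": "block-sqli-ish", "action": "BLOCK",
         "matches": lambda r: "' OR 1=1" in r.get("query", "")},
    ],
}
print(evaluate(acl, {"ip": "198.51.100.7"}))              # BLOCK
print(evaluate(acl, {"ip": "203.0.113.9", "query": ""}))  # ALLOW (default action)
```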
CloudHSM
AWS CloudHSM provides hardware security modules in the AWS Cloud. A hardware
security module (HSM) is a computing device that processes cryptographic operations and
provides secure storage for cryptographic keys.
When you use an HSM from AWS CloudHSM, you can perform a variety of cryptographic
tasks:
Generate, store, import, export, and manage cryptographic keys, including symmetric
keys and asymmetric key pairs.
Use symmetric and asymmetric algorithms to encrypt and decrypt data.
Use cryptographic hash functions to compute message digests and hash-based
message authentication codes (HMACs).
Cryptographically sign data (including code signing) and verify signatures.
Generate cryptographically secure random data.
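An HSM performs these operations inside tamper-resistant hardware so the key material never leaves the device. As a purely software illustration of one of the listed tasks (an HMAC), using only the standard library:

```python
import hmac
import hashlib

key = b"secret-key-material"      # in CloudHSM, this would never leave the HSM
msg = b"message to authenticate"

# Compute an HMAC-SHA256 tag over the message.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Verify with a constant-time comparison (avoids timing side channels).
print(hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest()))  # True
```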
DynamoDB
What is DynamoDB?
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and
predictable performance with seamless scalability. With DynamoDB, you can create
database tables that can store and retrieve any amount of data and serve any level of
request traffic. DynamoDB provides on-demand backup capability. DynamoDB
automatically spreads the data and traffic for your tables over a sufficient number of
servers to handle your throughput and storage requirements, while maintaining consistent
and fast performance. All of your data is stored on solid-state disks (SSDs) and is
automatically replicated across multiple Availability Zones in an AWS Region.
When you read data from a DynamoDB table, the response might not reflect the
results of a recently completed write operation. The response might include some
stale data. If you repeat your read request after a short time, the response should
return the latest data.
When you request a strongly consistent read, DynamoDB returns a response with the
most up-to-date data, reflecting the updates from all prior write operations that were
successful. However, this consistency comes with some disadvantages:
A strongly consistent read might not be available if there is a network delay or outage.
In this case, DynamoDB may return a server error (HTTP 500).
Strongly consistent reads may have higher latency than eventually consistent reads.
Strongly consistent reads are not supported on global secondary indexes.
Strongly consistent reads use more throughput capacity than eventually consistent
reads.
DynamoDB Accelerator (DAX)
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-
memory performance for demanding applications. DAX addresses three core scenarios:
1. As an in-memory cache, DAX reduces the response times of eventually consistent read
workloads by an order of magnitude from single-digit milliseconds to microseconds.
2. DAX reduces operational and application complexity by providing a managed service
that is API-compatible with DynamoDB. Therefore, it requires only minimal functional
changes to use with an existing application.
3. For read-heavy or bursty workloads, DAX provides increased throughput and potential
operational cost savings by reducing the need to overprovision read capacity units.
This is especially beneficial for applications that require repeated reads for individual
keys.
DynamoDB global tables are ideal for massively scaled applications with globally dispersed
users. In such an environment, users expect very fast application performance. Global
tables provide automatic multi-master replication to AWS Regions worldwide. They enable
you to deliver low-latency data access to your users no matter where they are located.
As of this writing, there are two versions of DynamoDB global tables available: Version
2019.11.21 (Current and recommended) and Version 2017.11.29.
Amazon Athena
Athena is an interactive query service that makes it easy to analyze data directly in Amazon
Simple Storage Service (Amazon S3) using standard SQL.