
PRACTICAL FILE

CLOUD WEB SERVICES LAB

(BCA-CTIS)

Under the Supervision of


Dr. SAHIL KANSAL
By
DIVYAJEET SINGH (121519002)

JAGANNATH INSTITUTE OF MANAGEMENT SCIENCE


Sector-3 Rohini
Delhi-110085
DECEMBER 2021
INDEX

S.NO   Practical                               Date   Remarks

1.     Amazon Simple Storage Service (S3)
2.     Amazon CloudFront
3.     AWS Key Management Service
4.     Amazon Elasticsearch Service
5.     Amazon DynamoDB
6.     Amazon API Gateway
7.     Amazon Machine Learning
8.     AWS IoT
9.     AWS Database Migration Service
10.    Amazon Route 53


Practical 1. Amazon Simple Storage Service (S3)

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers
industry-leading scalability, data availability, security, and performance. Customers of all
sizes and industries can use Amazon S3 to store and protect any amount of data for a range
of use cases, such as data lakes, websites, mobile applications, backup and restore, archive,
enterprise applications, IoT devices, and big data analytics. Amazon S3 provides
management features so that you can optimize, organize, and configure access to your data
to meet your specific business, organizational, and compliance requirements.

Features of Amazon S3
Storage classes
Amazon S3 offers a range of storage classes designed for different use cases. For example, you can store
mission-critical production data in S3 Standard for frequent access, save costs by storing infrequently
accessed data in S3 Standard-IA or S3 One Zone-IA, and archive data at the lowest costs in S3 Glacier
Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive.

You can store data with changing or unknown access patterns in S3 Intelligent-Tiering, which optimizes
storage costs by automatically moving your data between four access tiers when your access patterns
change. These four access tiers include two low-latency access tiers optimized for frequent and infrequent
access, and two opt-in archive access tiers designed for asynchronous access for rarely accessed data.

For more information, see Using Amazon S3 storage classes. For more information about S3 Glacier
Flexible Retrieval, see the Amazon S3 Glacier Developer Guide.

Storage management
Amazon S3 has storage management features that you can use to manage costs, meet regulatory
requirements, reduce latency, and save multiple distinct copies of your data for compliance requirements.

S3 Lifecycle – Configure a lifecycle policy to manage your objects and store them cost effectively throughout their lifecycle.
You can transition objects to other S3 storage classes or expire objects that reach the end of their lifetimes.

S3 Object Lock – Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely.
You can use Object Lock to help meet regulatory requirements that require write-once-read-many (WORM) storage or to
simply add another layer of protection against object changes and deletions.

S3 Replication – Replicate objects and their respective metadata and object tags to one or more destination buckets in the
same or different AWS Regions for reduced latency, compliance, security, and other use cases.

S3 Batch Operations – Manage billions of objects at scale with a single S3 API request or a few clicks in the Amazon S3
console. You can use Batch Operations to perform operations such as Copy, Invoke AWS Lambda function,
and Restore on millions or billions of objects.

Access management
Amazon S3 provides features for auditing and managing access to your buckets and objects. By default, S3
buckets and the objects in them are private. You have access only to the S3 resources that you create. To
grant granular resource permissions that support your specific use case or to audit the permissions of your
Amazon S3 resources, you can use the following features.

S3 Block Public Access – Block public access to S3 buckets and objects. By default, Block Public Access settings are turned
on at the account and bucket level.

AWS Identity and Access Management (IAM) – Create IAM users for your AWS account to manage access to your Amazon
S3 resources. For example, you can use IAM with Amazon S3 to control the type of access a user or group of users has to
an S3 bucket that your AWS account owns.

Bucket policies – Use IAM-based policy language to configure resource-based permissions for your S3 buckets and the
objects in them.

Access control lists (ACLs) – Grant read and write permissions for individual buckets and objects to authorized users. As a
general rule, we recommend using S3 resource-based policies (bucket policies and access point policies) or IAM policies
for access control instead of ACLs. ACLs are an access control mechanism that predates resource-based policies and IAM.
For more information about when you'd use ACLs instead of resource-based policies or IAM policies, see Access policy
guidelines.

S3 Object Ownership – Disable ACLs and take ownership of every object in your bucket, simplifying access management
for data stored in Amazon S3. You, as the bucket owner, automatically own and have full control over every object in your
bucket, and access control for your data is based on policies.

Access Analyzer for S3 – Evaluate and monitor your S3 bucket access policies, ensuring that the policies provide only the
intended access to your S3 resources.

How Amazon S3 works


Amazon S3 is an object storage service that stores data as objects within buckets. An object is a file and any
metadata that describes the file. A bucket is a container for objects.

To store your data in Amazon S3, you first create a bucket and specify a bucket name and AWS Region.
Then, you upload your data to that bucket as objects in Amazon S3. Each object has a key (or key name),
which is the unique identifier for the object within the bucket.

S3 provides features that you can configure to support your specific use case. For example, you can use S3
Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects
that are accidentally deleted or overwritten.

Buckets and the objects in them are private and can be accessed only if you explicitly grant access
permissions. You can use bucket policies, AWS Identity and Access Management (IAM) policies, access
control lists (ACLs), and S3 Access Points to manage access.
Topics

• Buckets

• Objects

• Keys

• S3 Versioning

• Version ID

• Bucket policy

• Access control lists (ACLs)

• S3 Access Points

• Regions

Buckets
A bucket is a container for objects stored in Amazon S3. You can store any number of objects in a bucket
and can have up to 100 buckets in your account. To request an increase, visit the Service Quotas Console.

Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in
the DOC-EXAMPLE-BUCKET bucket in the US West (Oregon) Region, then it is addressable using the
URL https://DOC-EXAMPLE-BUCKET.s3.us-west-2.amazonaws.com/photos/puppy.jpg. For more information,
see Accessing a Bucket.

When you create a bucket, you enter a bucket name and choose the AWS Region where the bucket will
reside. After you create a bucket, you cannot change the name of the bucket or its Region. Bucket names
must follow the bucket naming rules. You can also configure a bucket to use S3 Versioning or other storage
management features.

Buckets also:

⚫ Organize the Amazon S3 namespace at the highest level.

⚫ Identify the account responsible for storage and data transfer charges.

⚫ Provide access control options, such as bucket policies, access control lists (ACLs), and S3 Access Points, that you can
use to manage access to your Amazon S3 resources.

⚫ Serve as the unit of aggregation for usage reporting.

For more information about buckets, see Buckets overview.


Objects
Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata.
The metadata is a set of name-value pairs that describe the object. These pairs include some default
metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. You can also
specify custom metadata at the time that the object is stored.

An object is uniquely identified within a bucket by a key (name) and a version ID (if S3 Versioning is
enabled on the bucket). For more information about objects, see Amazon S3 objects overview.

Keys
An object key (or key name) is the unique identifier for an object within a bucket. Every object in a bucket
has exactly one key. The combination of a bucket, object key, and optionally, version ID (if S3 Versioning is
enabled for the bucket) uniquely identifies each object. So you can think of Amazon S3 as a basic data map
between "bucket + key + version" and the object itself.

Every object in Amazon S3 can be uniquely addressed through the combination of the web service
endpoint, bucket name, key, and optionally, a version. For example, in the URL https://DOC-EXAMPLE-
BUCKET.s3.us-west-2.amazonaws.com/photos/puppy.jpg, DOC-EXAMPLE-BUCKET is the name of the bucket
and /photos/puppy.jpg is the key.

For more information about object keys, see Creating object key names.

S3 Versioning
You can use S3 Versioning to keep multiple variants of an object in the same bucket. With S3 Versioning,
you can preserve, retrieve, and restore every version of every object stored in your buckets. You can easily
recover from both unintended user actions and application failures.

For more information, see Using versioning in S3 buckets.

Version ID
When you enable S3 Versioning in a bucket, Amazon S3 generates a unique version ID for each object
added to the bucket. Objects that already existed in the bucket at the time that you enable versioning have
a version ID of null. If you modify these (or any other) objects with other operations, such
as CopyObject and PutObject, the new objects get a unique version ID.
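
As an illustration of this behavior, the following is a minimal sketch in Python with boto3 (one of the AWS SDKs). The bucket name is the documentation placeholder used earlier in this practical, and the object body is arbitrary.

# Minimal sketch: enable S3 Versioning on a bucket and observe version IDs (Python, boto3).
import boto3

s3 = boto3.client("s3")
bucket = "DOC-EXAMPLE-BUCKET"  # placeholder bucket name from the examples above

# Turn on versioning for the bucket.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Each PutObject now returns a unique VersionId for the stored object.
resp = s3.put_object(Bucket=bucket, Key="photos/puppy.jpg", Body=b"...")
print(resp.get("VersionId"))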

Bucket policy
A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that you can use to
grant access permissions to your bucket and the objects in it. Only the bucket owner can associate a policy
with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are
owned by the bucket owner. Bucket policies are limited to 20 KB in size.

Bucket policies use JSON-based access policy language that is standard across AWS. You can use bucket
policies to add or deny permissions for the objects in a bucket. Bucket policies allow or deny requests
based on the elements in the policy, including the requester, S3 actions, resources, and aspects or
conditions of the request (for example, the IP address used to make the request). For example, you can
create a bucket policy that grants cross-account permissions to upload objects to an S3 bucket while
ensuring that the bucket owner has full control of the uploaded objects. For more information, see Bucket
policy examples.

In your bucket policy, you can use wildcard characters on Amazon Resource Names (ARNs) and other
values to grant permissions to a subset of objects. For example, you can control access to groups of objects
that begin with a common prefix or end with a given extension, such as .html.
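
A minimal sketch in Python with boto3 of attaching such a policy is shown below; the bucket name, account ID, and statement ID are placeholders chosen for illustration, not values defined in this practical.

# Minimal sketch: attach a bucket policy that uses a wildcard ARN to allow
# s3:GetObject on objects ending in ".html" (Python, boto3).
import json
import boto3

s3 = boto3.client("s3")
bucket = "DOC-EXAMPLE-BUCKET"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOfHtmlObjects",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # example account
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*.html",
        }
    ],
}

# Bucket policies are JSON documents of at most 20 KB.
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))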

Access control lists (ACLs)


You can use ACLs to grant read and write permissions to authorized users for individual buckets and
objects. Each bucket and object has an ACL attached to it as a subresource. The ACL defines which AWS
accounts or groups are granted access and the type of access. ACLs are an access control mechanism that
predates IAM. For more information about ACLs, see Access control list (ACL) overview.

By default, when another AWS account uploads an object to your S3 bucket, that account (the object
writer) owns the object, has access to it, and can grant other users access to it through ACLs. You can use
Object Ownership to change this default behavior so that ACLs are disabled and you, as the bucket owner,
automatically own every object in your bucket. As a result, access control for your data is based on policies,
such as IAM policies, S3 bucket policies, virtual private cloud (VPC) endpoint policies, and AWS
Organizations service control policies (SCPs).

A majority of modern use cases in Amazon S3 no longer require the use of ACLs, and we recommend that
you disable ACLs except in unusual circumstances where you need to control access for each object
individually. With Object Ownership, you can disable ACLs and rely on policies for access control. When
you disable ACLs, you can easily maintain a bucket with objects uploaded by different AWS accounts. You,
as the bucket owner, own all the objects in the bucket and can manage access to them using policies. For
more information, see Controlling ownership of objects and disabling ACLs for your bucket.

S3 Access Points
Amazon S3 Access Points are named network endpoints with dedicated access policies that describe how
data can be accessed using that endpoint. Access Points simplify managing data access at scale for shared
datasets in Amazon S3. Access Points are named network endpoints attached to buckets that you can use
to perform S3 object operations, such as GetObject and PutObject.

Each access point has its own IAM policy. You can configure Block Public Access settings for each access
point. To restrict Amazon S3 data access to a private network, you can also configure any access point to
accept requests only from a virtual private cloud (VPC).

Regions
You can choose the geographical AWS Region where Amazon S3 stores the buckets that you create. You
might choose a Region to optimize latency, minimize costs, or address regulatory requirements. Objects
stored in an AWS Region never leave the Region unless you explicitly transfer or replicate them to another
Region. For example, objects stored in the Europe (Ireland) Region never leave it.
Creating a bucket

To upload your data to Amazon S3, you must first create an Amazon S3 bucket in one of the
AWS Regions. When you create a bucket, you must choose a bucket name and Region. You
can optionally choose other storage management options for the bucket. After you create a
bucket, you cannot change the bucket name or Region. For information about naming
buckets, see Bucket naming rules.
The AWS account that creates the bucket owns it. You can upload any number of objects to
the bucket. By default, you can create up to 100 buckets in each of your AWS accounts. If
you need more buckets, you can increase your account bucket limit to a maximum of 1,000
buckets by submitting a service limit increase. To learn how to submit a bucket limit
increase, see AWS service quotas in the AWS General Reference. You can store any number
of objects in a bucket.
S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to disable
access control lists (ACLs) and take ownership of every object in your bucket, simplifying
access management for data stored in Amazon S3. By default, when another AWS account
uploads an object to your S3 bucket, that account (the object writer) owns the object, has
access to it, and can grant other users access to it through ACLs. When you create a bucket,
you can apply the bucket owner enforced setting for Object Ownership to change this
default behavior so that ACLs are disabled and you, as the bucket owner, automatically own
every object in your bucket. As a result, access control for your data is based on policies.
For more information, see Controlling ownership of objects and disabling ACLs for your
bucket.
You can use the Amazon S3 console, Amazon S3 APIs, AWS CLI, or AWS SDKs to create a
bucket. For more information about the permissions required to create a bucket,
see CreateBucket in the Amazon Simple Storage Service API Reference.
Using the S3 console

1. Sign in to the AWS Management Console and open the Amazon S3 console
at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
The Create bucket wizard opens.
3. In Bucket name, enter a DNS-compliant name for your bucket.
The bucket name must:
◆ Be unique across all of Amazon S3.
◆ Be between 3 and 63 characters long.
◆ Not contain uppercase characters.
◆ Start with a lowercase letter or number.
After you create the bucket, you cannot change its name. For information about naming
buckets, see Bucket naming rules.
Important

Avoid including sensitive information, such as account number, in the bucket name. The
bucket name is visible in the URLs that point to the objects in the bucket.

4. In Region, choose the AWS Region where you want the bucket to reside.
Choose a Region close to you to minimize latency and costs and address regulatory
requirements. Objects stored in a Region never leave that Region unless you explicitly
transfer them to another Region. For a list of Amazon S3 AWS Regions, see AWS service
endpoints in the Amazon Web Services General Reference.
5. Under Object Ownership, to disable or enable ACLs and control ownership of objects
uploaded in your bucket, choose one of the following settings:
ACLs disabled

Bucket owner enforced – ACLs are disabled, and the bucket owner automatically owns and
has full control over every object in the bucket. ACLs no longer affect permissions to data in
the S3 bucket. The bucket uses policies to define access control.
To require that all new buckets are created with ACLs disabled by using IAM or AWS
Organizations policies, see Disabling ACLs for all new buckets (bucket owner enforced).
ACLs enabled

Bucket owner preferred – The bucket owner owns and has full control over new objects
that other accounts write to the bucket with the bucket-owner-full-control canned ACL.
If you apply the bucket owner preferred setting, to require all Amazon S3 uploads to
include the bucket-owner-full-control canned ACL, you can add a bucket policy that only
allows object uploads that use this ACL.
Object writer – The AWS account that uploads an object owns the object, has full control
over it, and can grant other users access to it through ACLs.
Note

To apply the Bucket owner enforced setting or the Bucket owner preferred setting, you
must have the following permissions: s3:CreateBucket and s3:PutBucketOwnershipControls.
6. In Bucket settings for Block Public Access, choose the Block Public Access settings that
you want to apply to the bucket.
We recommend that you keep all settings enabled unless you know that you need to turn
off one or more of them for your use case, such as to host a public website. Block Public
Access settings that you enable for the bucket are also enabled for all access points that
you create on the bucket. For more information about blocking public access, see Blocking
public access to your Amazon S3 storage.
7. (Optional) If you want to enable S3 Object Lock, do the following:
a) Choose Advanced settings, and read the message that appears.
Important

You can only enable S3 Object Lock for a bucket when you create it. If you enable Object
Lock for the bucket, you cannot disable it later. Enabling Object Lock also enables
versioning for the bucket. After you enable Object Lock for the bucket, you must configure
the Object Lock default retention and legal hold settings to protect new objects from being
deleted or overwritten. For more information, see Configuring S3 Object Lock using the
console.

b) If you want to enable Object Lock, enter enable in the text box and choose Confirm.
For more information about the S3 Object Lock feature, see Using S3 Object Lock.
Note

To create an Object Lock enabled bucket, you must have the following permissions:
s3:CreateBucket, s3:PutBucketVersioning and s3:PutBucketObjectLockConfiguration.
8. Choose Create bucket.
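
The same bucket can also be created programmatically. The following is a minimal sketch in Python with boto3, assuming a placeholder bucket name and the us-west-2 Region; it mirrors the console choices above (bucket owner enforced ownership and Block Public Access kept on).

# Minimal sketch: create a bucket with ACLs disabled and public access blocked (Python, boto3).
import boto3

region = "us-west-2"
s3 = boto3.client("s3", region_name=region)

s3.create_bucket(
    Bucket="doc-example-bucket",  # must be globally unique, 3-63 chars, lowercase
    CreateBucketConfiguration={"LocationConstraint": region},
    ObjectOwnership="BucketOwnerEnforced",  # requires a recent boto3 version
)

# Block Public Access is on by default for new buckets, but it can also be set explicitly.
s3.put_public_access_block(
    Bucket="doc-example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)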
Practical 2. Amazon CloudFront
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic
web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your
content through a worldwide network of data centers called edge locations. When a user
requests content that you're serving with CloudFront, the request is routed to the edge
location that provides the lowest latency (time delay), so that content is delivered with the
best possible performance.
If the content is already in the edge location with the lowest latency, CloudFront delivers it
immediately.
If the content is not in that edge location, CloudFront retrieves it from an origin that you've
defined—such as an Amazon S3 bucket, a MediaPackage channel, or an HTTP server (for
example, a web server) that you have identified as the source for the definitive version of
your content.
As an example, suppose that you're serving an image from a traditional web server, not
from CloudFront. For example, you might serve an image, sunsetphoto.png, using the
URL http://example.com/sunsetphoto.png.
Your users can easily navigate to this URL and see the image. But they probably don't know
that their request is routed from one network to another—through the complex collection
of interconnected networks that comprise the internet—until the image is found.
CloudFront speeds up the distribution of your content by routing each user request through
the AWS backbone network to the edge location that can best serve your content. Typically,
this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the
AWS network dramatically reduces the number of networks that your users' requests must
pass through, which improves performance. Users get lower latency—the time it takes to
load the first byte of the file—and higher data transfer rates.
You also get increased reliability and availability because copies of your files (also known
as objects) are now held (or cached) in multiple edge locations around the world.
Topics

• How you set up CloudFront to deliver content


• CloudFront use cases
• How CloudFront delivers content
• Locations and IP address ranges of CloudFront edge servers
• Accessing CloudFront
• How to get started with Amazon CloudFront
• AWS Identity and Access Management
• CloudFront pricing

How you set up CloudFront to deliver content


You create a CloudFront distribution to tell CloudFront where you want content to be
delivered from, and the details about how to track and manage content delivery. Then
CloudFront uses computers—edge servers—that are close to your viewers to deliver that
content quickly when someone wants to see it or use it.

How you configure CloudFront to deliver your content


1. You specify origin servers, like an Amazon S3 bucket or your own HTTP server, from
which CloudFront gets your files. These files are then distributed from CloudFront edge
locations all over the world.
2. An origin server stores the original, definitive version of your objects. If you're serving
content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server,
such as a web server. Your HTTP server can run on an Amazon Elastic Compute Cloud
(Amazon EC2) instance or on a server that you manage; these servers are also known
as custom origins.
3. You upload your files to your origin servers. Your files, also known as objects, typically
include web pages, images, and media files, but can be anything that can be served over
HTTP.
4. If you're using an Amazon S3 bucket as an origin server, you can make the objects in
your bucket publicly readable, so that anyone who knows the CloudFront URLs for your
objects can access them. You also have the option of keeping objects private and
controlling who accesses them. See Serving private content with signed URLs and signed
cookies.
5. You create a CloudFront distribution, which tells CloudFront which origin servers to get
your files from when users request the files through your web site or application. At the
same time, you specify details such as whether you want CloudFront to log all requests
and whether you want the distribution to be enabled as soon as it's created.
6. CloudFront assigns a domain name to your new distribution that you can see in the
CloudFront console, or that is returned in the response to a programmatic request, for
example, an API request. If you like, you can add an alternate domain name to use
instead.
7. CloudFront sends your distribution's configuration (but not your content) to all of
its edge locations or points of presence (POPs)— collections of servers in
geographically-dispersed data centers where CloudFront caches copies of your files.
8. As you develop your website or application, you use the domain name that CloudFront
provides for your URLs. For example, if CloudFront
returns d111111abcdef8.cloudfront.net as the domain name for your distribution, the
URL for logo.jpg in your Amazon S3 bucket (or in the root directory on an HTTP server)
is http://d111111abcdef8.cloudfront.net/logo.jpg.
9. Or you can set up CloudFront to use your own domain name with your distribution. In
that case, the URL might be http://www.example.com/logo.jpg.

10. Optionally, you can configure your origin server to add headers to the files, to
indicate how long you want the files to stay in the cache in CloudFront edge locations.
By default, each file stays in an edge location for 24 hours before it expires. The
minimum expiration time is 0 seconds; there isn't a maximum expiration time. For more
information, see Managing how long content stays in the cache (expiration).
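
As a sketch of the last step above, the following Python (boto3) snippet uploads a file to an S3 origin with a Cache-Control header so that CloudFront keeps it cached for one hour instead of the 24-hour default; the bucket and object names are placeholders.

# Minimal sketch: set a Cache-Control header on an object in the S3 origin (Python, boto3).
import boto3

s3 = boto3.client("s3")

with open("logo.jpg", "rb") as f:
    s3.put_object(
        Bucket="doc-example-bucket",   # the bucket configured as the CloudFront origin
        Key="logo.jpg",
        Body=f,
        ContentType="image/jpeg",
        CacheControl="max-age=3600",   # CloudFront honors this when caching the object
    )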

How CloudFront delivers content to your users


After you configure CloudFront to deliver your content, here's what happens when
users request your files:
1) A user accesses your website or application and requests one or more files, such as an
image file and an HTML file.
2) DNS routes the request to the CloudFront POP (edge location) that can best serve the
request, typically the nearest CloudFront POP in terms of latency, and routes the
request to that edge location.
3) In the POP, CloudFront checks its cache for the requested files. If the files are in the
cache, CloudFront returns them to the user. If the files are not in the cache, it does the
following:
⚫ CloudFront compares the request with the specifications in your distribution and
forwards the request for the files to your origin server for the corresponding file type,
for example, to your Amazon S3 bucket for image files and to your HTTP server for
HTML files.
⚫ The origin servers send the files back to the edge location.
⚫ As soon as the first byte arrives from the origin, CloudFront begins to forward the files to
the user. CloudFront also adds the files to the cache in the edge location for the next time
someone requests those files.

How CloudFront works with regional edge caches


CloudFront points of presence (POPs) (edge locations) make sure that popular content can
be served quickly to your viewers. CloudFront also has regional edge caches that bring
more of your content closer to your viewers, even when the content is not popular enough
to stay at a POP, to help improve performance for that content.
Regional edge caches help with all types of content, particularly content that tends to
become less popular over time. Examples include user-generated content, such as video,
photos, or artwork; e-commerce assets such as product photos and videos; and news and
event-related content that might suddenly find new popularity.
How regional caches work
Regional edge caches are CloudFront locations that are deployed globally, close to your
viewers. They're located between your origin server and the POPs—global edge locations
that serve content directly to viewers. As objects become less popular, individual POPs
might remove those objects to make room for more popular content. Regional edge caches
have a larger cache than an individual POP, so objects remain in the cache longer at the
nearest regional edge cache location. This helps keep more of your content closer to your
viewers, reducing the need for CloudFront to go back to your origin server, and improving
overall performance for viewers.
When a viewer makes a request on your website or through your application, DNS routes
the request to the POP that can best serve the user’s request. This location is typically the
nearest CloudFront edge location in terms of latency. In the POP, CloudFront checks its
cache for the requested files. If the files are in the cache, CloudFront returns them to the
user. If the files are not in the cache, the POPs go to the nearest regional edge cache to
fetch the object.
In the regional edge cache location, CloudFront again checks its cache for the requested
files. If the files are in the cache, CloudFront forwards the files to the POP that requested
them. As soon as the first byte arrives from the regional edge cache location, CloudFront begins
to forward the files to the user. CloudFront also adds the files to the cache in the POP for
the next time someone requests those files.
For files not cached at either the POP or the regional edge cache location, CloudFront
compares the request with the specifications in your distributions and forwards the request
for your files to the origin server. After your origin server sends the files back to the
regional edge cache location, they are forwarded to the POP, and CloudFront forwards the
files to the user. In this case, CloudFront also adds the files to the cache in the regional edge
cache location in addition to the POP for the next time a viewer requests those files. This
makes sure that all of the POPs in a region share a local cache, eliminating multiple requests
to origin servers. CloudFront also keeps persistent connections with origin servers so files
are fetched from the origins as quickly as possible.
Practical 3. AWS Key Management Service
AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the
cryptographic keys that are used to protect your data. AWS KMS uses hardware security modules (HSM) to protect
and validate your AWS KMS keys under the FIPS 140-2 Cryptographic Module Validation Program, except in the
China (Beijing) and China (Ningxia) Regions.

AWS KMS integrates with most other AWS services that encrypt your data. AWS KMS also integrates with AWS
CloudTrail to log use of your KMS keys for auditing, regulatory, and compliance needs.

You can use the AWS KMS API to create and manage KMS keys and special features, such as custom key stores, and
use KMS keys in cryptographic operations. For detailed information, see the AWS Key Management Service API
Reference.

You can create and manage your AWS KMS keys:

⚫ Create, edit, and view symmetric and asymmetric KMS keys

⚫ Control access to your KMS keys by using key policies, IAM policies, and grants. AWS KMS supports attribute-
based access control (ABAC). You can also refine policies by using condition keys.

⚫ Create, delete, list, and update aliases, which are friendly names for your KMS keys. You can also use aliases to control
access to your KMS keys.

⚫ Tag your KMS keys for identification, automation, and cost tracking. You can also use tags to control access to
your KMS keys.

⚫ Enable and disable KMS keys.

⚫ Enable and disable automatic rotation of the cryptographic material in a KMS key.

⚫ Delete KMS keys to complete the key lifecycle.

⚫ You can use your KMS keys in cryptographic operations. For examples, see Programming the AWS KMS API.

⚫ Encrypt, decrypt, and re-encrypt data with symmetric or asymmetric KMS keys.

⚫ Sign and verify messages with asymmetric KMS keys.

⚫ Generate exportable symmetric data keys and asymmetric data key pairs.

⚫ Generate random numbers suitable for cryptographic applications.

⚫ You can use the advanced features of AWS KMS.


⚫ Import cryptographic material into a KMS key

⚫ Create KMS keys in your own custom key store backed by an AWS CloudHSM cluster

⚫ Connect directly to AWS KMS through a private endpoint in your VPC

⚫ Use hybrid post-quantum TLS to provide forward-looking encryption in transit for the data that you send to
AWS KMS.

By using AWS KMS, you gain more control over access to data you encrypt. You can use the key management and
cryptographic features directly in your applications or through AWS services integrated with AWS KMS. Whether you
write applications for AWS or use AWS services, AWS KMS enables you to maintain control over who can use your
AWS KMS keys and gain access to your encrypted data.
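
A minimal sketch in Python with boto3 of the basic workflow (create a KMS key, encrypt, decrypt, and generate a data key) follows; the key description and alias name are placeholders invented for this practical.

# Minimal sketch: create a symmetric KMS key, then encrypt and decrypt data (Python, boto3).
import boto3

kms = boto3.client("kms")

# Create a symmetric encryption key and give it a friendly alias.
key = kms.create_key(Description="Lab key for the KMS practical")
key_id = key["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/cws-lab-key", TargetKeyId=key_id)

# Encrypt and decrypt directly with the KMS key (suitable for small payloads).
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"hello kms")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext)  # b'hello kms'

# For larger data, generate an exportable data key and encrypt locally (envelope encryption).
data_key = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")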

AWS KMS integrates with AWS CloudTrail, a service that delivers log files to your designated Amazon S3 bucket. By
using CloudTrail you can monitor and investigate how and when your KMS keys have been used and who used them.

AWS KMS in AWS Regions

The AWS Regions in which AWS KMS is supported are listed in AWS Key Management Service Endpoints and Quotas.
If an AWS KMS feature is not supported in an AWS Region that AWS KMS supports, the regional difference is
described in the topic about the feature.

AWS KMS pricing

As with other AWS products, using AWS KMS does not require contracts or minimum purchases. For more
information about AWS KMS pricing, see AWS Key Management Service Pricing.

Service level agreement

AWS Key Management Service is backed by a service level agreement that defines our service availability policy.
Practical 4. Amazon Elasticsearch Service
Elasticsearch is a distributed search and analytics engine built on Apache Lucene. Since its
release in 2010, Elasticsearch has quickly become the most popular search engine and is
commonly used for log analytics, full-text search, security intelligence, business analytics,
and operational intelligence use cases.

On January 21, 2021, Elastic NV announced that they would change their software licensing
strategy and not release new versions of Elasticsearch and Kibana under the permissive
Apache License, Version 2.0 (ALv2) license. Instead, new versions of the software will be
offered under the Elastic license, with source code available under the Elastic License or
SSPL. These licenses are not open source and do not offer users the same freedoms. To
ensure that the open source community and our customers continue to have a secure,
high-quality, fully open source search and analytics suite, we introduced
the OpenSearch project, a community-driven, ALv2 licensed fork of open source
Elasticsearch and Kibana.

How does Elasticsearch work?

You can send data in the form of JSON documents to Elasticsearch using the API or
ingestion tools such as Logstash and Amazon Kinesis Firehose. Elasticsearch automatically
stores the original document and adds a searchable reference to the document in the
cluster’s index. You can then search and retrieve the document using the Elasticsearch API.
You can also use Kibana, a visualization tool, with Elasticsearch to visualize your data and
build interactive dashboards.
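
A minimal sketch of this request/response flow, using Python and the requests library against the Elasticsearch REST API, is shown below; the domain endpoint, index name, and document are placeholders, and a real Amazon OpenSearch Service domain would also require request authentication (for example, SigV4 signing or basic auth).

# Minimal sketch: index a JSON document and search for it over the REST API (Python, requests).
import requests

endpoint = "https://search-my-domain.us-east-1.es.amazonaws.com"  # placeholder domain

# Index a document: the engine stores it and adds it to the searchable index "movies".
doc = {"title": "Moneyball", "director": "Bennett Miller", "year": 2011}
requests.put(f"{endpoint}/movies/_doc/1", json=doc)

# Full-text search across the index.
resp = requests.get(f"{endpoint}/movies/_search", params={"q": "miller"})
print(resp.json()["hits"]["total"])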

You can run Apache 2.0 licensed Elasticsearch versions (up until version 7.10.2 & Kibana
7.10.2) on-premises, on Amazon EC2, or on Amazon OpenSearch Service (successor to
Amazon Elasticsearch Service). With on-premises or Amazon EC2 deployments, you are
responsible for installing Elasticsearch and other necessary software, provisioning
infrastructure, and managing the cluster. Amazon OpenSearch Service, on the other hand,
is a fully managed service, so you don’t have to worry about time-consuming cluster
management tasks such as hardware provisioning, software patching, failure recovery,
backups, and monitoring.

Elasticsearch benefits
FAST TIME-TO-VALUE
Elasticsearch offers simple REST-based APIs and an HTTP interface, and uses schema-free
JSON documents, making it easy to get started and quickly build applications for a variety of
use-cases.
HIGH PERFORMANCE
The distributed nature of Elasticsearch enables it to process large volumes of data in
parallel, quickly finding the best matches for your queries.
COMPLEMENTARY TOOLING AND PLUGINS
Elasticsearch comes integrated with Kibana, a popular visualization and reporting tool. It
also offers integration with Beats and Logstash, which enable you to easily transform source
data and load it into your Elasticsearch cluster. You can also use a number of open-source
Elasticsearch plugins such as language analyzers and suggesters to add rich functionality to
your applications.
NEAR REAL-TIME OPERATIONS
Elasticsearch operations such as reading or writing data usually take less than a second to
complete. This lets you use Elasticsearch for near real-time use cases such as application
monitoring and anomaly detection.
EASY APPLICATION DEVELOPMENT
Elasticsearch provides support for various languages including Java, Python, PHP, JavaScript,
Node.js, Ruby, and many more.

Getting started with Elasticsearch on AWS

Managing and scaling Elasticsearch can be difficult and requires expertise in Elasticsearch
setup and configuration. To make it easy for customers to run open-source Elasticsearch,
AWS offers Amazon OpenSearch Service to perform interactive log analytics, real-time
application monitoring, website search, and more.
Practical 5. Introduction to Amazon DynamoDB
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and
predictable performance with seamless scalability. DynamoDB lets you offload the
administrative burdens of operating and scaling a distributed database so that you don't
have to worry about hardware provisioning, setup and configuration, replication, software
patching, or cluster scaling. DynamoDB also offers encryption at rest, which eliminates the
operational burden and complexity involved in protecting sensitive data. For more
information, see DynamoDB Encryption at Rest.
With DynamoDB, you can create database tables that can store and retrieve any amount of
data and serve any level of request traffic. You can scale up or scale down your tables'
throughput capacity without downtime or performance degradation. You can use the AWS
Management Console to monitor resource utilization and performance metrics.
DynamoDB provides on-demand backup capability. It allows you to create full backups of
your tables for long-term retention and archival for regulatory compliance needs. For more
information, see Using On-Demand Backup and Restore for DynamoDB.
You can create on-demand backups and enable point-in-time recovery for your Amazon
DynamoDB tables. Point-in-time recovery helps protect your tables from accidental write or
delete operations. With point-in-time recovery, you can restore a table to any point in time
during the last 35 days. For more information, see Point-in-Time Recovery: How It Works.
DynamoDB allows you to delete expired items from tables automatically to help you reduce
storage usage and the cost of storing data that is no longer relevant. For more information,
see Expiring Items By Using DynamoDB Time to Live (TTL).
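A minimal sketch in Python with boto3 of creating a table and writing and reading an item follows; the table name and key schema are illustrative choices, not something prescribed above.

# Minimal sketch: create an on-demand DynamoDB table and read/write an item (Python, boto3).
import boto3

dynamodb = boto3.resource("dynamodb")

# Create an on-demand table keyed by a single partition key.
table = dynamodb.create_table(
    TableName="Music",
    KeySchema=[{"AttributeName": "SongTitle", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "SongTitle", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Write and read an item.
table.put_item(Item={"SongTitle": "Call Me Today", "Artist": "No One You Know"})
print(table.get_item(Key={"SongTitle": "Call Me Today"})["Item"])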
High Availability and Durability
DynamoDB automatically spreads the data and traffic for your tables over a sufficient
number of servers to handle your throughput and storage requirements, while maintaining
consistent and fast performance. All of your data is stored on solid-state disks (SSDs) and is
automatically replicated across multiple Availability Zones in an AWS Region, providing
built-in high availability and data durability. You can use global tables to keep DynamoDB
tables in sync across AWS Regions. For more information, see Global Tables: Multi-Region
Replication with DynamoDB.

Getting Started with DynamoDB


We recommend that you begin by reading the following sections:

Amazon DynamoDB: How It Works—To learn essential DynamoDB concepts.


Setting Up DynamoDB—To learn how to set up DynamoDB (the downloadable version or
the web service).
Accessing DynamoDB—To learn how to access DynamoDB using the console, AWS CLI, or
API.
To get started quickly with DynamoDB, see Getting Started with DynamoDB and AWS SDKs.
To learn more about application development, see the following:
Programming with DynamoDB and the AWS SDKs
Working with Tables, Items, Queries, Scans, and Indexes
To quickly find recommendations for maximizing performance and minimizing throughput
costs, see Best Practices for Designing and Architecting with DynamoDB. To learn how to
tag DynamoDB resources, see Adding Tags and Labels to Resources.
For best practices, how-to guides, and tools, see Amazon DynamoDB resources.
You can use AWS Database Migration Service (AWS DMS) to migrate data from a relational
database or MongoDB to a DynamoDB table. For more information, see the AWS Database
Migration Service User Guide.
To learn how to use MongoDB as a migration source, see Using MongoDB as a Source for
AWS Database Migration Service. To learn how to use DynamoDB as a migration target,
see Using an Amazon DynamoDB Database as a Target for AWS Database Migration Service.
Practical 6. Introduction to Amazon API Gateway
Amazon API Gateway is a fully managed service that makes it easy for developers to create,
publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for
applications to access data, business logic, or functionality from your backend services.
Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time
two-way communication applications. API Gateway supports containerized and serverless
workloads, as well as web applications.

API Gateway handles all the tasks involved in accepting and processing up to hundreds of
thousands of concurrent API calls, including traffic management, CORS support,
authorization and access control, throttling, monitoring, and API version management. API
Gateway has no minimum fees or startup costs. You pay for the API calls you receive and
the amount of data transferred out and, with the API Gateway tiered pricing model, you
can reduce your cost as your API usage scales.

API Types

RESTful APIs

Build RESTful APIs optimized for serverless workloads and HTTP backends using HTTP APIs.
HTTP APIs are the best choice for building APIs that only require API proxy functionality. If
your APIs require API proxy functionality and API management features in a single solution,
API Gateway also offers REST APIs.

WEBSOCKET APIs

Build real-time two-way communication applications, such as chat apps and streaming
dashboards, with WebSocket APIs. API Gateway maintains a persistent connection to
handle message transfer between your backend service and your clients.
Benefits

Efficient API development

Run multiple versions of the same API simultaneously with API Gateway, allowing you to
quickly iterate, test, and release new versions. You pay for calls made to your APIs and data
transfer out and there are no minimum fees or upfront commitments.

Performance at any scale

Provide end users with the lowest possible latency for API requests and responses by taking
advantage of our global network of edge locations using Amazon CloudFront. Throttle
traffic and authorize API calls to ensure that backend operations withstand traffic spikes
and backend systems are not unnecessarily called.

Cost savings at scale

API Gateway provides a tiered pricing model for API requests. With an API Requests price as
low as $0.90 per million requests at the highest tier, you can decrease your costs as your
API usage increases per region across your AWS accounts.

Easy monitoring

Monitor performance metrics and information on API calls, data latency, and error rates
from the API Gateway dashboard, which allows you to visually monitor calls to your services
using Amazon CloudWatch.

Flexible security controls

Authorize access to your APIs with AWS Identity and Access Management (IAM) and
Amazon Cognito. If you use OAuth tokens, API Gateway offers native OIDC and OAuth2
support. To support custom authorization requirements, you can execute a Lambda
authorizer from AWS Lambda.

RESTful API options

Create RESTful APIs using HTTP APIs or REST APIs. HTTP APIs are the best way to build APIs
for a majority of use cases—they're up to 71% cheaper than REST APIs. If your use case
requires API proxy functionality and management features in a single solution, you can use
REST APIs.
Getting started with API Gateway

In this getting started exercise, you create a serverless API. Serverless APIs let you focus on
your applications, instead of spending time provisioning and managing servers. This
exercise takes less than 20 minutes to complete, and is possible within the AWS Free Tier.

First, you create a Lambda function using the AWS Lambda console. Next, you create an
HTTP API using the API Gateway console. Then, you invoke your API.

When you invoke your HTTP API, API Gateway routes the request to your Lambda function.
Lambda runs the Lambda function and returns a response to API Gateway. API Gateway
then returns a response to you.

(Figure: architectural overview of the API that you create in this getting started guide. Clients
use an API Gateway HTTP API to invoke a Lambda function. API Gateway returns the Lambda
function's response to clients.)

To complete this exercise, you need an AWS account and an AWS Identity and Access
Management user with console access. For more information, see Prerequisites for getting
started with API Gateway.

Topics

Step 1: Create a Lambda function

Step 2: Create an HTTP API

Step 3: Test your API

(Optional) Step 4: Clean up

Next steps

Step 1: Create a Lambda function


You use a Lambda function for the backend of your API. Lambda runs your code only when
needed and scales automatically, from a few requests per day to thousands per second.

For this example, you use the default Node.js function from the Lambda console.

To create a Lambda function

Sign in to the Lambda console at https://console.aws.amazon.com/lambda.

Choose Create function.

For Function name, enter my-function.

Choose Create function.

The example function returns a 200 response to clients, and the text Hello from Lambda!.

You can modify your Lambda function, as long as the function's response aligns with the
format that API Gateway requires.

The default Lambda function code should look similar to the following:

exports.handler = async (event) => {
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response;
};

Step 2: Create an HTTP API


Next, you create an HTTP API. API Gateway also supports REST APIs and WebSocket APIs,
but an HTTP API is the best choice for this exercise. HTTP APIs have lower latency and lower
cost than REST APIs. WebSocket APIs maintain persistent connections with clients for full-
duplex communication, which isn't required for this example.

The HTTP API provides an HTTP endpoint for your Lambda function. API Gateway routes
requests to your Lambda function, and then returns the function's response to clients.

To create an HTTP API

Sign in to the API Gateway console at https://console.aws.amazon.com/apigateway.

Do one of the following:

To create your first API, for HTTP API, choose Build.

If you've created an API before, choose Create API, and then choose Build for HTTP API.

For Integrations, choose Add integration.

Choose Lambda.

For Lambda function, enter my-function.

For API name, enter my-http-api.

Choose Next.

Review the route that API Gateway creates for you, and then choose Next.

Review the stage that API Gateway creates for you, and then choose Next.

Choose Create.

Now you've created an HTTP API with a Lambda integration that's ready to receive requests
from clients.
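
The same quick-create flow can also be scripted. The following is a minimal sketch in Python with boto3; the function ARN, account ID, and Region are placeholders, and unlike the console flow, the script also has to grant API Gateway permission to invoke the function.

# Minimal sketch: quick-create an HTTP API with a Lambda integration (Python, boto3).
import boto3

apigw = boto3.client("apigatewayv2")
lam = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-2:123456789012:function:my-function"  # placeholder

# Quick-create an HTTP API with a default Lambda proxy integration and default stage.
api = apigw.create_api(
    Name="my-http-api",
    ProtocolType="HTTP",
    Target=function_arn,
)
print(api["ApiEndpoint"])

# Allow API Gateway to invoke the Lambda function.
lam.add_permission(
    FunctionName="my-function",
    StatementId="apigateway-invoke",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
)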
Step 3: Test your API

Next, you test your API to make sure that it's working. For simplicity, use a web browser to
invoke your API.

To test your API

Sign in to the API Gateway console at https://console.aws.amazon.com/apigateway.

Choose your API.

Note your API's invoke URL.

After you create your API, the console shows your API's invoke URL

Copy your API's invoke URL, and enter it in a web browser. Append the name of your
Lambda function to your invoke URL to call your Lambda function. By default, the API
Gateway console creates a route with the same name as your Lambda function, my-function.

The full URL should look like https://abcdef123.execute-api.us-east-2.amazonaws.com/my-function.

Your browser sends a GET request to the API.

Verify your API's response. You should see the text "Hello from Lambda!" in your browser.
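
The same check can be scripted instead of using a browser; a minimal sketch in Python with the requests library, assuming the placeholder invoke URL shown above:

# Minimal sketch: invoke the HTTP API from a script (Python, requests).
import requests

url = "https://abcdef123.execute-api.us-east-2.amazonaws.com/my-function"  # placeholder invoke URL
resp = requests.get(url)
print(resp.status_code, resp.text)  # expect 200 and "Hello from Lambda!"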

(Optional) Step 4: Clean up

To prevent unnecessary costs, delete the resources that you created as part of this getting
started exercise. The following steps delete your HTTP API, your Lambda function, and
associated resources.

To delete an HTTP API


Sign in to the API Gateway console at https://console.aws.amazon.com/apigateway.

On the APIs page, select an API. Choose Actions, and then choose Delete.

Choose Delete.

To delete a Lambda function

Sign in to the Lambda console at https://console.aws.amazon.com/lambda.

On the Functions page, select a function. Choose Actions, and then choose Delete.

Choose Delete.

To delete a Lambda function's log group

In the Amazon CloudWatch console, open the Log groups page.

On the Log groups page, select the function's log group (/aws/lambda/my-function).
Choose Actions, and then choose Delete log group.

Choose Delete.

To delete a Lambda function's execution role

In the AWS Identity and Access Management console, open the Roles page.

Select the function's role, for example, my-function-31exxmpl.

Choose Delete role.

Choose Yes, delete.

You can automate the creation and cleanup of AWS resources by using AWS
CloudFormation or AWS SAM. For example AWS CloudFormation templates, see example
AWS CloudFormation templates.

Next steps
For this example, you used the AWS Management Console to create a simple HTTP API. The
HTTP API invokes a Lambda function and returns a response to clients.

The following are next steps as you continue to work with API Gateway.

Configure additional types of API integrations, including:

HTTP endpoints

Private resources in a VPC, such as Amazon ECS services

AWS services such as Amazon Simple Queue Service, AWS Step Functions, and Kinesis Data
Streams

Control access to your APIs

Enable logging for your APIs

Configure throttling for your APIs

Configure custom domains for your APIs

To get help with Amazon API Gateway from the community, see the API Gateway
Discussion Forum. When you enter this forum, AWS might require you to sign in.

To get help with API Gateway directly from AWS, see the support options on the AWS
Support page.
Practical 7. Amazon Machine Learning
Make accurate predictions, get deeper insights from your data, reduce operational
overhead, and improve customer experience with AWS machine learning (ML). AWS helps
you at every stage of your ML adoption journey with the most comprehensive set of
artificial intelligence (AI) and ML services, infrastructure, and implementation resources.

1. Build with a proven leader


Solve real-world business problems in any industry and innovate with confidence. Join more
than 100,000 AWS customers building on 20+ years of experience at Amazon.

2. Tailor ML to your business needs


Address common business problems to improve customer experience, optimize business
processes, and accelerate innovation. Use ready-made, purpose-built AI services or your
own models with AWS ML services.

3. Accelerate your ML adoption


Get the support you need along every stage of your ML journey. Kick off your proof of
concept with AWS experts, work with 80+ competency partners, and upskill your teams
with trainings and hands-on tutorials.

Use cases

Explore the key use cases of AI/ML to improve customer experience, optimize business
operations, and accelerate innovation.

Add intelligence to the contact center

Enhance your customer service experience and reduce costs by integrating machine
learning into your contact center. Through intelligent chat and voice bots, voice sentiment
analysis, live-call analytics and agent assist, post-call analytics, and more, personalize every
customer interaction and improve overall customer satisfaction.
ChartSpan, the largest chronic care management service provider in the U.S., decreased
cost by 80% and increased staff utilization by 12%.

Identify fraudulent online activities


Improve profitability by automating the detection of potentially fraudulent online activity,
such as payment fraud and fake accounts, using machine learning and your own unique
data.
Truevo, a payment service provider, was able to build a fraud detection model in just 30
minutes and is operating with greater confidence to catch bad actors faster.

Analyze media content and discover new insights
Create new insights from video, audio, images and text by applying machine learning to
better manage and analyze content. Automate key functions of the media workflow to
accelerate the search and discovery, content localization, compliance, monetization, and
more.
SmugMug, a global image and video sharing platform, is able to find and properly flag
unwanted content at scale, enabling a safe and welcoming experience for its community.

Forecast future values and detect anomalies in your business metrics
Accurately forecast sales, financial, and demand data to streamline decision-making.
Automatically identify anomalies in your business metrics and their root cause to stay
ahead of the game.
Domino’s Pizza Enterprises Ltd, the largest pizza chain in Australia, gets orders to customers
faster by predicting what pizzas would be ordered next.

Improve developer operations with intelligent insights


Detect deviation from best practices and other common coding bugs and maintain a high-
quality customer experience by reducing deployment risks and facilitating faster delivery of
new features. Empower developers to evaluate operational data and leverage intelligent
insights to reduce the time and effort spent analyzing and resolving issues.
Atlassian, a collaboration software provider, reduced investigation time from days to hours
and sometimes minutes, enabling them to focus on delivering differentiated customer
capabilities.

Modernize your machine learning development


Accelerate innovation while reducing cost by modernizing machine learning development
lifecycle through scalable infrastructure, integrated tooling, healthy practices for
responsible machine learning use, tool choices accessible to developers of all skill levels,
and efficient resource management.
Intuit, the leading platform for managing personal, business, and tax finances, modernized
its machine learning platform and saved tax filers over 25,000 hours.
Practical 8. Introduction to AWS Database Migration Service
AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly
and securely. The source database remains fully operational during the migration,
minimizing downtime to applications that rely on the database. The AWS Database
Migration Service can migrate your data to and from the most widely used commercial and
open-source databases.

AWS Database Migration Service supports homogeneous migrations such as Oracle to
Oracle, as well as heterogeneous migrations between different database platforms, such as
Oracle or Microsoft SQL Server to Amazon Aurora. With AWS Database Migration Service,
you can also continuously replicate data with low latency from any supported source to any
supported target. For example, you can replicate from multiple sources to Amazon S3 to
build a highly available and scalable data lake solution. You can also consolidate databases
into a petabyte-scale data warehouse by streaming data to Amazon Redshift. Learn
more about the supported source and target databases.

When migrating databases to Amazon Aurora, Amazon Redshift, Amazon DynamoDB, or
Amazon DocumentDB (with MongoDB compatibility), you can use AWS DMS free for six
months.

Benefits

Simple to use

AWS Database Migration Service is simple to use. There is no need to install any drivers or
applications, and it does not require changes to the source database in most cases. You can
begin a database migration with just a few clicks in the AWS Management Console. Once
the migration has started, DMS manages all the complexities of the migration process
including automatically replicating data changes that occur in the source database during
the migration process. You can also use this service for continuous data replication with the
same simplicity.

Minimal downtime

AWS Database Migration Service helps you migrate your databases to AWS with virtually no
downtime. All data changes to the source database that occur during the migration are
continuously replicated to the target, allowing the source database to be fully operational
during the migration process. After the database migration is complete, the target database
will remain synchronized with the source for as long as you choose, allowing you to
switchover the database at a convenient time.

Supports widely used databases

AWS Database Migration Service can migrate your data to and from most of the widely
used commercial and open source databases. It supports homogeneous migrations such as
Oracle to Oracle, as well as heterogeneous migrations between different database
platforms, such as Oracle to Amazon Aurora. Migrations can be from on-premises
databases to Amazon RDS or Amazon EC2, databases running on EC2 to RDS, or vice versa,
as well as from one RDS database to another RDS database. It can also move data between
SQL, NoSQL, and text-based targets.

Low cost

AWS Database Migration Service is a low-cost service. You pay only for the compute
resources used during the migration process and any additional log storage. Migrating a
terabyte-size database can be done for as little as $3. This applies to both homogeneous
and heterogeneous migrations of any supported databases. This is in stark contrast to
conventional database migration methods that can be very expensive.

Ongoing replication

You can set up a DMS task for either a one-time migration or ongoing replication. An
ongoing replication task keeps your source and target databases in sync. Once set up, the
ongoing replication task continuously applies source changes to the target with minimal
latency. All DMS features, such as data validation and transformations, are available for any
replication task.
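
As a rough sketch of what this looks like in code, assuming the endpoints and replication
instance from the earlier example already exist (their ARNs below are placeholders), an
ongoing task simply uses the 'full-load-and-cdc' migration type, and its status can be polled
afterwards:

import boto3

dms = boto3.client('dms')

# Placeholder ARNs for resources created earlier
source_arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE'
target_arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:TARGET'
instance_arn = 'arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE'

# 'full-load-and-cdc' copies existing data, then keeps applying source changes (CDC)
task = dms.create_replication_task(
    ReplicationTaskIdentifier='mysql-to-aurora-ongoing',
    SourceEndpointArn=source_arn,
    TargetEndpointArn=target_arn,
    ReplicationInstanceArn=instance_arn,
    MigrationType='full-load-and-cdc',
    TableMappings='{"rules": [{"rule-type": "selection", "rule-id": "1", '
                  '"rule-name": "all", "object-locator": '
                  '{"schema-name": "%", "table-name": "%"}, "rule-action": "include"}]}')

task_arn = task['ReplicationTask']['ReplicationTaskArn']
dms.get_waiter('replication_task_ready').wait(
    Filters=[{'Name': 'replication-task-arn', 'Values': [task_arn]}])
dms.start_replication_task(ReplicationTaskArn=task_arn,
                           StartReplicationTaskType='start-replication')

# Poll the task; it stays in the 'running' state while changes are replicated
status = dms.describe_replication_tasks(
    Filters=[{'Name': 'replication-task-arn', 'Values': [task_arn]}])
print(status['ReplicationTasks'][0]['Status'])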

Reliable

The AWS Database Migration Service is highly resilient and self-healing. It continually
monitors source and target databases, network connectivity, and the replication instance.
In case of interruption, it automatically restarts the process and continues the migration
from where it stopped. The Multi-AZ option provides high availability for database
migration and continuous data replication by enabling redundant replication instances.

Use cases

Homogeneous Database Migrations

In homogeneous database migrations, the source and target database engines are the same
or are compatible like Oracle to Amazon RDS for Oracle, MySQL to Amazon Aurora, MySQL
to Amazon RDS for MySQL, or Microsoft SQL Server to Amazon RDS for SQL Server. Since
the schema structure, data types, and database code are compatible between the source
and target databases, this kind of migration is a one-step process. You create a migration
task with connections to the source and target databases, and then start the migration with
the click of a button. AWS Database Migration Service takes care of the rest. The source
database can be located in your own premises outside of AWS, running on an Amazon EC2
instance, or it can be an Amazon RDS database. The target can be a database in Amazon
EC2 or Amazon RDS.
Verizon is a global leader delivering innovative communications and technology solutions.
“Verizon is helping our customers build a better, more connected life. As part of this
journey, we are undergoing a major transformation in our database management approach,
moving away from expensive, legacy commercial database solutions to more efficient and
cost-effective options. Testing of Amazon Aurora PostgreSQL showed better performance
over standard PostgreSQL residing on Amazon EC2 instances, and the AWS Database
Migration Service and Schema Conversion Tool were found effective at identifying areas for
data-conversion that required special attention during migration.” - Shashidhar Sureban,
Associate Director, Database Engineering, Verizon.

Heterogeneous Database Migrations

In heterogeneous database migrations, the source and target database engines are
different, as in Oracle to Amazon Aurora, Oracle to PostgreSQL, or Microsoft SQL Server to
MySQL migrations. In this case, the schema structure, data types, and
database code of source and target databases can be quite different, requiring a schema
and code transformation before the data migration starts. That makes heterogeneous
migrations a two-step process. First use the AWS Schema Conversion Tool to convert the
source schema and code to match that of the target database, and then use the AWS
Database Migration Service to migrate data from the source database to the target
database. All the required data type conversions will automatically be done by the AWS
Database Migration Service during the migration. The source database can be located in
your own premises outside of AWS, running on an Amazon EC2 instance, or it can be an
Amazon RDS database. The target can be a database in Amazon EC2 or Amazon RDS.
Trimble is a global leader in telematics solutions. They had a significant investment in on-
premises hardware in North America and Europe running Oracle databases. Rather than
refresh the hardware and renew the licenses, they opted to migrate the databases to AWS.
They ran the AWS Schema Conversion Tool to analyze the effort, and then migrated their
complete database to a managed PostgreSQL service on Amazon RDS. "Our projections are
that we will pay about one quarter of what we were paying in our private infrastructure." -
Todd Hofert, Director of Infrastructure Operations, Trimble.

Development and Test

AWS Database Migration Service can be used to migrate data both into and out of the cloud
for development purposes. There are two common scenarios. The first is to deploy
development, test or staging systems on AWS, to take advantage of the cloud’s scalability
and rapid provisioning. This way, developers and testers can use copies of real production
data, and can copy updates back to the on-premises production system. The second
scenario is when development systems are on-premises (often on personal laptops), and
you migrate a current copy of an AWS Cloud production database to these on-premises
systems, either once or continuously. This avoids disruption to existing DevOps processes
while ensuring an up-to-date representation of your production system.

Database Consolidation

You can use AWS Database Migration Service to consolidate multiple source databases into
a single target database. This can be done for homogeneous and heterogeneous migrations,
and you can use this feature with all supported database engines. The source databases can
be located on your own premises outside of AWS, running on Amazon EC2 instances, or in
Amazon RDS. The source databases can also be spread across different locations. For
example, one source database can be on your own premises outside of AWS, a second can
run on Amazon EC2, and a third can be an Amazon RDS database. The target can be a
database in Amazon EC2 or Amazon RDS.

Continuous Data Replication

You can use AWS Database Migration Service to perform continuous data replication.
Continuous data replication has a multitude of use cases including Disaster Recovery
instance synchronization, geographic database distribution and Dev/Test environment
synchronization. You can use DMS for both homogeneous and heterogeneous data
replication for all supported database engines. The source or destination databases can be
located on your own premises outside of AWS, running on an Amazon EC2 instance, or in
Amazon RDS. You can replicate data from a single database to one or more
target databases or consolidate and replicate data from multiple databases to one or more
target databases.
Practical 9. AWS IoT
AWS offers Internet of Things (IoT) services and solutions to connect and manage billions of
devices. Collect, store, and analyze IoT data for industrial, consumer, commercial, and
automotive workloads.

Accelerate innovation with the most complete set of IoT services

Scale, move quickly, and save money with AWS IoT. From secure device connectivity to
management, storage, and analytics, AWS IoT has the broad and deep services you need to
build complete solutions.

Secure your IoT applications from the cloud to the edge

AWS IoT services address every layer of your application and device security. Safeguard
your device data with preventative mechanisms, like encryption and access control, and
consistently audit and monitor your configurations with AWS IoT Device Defender.

Build intelligent IoT solutions with superior AI and ML integration

Create models in the cloud and deploy them to devices with up to 25x better performance
and less than 1/10th the runtime footprint. AWS brings artificial intelligence (AI), machine
learning (ML), and IoT together to make devices more intelligent.

Scale easily and reliably

Build innovative, differentiated solutions on secure, proven, and elastic cloud infrastructure
that scales to billions of devices and trillions of messages. AWS IoT easily integrates with
other AWS services.
AWS IoT services
Device software

Connect your devices and operate them at the edge.

FreeRTOS
Deploy an operating system for microcontrollers that makes small, low-power edge devices easy to manage

AWS IoT Greengrass


Build, deploy, and manage intelligent IoT applications at the edge with an open-source edge runtime and cloud service

AWS IoT ExpressLink


Quickly transform any embedded device into an IoT-connected device with minimal design effort using these hardware
modules.

Connectivity and control services

Secure, control, and manage your devices from the cloud.

AWS IoT Core


Connect IoT devices to AWS without the need to provision or manage servers

AWS IoT Device Defender


Continuously audit your IoT configurations and secure your fleet of IoT devices

AWS IoT Device Management


Easily register, organize, monitor, and remotely manage your IoT devices at scale
AWS IoT FleetWise (Preview)
Easily collect, transform, and transfer vehicle data to the cloud at scale

Analytics services

Work with IoT data faster to extract value from your data.

AWS IoT SiteWise


Collect and analyze industrial data at scale and make better, data-driven decisions

AWS IoT Events


Easily detect and respond to events from many IoT sensors and applications

AWS IoT Analytics


Run analytics on volumes of IoT data easily—without building an analytics platform

AWS IoT TwinMaker (Preview)


Optimize operations by easily creating digital twins of real-world systems
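
To make the connectivity services listed above concrete, here is a minimal boto3 sketch: it
registers a device ("thing") in AWS IoT Core, looks up the account's data endpoint, and
publishes a test telemetry message over the data plane. The thing name, topic, and payload
are illustrative; a real device would normally connect with the AWS IoT Device SDK over
MQTT using X.509 certificates rather than with boto3.

import json
import boto3

iot = boto3.client('iot')  # control plane: registry, policies, certificates

# Register a device in the AWS IoT Core registry (name is illustrative)
iot.create_thing(thingName='demo-temperature-sensor')

# Look up the account-specific data endpoint and create a data-plane client for it
endpoint = iot.describe_endpoint(endpointType='iot:Data-ATS')['endpointAddress']
iot_data = boto3.client('iot-data', endpoint_url=f'https://{endpoint}')

# Publish a test message; subscribers to this topic (IoT rules, applications) receive it
iot_data.publish(
    topic='sensors/demo-temperature-sensor/telemetry',
    qos=1,
    payload=json.dumps({'temperature_c': 22.5, 'humidity_pct': 41}).encode('utf-8'))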

Use cases

Optimize industrial operations


Create rich and scalable industrial IoT applications to remotely monitor operations, improve quality, and reduce
unplanned downtime.

Build differentiated consumer products

Develop connected consumer applications for home automation, home security and monitoring, and home networking.

Reinvent smart buildings and cities

Build commercial IoT applications that solve challenges in infrastructure, health, and the environment.

Transform mobility

Deliver IoT applications that gather, process, analyze, and act on connected vehicle data, without having to manage any
infrastructure.

Practical 10. Introduction to Amazon Route 53


Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web
service. It is designed to give developers and businesses an extremely reliable and cost-
effective way to route end users to Internet applications by translating names like
www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to
connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.
Amazon Route 53 effectively connects user requests to infrastructure running in AWS –
such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets
– and can also be used to route users to infrastructure outside of AWS. You can use Amazon
Route 53 to configure DNS health checks, then continuously monitor your applications’
ability to recover from failures and control application recovery with Route 53 Application
Recovery Controller.

Amazon Route 53 Traffic Flow makes it easy for you to manage traffic globally through a
variety of routing types, including Latency Based Routing, Geo DNS, Geoproximity, and
Weighted Round Robin—all of which can be combined with DNS Failover in order to enable
a variety of low-latency, fault-tolerant architectures. Using Amazon Route 53 Traffic Flow’s
simple visual editor, you can easily manage how your end-users are routed to your
application’s endpoints—whether in a single AWS region or distributed around the globe.
Amazon Route 53 also offers Domain Name Registration – you can purchase and manage
domain names such as example.com and Amazon Route 53 will automatically configure
DNS settings for your domains.
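
As an informal illustration of weighted routing, the sketch below splits traffic for a
hypothetical app.example.com between two endpoints 70/30. It uses plain weighted record
sets created through the API rather than the Traffic Flow visual editor, and the hosted zone
ID and IP addresses are placeholders.

import boto3

route53 = boto3.client('route53')

route53.change_resource_record_sets(
    HostedZoneId='Z0123456789EXAMPLE',  # placeholder hosted zone ID
    ChangeBatch={
        'Comment': 'Weighted round robin across two endpoints',
        'Changes': [
            {'Action': 'UPSERT',
             'ResourceRecordSet': {
                 'Name': 'app.example.com',
                 'Type': 'A',
                 'SetIdentifier': 'endpoint-us',  # distinguishes records that share a name
                 'Weight': 70,
                 'TTL': 60,
                 'ResourceRecords': [{'Value': '192.0.2.10'}]}},
            {'Action': 'UPSERT',
             'ResourceRecordSet': {
                 'Name': 'app.example.com',
                 'Type': 'A',
                 'SetIdentifier': 'endpoint-eu',
                 'Weight': 30,
                 'TTL': 60,
                 'ResourceRecords': [{'Value': '198.51.100.20'}]}},
        ]})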

Benefits

Highly available and reliable

Amazon Route 53 is built using AWS’s highly available and reliable infrastructure. The
distributed nature of our DNS servers helps ensure a consistent ability to route your end
users to your application. Features such as Amazon Route 53 Traffic Flow and routing
control help you improve reliability with easily-configured failover to reroute your users to
an alternate location if your primary application endpoint becomes unavailable. Amazon
Route 53 is designed to provide the level of dependability required by important
applications. Amazon Route 53 is backed by the Amazon Route 53 Service Level Agreement.

Flexible

Amazon Route 53 Traffic Flow routes traffic based on multiple criteria, such as endpoint
health, geographic location, and latency. You can configure multiple traffic policies and
decide which policies are active at any given time. You can create and edit traffic policies
using the simple visual editor in the Route 53 console, AWS SDKs, or the Route 53 API.
Traffic Flow’s versioning feature maintains a history of changes to your traffic policies, so
you can easily roll back to a previous version using the console or API.

Designed for use with other Amazon Web Services

Amazon Route 53 is designed to work well with other AWS features and offerings. You can
use Amazon Route 53 to map domain names to your Amazon EC2 instances, Amazon S3
buckets, Amazon CloudFront distributions, and other AWS resources. By using the AWS
Identity and Access Management (IAM) service with Amazon Route 53, you get fine-grained
control over who can update your DNS data. You can use Amazon Route 53 to map your
zone apex (example.com versus www.example.com) to your Elastic Load Balancing instance,
Amazon CloudFront distribution, AWS Elastic Beanstalk environment, API Gateway, VPC
endpoint, or Amazon S3 website bucket using a feature called an alias record.

Simple

With self-service sign-up, Amazon Route 53 can start to answer your DNS queries within
minutes. You can configure your DNS settings with the AWS Management Console or our
easy-to-use API. You can also programmatically integrate the Amazon Route 53 API into
your overall web application. For instance, you can use Amazon Route 53’s API to create a
new DNS record whenever you create a new EC2 instance. Amazon Route 53 Traffic Flow
makes it easy to set up sophisticated routing logic for your applications by using the simple
visual policy editor.
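
For example, the snippet below is a sketch, with placeholder instance and hosted zone IDs,
that looks up a newly launched EC2 instance's public IP address (assuming it has one) and
upserts an A record for it:

import boto3

ec2 = boto3.client('ec2')
route53 = boto3.client('route53')

instance_id = 'i-0123456789abcdef0'    # placeholder: the instance you just launched
hosted_zone_id = 'Z0123456789EXAMPLE'  # placeholder hosted zone ID

# Fetch the instance's public IP address
reservation = ec2.describe_instances(InstanceIds=[instance_id])['Reservations'][0]
public_ip = reservation['Instances'][0]['PublicIpAddress']

# Create (or update) an A record pointing at the instance
route53.change_resource_record_sets(
    HostedZoneId=hosted_zone_id,
    ChangeBatch={'Changes': [{
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'server1.example.com',
            'Type': 'A',
            'TTL': 300,
            'ResourceRecords': [{'Value': public_ip}]}}]})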

Fast

Using a global anycast network of DNS servers around the world, Amazon Route 53 is
designed to automatically route your users to the optimal location depending on network
conditions. As a result, the service offers low query latency for your end users, as well as
low update latency for your DNS record management needs. Amazon Route 53 Traffic Flow
lets you further improve your customers’ experience by running your application in multiple
locations around the world and using traffic policies to ensure your end users are routed to
the closest healthy endpoint for your application.

Cost-effective

Amazon Route 53 passes on the benefits of AWS’s scale to you. You pay only for the
resources you use, such as the number of queries that the service answers for each of your
domains, hosted zones for managing domains through the service, and optional features
such as traffic policies and health checks, all at a low cost and without minimum usage
commitments or any up-front fees.

Secure

By integrating Amazon Route 53 with AWS Identity and Access Management (IAM), you can
grant unique credentials and manage permissions for every user within your AWS account
and specify who has access to which parts of the Amazon Route 53 service. When you
enable Amazon Route 53 Resolver DNS firewall, you can configure it to inspect outbound
DNS requests against a list of known malicious domains.
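
As a rough example of that fine-grained control, the following sketch creates an IAM policy
that allows record changes only in a single hosted zone; the zone ID and policy name are
placeholders, and the policy would still need to be attached to a user, group, or role.

import json
import boto3

iam = boto3.client('iam')

# Allow editing and listing records only in one hosted zone
policy_document = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': [
            'route53:ChangeResourceRecordSets',
            'route53:ListResourceRecordSets',
        ],
        'Resource': 'arn:aws:route53:::hostedzone/Z0123456789EXAMPLE',
    }]
}

iam.create_policy(
    PolicyName='Route53SingleZoneEditor',
    PolicyDocument=json.dumps(policy_document))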

Scalable

Route 53 is designed to automatically scale to handle very large query volumes without any
intervention from you.

Simplify the hybrid cloud


Amazon Route 53 Resolver provides recursive DNS for your Amazon VPC and on-premises
networks over AWS Direct Connect or AWS Managed VPN.


