
Messaging and Queuing: The idea of placing messages into a buffer is called messaging and queuing.
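This idea can be sketched in plain Python using the standard library's thread-safe `queue.Queue` as the buffer between a producer and a consumer (the names here are illustrative, not AWS APIs):

```python
import queue
import threading

# The buffer: a thread-safe FIFO queue sitting between producer and consumer.
buffer = queue.Queue()

def producer():
    # Place messages into the buffer instead of calling the consumer directly.
    for i in range(3):
        buffer.put(f"order-{i}")
    buffer.put(None)  # Sentinel: no more messages.

def consumer(received):
    # Pull messages from the buffer at the consumer's own pace.
    while True:
        msg = buffer.get()
        if msg is None:
            break
        received.append(msg)

received = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(received,))
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # ['order-0', 'order-1', 'order-2']
```

Because the buffer sits between the two components, the producer never has to wait for, or even know about, the consumer.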

Monolithic applications and microservices: Applications are made of multiple components. The
components communicate with each other to transmit data, fulfill requests, and keep the application
running. Suppose that you have an application with tightly coupled components. These components
might include databases, servers, the user interface, business logic, and so on. This type of architecture
can be considered a monolithic application. In this approach to application architecture, if a single
component fails, other components fail, and possibly the entire application fails.
To help maintain application availability when a single component fails, you can design your application
through a microservices approach.
In a microservices approach, application components are loosely coupled. If a single component fails, the
other components continue to work because they communicate through well-defined interfaces instead of
depending directly on one another. The loose coupling prevents the entire application from failing.
When designing applications on AWS, you can take a microservices approach with services and
components that fulfill different functions.
Two services facilitate application integration:
Amazon Simple Notification Service (Amazon SNS): Amazon SNS is a publish/subscribe (pub/sub) service
that pushes messages from an application to subscribing endpoints or other applications.
It is a fully managed messaging service for both application-to-application (A2A) and application-to-person
(A2P) communication. It lets you create a topic, which is a logical access point and
communication channel. Each topic has a unique name that identifies the SNS endpoint where publishers
post messages and subscribers register for notifications. The AWS Free Tier includes 1 million requests.
Application-to-application messaging supports subscribers such as Amazon Kinesis Data Firehose
delivery streams, Lambda functions, Amazon SQS queues, HTTP/S endpoints, and AWS Event Fork
Pipelines.
Application-to-person notifications deliver messages to subscribers such as mobile applications (push
notifications), mobile phone numbers (SMS), and email addresses.
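The pub/sub fan-out behavior described above can be sketched in plain Python (a conceptual model, not the AWS SDK; the `Topic` class and subscriber lists here are illustrative):

```python
# Conceptual sketch of SNS-style pub/sub fan-out (not the AWS SDK).
class Topic:
    def __init__(self, name):
        self.name = name         # Unique name identifying the topic.
        self.subscribers = []    # Registered endpoints (callables here).

    def subscribe(self, endpoint):
        self.subscribers.append(endpoint)

    def publish(self, message):
        # One publish pushes the message to every subscriber (fan-out).
        for endpoint in self.subscribers:
            endpoint(message)

sqs_like, email_like = [], []
orders = Topic("orders")
orders.subscribe(sqs_like.append)    # e.g. an SQS queue subscription
orders.subscribe(email_like.append)  # e.g. an email subscription
orders.publish("order placed")
print(sqs_like, email_like)  # ['order placed'] ['order placed']
```

A single publish reaches every registered subscriber, which is what makes one topic usable for both A2A and A2P delivery.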
There are two types of SNS topics:
Standard topic: A standard topic is used in scenarios where the order of messages is not important. It
supports a nearly unlimited number of messages per second.
With a standard topic, a message is delivered at least once, but occasionally more than one copy of a
message is delivered.
Messages can be sent to a variety of endpoints (Amazon SQS, AWS Lambda, HTTPS webhooks, SMS,
mobile push, and email).
Each account can have up to 100,000 standard topics, and each topic supports up to 12.5 million
subscriptions.
FIFO topic: A FIFO topic is used for messaging between applications when the order of operations and
events is critical.
A FIFO topic supports up to 300 messages per second or 10 MB per second per topic.
With a FIFO topic, duplicate messages are not delivered.
Messages can be sent to FIFO queues.
Each account can have up to 1,000 FIFO topics, and each topic supports up to 100
subscriptions.
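The two FIFO guarantees above, strict ordering and no duplicate delivery, can be sketched in plain Python (a conceptual model, not the AWS SDK; the `FifoTopic` class and deduplication-ID parameter are illustrative):

```python
# Sketch of FIFO-topic semantics: strict ordering plus deduplication
# by a message deduplication ID (not the AWS SDK).
class FifoTopic:
    def __init__(self):
        self.seen_ids = set()   # Deduplication IDs already accepted.
        self.delivered = []     # Messages delivered, in publish order.

    def publish(self, body, dedup_id):
        if dedup_id in self.seen_ids:
            return False        # Duplicate: not delivered again.
        self.seen_ids.add(dedup_id)
        self.delivered.append(body)
        return True

topic = FifoTopic()
topic.publish("debit $10", dedup_id="tx-1")
topic.publish("debit $10", dedup_id="tx-1")  # retry of the same event
topic.publish("credit $5", dedup_id="tx-2")
print(topic.delivered)  # ['debit $10', 'credit $5']
```

The retried publish is silently dropped, so a consumer never processes the same event twice and always sees events in the order they were published.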
Amazon Simple Queue Service (Amazon SQS): Amazon SQS is a message queuing service that lets you send,
store, and receive messages between software components at any volume, without losing messages or
requiring other services to be available. In Amazon SQS, an application sends messages into a queue. A
user or service retrieves a message from the queue, processes it, and then deletes it from the queue.
Messages can contain up to 256 KB of text in any format, such as JSON or XML. Messages are kept in a
queue from 1 minute to 14 days; the default retention period is 4 days. AWS manages the underlying
infrastructure that hosts the queues, which scale automatically and are reliable and easy to
configure and use.
SQS is pull-based. The AWS Free Tier includes 1 million requests.
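The send → receive → process → delete cycle described above can be sketched with a small in-memory stand-in (not the AWS SDK; visibility timeouts and the real API are simplified away):

```python
from collections import deque

# Sketch of the SQS send -> receive -> process -> delete cycle
# (in-memory stand-in, not boto3; visibility timeouts omitted).
class SimpleQueue:
    def __init__(self):
        self.messages = deque()

    def send_message(self, body):
        self.messages.append(body)

    def receive_message(self):
        # Pull-based: the consumer asks for a message when it is ready.
        return self.messages[0] if self.messages else None

    def delete_message(self, body):
        # The consumer deletes the message only after processing succeeds.
        self.messages.remove(body)

q = SimpleQueue()
q.send_message('{"order_id": 42}')   # producer sends up to 256 KB of text

msg = q.receive_message()            # consumer pulls the message
processed = msg is not None          # ... processes it ...
if processed:
    q.delete_message(msg)            # ... then deletes it from the queue
print(processed, len(q.messages))    # True 0
```

Because the delete happens only after successful processing, a consumer crash leaves the message in the queue to be received again, which is the source of SQS's at-least-once delivery.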
SQS offers two types of message queues.
Standard queues offer maximum throughput (a nearly unlimited number of API calls per second,
per API action), best-effort ordering (occasionally, messages are delivered in an order different from the
order in which they were sent), and at-least-once delivery (a message is delivered at least once, but
occasionally more than one copy of a message is delivered). Standard queues are used to send data
between applications when throughput is important.
FIFO queues are designed to provide high throughput (with batching, up to 3,000 messages per second,
per API method), exactly-once processing (a message is delivered once and remains available until a
consumer processes and deletes it; duplicates are not introduced into the queue), and strict ordering
(the order in which messages are sent and received is strictly preserved). FIFO queues are used to send
data between applications when the order of events is important.
Benefits of using Amazon SQS:
Security – You control who can send messages to and receive messages from an Amazon SQS queue.
Server-side encryption (SSE) lets you transmit sensitive data by protecting the contents of messages in
queues using keys managed in AWS Key Management Service (AWS KMS).
Durability – For the safety of your messages, Amazon SQS stores them on multiple servers.
Availability – Amazon SQS uses redundant infrastructure to provide highly concurrent access to
messages and high availability for producing and consuming messages.
Scalability – Amazon SQS can process each buffered request independently, scaling transparently to
handle any load increases or spikes without any provisioning instructions.
Reliability – Amazon SQS locks your messages during processing, so that multiple producers can send,
and multiple consumers can receive messages at the same time.
Customization – Your queues do not have to be exactly alike—for example, you can set a default delay
on a queue. You can store the contents of messages larger than 256 KB using Amazon Simple Storage
Service (Amazon S3) or Amazon DynamoDB, with Amazon SQS holding a pointer to the Amazon S3
object, or you can split a large message into smaller messages.
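The pointer pattern mentioned above (sometimes called the claim-check pattern) can be sketched in plain Python, with a dict standing in for Amazon S3 and a list standing in for the queue (the key names are illustrative, not an AWS API):

```python
import uuid

# Sketch of the large-message pattern: store the payload externally
# (a dict stands in for Amazon S3) and queue only a pointer to it.
object_store = {}   # stand-in for an S3 bucket
pointer_queue = []  # stand-in for an SQS queue

def send_large_message(payload):
    key = str(uuid.uuid4())                # illustrative object key
    object_store[key] = payload            # "upload" the large body
    pointer_queue.append({"s3_key": key})  # queue holds only the pointer

def receive_large_message():
    pointer = pointer_queue.pop(0)
    return object_store[pointer["s3_key"]]  # fetch the real payload

send_large_message("x" * 1_000_000)  # ~1 MB, well over the 256 KB limit
body = receive_large_message()
print(len(body))  # 1000000
```

The queued message stays tiny regardless of payload size; only the consumer that actually processes the message fetches the full body from the object store.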
Serverless Computing: The term "serverless" means that your code runs on servers, but you do not
need to provision or manage those servers. Another benefit of serverless computing is the flexibility to
scale serverless applications automatically: capacity is adjusted by modifying the units of
consumption, such as throughput and memory. 
AWS Lambda: Lambda is a compute service that lets you run code without provisioning or managing
servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the
administration of the compute resources, including server and operating system maintenance, capacity
provisioning and automatic scaling, code monitoring and logging. With Lambda, you can run code for
virtually any type of application or backend service. You organize your code into Lambda functions.
Lambda runs your function only when needed and scales automatically, from a few requests per day to
thousands per second. You pay only for the compute time that you consume—there is no charge when
your code is not running. Lambda functions are stateless – no affinity to the underlying infrastructure.
You choose the amount of memory to allocate to your functions, and AWS Lambda allocates
proportional CPU power, network bandwidth, and disk I/O. AWS Lambda is SOC, HIPAA, PCI, and ISO
compliant. It natively supports Node.js, Java, C#, Go, Python, Ruby, and PowerShell, and
you can also provide your own custom runtime. You are charged based on the total number of requests
for your functions and the duration: the time it takes for your code to execute.
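A Python Lambda function is just a handler that receives an event and returns a response; a minimal sketch (the event shape here is illustrative, not a fixed AWS schema):

```python
# A minimal Lambda function: a handler that receives an event and
# returns a response. The event fields here are illustrative.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": f"Hello, {name}!",
    }

# Locally, the handler can be invoked directly (Lambda passes a real
# context object at runtime; None suffices for this sketch).
response = lambda_handler({"name": "SQS"}, None)
print(response["body"])  # Hello, SQS!
```

In production, Lambda itself calls the handler each time a trigger (such as an SQS message or an SNS notification) fires, running as many concurrent copies as the request rate demands.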
Amazon Elastic Container Service: Amazon ECS is a fully managed container orchestration service that
helps you easily deploy, manage, and scale containerized applications. It deeply integrates with the rest
of the AWS platform to provide a secure and easy-to-use solution for running container workloads in the
cloud. It makes it easy to run, stop, and manage containers on a cluster. Your containers are defined in a
task definition that you use to run individual tasks or tasks within a service. These tasks and services can
run on serverless infrastructure managed by AWS Fargate or on Amazon EC2 instances that you manage.
Amazon ECS is a regional service that
simplifies running containers in a highly available manner across multiple Availability Zones within a
Region. AWS Compute SLA guarantees a Monthly Uptime Percentage of at least 99.99% for Amazon ECS.
Amazon ECS supports Docker containers. Docker is a software platform that enables you to build, test,
and deploy applications quickly. AWS supports the use of open-source Docker Community Edition and
subscription-based Docker Enterprise Edition. With Amazon ECS, you can use API calls to launch and
stop Docker-enabled applications.
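A task definition is a JSON document describing the containers to run; a minimal sketch for a Fargate task (the family name, image, and sizes below are illustrative values, not required ones):

```json
{
  "family": "web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "portMappings": [{"containerPort": 80}],
      "essential": true
    }
  ]
}
```

Registering this definition and then calling the run-task API is what "use API calls to launch and stop Docker-enabled applications" looks like in practice.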
Amazon Elastic Kubernetes Service: Amazon EKS is a managed service that you can use to run
Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control
plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and
management of containerized applications.
AWS Fargate: AWS Fargate is a serverless compute engine for containers that works with both Amazon
Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for
you to focus on building your applications. Fargate removes the need to provision and manage servers,
lets you specify and pay for resources per application, and improves security through application
isolation by design.
Fargate allocates the right amount of compute, eliminating the need to choose instances and scale
cluster capacity. You only pay for the resources required to run your containers, so there is no over-
provisioning and paying for additional servers. Fargate runs each task or pod in its own kernel providing
the tasks and pods their own isolated compute environment. This enables your application to have
workload isolation and improved security by design.
You pay for the amount of vCPU and memory resources consumed by your containerized applications.
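This billing model, cost proportional to vCPU and memory consumed over time, can be sketched with a small calculation. The rates below are hypothetical placeholders; real per-Region pricing is published on the AWS pricing pages:

```python
# Sketch of the Fargate billing model: cost scales with vCPU and memory
# (GB) consumed over time. The rates are HYPOTHETICAL placeholders --
# actual per-Region pricing is on the AWS pricing pages.
VCPU_RATE_PER_HOUR = 0.04   # hypothetical $/vCPU-hour
MEM_RATE_PER_HOUR = 0.004   # hypothetical $/GB-hour

def fargate_task_cost(vcpus, memory_gb, hours):
    # You pay only for what the task actually consumes, per unit time.
    return (vcpus * VCPU_RATE_PER_HOUR + memory_gb * MEM_RATE_PER_HOUR) * hours

# A 0.25 vCPU / 0.5 GB task running for 10 hours:
cost = fargate_task_cost(0.25, 0.5, 10)
print(round(cost, 3))  # 0.12
```

The point of the sketch is the shape of the formula, not the numbers: there is no per-instance charge, so an idle cluster costs nothing.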
