Cloud Assignment
Assignment 1: Comparing cloud-native container services available across different public clouds
CLO-3: Distinguish the various characteristics of public, private and hybrid cloud delivery models
Full Name: Qalam ID: Class/Section: BESE-12A
Date of submission: 03.11.2023
Assignment Description:
Containers are packaged software bundled with all of their dependencies, so they can run quickly and reliably
across computing environments. Modern applications are increasingly built with cloud containers because of
their fast deployment speed, workload portability, and ability to simplify resource provisioning for
time-pressed developers.
Different public cloud providers such as AWS, Azure and GCP provide a full suite of products and services to build,
deploy, and manage containerized environments. The goal of this assignment is to explore and compare the
cloud-native container services available on AWS, Azure and GCP, and to write a short report (max 4 pages).
Include figures and graphs to improve the quality and readability of the report. The following aspects of
cloud-native container services should be discussed.
Plagiarism policy: NUST has a strict plagiarism policy. All submitted work should be the original work
and appropriate references must be included in the report. Any submission found plagiarized will be
treated as per department’s policy. Reports generated with ChatGPT, or other similar applications
will be awarded a straight ‘zero’ and the cases will be reported to the department for appropriate
actions.
One of the biggest disadvantages of serverless computing is that it is still a relatively new
technology. As a result, it is not yet suitable to meet all potential use cases.
In addition, the intentionally ephemeral nature of serverless and its ability to scale down to zero
make it unsuitable for certain types of applications. It’s not built to execute code for long periods
and cannot always serve applications with strict low-latency requirements, such as financial
services applications.
Container registry: A container registry stores and manages container images in an organized way. It is the
place where you keep the container images for your applications.
Artifact Registry: A store for all of the artifacts that make up a software project (packages, language
libraries, and so on), not just fully packaged container images.
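As an illustration, pushing a locally built image to each provider's registry looks roughly like this. This is a sketch, not a definitive recipe: the account ID, registry name, region, and project/repository names below are all placeholders, and the commands assume the respective CLIs are installed and authenticated.

```shell
# Build a local image (assumes a Dockerfile in the current directory)
docker build -t myapp:1.0 .

# Amazon ECR (account ID 123456789012 and region are placeholders)
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag myapp:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0

# Azure Container Registry ("myregistry" is a placeholder)
az acr login --name myregistry
docker tag myapp:1.0 myregistry.azurecr.io/myapp:1.0
docker push myregistry.azurecr.io/myapp:1.0

# Google Artifact Registry (project and repo names are placeholders)
gcloud auth configure-docker us-central1-docker.pkg.dev
docker tag myapp:1.0 us-central1-docker.pkg.dev/my-project/my-repo/myapp:1.0
docker push us-central1-docker.pkg.dev/my-project/my-repo/myapp:1.0
```

In all three cases the workflow is the same: authenticate Docker against the registry, tag the image with the registry's hostname, and push.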
Many IT and DevOps teams are migrating resources from on-premises infrastructure to the
cloud. In addition, organizations are moving to container and serverless architectures,
collectively known as cloud native technologies. Kubernetes has become the de facto standard
for container orchestration. Containerization has compelling benefits, but is also difficult to set
up and manage at large scale.
Serverless containers can help organizations leverage the cloud while easily adopting
containerized infrastructure. The term “serverless containers” refers to technologies that enable
cloud users to run containers, but outsource the effort of managing the actual servers or
computing infrastructure they are running on. This can enable more rapid adoption, and easier
management and maintenance, of large scale containerized workloads in the cloud.
Docker Volume:
A Docker volume is like a special folder that Docker creates to store data. Think of it as a
safe storage space for your application within a Docker container.
It's used to store things like files, databases, or configuration settings that need to be
preserved even if the container is stopped or deleted.
Docker volumes are handy for sharing data between containers or persisting data across
container restarts. They keep your data separate from the container, making it safe and
accessible.
Tmpfs Mount:
Tmpfs is like a temporary, super-fast storage area that Docker creates in memory. It's
perfect for things you need quickly and can afford to lose when the container stops.
It's often used for temporary files or cache because it's incredibly fast but doesn't survive
container restarts.
Tmpfs is like a whiteboard that gets erased when you're done - it's great for quick,
temporary storage.
Bind Mount:
A bind mount is like connecting a folder on your computer to a folder inside a Docker
container. It's a way to share files and data between your computer and the container.
With a bind mount, changes in the folder on your computer instantly show up in the
container and vice versa.
It's like having a shared folder where you can work on files that both your computer and
the container can access.
In summary, Docker volumes provide safe and persistent storage for your container, tmpfs is
super-fast but temporary memory storage, and a bind mount is like connecting a shared folder
between your computer and the container for easy data exchange.
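The three storage options above can be tried out with the Docker CLI. This is a minimal sketch assuming Docker is installed; the volume, directory, and file names are arbitrary.

```shell
# Named volume: data persists after the container is removed
docker volume create appdata
docker run --rm -v appdata:/var/lib/app alpine sh -c 'echo hello > /var/lib/app/greeting'
docker run --rm -v appdata:/var/lib/app alpine cat /var/lib/app/greeting

# tmpfs mount: in-memory only, gone when the container stops (Linux hosts)
docker run --rm --tmpfs /scratch alpine sh -c 'echo fast > /scratch/tmp && cat /scratch/tmp'

# Bind mount: share a host directory with the container
mkdir -p ./shared
docker run --rm --mount type=bind,source="$(pwd)/shared",target=/shared alpine \
  sh -c 'echo from-container > /shared/note.txt'
cat ./shared/note.txt   # the file written inside the container is visible on the host
```

The second `docker run` against the `appdata` volume reads back the file written by the first, even though the first container is gone, while the tmpfs contents vanish with their container.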
An important question that comes to mind after reading about serverless computing: can containerized
applications also be deployed directly on ordinary virtual machines?
Google Cloud Run and Google Compute Engine (GCE) are both services offered by Google
Cloud, but they serve different purposes and have distinct characteristics. Let's compare them and
consider whether containerized applications can be deployed on GCE:
1. Serverless: Google Cloud Run is a serverless compute service designed for running
containerized applications in a serverless manner. It automatically manages the
underlying infrastructure, scaling your application in response to incoming requests. You
don't have to provision or manage virtual machines (VMs).
2. Container Focus: Cloud Run is specifically designed for containerized applications. You
package your application in a container, and Cloud Run takes care of the rest. It's
optimized for fast container startup and response times.
3. Event-Driven: Cloud Run is well-suited for event-driven applications, such as API
endpoints, webhooks, and microservices, where you want your application to respond to
HTTP requests or other events.
4. Pay-as-You-Go: You pay only for the compute resources used during request processing.
When your service is idle, you're not charged.
5. Managed Scaling: Scaling is handled automatically by Google Cloud, and you don't
need to configure VMs or manage load balancers.
So, to answer the question: you can indeed use Google Compute Engine to deploy
containerized applications. You would create VM instances and configure them to run
containers, and you have full control over the environment. However, this approach requires
more management, including scaling, load balancing, and VM provisioning.
Google Cloud Run, on the other hand, abstracts away much of the infrastructure management,
making it easier to deploy containerized applications in a serverless and event-driven manner,
which can be more efficient and cost-effective for certain use cases. Your choice between the
two depends on your specific requirements and how much control and customization you need
over the underlying infrastructure.
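The difference shows up clearly in the deployment commands. The sketch below uses hypothetical image, service, and VM names; both commands assume an authenticated gcloud CLI.

```shell
# Serverless: deploy a container image to Cloud Run — one command, no VMs,
# automatic scaling (including to zero) and HTTPS endpoint provisioning
gcloud run deploy myapp \
  --image us-central1-docker.pkg.dev/my-project/my-repo/myapp:1.0 \
  --region us-central1 --allow-unauthenticated

# Do-it-yourself: run the same container on a Compute Engine VM — you now own
# the VM's sizing, patching, load balancing, and scaling
gcloud compute instances create-with-container myapp-vm \
  --container-image us-central1-docker.pkg.dev/my-project/my-repo/myapp:1.0 \
  --zone us-central1-a
```

Both end up running the same container image; the difference is how much of the surrounding infrastructure you manage yourself.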
AWS Fargate
AWS Fargate is a serverless compute engine that provides on-demand, right-sized compute
capacity for cloud containers. By reducing the operational overhead of scaling, patching,
securing, and managing servers, Fargate allows DevOps teams to focus on what they care about
most: building applications. Fargate works with both Amazon EKS and ECS.
Google Cloud Run lets you develop and deploy highly scalable containerized applications on a
fully managed serverless platform. Cloud Run runs stateless HTTP containers, which means
developers can use the programming language of their choice ('any language, any library, any
binary'); it removes the overhead associated with resource provisioning and pairs with both
Docker and GKE.
Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS) are two of the most
popular container services offered by AWS. ECS is a fully managed container orchestration service that allows
users to run Docker containers on a cluster of EC2 instances. EKS, on the other hand, is a fully managed
Kubernetes service that allows users to run Kubernetes clusters on AWS.
As applications grow to span multiple containers across multiple services, operating them
becomes more complex. How do you coordinate and schedule containers, update applications
without service interruption, or monitor and diagnose problems over time?
Kubernetes (K8s) is a powerful container orchestration platform that helps you manage and
automate the deployment, scaling, and operation of containerized applications. It deals with the
orchestration of containers and the underlying infrastructure.
Knative, on the other hand, is a set of components that builds on top of Kubernetes. Knative is
focused on simplifying the deployment and scaling of serverless workloads, such as functions
and applications, on a Kubernetes cluster. It provides a higher-level abstraction and tools for
building and running serverless applications. Knative components include serving (for
deploying and serving serverless applications), eventing (for event-driven applications), and
build (for building container images).
In essence, Knative is more about simplifying serverless application management and event-driven
workloads on top of a Kubernetes cluster, rather than serving as a general container
orchestration platform like Kubernetes.
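For illustration, a minimal Knative Service can be applied like this, assuming a cluster that already has Knative Serving installed; the sample image is Knative's public hello-world container, and the service name is arbitrary. Knative's serving component deploys it, routes traffic to it, and scales it down to zero when idle.

```shell
kubectl apply -f - <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "world"
EOF
```

Note how this looks like an ordinary Kubernetes manifest, but the `serving.knative.dev` API hides the Deployment, autoscaler, and routing objects that plain Kubernetes would require you to define yourself.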
Cloud-native container services are services that enable users to build, deploy and manage containerized
applications on the cloud over serverless infrastructure. The three major cloud providers, AWS, Azure and
GCP, offer a variety of cloud-native container services that differ in features, functionality, and pricing.
This report will compare some of the key aspects of these services.
The three cloud providers offer serverless container services that are compatible with Kubernetes. These
services are:
AWS Fargate: A service that runs containers on AWS-managed servers within ECS (Elastic Container
Service) or EKS (Elastic Kubernetes Service) clusters. Fargate allocates the right amount of compute
resources for each container and scales them up or down as needed.
Azure Container Instances (ACI): A service that runs containers on Azure-managed servers within
virtual networks. ACI allocates the right amount of compute resources for each container and scales
them up or down as needed.
Google Cloud Run: A service that runs containers on Google-managed servers within Google Cloud
regions. Cloud Run allocates the right amount of compute resources for each container and scales
them up or down as needed. Cloud Run integrates with other Google Cloud services such as Pub/Sub,
Firestore, Cloud Storage and more.
The three cloud providers offer container registry services that are compatible with the Docker Registry API.
These services are:
Amazon Elastic Container Registry (ECR): A service that stores container images in highly available
and scalable repositories within AWS. ECR integrates with EKS, ECS, Fargate and other AWS
services. ECR supports image scanning for vulnerabilities, encryption at rest and in transit, IAM
policies for access control and more.
Azure Container Registry (ACR): A service that stores container images in geo-replicated repositories
within Azure. ACR integrates with AKS, ACI, App Service (web app hosting service) and other
Azure services. ACR supports image scanning for vulnerabilities, encryption at rest and in transit,
role-based access control and more.
Google Container Registry (GCR): A service that stores container images in regional or multi-regional
repositories within Google Cloud. GCR integrates with GKE, Cloud Run, App Engine (web app
hosting service) and other Google Cloud services. GCR supports image scanning for vulnerabilities,
encryption at rest and in transit, IAM policies for access control and more.
Service features:
ECR: Supports image immutability, which prevents images from being overwritten or deleted.
ACR: Supports geo-replication, which synchronizes images across multiple regions for faster access and
disaster recovery.
GCR: Supports multi-regional storage, which stores images in multiple regions within a continent for higher
availability and durability.
The three cloud providers offer managed Kubernetes services that simplify the installation, operation and
maintenance of Kubernetes clusters. These services are:
Amazon Elastic Kubernetes Service (EKS): A service that runs the Kubernetes control plane across multiple
Availability Zones for high availability and fault tolerance. EKS integrates with other AWS services
such as Elastic Load Balancing, IAM, CloudWatch, CloudFormation and more. EKS supports both
Linux and Windows containers.
Azure Kubernetes Service (AKS): A service that runs the Kubernetes control plane on Azure-managed
nodes for scalability and reliability. AKS integrates with other Azure services such as Azure Active
Directory, Azure Monitor, Azure Policy and more. AKS supports both Linux and Windows
containers.
Google Kubernetes Engine (GKE): A service that runs the Kubernetes control plane on Google-managed
nodes for performance and security. GKE integrates with other Google Cloud services such as Cloud
Storage, Cloud Logging, Cloud Monitoring and more. GKE supports both Linux and Windows
containers.
The three cloud providers offer serverless compute services that allow users to run code without provisioning
or managing servers. These services are:
AWS Lambda: A service that runs code in response to events such as HTTP requests, database
changes, queue messages and more. Lambda supports multiple languages such as Node.js, Python,
Java, Go and more. Lambda integrates with other AWS services such as API Gateway, S3,
DynamoDB and more.
Azure Functions: A service that runs code in response to events such as HTTP requests, blob storage
changes, queue messages and more. Functions supports multiple languages such as C#, Java,
JavaScript, Python and more. Functions integrates with other Azure services such as Logic Apps,
Event Grid, Cosmos DB and more.
Google Cloud Functions: A service that runs code in response to events such as HTTP requests, Cloud
Storage changes, Pub/Sub messages and more. Cloud Functions supports multiple languages such as
Node.js, Python, Go and more. Cloud Functions integrates with other Google Cloud services such as
Firebase, BigQuery, Cloud Vision and more.
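As a rough sketch, deploying a simple HTTP-triggered function on each platform looks like this. The function, app, role, and file names are placeholders, and each command assumes the provider's CLI is installed and authenticated and that the function source already exists locally.

```shell
# AWS Lambda: upload a zipped Python handler (role ARN is a placeholder)
aws lambda create-function --function-name hello \
  --runtime python3.12 --handler main.handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-role

# Azure Functions: publish a local Functions project to a function app
func azure functionapp publish my-function-app

# Google Cloud Functions: deploy an HTTP-triggered function straight from source
gcloud functions deploy hello --runtime python312 \
  --trigger-http --allow-unauthenticated
```

The shape is the same everywhere: point the CLI at your code, name a trigger, and the provider provisions and scales the execution environment for you.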
Hybrid and multi-cloud containers are services that enable users to run containers across different cloud
platforms or on-premises environments. They offer benefits such as workload portability, cost optimization,
performance improvement and risk mitigation.
All three cloud providers offer hybrid and multi-cloud container services that are based on Kubernetes. These
services are:
AWS Outposts: A service that delivers AWS infrastructure and services to on-premises locations.
Outposts allows users to run EKS clusters on AWS-managed servers within their own data centers or
colocation facilities. Outposts integrates with other AWS services such as ECR, ELB,
CloudFormation and more.
Azure Arc: A service that extends Azure management and services to any infrastructure. Arc allows
users to run AKS clusters on Azure-managed or self-managed servers within their own data centers or
other cloud platforms. Arc integrates with other Azure services such as ACR, Monitor, Policy and
more.
Google Anthos: A service that enables consistent application management across any environment.
Anthos allows users to run GKE clusters on Google-managed or self-managed servers within their
own data centers or other cloud platforms. Anthos integrates with other Google Cloud services such as
GCR, Logging, Monitoring and more.
Outposts: Supports both Linux and Windows containers, local processing of data, and low-latency access to
on-premises systems and applications.
Arc: Supports both Linux and Windows containers, policy enforcement across environments, and unified
monitoring and governance.
Anthos: Supports only Linux containers, with a service mesh across environments and configuration
management across clusters.
References:
https://www.toptal.com/kubernetes/k8s-aws-vs-gcp-vs-azure-aks-eks-gke
https://blogs.vmware.com/cloudhealth/cloud-container-services-aws-azure-gcp/
https://www.aquasec.com/cloud-native-academy/serverless-architecture-platforms-benefits-best-practices/serverless-containers/