Cloud Assignment


National University of Sciences & Technology

School of Electrical Engineering and Computer Science


Faculty of Computing

CS 315: Cloud Computing (2+1) BESE-12AB Fall 2023

Assignment 1: Comparing cloud-native container services available across different public clouds
CLO-3: Distinguish the various characteristics of public, private and hybrid cloud delivery models
Full Name: Qalam ID: Class/Section: BESE-12A
Date of submission: 03.11.2023

Assignment Description:

Containers are packages of software bundled with all of their dependencies so they can run quickly and reliably
across computing environments. Modern applications are increasingly being built using cloud containers because of
their fast deployment speed, workload portability, and ability to simplify resource provisioning for
time-pressed developers.

Different public cloud providers such as AWS, Azure and GCP provide a full suite of products and services to build,
deploy, and manage containerized environments. The goal of the assignment is to explore and compare the different
cloud-native container services available on AWS, Azure and GCP, and to write a short report (max 4 pages).
Include figures and graphs to improve the quality and readability of the report. The following aspects of
cloud-native container services should be discussed.

1. Cloud container services

2. Cloud container registry management

3. Container orchestration and Kubernetes engine

4. Serverless compute engine

Plagiarism policy: NUST has a strict plagiarism policy. All submitted work should be original work
and appropriate references must be included in the report. Any submission found plagiarized will be
treated as per the department's policy. Reports generated with ChatGPT or other similar applications
will be awarded a straight 'zero' and the cases will be reported to the department for appropriate
action.

What is serverless computing?


Serverless computing is a cloud computing execution model that allocates machine resources on
an as-used basis. Under a serverless model, developers can build and run applications without
having to manage any servers and pay only for the exact amount of resources used. Instead, the
cloud service provider is responsible for provisioning, managing, and scaling the cloud
infrastructure that runs the application code.

Disadvantages of serverless computing

One of the biggest disadvantages of serverless computing is that it is still a relatively new
technology. As a result, it is not yet mature enough to cover every potential use case.

In addition, the intentionally ephemeral nature of serverless and its ability to scale down to zero
make it unsuitable for certain types of applications. It’s not built to execute code for long periods
and cannot always serve applications with strict low-latency requirements, such as financial
services applications.

Kubernetes 101: Pods, Nodes, Containers, and Clusters

Storage types in Docker

More on serverless containers and storage in Docker:

Container registry: A container registry is where you store and manage container images in an
organized way. It's the place where you keep the images for all of your applications.

Artifact Registry: a place to store all of the artifacts that make up your software project, not just the fully
packaged container images.

GKE: Google Kubernetes Engine

What are Serverless Containers?

Many IT and DevOps teams are migrating resources from on-premises infrastructure to the
cloud. In addition, organizations are moving to container and serverless architectures,
collectively known as cloud native technologies. Kubernetes has become the de facto standard
for container orchestration. Containerization has compelling benefits, but is also difficult to set
up and manage at large scale.

Serverless containers can help organizations leverage the cloud while easily adopting
containerized infrastructure. The term "serverless containers" refers to technologies that enable
cloud users to run containers while outsourcing the effort of managing the actual servers or
computing infrastructure they run on. This enables more rapid adoption, and easier
management and maintenance, of large-scale containerized workloads in the cloud.

Let's break down these concepts in simple terms:

Docker Volume:

 A Docker volume is like a special folder that Docker creates to store data. Think of it as a
safe storage space for your application within a Docker container.
 It's used to store things like files, databases, or configuration settings that need to be
preserved even if the container is stopped or deleted.
 Docker volumes are handy for sharing data between containers or persisting data across
container restarts. They keep your data separate from the container, making it safe and
accessible.

Tmpfs Mount:

 Tmpfs is like a temporary, super-fast storage area that Docker creates in memory. It's
perfect for things you need quickly and can afford to lose when the container stops.
 It's often used for temporary files or cache because it's incredibly fast but doesn't survive
container restarts.
 Tmpfs is like a whiteboard that gets erased when you're done - it's great for quick,
temporary storage.

Bind Mount:

 A bind mount is like connecting a folder on your computer to a folder inside a Docker
container. It's a way to share files and data between your computer and the container.
 With a bind mount, changes in the folder on your computer instantly show up in the
container and vice versa.
 It's like having a shared folder where you can work on files that both your computer and
the container can access.

In summary, Docker volumes provide safe and persistent storage for your container, tmpfs is
super-fast but temporary memory storage, and a bind mount is like connecting a shared folder
between your computer and the container for easy data exchange.
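
As a concrete illustration, here is a minimal sketch of all three storage types using the Docker SDK
for Python (docker-py); the image, paths, and volume names are illustrative assumptions, not part of
any particular application.

import docker

client = docker.from_env()

# 1. Named volume: Docker-managed storage that survives container removal.
client.containers.run(
    "alpine",
    "sh -c 'echo persisted > /data/note.txt'",
    volumes={"demo-volume": {"bind": "/data", "mode": "rw"}},
    remove=True,
)

# 2. tmpfs mount: in-memory scratch space, erased when the container stops.
client.containers.run(
    "alpine",
    "sh -c 'echo scratch > /cache/tmp.txt'",
    tmpfs={"/cache": "size=64m"},
    remove=True,
)

# 3. Bind mount: shares a host directory (here /tmp/host-dir) with the container.
client.containers.run(
    "alpine",
    "ls /shared",
    volumes={"/tmp/host-dir": {"bind": "/shared", "mode": "rw"}},
    remove=True,
)

After these containers exit, the named volume (and anything written to the bind-mounted host
directory) still exists, while the tmpfs contents are gone, which is exactly the distinction
described above.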
An important question that comes to mind after reading about serverless computing:

Google Cloud Run and Google Compute Engine (GCE) are both services offered by Google
Cloud, but they serve different purposes and have distinct characteristics. Let's compare them
and address the question of deploying containerized applications on GCE:

Google Cloud Run:

1. Serverless: Google Cloud Run is a serverless compute service designed for running
containerized applications in a serverless manner. It automatically manages the
underlying infrastructure, scaling your application in response to incoming requests. You
don't have to provision or manage virtual machines (VMs).
2. Container Focus: Cloud Run is specifically designed for containerized applications. You
package your application in a container, and Cloud Run takes care of the rest. It's
optimized for fast container startup and response times.
3. Event-Driven: Cloud Run is well-suited for event-driven applications, such as API
endpoints, webhooks, and microservices, where you want your application to respond to
HTTP requests or other events.
4. Pay-as-You-Go: You pay only for the compute resources used during request processing.
When your service is idle, you're not charged.
5. Managed Scaling: Scaling is handled automatically by Google Cloud, and you don't
need to configure VMs or manage load balancers.

Google Compute Engine (GCE):

1. Infrastructure Control: GCE is Infrastructure as a Service (IaaS), which means you
have full control over virtual machines (VMs). You can install and configure the software
and OS on these VMs.
2. Flexible Workloads: GCE is suitable for a wide range of workloads, including running
containerized applications. You can choose to use container runtimes like Docker on
GCE VMs, but you're responsible for VM management.
3. Persistent VMs: VMs in GCE are persistent, meaning they don't automatically scale
down to zero. You need to manage VM scaling and load balancing yourself if you want
to handle traffic spikes.
4. Full Customization: GCE allows complete customization of VMs, including selecting
the OS, configuring security settings, and setting up any required software stack.

So, to address the question above: you can indeed use Google Compute Engine to deploy
containerized applications. You would create VM instances and configure them to run
containers, and you would have full control over the environment. However, this approach requires
more management, including scaling, load balancing, and VM provisioning.

Google Cloud Run, on the other hand, abstracts away much of the infrastructure management,
making it easier to deploy containerized applications in a serverless and event-driven manner,
which can be more efficient and cost-effective for certain use cases. Your choice between the
two depends on your specific requirements and how much control and customization you need
over the underlying infrastructure.

AWS Fargate

AWS Fargate is a serverless compute engine that provides on-demand, right-sized compute
capacity for cloud containers. By reducing the operational overhead of scaling, patching,
securing, and managing servers, Fargate allows DevOps teams to focus on what they care about
most: building applications. Fargate works with both Amazon EKS and ECS.
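
As a rough illustration of the "no servers to manage" point, here is a minimal sketch of launching a
container on Fargate with boto3, the AWS SDK for Python. The cluster, task definition, and subnet
values are hypothetical placeholders that would already exist in a real account.

import boto3

ecs = boto3.client("ecs")

# Launch one task on Fargate: no EC2 instances to provision or patch.
response = ecs.run_task(
    cluster="demo-cluster",            # hypothetical ECS cluster
    launchType="FARGATE",
    taskDefinition="demo-task:1",      # hypothetical task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])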

Google Cloud Run

Google Cloud Run lets you develop and deploy highly scalable containerized applications on a
fully managed serverless platform. Cloud Run runs stateless HTTP containers, which means
developers can use the programming language of their choice ('any language, any library, any
binary'). It removes the overhead associated with resource provisioning and pairs with both
Docker and GKE.
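
To make this concrete, here is a minimal sketch of the kind of stateless HTTP container Cloud Run
runs, written in Python with Flask; the framework and handler are illustrative assumptions ('any
language, any library, any binary' would do), and the only real contract is listening on the PORT
environment variable that Cloud Run injects.

import os

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Stateless handler: no local state survives between requests.
    return "Hello from a serverless container!"

if __name__ == "__main__":
    # Cloud Run sets PORT; default to 8080 for local testing.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

Packaged with a Dockerfile, the same container can be deployed unchanged to Cloud Run, GKE, or
anywhere else Docker images run, which is the portability argument made above.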

Amazon ECS vs Amazon EKS

Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS) are two of the most
popular container services offered by AWS. ECS is a fully managed container orchestration service that allows
users to run Docker containers on a cluster of EC2 instances. EKS, on the other hand, is a fully managed
Kubernetes service that allows users to run Kubernetes clusters on AWS.

1. Container Orchestration Technology:
o Amazon ECS: ECS is a fully managed container orchestration service that allows
you to run Docker containers in a scalable and highly available manner. It uses
Amazon's own orchestration technology, which can be extended to on-premises
servers with ECS Anywhere. ECS is a good choice if you want a simpler, more
opinionated way to manage containers without the complexity of Kubernetes.
o Amazon EKS: EKS is a managed Kubernetes service. Kubernetes is a popular
open-source container orchestration platform that provides a more flexible and
extensible way to manage containers. EKS is ideal for users who are already
familiar with Kubernetes or require its advanced features and extensive ecosystem
of tools.
2. Abstraction Level:
o Amazon ECS: ECS abstracts many of the underlying details of managing the
infrastructure, making it easier to get started with container deployment. It
provides a higher-level service abstraction and is less concerned with the specific
details of nodes (the underlying compute resources).
o Amazon EKS: EKS, being a Kubernetes service, provides a lower-level
abstraction. It gives you more control and responsibility over the management of
the Kubernetes cluster, including the nodes, networking, and configurations.
3. Ecosystem and Compatibility:
o Amazon ECS: It works well with other AWS services like AWS Fargate for
serverless container management.
o Amazon EKS: EKS allows you to run Kubernetes workloads on AWS but also
integrates well with external Kubernetes-compatible tools and services. It may be
a better choice if you have an existing investment in Kubernetes or are looking for
a solution that can work in multi-cloud or hybrid cloud scenarios.
4. Use Cases: If you want a more straightforward container orchestration service and don't
need the advanced features of Kubernetes, use Amazon ECS. Amazon EKS is
well-suited for complex multi-container, multi-service applications that need fine-grained
control (see the sketch after this list).
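
To illustrate the difference in abstraction level, here is a small sketch using boto3, the AWS SDK
for Python. ECS is driven entirely through AWS-native APIs, while EKS mostly hands you a Kubernetes
endpoint to use with standard Kubernetes tooling; the cluster name in the commented line is a
hypothetical placeholder.

import boto3

ecs = boto3.client("ecs")
eks = boto3.client("eks")

# ECS: clusters, services and tasks are all first-class AWS API objects.
print(ecs.list_clusters()["clusterArns"])

# EKS: boto3 mainly manages the cluster itself...
print(eks.list_clusters()["clusters"])

# ...while day-to-day workloads are managed with kubectl or a Kubernetes
# client pointed at the cluster endpoint, not with boto3 calls.
# print(eks.describe_cluster(name="demo")["cluster"]["endpoint"])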

As applications grow to span multiple containers across multiple services, operating them
becomes more complex. How do you coordinate and schedule containers, update applications
without service interruption, or monitor and diagnose problems over time?

The answer to these questions sits with container orchestration tools.

Kubernetes (K8s) is a powerful container orchestration platform that helps you manage and
automate the deployment, scaling, and operation of containerized applications. It deals with the
orchestration of containers and the underlying infrastructure.

Knative, on the other hand, is a set of components that builds on top of Kubernetes. Knative is
focused on simplifying the deployment and scaling of serverless workloads, such as functions
and applications, on a Kubernetes cluster. It provides a higher-level abstraction and tools for
building and running serverless applications. Knative components include Serving (for
deploying and serving serverless applications), Eventing (for event-driven applications), and
Build (for building container images).

In essence, Knative is about simplifying serverless application management and event-driven
workloads on top of a Kubernetes cluster, rather than serving as a general container
orchestration platform like Kubernetes.
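
As a small illustration of what orchestration looks like from the client side, here is a sketch
using the official Kubernetes Python client to list the pods a cluster is managing; it assumes a
kubeconfig has already been created (for example by the gcloud, aws, or az CLIs for GKE, EKS, or
AKS).

from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

v1 = client.CoreV1Api()

# List every pod the cluster is currently scheduling and supervising.
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)

The same script works against EKS, AKS, or GKE, which is the portability benefit of standardizing
on Kubernetes.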

What is a Service Mesh?


A service mesh is a software layer that handles all communication between services in an
application. It is typically implemented as lightweight network proxies deployed alongside the
containerized microservices. As applications scale and the number of microservices increases, it
becomes challenging to monitor the performance of the services. To manage connections between
services, a service mesh provides features like monitoring, logging, tracing, and traffic control.
It's independent of each service's code, which allows it to work across network boundaries and
with multiple service management systems.

Cloud-native container services are services that enable users to build, deploy and manage containerized
applications on the cloud over serverless infrastructure. The three major cloud providers, AWS, Azure and
GCP, offer a variety of cloud-native container services that differ in features, functionality, and pricing.

This report will compare some of the key aspects of these services.

Cloud container services

The three cloud providers offer serverless container services that are compatible with Kubernetes. These
services are:

 AWS Fargate: A service that runs containers on AWS-managed servers within ECS (Elastic Container
Service) or EKS (Elastic Kubernetes Service) clusters. Fargate allocates the right amount of compute
resources for each container and scales them up or down as needed.
 Azure Container Instances (ACI): A service that runs containers on Azure-managed servers within
virtual networks. ACI allocates the right amount of compute resources for each container and scales
them up or down as needed.
 Google Cloud Run: A service that runs containers on Google-managed servers within Google Cloud
regions. Cloud Run allocates the right amount of compute resources for each container and scales
them up or down as needed. Cloud Run integrates with other Google Cloud services such as Pub/Sub,
Firestore, Cloud Storage and more.

Differences

Service: Fargate
Pricing: Charges per vCPU and GB of memory allocated for each container per second, with a minimum of one minute.
Features: Supports both Linux and Windows containers and allows users to specify CPU and memory limits for each container.

Service: ACI
Pricing: Charges per vCPU and GB of memory allocated for each container per second, with no minimum.
Features: Supports both Linux and Windows containers and GPU-based workloads, and allows users to deploy multiple containers together as a group.

Service: Cloud Run
Pricing: Charges per vCPU, GB of memory and GB of network egress used for each container per request, with a minimum of 100 milliseconds.
Features: Supports only Linux containers and HTTP-based workloads, and allows users to deploy only one container at a time.

Cloud container registry management

The three cloud providers offer container registry services that are compatible with the Docker Registry API.
These services are:

 Amazon Elastic Container Registry (ECR): A service that stores container images in highly available
and scalable repositories within AWS. ECR integrates with EKS, ECS, Fargate and other AWS
services. ECR supports image scanning for vulnerabilities, encryption at rest and in transit, IAM
policies for access control and more.
 Azure Container Registry (ACR): A service that stores container images in geo-replicated repositories
within Azure. ACR integrates with AKS, ACI, App Service (web app hosting service) and other
Azure services. ACR supports image scanning for vulnerabilities, encryption at rest and in transit,
role-based access control and more.
 Google Container Registry (GCR): A service that stores container images in regional or multi-regional
repositories within Google Cloud. GCR integrates with GKE, Cloud Run, App Engine (web app
hosting service) and other Google Cloud services. GCR supports image scanning for vulnerabilities,
encryption at rest and in transit, IAM policies for access control and more.
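
Whichever registry you choose, the day-to-day workflow is the same Docker Registry API tag-and-push.
Here is a minimal sketch using the Docker SDK for Python; the account ID, region, and repository
name are hypothetical placeholders, and each provider needs its own authentication step first (for
example aws ecr get-login-password, az acr login, or gcloud auth configure-docker).

import docker

client = docker.from_env()

# Pull a small public image to use as the example payload.
image = client.images.pull("alpine:3.19")

# Tag it for a hypothetical ECR repository, then push.
repo = "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-app"
image.tag(repo, tag="v1")

for line in client.images.push(repo, tag="v1", stream=True, decode=True):
    print(line)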

Some of the key differences among these services are:

Service: ECR
Features: Supports image immutability, which prevents images from being overwritten or deleted.

Service: ACR
Features: Supports geo-replication, which synchronizes images across multiple regions for faster access and disaster recovery.

Service: GCR
Features: Supports multi-regional storage, which stores images in multiple regions within a continent for higher availability and durability.

Container orchestration and Kubernetes engine

The three cloud providers offer managed Kubernetes services that simplify the installation, operation and
maintenance of Kubernetes clusters. These services are:

 AWS Elastic Kubernetes Service (EKS): A service that runs Kubernetes control plane across multiple
Availability Zones for high availability and fault tolerance. EKS integrates with other AWS services
such as Elastic Load Balancing, IAM, CloudWatch, CloudFormation and more. EKS supports both
Linux and Windows containers.
 Azure Kubernetes Service (AKS): A service that runs Kubernetes control plane on Azure-managed
nodes for scalability and reliability. AKS integrates with other Azure services such as Azure Active
Directory, Azure Monitor, Azure Policy and more. AKS supports both Linux and Windows
containers.
 Google Kubernetes Engine (GKE): A service that runs Kubernetes control plane on Google-managed
nodes for performance and security. GKE integrates with other Google Cloud services such as Cloud
Storage, Cloud Logging, Cloud Monitoring and more. GKE supports both Linux and Windows
containers.

Some of the key differences among these services are:


Feature: Pricing
EKS: Flat fee per hour per cluster.
AKS: No charge for cluster management; pay only for nodes.
GKE: Charges for cluster management and nodes, but offers a free tier.

Feature: Version support
EKS: Up to 3 minor versions, with a 1-2 week lag.
AKS: Up to 4 minor versions, with a 1-2 month lag.
GKE: Up to 5 minor versions, with no lag.

Feature: Ecosystem
EKS: Least Stack Overflow posts; benefits from the AWS Marketplace.
AKS: Second in Stack Overflow posts; decent number of apps in its marketplace; most comprehensive documentation.
GKE: Most Stack Overflow posts; decent number of apps in its marketplace; most comprehensive documentation.

Feature: Scalability and performance
EKS: Allows bare-metal nodes.
GKE: Supports up to 5,000 nodes; extensive documentation on scaling; all high-availability features available.

Feature: Networking
EKS: Developed its own service mesh, App Mesh.
AKS: No service mesh out of the box, but Istio can be installed manually.
GKE: Released its own integration with Istio (beta).
Serverless compute engine

The three cloud providers offer serverless compute services that allow users to run code without provisioning
or managing servers. These services are:

 AWS Lambda: A service that runs code in response to events such as HTTP requests, database
changes, queue messages and more. Lambda supports multiple languages such as Node.js, Python,
Java, Go and more. Lambda integrates with other AWS services such as API Gateway, S3,
DynamoDB and more.
 Azure Functions: A service that runs code in response to events such as HTTP requests, blob storage
changes, queue messages and more. Functions supports multiple languages such as C#, Java,
JavaScript, Python and more. Functions integrates with other Azure services such as Logic Apps,
Event Grid, Cosmos DB and more.
 Google Cloud Functions: A service that runs code in response to events such as HTTP requests, Cloud
Storage changes, Pub/Sub messages and more. Cloud Functions supports multiple languages such as
Node.js, Python, Go and more. Cloud Functions integrates with other Google Cloud services such as
Firebase, BigQuery, Cloud Vision and more.

Some of the key differences among these services are:


Service Pricing Features
Lambda Per request and per GB-second Layers
Functions Per execution and per GB-second Durable functions
Cloud Functions Per invocation and per GHz-second Background functions
Note: Layers allow users to reuse code and dependencies across functions. Durable functions allow users to
orchestrate complex workflows using stateful functions. Background functions allow users to handle events
that do not require an immediate response.
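
For a sense of the programming model these services share, here is a minimal sketch of a Python
handler in the style AWS Lambda expects. The event shape (an API Gateway HTTP request) is an
illustrative assumption; Azure Functions and Cloud Functions use analogous entry points with
different signatures.

import json

def lambda_handler(event, context):
    # Echo the request path back as a JSON response.
    path = event.get("rawPath", "/")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello from {path}"}),
    }

There are no servers, ports, or processes to manage in this model: the platform invokes the handler
per event and bills only for the time it runs.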

Hybrid and multi-cloud containers

Hybrid and multi-cloud containers are services that enable users to run containers across different cloud
platforms or on-premises environments. They offer benefits such as workload portability, cost optimization,
performance improvement and risk mitigation.

All three cloud providers offer hybrid and multi-cloud container services that are based on Kubernetes. These
services are:

 AWS Outposts: A service that delivers AWS infrastructure and services to on-premises locations.
Outposts allows users to run EKS clusters on AWS-managed servers within their own data centers or
colocation facilities. Outposts integrates with other AWS services such as ECR, ELB,
CloudFormation and more.
 Azure Arc: A service that extends Azure management and services to any infrastructure. Arc allows
users to run AKS clusters on Azure-managed or self-managed servers within their own data centers or
other cloud platforms. Arc integrates with other Azure services such as ACR, Monitor, Policy and
more.
 Google Anthos: A service that enables consistent application management across any environment.
Anthos allows users to run GKE clusters on Google-managed or self-managed servers within their
own data centers or other cloud platforms. Anthos integrates with other Google Cloud services such as
GCR, Logging, Monitoring and more.

Some of the key differences among these services are:


Service: Outposts
Features: Supports both Linux and Windows containers, local processing of data, and low-latency access to on-premises systems and applications.

Service: Arc
Features: Supports both Linux and Windows containers, policy enforcement across environments, and unified monitoring and governance.

Service: Anthos
Features: Supports only Linux containers, a service mesh across environments, and configuration management across clusters.

References:

https://www.toptal.com/kubernetes/k8s-aws-vs-gcp-vs-azure-aks-eks-gke

https://blogs.vmware.com/cloudhealth/cloud-container-services-aws-azure-gcp/

https://www.aquasec.com/cloud-native-academy/serverless-architecture-platforms-benefits-best-practices/serverless-containers/

Cloud Container Services Comparison Chart

[Figure: comparison chart of cloud-native container services across AWS, Azure and GCP]