Cloud Computing

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.

According to NIST, there are five essential characteristics of cloud computing:

1. On-Demand Self-Service
2. Broad Network Access
3. Resource Pooling
4. Rapid Elasticity
5. Measured Service

(Mnemonic: OBRRM)

● On-Demand Self-Service: Users can independently provision computing resources such as storage, processing power, and applications without requiring human intervention, enabling quick access to needed resources.

● Broad Network Access: Cloud services are accessible over the internet from a variety of devices, ensuring widespread availability and allowing users to connect and use resources from virtually anywhere.

● Resource Pooling: Cloud providers consolidate and share computing resources among multiple users, optimizing efficiency and enabling cost savings through the pooling of resources like servers, storage, and networking equipment.

● Rapid Elasticity: Cloud resources can quickly and automatically scale up or down based on demand, ensuring that applications and services have the necessary resources available during periods of high usage and scaling back during periods of lower demand.

● Measured Service: Cloud providers track and monitor resource usage, allowing users to be billed based on their actual consumption. This pay-as-you-go model provides transparency and cost efficiency by aligning expenses with resource utilization (a small billing sketch follows below).
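
To make the measured-service idea concrete, here is a minimal billing sketch in Python: the bill is simply the sum of each metered quantity times its unit price. The resource names and prices are invented for illustration, not any provider's actual rate card.

```python
# Hypothetical pay-as-you-go bill: charges = sum(metered usage * unit price).
# Resource names and prices are illustrative, not a real provider's rate card.

UNIT_PRICES = {
    "vm_hours": 0.045,          # $ per VM-hour
    "storage_gb_month": 0.02,   # $ per GB-month stored
    "egress_gb": 0.09,          # $ per GB transferred out
}

def monthly_bill(metered_usage: dict) -> float:
    """Multiply each metered quantity by its unit price and total the charges."""
    return sum(qty * UNIT_PRICES[resource] for resource, qty in metered_usage.items())

# Example: 2 small VMs running all month, 500 GB stored, 120 GB of egress.
usage = {"vm_hours": 2 * 730, "storage_gb_month": 500, "egress_gb": 120}
print(f"Estimated monthly charge: ${monthly_bill(usage):.2f}")
```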

DEPLOYMENT MODELS

Public Cloud:

Definition: A public cloud is a type of cloud computing where services and infrastructure are
provided by third-party providers, made available to the general public over the internet.

Characteristics:

1. Shared Resources: Resources (such as computing power and storage) are shared
among multiple users, leading to cost efficiencies.
2. Scalability: Public clouds offer on-demand scalability, allowing users to scale resources
up or down based on their needs.
3. Cost-Effective: Users typically pay for what they consume, which can be more
cost-effective than maintaining on-premises infrastructure.
4. Accessibility: Public cloud services are accessible from anywhere with an internet connection.

Private Cloud:

Definition: A private cloud is a cloud infrastructure operated exclusively for a single organization.
It can be hosted on-premises or by a third-party provider.

Characteristics:

1. Dedicated Resources: Resources are dedicated to a single organization, providing greater control and customization.
2. Enhanced Security: Private clouds offer more control over security measures, making them suitable for organizations with strict data privacy and compliance requirements.
3. Customization: Organizations have more flexibility to customize the infrastructure and services according to their specific needs.
4. Higher Initial Costs: Setting up and maintaining a private cloud may involve higher initial costs compared to using public cloud services.

Hybrid Cloud:
Definition: A hybrid cloud combines public and private cloud infrastructure, allowing data and
applications to be shared between them.
Characteristics:

1. Flexibility: Organizations can leverage the scalability of the public cloud for
non-sensitive operations while keeping critical workloads in the private cloud.
2. Data Portability: Applications and data can be moved between public and private
environments based on changing needs.
3. Cost Optimization: Organizations can optimize costs by using public cloud resources
for peak demand while maintaining a baseline of dedicated resources in a private cloud.
4. Improved Redundancy: Hybrid cloud configurations can provide improved redundancy
and business continuity.

Differences:

Ownership:

Public: Owned and operated by third-party providers.
Private: Owned and operated by a single organization, or by a third party exclusively for that organization.
Hybrid: Combination of both public and private cloud resources.

Resource Sharing:

Public: Resources are shared among multiple users and organizations.
Private: Resources are dedicated to a single organization.
Hybrid: Mix of shared and dedicated resources.

Customization and Control:

Public: Limited customization and control.
Private: High level of customization and control.
Hybrid: Offers a balance between customization and control.

Security:

Public: Security is managed by the service provider.
Private: Organizations have more control over security measures.
Hybrid: Security measures can vary based on the deployment and management of each environment.

Choosing between public, private, or hybrid models depends on factors such as the
organization's specific needs, security requirements, budget considerations, and the nature of
the workloads being managed.
Scenario:

DevTech Solutions:
1. Size: Medium-sized with around 200 employees.
2. Operations: Software development, testing, and deployment.
3. Key Considerations: Cost-effectiveness, scalability, data security, and compliance with
industry regulations.

Use Case: Software Development and Testing


Cloud Model Recommendation: Public Cloud

Why:

Scalability: DevTech can easily scale resources up or down based on development and testing demands. This flexibility is crucial during peak development cycles.

Cost-Efficiency: With pay-as-you-go pricing, DevTech pays only for the resources consumed during active development and testing periods, optimizing costs.

Distributed Teams: Public cloud services facilitate collaboration among distributed development teams, providing seamless access to resources from different locations.

Why Not Private or Hybrid Cloud:

Limited Budget: Setting up and maintaining a private cloud may involve higher upfront
costs, making it less suitable for a medium-sized company with budget constraints.

Development Flexibility: Public clouds offer a broad range of development tools and
services, allowing DevTech to adapt quickly to changing development needs.
In this scenario, the public cloud is recommended for its scalability, cost-efficiency, and
support for distributed development teams.

SERVICE MODELS

Software as a Service (SaaS):


Definition: SaaS is a cloud computing model where software applications are
provided over the internet, and users can access them through a web browser
without the need for installation or maintenance.

Characteristics:

● Accessibility: Software applications are accessible from any device with an internet connection and a web browser.
● Automatic Updates: The service provider handles software updates, ensuring users always have access to the latest features and security patches.
● Multi-Tenancy: Multiple users or businesses can share the same application, with each having a customized instance.
● Subscription-Based Billing: SaaS is typically offered on a subscription basis, with users paying for the services they use.

Examples of Companies Using SaaS:

● Salesforce: Offers a cloud-based customer relationship management (CRM) platform.
● Microsoft 365 (formerly Office 365): Provides a suite of productivity applications, including Word, Excel, and Outlook, accessible online.
● Zendesk: Offers a cloud-based customer service platform for businesses.
● Slack: Provides a team collaboration and messaging platform.
● Adobe Creative Cloud: Offers a suite of creative software applications like Photoshop and Illustrator on a subscription basis.
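
Many SaaS products are also consumed programmatically through simple web APIs. As a hedged sketch, the snippet below posts a message to Slack via an incoming webhook; a real integration would use a webhook URL generated in your own workspace, and the one shown is only a placeholder.

```python
# Minimal sketch: consuming a SaaS product (Slack) through its incoming-webhook API.
# The webhook URL is a placeholder; generate a real one in your own Slack workspace.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify(text: str) -> None:
    """Send a message to a Slack channel via an incoming webhook."""
    response = requests.post(WEBHOOK_URL, json={"text": text}, timeout=10)
    response.raise_for_status()  # the webhook returns HTTP 200 on success

if __name__ == "__main__":
    notify("Nightly build finished - all tests passed.")
```
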
Platform as a Service (PaaS):
Definition: PaaS is a cloud computing model that provides a platform allowing
customers to develop, run, and manage applications without dealing with the
complexity of infrastructure.

Characteristics:

● Development Tools: PaaS provides tools and services to support the complete application development lifecycle.
● Automated Deployment: Streamlines the deployment process, allowing developers to focus on coding rather than managing infrastructure.
● Scalability: PaaS platforms offer automatic scalability to accommodate changing workloads.
● Database Integration: PaaS often includes built-in databases and services for data storage and retrieval.

Examples of Companies Using PaaS:

● Heroku (remember this): A cloud platform that enables developers to build, deploy, and scale applications easily.
● Google App Engine: A fully managed serverless platform for developing and deploying applications.
● Microsoft Azure App Service: Provides a platform for building, deploying, and scaling web apps.
● Red Hat OpenShift: An open-source container platform for automating the deployment, scaling, and management of applications.
● AWS Elastic Beanstalk: A fully managed service for deploying and running applications in various languages.
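
To ground the PaaS idea, here is a hedged sketch of what a developer hands to a platform such as Heroku: a minimal Flask web app plus the one-line Procfile the platform reads to start it. File names and the app itself are illustrative; the platform, not the developer, provisions and manages the servers.

```python
# app.py - a minimal Flask web app that a PaaS (e.g., Heroku) can build and run.
# The platform supplies the PORT environment variable; no server management is needed.
#
# Procfile (one line, tells the platform how to start the web process):
#   web: gunicorn app:app
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a PaaS-hosted app!"

if __name__ == "__main__":
    # Local development fallback; in production the platform runs gunicorn.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 5000)))
```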

Infrastructure as a Service (IaaS):


Definition: IaaS is a cloud computing model that provides virtualized computing
resources over the internet, including virtual machines, storage, and
networking.

Characteristics:
● Flexibility: Users have control over virtualized computing resources and
can customize their infrastructure.
● Scalability: IaaS allows for the dynamic scaling of resources based on
demand.
● Pay-as-You-Go Billing: Users pay for the resources they consume,
typically on a per-hour or per-minute basis.
● Self-Service Provisioning: Users can provision and manage resources
independently through a web-based interface or API.
Examples of Companies Using IaaS:

● Amazon Web Services (AWS): Offers a wide range of IaaS services, including Amazon EC2 for virtual servers.
● Microsoft Azure: Provides virtual machines, storage, and networking services as part of its IaaS offerings.
● Google Cloud Compute Engine: Allows users to run virtual machines on Google's infrastructure.
● IBM Cloud Infrastructure: Offers virtual servers, storage, and networking resources.
● DigitalOcean: Provides cloud computing services, including scalable virtual machines known as Droplets.
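
Self-service provisioning in IaaS is usually driven through an API rather than a console click. Below is a hedged sketch using AWS's boto3 SDK to launch one EC2 virtual machine; the AMI ID, region, instance type, and tag values are placeholders, and configured AWS credentials are assumed.

```python
# Sketch: programmatic self-service provisioning on an IaaS platform (AWS EC2 via boto3).
# AMI ID, region, and instance type are placeholders; valid AWS credentials are assumed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",          # small, pay-as-you-go instance size
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "devtech-test-vm"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}; it is billed until terminated.")
```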

Each of these cloud computing models (SaaS, PaaS, and IaaS) serves different
needs, offering varying levels of control and abstraction to cater to diverse
business requirements.

Examples

Scenario 1: Launching a Customer Relationship Management (CRM) System

Scenario: TechCo is planning to implement a CRM system to manage customer interactions, sales, and support.

Cloud Model Recommendation: Software as a Service (SaaS)

Why:

● Accessibility: TechCo can quickly deploy the CRM system without the need
for complex installation processes. Users can access it from any device
with an internet connection and a web browser.
● Automatic Updates: SaaS ensures that TechCo always has access to the
latest features and security patches without managing updates, reducing
the burden on the IT team.
● Subscription-Based Billing: With a subscription model, TechCo can
manage costs efficiently, paying for the CRM service based on the number
of users or features required.

Why Not PaaS or IaaS:

Focus on Functionality: Since TechCo's primary goal is to leverage a CRM system without the need to manage underlying infrastructure or development platforms, SaaS is the most straightforward and efficient choice.

Scenario 2: Developing and Deploying a New Web Application

Scenario: TechCo is planning to develop and deploy a new web application to support its business operations.

Cloud Model Recommendation: Platform as a Service (PaaS)

Why:

● Development Tools: PaaS platforms offer a suite of development tools that streamline the application development lifecycle. TechCo can focus on coding and business logic rather than managing infrastructure.
● Automated Deployment: PaaS automates the deployment process, making it easier for TechCo to release and update the application without dealing with the complexities of server management.
● Scalability: As the new web application gains popularity, TechCo can easily scale resources up or down based on demand without worrying about the underlying infrastructure.

Why Not SaaS or IaaS:

Development Flexibility: TechCo requires more control over the development process and infrastructure customization than what a SaaS model offers. PaaS strikes a balance between development flexibility and abstraction.

Scenario 3: Hosting a High-Performance Computing (HPC) Environment

Scenario: TechCo needs to run complex simulations and data-intensive tasks, requiring significant computing power.

Cloud Model Recommendation: Infrastructure as a Service (IaaS)

Why:

● Flexibility: IaaS provides TechCo with full control over virtualized computing
resources, allowing customization of the infrastructure to meet the specific
requirements of high-performance computing workloads.
● Scalability: For HPC environments, TechCo may need to scale resources
dynamically based on the complexity of simulations. IaaS allows for
granular control over resource scaling.
● Pay-as-You-Go Billing: Since HPC workloads can be resource-intensive, a
pay-as-you-go billing model in IaaS ensures cost optimization, where
TechCo pays for the computing resources consumed.

Why Not SaaS or PaaS:

Customization Needs: HPC environments often have specific hardware and software requirements. IaaS provides the level of customization and control needed for running specialized workloads.

In each scenario, the choice of cloud computing model depends on the specific
requirements, goals, and constraints of TechCo's projects. SaaS, PaaS, and IaaS
cater to different use cases, offering varying levels of abstraction and control to
suit diverse business needs.
Virtualization:

Virtualization is a technology that allows the creation of virtual representations or instances of computing resources, such as servers, storage, or networks, which enables efficient utilization and management of physical hardware.

Virtual Machine (VM):

A virtual machine is a software-based emulation of a physical computer, running an operating system and applications. Multiple VMs can coexist on a single physical server, each isolated and capable of running different operating systems or workloads.
Examples: Oracle VirtualBox and VMware Workstation.

Hypervisor:

A hypervisor, or Virtual Machine Monitor (VMM), is software or hardware that manages and allocates physical resources to multiple virtual machines. It allows for the simultaneous operation of multiple operating systems on a single physical machine.

Types of Hypervisors:

● Type 1 (Bare-Metal): Installed directly on the host hardware, Type 1 hypervisors, like VMware ESXi and Microsoft Hyper-V Server, provide direct access to resources for better performance.
● Type 2 (Hosted): Installed on top of an existing operating system, Type 2 hypervisors, like Oracle VirtualBox and VMware Workstation, run as applications and are suitable for development or testing environments.

Types of Virtualization:

● Server Virtualization: Involves creating multiple virtual servers on a single physical server to improve resource utilization and flexibility.
● Storage Virtualization: Abstracts physical storage resources into a unified pool, allowing efficient management, scalability, and ease of data migration.
● Network Virtualization: Separates network services from the underlying hardware, enabling the creation of virtual networks for improved agility and resource utilization.

Virtualization enhances efficiency, flexibility, and scalability in computing environments, making it a fundamental aspect of modern IT infrastructure.
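
As a small, hedged illustration of working with a Type 2 hypervisor, the sketch below drives VirtualBox's VBoxManage command-line tool from Python to register and start a VM. It assumes VirtualBox is installed and VBoxManage is on the PATH; the VM name and settings are illustrative, and attaching a disk or boot ISO is omitted for brevity.

```python
# Sketch: creating and starting a VM with VirtualBox's CLI (a Type 2 hypervisor).
# Assumes VirtualBox is installed and VBoxManage is on the PATH; values are illustrative.
import subprocess

VM_NAME = "demo-vm"

commands = [
    # Register a new VM with a 64-bit Ubuntu OS type.
    ["VBoxManage", "createvm", "--name", VM_NAME, "--ostype", "Ubuntu_64", "--register"],
    # Give it 2 GB of RAM and 2 virtual CPUs.
    ["VBoxManage", "modifyvm", VM_NAME, "--memory", "2048", "--cpus", "2"],
    # Start it without a GUI window (attaching a disk/ISO is omitted here).
    ["VBoxManage", "startvm", VM_NAME, "--type", "headless"],
]

for cmd in commands:
    subprocess.run(cmd, check=True)  # raises CalledProcessError if a step fails
```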

Let's examine the advantages and disadvantages of both Type 1 and Type 2 hypervisors:

**Advantages of Type 1 Hypervisors:**

1. **Performance:** Type 1 hypervisors typically offer better performance and efficiency compared to Type 2 hypervisors since they run directly on the host hardware without the overhead of an underlying operating system.

2. **Resource Utilization:** They allow for optimal utilization of hardware resources by efficiently managing multiple virtual machines (VMs) on a single physical server, maximizing resource allocation and reducing wastage.

3. **Security:** Type 1 hypervisors provide stronger isolation between VMs and the host system, reducing the attack surface and enhancing security compared to Type 2 hypervisors that run within a host operating system.

4. **Scalability:** They are well-suited for large-scale virtualization deployments in data centers and cloud environments, offering scalability and performance to support numerous VMs across multiple hosts.

5. **Reliability:** Type 1 hypervisors are designed for enterprise-grade reliability and stability, with features such as failover clustering, live migration, and fault tolerance to ensure high availability of virtualized workloads.

**Disadvantages of Type 1 Hypervisors:**

1. **Complexity:** Setting up and managing Type 1 hypervisors may require specialized skills and expertise in virtualization technologies, networking, and storage configurations, which can add complexity to deployment and maintenance.

2. **Hardware Dependence:** Type 1 hypervisors may require specific hardware support or compatibility, limiting deployment options and hardware flexibility compared to Type 2 hypervisors that run on a broader range of hardware.

3. **Limited Compatibility:** Some legacy or specialized software applications may not be compatible with Type 1 hypervisors, requiring additional effort or resources to migrate or virtualize such applications.

**Advantages of Type 2 Hypervisors:**

1. **Ease of Deployment:** Type 2 hypervisors are easier to deploy and configure since they run within a host operating system, requiring minimal setup and no specialized hardware support.

2. **Hardware Compatibility:** They offer broader hardware compatibility, allowing users to run virtual machines on a wide range of hardware platforms, including desktops, laptops, and workstations.

3. **Integration with Host OS:** Type 2 hypervisors integrate seamlessly with the host operating system, enabling users to leverage familiar management tools, applications, and networking configurations.

**Disadvantages of Type 2 Hypervisors:**

1. **Performance Overhead:** Type 2 hypervisors introduce additional overhead since they run on top of a host operating system, which can impact the performance and responsiveness of virtualized workloads compared to Type 1 hypervisors.

2. **Resource Sharing:** They compete for resources with the host operating
system and other applications running on the host system, potentially leading to
resource contention and degraded performance for virtual machines.
3. **Security Concerns:** Type 2 hypervisors may have security implications
since they rely on the security of the underlying host operating system.
Vulnerabilities or compromises in the host OS could potentially impact the
security of virtualized environments.

In summary, while Type 1 hypervisors offer superior performance, security, and scalability for enterprise virtualization deployments, Type 2 hypervisors provide ease of deployment and broader hardware compatibility for desktop and development environments. Organizations should evaluate their specific requirements and consider these factors when choosing between Type 1 and Type 2 hypervisors for their virtualization needs.

Here are the advantages and disadvantages of virtualization:

**Advantages:**

1. **Resource Optimization:** Virtualization allows for efficient utilization of physical hardware resources by creating multiple virtual machines (VMs) on a single physical server. This leads to better resource utilization and cost savings.

2. **Cost Savings:** By consolidating multiple physical servers into virtual machines, organizations can reduce hardware and operational costs, including power consumption, cooling, and physical space requirements.

3. **Scalability:** Virtualization provides scalability by enabling organizations to quickly provision and deploy new virtual machines as needed, without the need for additional physical hardware.

4. **Flexibility and Agility:** Virtualization allows for rapid deployment and migration of virtual machines, making it easier to adapt to changing business needs, scale resources up or down, and respond to workload demands.

5. **Improved Disaster Recovery:** Virtualization facilitates easier backup, replication, and recovery of virtual machines, enabling faster and more reliable disaster recovery processes compared to traditional physical infrastructure.

6. **Isolation and Security:** Virtualization provides strong isolation between virtual machines, reducing the risk of security breaches and malware propagation. It allows for the segmentation of applications and services, enhancing security and compliance.

**Disadvantages:**

1. **Resource Overhead:** Virtualization introduces some overhead due to the hypervisor layer and virtualization management processes, which can impact performance and resource utilization compared to running applications on bare-metal servers.

2. **Complexity:** Managing virtualized environments can be complex, requiring expertise in virtualization technologies, networking, storage, and security. Organizations may face challenges in configuration, optimization, and troubleshooting.

3. **Single Point of Failure:** Virtualization introduces a single point of failure with the hypervisor, which, if compromised or experiencing issues, can impact multiple virtual machines and services running on the host server.

4. **Licensing Costs:** Some software vendors may have licensing restrictions or additional costs for virtualized environments, leading to increased licensing expenses for organizations deploying virtual machines.

5. **Performance Degradation:** In some cases, virtualization can lead to performance degradation, especially for latency-sensitive or resource-intensive applications that require direct access to hardware resources.

6. **Vendor Lock-in:** Organizations may become dependent on specific virtualization vendors and technologies, leading to vendor lock-in and limited flexibility in migrating virtualized workloads to alternative platforms or cloud environments.

Overall, while virtualization offers numerous benefits in terms of resource optimization, cost savings, and flexibility, organizations should carefully consider the potential drawbacks and plan accordingly to mitigate risks and maximize the value of virtualization deployments.

Loose Coupling:
Imagine you're working on a group project with your friends, but each of you
works on your part independently without having to rely too much on each other.
Even if one person changes something in their part, it doesn't affect the others
too much because everyone's work is somewhat independent. This is like "loose
coupling."

Advantages of Loose Coupling:

Flexibility: Changes in one part don't affect the other parts much, so it's easier to
adapt and make adjustments.
Scalability: You can add or remove parts without affecting the rest of the system
too much.
Fault Isolation: If one part fails, it's less likely to bring down the entire system
because the parts are somewhat independent.

Disadvantages of Loose Coupling:


Complexity: Sometimes, managing all those independent parts can get
complicated.
Communication Overhead: Since parts are less dependent on each other, they
might need to communicate more to get things done, which can slow things
down.
Consistency: Keeping everything in sync and consistent can be a challenge.

Tight Coupling:
Now, imagine you're doing a synchronized dance routine with your friends. Each
move depends on what the others are doing, and if one person messes up or
changes their move, it can throw off the whole routine. This is like "tight
coupling."

Advantages of Tight Coupling:

Efficiency: Since everything is closely connected, you can work together smoothly and quickly.
Simplicity: It's often simpler to design and manage tightly coupled systems because everything is interconnected and flows together.
Consistency: Since everything is so connected, it's easier to ensure that everything stays consistent and in sync.

Disadvantages of Tight Coupling:


Rigidity: It's harder to make changes because everything is so interconnected. A
change in one part can have a big impact on other parts.
Dependency: You're heavily reliant on other parts, so if one part fails or needs to
be changed, it can cause problems for the whole system.
Scalability: It can be harder to scale up or down because changes in one part
might require changes in many other parts.

In summary, loose coupling is like working independently on a project, where parts are more flexible and independent, while tight coupling is like a synchronized dance routine, where everything is closely connected and dependent on each other. Each has its advantages and disadvantages, and the choice depends on the specific needs of the system.
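
A toy sketch of the same contrast in code: the tightly coupled service constructs and calls a concrete notifier class directly, while the loosely coupled service depends only on a small interface it is handed, so implementations can be swapped without touching the order logic. All class and method names here are invented for illustration.

```python
# Toy illustration of tight vs. loose coupling; all names are invented for the example.
from typing import Protocol

class EmailNotifier:
    def send(self, message: str) -> None:
        print(f"[email] {message}")

# Tight coupling: the service constructs and uses EmailNotifier directly,
# so switching to SMS notifications means editing this class.
class OrderServiceTight:
    def place_order(self, item: str) -> None:
        EmailNotifier().send(f"Order placed: {item}")

# Loose coupling: the service only depends on a small interface; any notifier
# implementing send() can be injected without changing the order logic.
class Notifier(Protocol):
    def send(self, message: str) -> None: ...

class SmsNotifier:
    def send(self, message: str) -> None:
        print(f"[sms] {message}")

class OrderServiceLoose:
    def __init__(self, notifier: Notifier) -> None:
        self.notifier = notifier

    def place_order(self, item: str) -> None:
        self.notifier.send(f"Order placed: {item}")

OrderServiceTight().place_order("laptop")
OrderServiceLoose(SmsNotifier()).place_order("laptop")    # swap notifiers freely
OrderServiceLoose(EmailNotifier()).place_order("laptop")
```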

Parallel Computing:
Definition: Parallel computing involves performing multiple tasks simultaneously
by dividing a single task into smaller sub-tasks that can be executed concurrently
on multiple processing units or cores within a single computer or system.

Advantages:

Faster Execution: Parallel computing can significantly reduce the time required
to complete computational tasks by leveraging the combined processing power
of multiple cores or processors.
Scalability: It allows for easy scalability by adding more processing units or
cores to further increase computational speed.
Efficiency: Parallel computing can make more efficient use of hardware
resources by utilizing idle processing units to execute tasks concurrently.
High Performance: It enables the execution of complex computations and
simulations that would otherwise be impractical or infeasible with sequential
processing.
Disadvantages:
Complexity: Designing parallel algorithms and managing concurrency can be
complex and requires expertise to avoid issues such as race conditions and
deadlocks.
Synchronization Overhead: Coordinating and synchronizing the execution of
parallel tasks can introduce overhead, reducing the overall performance gain.
Limited Scalability: The performance improvement may plateau when adding
more processing units due to factors such as communication overhead and
contention for shared resources.
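
A hedged sketch of parallel computing with Python's standard library: a CPU-bound task (counting primes) is split into one chunk per core, the chunks run concurrently in a process pool, and the partial results are combined.

```python
# Sketch: splitting a CPU-bound job across cores with the standard library.
import math
from multiprocessing import Pool, cpu_count

def count_primes(bounds: tuple) -> int:
    """Count primes in [lo, hi) - a deliberately CPU-heavy sub-task."""
    lo, hi = bounds
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        return all(n % d for d in range(2, math.isqrt(n) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Divide the range 0..200_000 into one chunk per CPU core.
    n, workers = 200_000, cpu_count()
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(count_primes, chunks)   # sub-tasks run concurrently
    print(f"{sum(partials)} primes below {n} (using {workers} processes)")
```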

Distributed Computing:

Definition: Distributed computing involves the coordination and execution of tasks across multiple interconnected computers or nodes, often geographically dispersed, to solve a single computational problem.

Advantages:

High Availability: Distributed computing systems can provide high availability and fault tolerance by distributing tasks across multiple nodes, reducing the impact of individual node failures.
Scalability: They offer excellent scalability by adding more nodes to the network, allowing for increased computational power and storage capacity.
Geographic Flexibility: Distributed computing enables collaboration and resource sharing across different locations, making it suitable for applications requiring global access and collaboration.
Resource Sharing: It allows for efficient resource utilization by distributing computational tasks to idle or underutilized nodes in the network.

Disadvantages:

Network Overhead: Communication between distributed nodes introduces latency and overhead, which can impact performance, especially for latency-sensitive applications.
Complexity: Designing and managing distributed systems is complex, requiring considerations for network topology, data consistency, and fault tolerance.
Security Risks: Distributed computing systems are more vulnerable to security threats such as network attacks and data breaches due to their distributed nature and reliance on network communication.
Consistency Challenges: Ensuring consistency and coherence of data across distributed nodes can be challenging, leading to issues such as data inconsistency and synchronization problems.
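
A hedged sketch of the scatter/gather pattern behind many distributed systems: sub-tasks are sent to several worker nodes over HTTP and the partial results are collected. The worker hosts and their /compute endpoint are hypothetical; each node would run a small service exposing that endpoint.

```python
# Sketch: scattering sub-tasks to worker nodes over HTTP and gathering the results.
# The worker hosts and the /compute endpoint are hypothetical; each node would run
# a small service that accepts a task and returns a partial result as JSON.
from concurrent.futures import ThreadPoolExecutor
import requests

WORKERS = ["http://worker1:8000", "http://worker2:8000", "http://worker3:8000"]  # hypothetical

def run_on_worker(worker_url: str, task: dict) -> dict:
    """Send one sub-task to a node and return its partial result."""
    resp = requests.post(f"{worker_url}/compute", json=task, timeout=30)
    resp.raise_for_status()
    return resp.json()

def distribute(tasks: list) -> list:
    """Round-robin tasks across the worker pool and collect the results."""
    with ThreadPoolExecutor(max_workers=len(WORKERS)) as pool:
        futures = [
            pool.submit(run_on_worker, WORKERS[i % len(WORKERS)], task)
            for i, task in enumerate(tasks)
        ]
        return [f.result() for f in futures]  # network failures surface here

if __name__ == "__main__":
    # Example: each node processes one shard of a dataset (workers must be running).
    results = distribute([{"shard": i} for i in range(6)])
```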

Responsibility sharing between users and cloud service providers

Responsibility sharing between users and cloud service providers is a fundamental concept in cloud computing that outlines the division of responsibilities for managing and securing cloud-based systems and data. This model clarifies the roles and obligations of both parties to ensure the security, integrity, and compliance of cloud environments. Here's how the responsibilities are typically shared:

1. **Cloud Service Provider Responsibilities:**


- **Infrastructure:** The cloud service provider is responsible for securing the
underlying infrastructure, including data centers, networking, and physical
security measures.
- **Platform Security:** They manage the security of the cloud platform,
including the hypervisor, operating systems, and virtualization layers.
- **Compliance Certifications:** Cloud providers often obtain certifications
and comply with industry standards to ensure the security and privacy of
customer data, such as SOC 2, ISO 27001, HIPAA, and GDPR.
- **Data Backup and Redundancy:** Providers typically offer data backup,
replication, and redundancy services to ensure data availability and disaster
recovery.

2. **User Responsibilities:**
- **Data Security:** Users are responsible for securing their data and
applications deployed on the cloud platform. This includes implementing access
controls, encryption, and data loss prevention measures.
- **Identity and Access Management (IAM):** Users manage user access
and permissions within their cloud environment, ensuring that only authorized
individuals have access to sensitive resources.
- **Configuration Management:** Users are responsible for configuring and
managing their cloud resources securely, including virtual machines, databases,
and networking settings.
- **Compliance with Regulations:** Users must ensure that their use of the
cloud platform complies with relevant regulations and industry standards
applicable to their business, such as data protection laws and industry-specific
regulations.

3. **Shared Responsibilities:**

- **Security Patching:** While cloud providers may patch and update their
infrastructure and platform components, users are responsible for patching their
applications and operating systems deployed on the cloud.

- **Incident Response:** Cloud providers typically handle incidents related to the underlying infrastructure, while users are responsible for responding to security incidents and breaches within their own applications and data.

By clearly defining the responsibilities of both parties, the shared responsibility model helps to mitigate security risks, ensure regulatory compliance, and establish accountability in cloud computing environments. This collaborative approach enables organizations to leverage the benefits of cloud services while effectively managing security and compliance requirements.
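
As an example of a user-side responsibility (identity and access management), the hedged sketch below uses boto3 to create a least-privilege IAM policy that grants read-only access to a single S3 bucket. The bucket and policy names are placeholders, and configured AWS credentials are assumed.

```python
# Sketch of a user-side responsibility: defining least-privilege access with IAM.
# Bucket and policy names are placeholders; configured AWS credentials are assumed.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],       # read-only actions
        "Resource": [
            "arn:aws:s3:::example-reports-bucket",          # the bucket itself
            "arn:aws:s3:::example-reports-bucket/*",        # and its objects
        ],
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ReadOnlyReportsBucket",
    PolicyDocument=json.dumps(policy_document),
)
```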
