CC Merged
Theory:
What is Cloud?
The term "Cloud" refers to a network or the Internet. In other words, the cloud is something that is present at a remote location. The cloud can provide services over public and private networks, i.e., WAN, LAN, or VPN.
Applications such as e-mail, web conferencing, and customer relationship management (CRM) execute on the cloud.
Cloud computing is the on-demand delivery of IT resources over the Internet with pay-as-
you-go pricing. Instead of buying, owning, and maintaining physical data centers and
servers, you can access technology services, such as computing power, storage, and
databases, on an as-needed basis from a cloud provider like Amazon Web Services (AWS).
Basic Concepts
There are certain services and models working behind the scenes that make cloud computing feasible and accessible to end users. The following are the working models for cloud computing:
• Deployment Models
• Service Models
Deployment Models
Deployment models define the type of access to the cloud, i.e., how the cloud is located.
Cloud can have any of the four types of access: Public, Private, Hybrid, and Community.
Public Cloud
The public cloud allows systems and services to be easily accessible to the general public.
Public cloud may be less secure because of its openness.
Private Cloud
The private cloud allows systems and services to be accessible within an organization. It is
more secure because of its private nature.
Community Cloud
The community cloud allows systems and services to be accessible by a group of organizations.
Hybrid Cloud
The hybrid cloud is a mixture of public and private cloud, in which the critical activities are
performed using private cloud while the non-critical activities are performed using public
cloud.
Service Models
Cloud computing is based on service models. These are categorized into three basic service models:
• Infrastructure-as-a-Service (IaaS)
• Platform-as-a-Service (PaaS)
• Software-as-a-Service (SaaS)
The concept of cloud computing came into existence in the 1950s with the implementation of mainframe computers, accessible via thin/static clients. Since then, cloud computing has evolved from static clients to dynamic ones and from software to services. The following diagram explains the evolution of cloud computing:
Benefits
Cloud Computing has numerous advantages. Some of them are listed below -
• One can access applications as utilities, over the Internet.
• One can manipulate and configure the applications online at any time.
• It does not require installing any software to access or manipulate cloud applications.
• Cloud Computing offers online development and deployment tools and a programming runtime environment through the PaaS model.
• Cloud resources are available over the network in a manner that provides platform-independent access to any type of client.
• Cloud Computing offers on-demand self-service. The resources can be used without interaction with the cloud service provider.
• Cloud Computing is highly cost effective because it operates at high efficiency with optimum utilization. It just requires an Internet connection.
• Cloud Computing offers load balancing that makes it more reliable.
Although cloud computing is a promising innovation with various benefits in the world of computing, it comes with risks. Some of them are discussed below:
Security and Privacy
This is the biggest concern about cloud computing. Since data management and infrastructure management in the cloud are provided by a third party, it is always a risk to hand over sensitive information to cloud service providers.
Although cloud computing vendors ensure highly secure, password-protected accounts, any sign of a security breach may result in loss of customers and business.
Lock In
It is very difficult for customers to switch from one Cloud Service Provider (CSP) to another. This results in dependency on a particular CSP for service.
Isolation Failure
This risk involves the failure of the isolation mechanisms that separate storage, memory, and routing between the different tenants.
Management Interface Compromise
In the case of a public cloud provider, the customer management interfaces are accessible through the Internet, which increases the risk of compromise.
Incomplete Data Deletion
It is possible that data requested for deletion may not actually get deleted. This happens for either of the following reasons:
• Extra copies of the data are stored but are not available at the time of deletion.
• The disk that stores data of multiple tenants cannot be destroyed, because it also holds other tenants' data.
There are some key characteristics of cloud computing. They are described below:
On Demand Self Service
Cloud computing allows users to use web services and resources on demand. One can log on to a website at any time and use them.
Broad Network Access
Since cloud computing is completely web based, it can be accessed from anywhere and at any time.
Resource Pooling
Cloud computing allows multiple tenants to share a pool of resources. One can share a single physical instance of hardware, database, and basic infrastructure.
Rapid Elasticity
It is very easy to scale the resources vertically or horizontally at any time. Scaling of
resources means the ability of resources to deal with increasing or decreasing demand.
The resources being used by customers at any given point of time are automatically monitored.
Measured Service
In this model, the cloud provider controls and monitors all aspects of the cloud service. Resource optimization, billing, and capacity planning depend on it.
Conclusion:
By embracing cloud-based services, From Software has strengthened its position as a leading
innovator in the gaming industry, delivering immersive and engaging experiences to millions of
players worldwide. With the scalability, reliability, and flexibility offered by cloud computing,
From Software continues to push the boundaries of interactive entertainment, setting new
standards for excellence in game development and online gaming experiences.
Theory:
Hosted virtualization refers to the practice of running multiple virtual machines (VMs) on a
single physical host operating system. Virtualization technologies like VirtualBox and KVM
enable users to create and manage VMs, each of which can run its own operating system and
applications independently.
Overview of VirtualBox and KVM: Provide an overview of VirtualBox and KVM, two popular
hosted virtualization solutions. Explain their features, capabilities, and differences in terms of
architecture, performance, and supported platforms.
Installation and Setup: Detail the steps for installing and configuring VirtualBox and KVM on a
host operating system (e.g., Linux distribution). Include instructions for installing necessary
dependencies, enabling virtualization extensions in the BIOS/UEFI firmware, and setting up
networking for VM communication.
Creating Virtual Machines: Demonstrate how to create and configure VMs using VirtualBox
and KVM. Explain the process of allocating resources (CPU, memory, disk space) to VMs,
selecting the guest operating system, and configuring virtual hardware settings.
Managing Virtual Machines: Explore the management capabilities provided by VirtualBox and
KVM. Discuss tasks such as starting, stopping, pausing, cloning, and snapshotting VMs, as well
as managing virtual storage and networking.
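As a brief, hedged illustration of these management tasks (the VM name "testvm" is a placeholder), KVM guests can be managed from the command line with virsh, and VirtualBox VMs with VBoxManage:
virsh list --all                              # list all defined KVM guests and their state
virsh start testvm                            # start a VM
virsh suspend testvm                          # pause a running VM
virsh shutdown testvm                         # gracefully stop a VM
virsh snapshot-create-as testvm snap1         # take a snapshot named snap1
VBoxManage list vms                           # list VirtualBox VMs
VBoxManage startvm "testvm" --type headless   # start a VirtualBox VM without a GUI
VBoxManage snapshot "testvm" take snap1       # snapshot a VirtualBox VM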
Implementation of KVM
Update and upgrade Ubuntu 22.04:
sudo apt update
sudo apt upgrade
Install KVM on Ubuntu 22.04:
sudo apt install -y qemu-kvm virt-manager libvirt-daemon-system virtinst libvirt-clients bridge-utils
Enable and start the virtualization daemon:
sudo systemctl enable --now libvirtd
sudo systemctl start libvirtd
Add your user to the kvm and libvirt groups:
sudo usermod -aG kvm $USER
sudo usermod -aG libvirt $USER
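As a rough sketch that goes beyond the steps above (the ISO path, VM name, and sizes are placeholders), the installation can be verified and a test VM created with virt-install:
sudo apt install -y cpu-checker     # provides the kvm-ok utility
kvm-ok                              # confirm hardware virtualization support
virsh list --all                    # confirm libvirtd is responding
sudo virt-install \
  --name testvm \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/ubuntu-22.04.iso \
  --os-variant ubuntu22.04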
Theory:
AWS IAM (Identity and Access Management) is a service provided by Amazon
Web Services (AWS) that helps you manage access to your AWS resources. It's
like a security system for your AWS account.
IAM allows you to create and manage users, groups, and roles. Users represent
individual people or entities who need access to your AWS resources. Groups
are collections of users with similar access requirements, making it easier to
manage permissions. Roles are used to grant temporary access to external
entities or services.
With IAM, you can control and define permissions through policies. Policies are
written in JSON format and specify what actions are allowed or denied on
specific AWS resources. These policies can be attached to IAM entities (users,
groups, or roles) to grant or restrict access to AWS services and resources.
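As an illustrative sketch only (the user name, policy name, and file name below are hypothetical), a simple read-only S3 policy in JSON and the AWS CLI calls to create a user and attach the policy might look like this:
cat > s3-readonly.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": "*"
    }
  ]
}
EOF
aws iam create-user --user-name demo-user
aws iam put-user-policy --user-name demo-user --policy-name S3ReadOnly --policy-document file://s3-readonly.json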
IAM follows the principle of least privilege, meaning users and entities are
given only the necessary permissions required for their tasks, minimizing
potential security risks. IAM also provides features like multi-factor
authentication (MFA) for added security and an audit trail to track user activity
and changes to permissions.
By using AWS IAM, you can effectively manage and secure access to your
AWS resources, ensuring that only authorized individuals have appropriate
permissions and actions are logged for accountability and compliance purposes.
Overall, IAM is an essential component of AWS security, providing granular
control over access to your AWS account and resources, reducing the risk of
unauthorized access and helping maintain a secure environment.
Output:
1. Adding user
Conclusion:
When launching the instance we can select the instance name, the OS (e.g., Linux), the storage, the instance type, and other networking capabilities.
Once our instance is running we can access it over the Internet using SSH and install the necessary applications.
We can host web servers and web applications, process data, perform analytics, etc.
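For example (a sketch with a placeholder key pair, AMI user, and IP address), connecting to an Ubuntu instance over SSH and installing a web server might look like:
chmod 400 mykey.pem                                   # restrict key permissions as SSH requires
ssh -i mykey.pem ubuntu@<instance-public-ip>          # the user name depends on the AMI
sudo apt update && sudo apt install -y apache2        # install a web server on the instance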
OUTPUT
3. Select OS
Theory:
Elastic Beanstalk is a platform within AWS that is used for deploying and scaling web applications. In simple terms, this platform as a service (PaaS) takes your application code and deploys it while provisioning the supporting architecture and compute resources required for your code to run. Elastic Beanstalk also fully manages the patching and security updates for those provisioned resources.
There are many PaaS solutions in the cloud computing space, including Red Hat OpenShift, Google App Engine, Scalingo, PythonAnywhere, and Azure App Service; however, AWS Elastic Beanstalk remains one of the leading PaaS choices among app developers.
There is no charge to use Elastic Beanstalk to deploy your applications; you are only charged for the resources that are created to support your application.
If you are planning to deploy Elastic Beanstalk, you can use Hava to visualise your architecture.
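As a hedged sketch of a typical workflow with the Elastic Beanstalk CLI (the application name, environment name, and platform string are placeholders and depend on your runtime):
pip install awsebcli            # install the EB CLI
eb init -p python-3.8 my-app    # create an application and choose a platform
eb create my-env                # provision an environment and deploy the current code
eb deploy                       # redeploy after code changes
eb open                         # open the running application in a browser
eb terminate my-env             # tear down the environment when finished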
Output:
Experiment No. 6
Aim: To study and Implement Storage as a Service using AWS S3, Glacier/
Azure Storage.
Theory: AWS S3 stands for Simple Storage Service, which is used to store files, objects, and folders in the cloud so that they can easily be accessed from anywhere.
It provides:
1. High scalability
2. High availability
3. Security
4. Cost effectiveness
5. High performance
According to AWS, S3 provides virtually unlimited storage, which means it can hold very large amounts of data.
It allows us to store and retrieve any amount of data from anywhere on the web.
S3 buckets are containers used for storing files. Each S3 bucket has a unique name across AWS.
They are commonly used for backup and restore, data archiving, content storage
for websites, and as a data source for big data analytics.
They are also commonly used for hosting static websites and maintaining user history over the years.
For Example
1. A hospital wishes to store user history of the past 30-40 years.
2. A company can store compliance files along with their reports so that
they can be accessed easily
Output:
1. Search S3 in the services and create a bucket from the right section.
2. Choose a region and give a unique name such that it is available (globally unique) across AWS.
10. Finally, access your objects on the web by pasting the public URL in an incognito window (for testing purposes).
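The same workflow can also be driven from the AWS CLI; a rough sketch (the bucket name is a placeholder and must be globally unique):
aws s3 mb s3://my-demo-bucket-12345 --region us-east-1                   # create the bucket
aws s3 cp report.pdf s3://my-demo-bucket-12345/                          # upload an object
aws s3 ls s3://my-demo-bucket-12345/                                     # list objects
aws s3 presign s3://my-demo-bucket-12345/report.pdf --expires-in 3600    # temporary shareable URL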
Experiment No. 7
Aim: To study and Implement Database as a Service on SQL/NOSQL databases like AWS
RDS, AZURE SQL/ MongoDB Lab/ Firebase.
Theory: Amazon RDS (Relational Database Service) is a managed database service that supports the following engines:
1. Amazon Aurora.
2. MySQL.
3. MariaDB.
4. Oracle.
5. SQL Server.
6. PostgreSQL
Databases can also be installed on EC2 instances through the CLI.
The problem is that if the EC2 instance goes down or crashes, this will also affect the database on it. Hence AWS recommends using RDS, which is a better alternative to locally managed databases.
Once we create the RDS instance on AWS, we can access it through MySQL Workbench and through programming languages like Python, Java, etc.
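For instance (the endpoint, user, and database name below are placeholders), once the RDS instance allows public access it can be tested from any machine with the MySQL client:
mysql -h mydb-instance.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u admin -p
mysql -h mydb-instance.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p -e "CREATE DATABASE testdb; SHOW DATABASES;"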
Output:
Password : admin1234
6. Select the default VPC (Virtual Private Cloud); if you want to connect to EC2, then select `Connect to EC2`.
7. Also select the default subnet, or else create a subnet group from the Subnet group section.
8. Allow public access to the DB so that we can access it globally.
9. Choose the firewall, VPC security groups, and availability zone.
14. Try setting up a connection to the newly created RDS instance in MySQL Workbench.
Steps:
● UPDATE OS:
➢ sudo apt update && sudo apt upgrade -y
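The MySQL installation command itself is not reproduced in these steps; presumably something along the following lines was run before checking the service status shown below:
sudo apt install -y mysql-server     # install the MySQL server package
sudo systemctl status mysql          # verify that the service is running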
OUTPUT:
● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled;
vendor preset: enabled)
Active: active (running) since Thu 2024-04-04 12:45:34 IST;
51s ago
Main PID: 36775 (mysqld)
Status: "Server is operational"
Tasks: 38 (limit: 9284)
Memory: 365.6M
CPU: 1.462s
CGroup: /system.slice/mysql.service
└─36775 /usr/sbin/mysqld
➢ sudo mysql_secure_installation
OUTPUT:
Securing the MySQL server deployment.
Connecting to MySQL using a blank password.
VALIDATE PASSWORD COMPONENT can be used to test
passwords and improve security. It checks the strength of password
and allows the users to set only those passwords which are secure
enough. Would you like to setup VALIDATE PASSWORD
component?
Press y|Y for Yes, any other key for No: y
There are three levels of password validation policy:
LOW Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and
dictionary file
Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 0
Skipping password set for root as authentication with auth_socket is
used by default.
If you would like to use password authentication instead, this can be
done with the "ALTER_USER" command. See
https://dev.mysql.com/doc/refman/8.0/en/alter-user.html#alter-user-
password-management for more information.
By default, a MySQL installation has an anonymous user, allowing
anyone to log into MySQL without having to have a user account
created for them. This is intended only for testing, and to make the
installation go a bit smoother. You should remove them before
moving into a production environment.
Remove anonymous users? (Press y|Y for Yes, any other key for No) :
y
Success.
Normally, root should only be allowed to connect from 'localhost'.
This ensures that someone cannot guess at the root password from the
network.
Disallow root login remotely? (Press y|Y for Yes, any other key for
No) : y
Success.
By default, MySQL comes with a database named 'test' that anyone
can access. This is also intended only for testing, and should be
removed before moving into a production
environment.
Remove test database and access to it? (Press y|Y for Yes, any other
key for No) : y
- Dropping test database...
Success.
- Removing privileges on test database...
Success.
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? (Press y|Y for Yes, any other key for
No) : y
Success.
All done!
➢ sudo mysql -u root -p
➔ The above command allows you to log in to the MySQL shell.
➔ Follow the creation of the database in MySQL for the ownCloud installation (a sketch of the required statements is shown after the output below).
OUTPUT:
hp@hp-HP-EliteBook-840-G4:~$ sudo mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 27
Server version: 8.0.36-0ubuntu0.22.04.1 (Ubuntu)
Type 'help;' or '\h' for help. Type '\c' to clear the current input
statement.
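The exact statements used are not shown above; as a minimal sketch (the database name, user, and password are placeholders), the ownCloud database would typically be created at the mysql> prompt like this:
CREATE DATABASE owncloud;
CREATE USER 'owncloud'@'localhost' IDENTIFIED BY 'StrongPassword1!';
GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost';
FLUSH PRIVILEGES;
EXIT;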
● INSTALL OWNCLOUD:
➢ Download ownCloud using the following command:
➔ sudo wget https://download.owncloud.com/server/stable/owncloud-complete-latest.zip
➢ Unzip and extract the file into /var/www/ (the extraction and Apache setup commands are sketched after the virtual host configuration below).
➢ Make a directory to store user data:
➔ sudo mkdir -p /var/www/owncloud/data
➢ Create an Apache virtual host configuration for ownCloud (for example in /etc/apache2/sites-available/owncloud.conf):
<VirtualHost *:80>
ServerName cloud.vivek.com
DocumentRoot /var/www/owncloud
<Directory /var/www/owncloud/>
Options +FollowSymlinks
AllowOverride All
Require all granted
</Directory>
ErrorLog /var/log/apache2/cloud.vivek.com_error.log
CustomLog /var/log/apache2/cloud.vivek.com_access.log combined
</VirtualHost>
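The remaining commands are not reproduced above; as a hedged sketch (the configuration file name owncloud.conf is an assumption), extraction, permissions, and enabling the site would typically look like:
sudo apt install -y unzip                                  # if unzip is not already present
sudo unzip owncloud-complete-latest.zip -d /var/www/       # extract ownCloud into /var/www/
sudo chown -R www-data:www-data /var/www/owncloud          # give Apache ownership of the files
sudo a2ensite owncloud.conf                                # enable the virtual host
sudo systemctl reload apache2                              # apply the configuration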
Theory:
Containerization has revolutionized the way software applications are developed, deployed, and
managed. Among the leading containerization platforms, Docker stands out as a powerful tool
for creating, distributing, and running containers. This endeavor aims to delve into the realm of
containerization with Docker, providing a comprehensive study and practical implementation of
its capabilities.
Understanding Containerization: The journey begins with a thorough understanding of
containerization and its significance in modern software development practices. Containerization
enables developers to package applications and their dependencies into lightweight, portable
units called containers, ensuring consistency across different environments and simplifying
deployment processes.
Introduction to Docker: Participants are introduced to Docker, the leading containerization
platform known for its simplicity, flexibility, and scalability. They gain insights into Docker's
architecture, including Docker Engine, Docker images, and Docker containers. Through hands-
on exercises, participants learn how to install Docker, interact with Docker CLI (Command Line
Interface), and manage Docker containers.
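On Ubuntu, for example, a minimal (hedged) install-and-verify sequence using the distribution package is:
sudo apt update
sudo apt install -y docker.io            # install the Docker engine
sudo systemctl enable --now docker       # start Docker and enable it at boot
sudo usermod -aG docker $USER            # allow running docker without sudo (re-login required)
docker --version                         # confirm the installation
docker run hello-world                   # run a test container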
Creating Docker Images: The course delves into the process of creating Docker images, which
serve as the blueprints for containers. Participants learn how to write Dockerfiles, which define
the configuration and dependencies of an application, and build Docker images using Docker
build commands. They explore best practices for optimizing Docker images and minimizing
image size to enhance efficiency and performance.
Orchestrating Containers with Docker Compose: Docker Compose is introduced as a tool for
orchestrating multi-container Docker applications. Participants learn how to define multi-
container environments using Docker Compose YAML files, specify dependencies between
containers, and manage complex application stacks with ease. They explore Docker Compose
commands for building, starting, stopping, and scaling application services.
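A minimal, hedged illustration (the service names and images are arbitrary) of a Compose file and the commands used to run it:
cat > docker-compose.yml << 'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF
docker compose up -d     # start all services in the background (docker-compose on older installs)
docker compose ps        # show running services
docker compose down      # stop and remove the services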
Networking and Storage in Docker: Networking and storage are crucial aspects of
containerized environments. Participants learn about Docker networking modes, including
bridge, host, and overlay networks, and how to configure network settings for containers.
Additionally, they explore Docker storage options, such as volumes and bind mounts, for
persisting data and sharing files between containers and the host system.
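For example (the network, volume, and container names are placeholders), a user-defined bridge network and a named volume can be created and attached like so:
docker network create appnet      # user-defined bridge network
docker volume create appdata      # named volume for persistent data
docker run -d --name db --network appnet -v appdata:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=example mysql:8.0
docker run -d --name web --network appnet -p 8080:80 nginx:alpine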
Security and Best Practices: Security is paramount in containerized environments, and
participants learn about Docker security features and best practices for securing Docker
containers and images. Topics include container isolation, user namespaces, Docker Content
Trust (DCT), and vulnerability scanning with Docker Security Scanning. Participants also
explore Docker best practices for optimizing performance, scalability, and resource utilization.
Output:
App.js
const http = require('http');
Dockerfile
# Use the official Node.js image from the Docker Hub
FROM node:14
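The App.js and Dockerfile contents above are only partially reproduced; assuming the Node.js server listens on port 3000 and the image is tagged node-app (both assumptions), the image would be built and run roughly as follows:
docker build -t node-app .               # build the image from the Dockerfile
docker run -d -p 3000:3000 node-app      # run the container and publish port 3000
docker ps                                # confirm the container is running
curl http://localhost:3000               # test the application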
Theory:
At its core, Kubernetes operates on the principles of declarative configuration and automation.
Understanding Kubernetes starts with familiarizing oneself with its key building blocks: Pods,
Services, Deployments, and StatefulSets. Pods encapsulate one or more containers and are the
basic unit of deployment. Services provide network access to a set of Pods, enabling load
balancing and service discovery. Deployments manage the lifecycle of Pods, ensuring that the
desired state is maintained and automatically handling scaling and rolling updates.
StatefulSets are similar to Deployments but are tailored for stateful applications, preserving
stable network identities and storage.
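As a hedged example of these objects in action (the deployment name and image are arbitrary):
kubectl create deployment web --image=nginx                # Deployment managing nginx Pods
kubectl expose deployment web --port=80 --type=NodePort    # Service providing network access
kubectl scale deployment web --replicas=3                  # scale the Deployment
kubectl get pods,svc                                       # inspect the resulting Pods and Service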
To delve deeper into Kubernetes, it's essential to grasp its architecture and components.
Kubernetes follows a master-worker architecture, where a cluster consists of a control plane
(master) and multiple nodes (workers). The control plane components include the API server,
scheduler, controller manager, and etcd, which collectively manage and orchestrate the
cluster's resources. Worker nodes host the Pods and run various Kubernetes components such
as the kubelet, kube-proxy, and container runtime (e.g., Docker or containerd).
Output:
Install Kubernetes tooling: kubectl, Minikube, and Docker.
Create the pod manifest file for the pod.
Check that Docker is enabled.
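A rough sketch of what the pod file and surrounding commands might look like (the pod name and image are placeholders):
minikube start --driver=docker      # start a local single-node cluster using Docker
cat > pod.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
EOF
kubectl apply -f pod.yaml           # create the pod from the manifest
kubectl get pods                    # verify the pod is running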