
Experiment No.

Aim: Case Study of cloud computing.

Theory:

What is Cloud?

The term Cloud refers to a network or the Internet. In other words, the cloud is infrastructure that is present at a remote location. A cloud can provide services over public and private networks, i.e., WAN, LAN, or VPN.
Applications such as e-mail, web conferencing, and customer relationship management (CRM) execute on the cloud.

What is Cloud Computing?

Cloud computing is the on-demand delivery of IT resources over the Internet with pay-as-
you-go pricing. Instead of buying, owning, and maintaining physical data centers and
servers, you can access technology services, such as computing power, storage, and
databases, on an as-needed basis from a cloud provider like Amazon Web Services (AWS).

Basic Concepts

There are certain services and models working behind the scenes that make cloud computing feasible and accessible to end users. The following are the working models for cloud computing:
• Deployment Models
• Service Models

Deployment Models

Deployment models define the type of access to the cloud, i.e., how the cloud is located. A cloud can have any of four types of access: Public, Private, Hybrid, and Community.

Public Cloud

The public cloud allows systems and services to be easily accessible to the general public.
Public cloud may be less secure because of its openness.

Private Cloud

The private cloud allows systems and services to be accessible within an organization. It is
more secure because of its private nature.
Community Cloud

The community cloud allows systems and services to be accessible by a group of organizations.

Hybrid Cloud

The hybrid cloud is a mixture of public and private cloud, in which the critical activities are
performed using private cloud while the non-critical activities are performed using public
cloud.

Service Models

Cloud computing is based on service models. These are categorized into three basic service models:
• Infrastructure-as-a-Service (IaaS)
• Platform-as-a-Service (PaaS)
• Software-as-a-Service (SaaS)

Anything-as-a-Service (XaaS) is yet another service model, which includes Network-as-a-Service, Business-as-a-Service, Identity-as-a-Service, Database-as-a-Service, and Strategy-as-a-Service.
Infrastructure-as-a-Service (IaaS) is the most basic level of service. Each service model inherits the security and management mechanisms of the underlying model.

Infrastructure-as-a-Service (IaaS)
IaaS provides access to fundamental resources such as physical machines, virtual machines, virtual storage, etc.

Platform-as-a-Service (PaaS)
PaaS provides the runtime environment for applications, along with development and deployment tools.

Software-as-a-Service (SaaS)
The SaaS model makes software applications available to end users as a service.

History of Cloud Computing

The concept of cloud computing came into existence in the 1950s with the implementation of mainframe computers, accessible via thin/static clients. Since then, cloud computing has evolved from static clients to dynamic ones and from software to services.
Benefits

Cloud Computing has numerous advantages. Some of them are listed below -
• One can access applications as utilities over the Internet.
• One can manipulate and configure applications online at any time.
• It does not require installing any software to access or manipulate cloud applications.
• Cloud Computing offers online development and deployment tools and a programming runtime environment through the PaaS model.
• Cloud resources are available over the network in a manner that provides platform-independent access to any type of client.
• Cloud Computing offers on-demand self-service, so resources can be used without interaction with the cloud service provider.
• Cloud Computing is highly cost effective because it operates at high efficiency with optimum utilization; it only requires an Internet connection.
• Cloud Computing offers load balancing, which makes it more reliable.

Risks related to Cloud Computing

Although cloud computing is a promising innovation with various benefits in the world of
computing, it also comes with risks. Some of them are discussed below:

Security and Privacy

This is the biggest concern about cloud computing. Since data management and infrastructure management in the cloud are provided by a third party, it is always a risk to hand over sensitive information to cloud service providers.
Although cloud computing vendors ensure highly secure, password-protected accounts, any sign of a security breach may result in loss of customers and business.

Lock In

It is very difficult for customers to switch from one Cloud Service Provider (CSP) to another, which results in dependency on a particular CSP for service.

Isolation Failure

This risk involves the failure of the isolation mechanisms that separate storage, memory, and routing between different tenants.

Management Interface Compromise

In the case of a public cloud provider, the customer management interfaces are accessible over the Internet, which increases the risk of unauthorized access.

Insecure or Incomplete Data Deletion

It is possible that data requested for deletion may not actually get deleted. This happens for either of the following reasons:
• Extra copies of the data are stored but are not available at the time of deletion.
• The disk that stores the data also holds data of multiple tenants.

Characteristics of Cloud Computing

There are five key characteristics of cloud computing:
On Demand Self Service

Cloud Computing allows users to use web services and resources on demand. One can log on to a web portal at any time and use them.

Broad Network Access

Since cloud computing is completely web based, it can be accessed from anywhere and at any time.

Resource Pooling

Cloud computing allows multiple tenants to share a pool of resources. One can share a single physical instance of hardware, a database, and basic infrastructure.

Rapid Elasticity

It is very easy to scale the resources vertically or horizontally at any time. Scaling of
resources means the ability of resources to deal with increasing or decreasing demand.
The resources being used by customers at any given point of time are automatically monitored.
Measured Service

In a measured service, the cloud provider controls and monitors all aspects of the cloud service. Resource optimization, billing, capacity planning, etc. depend on it.
Limitation of cloud computing

• Without Internet connectivity, users cannot access data stored in the cloud.
• It is difficult to move data from one cloud to another.
• As the cloud infrastructure is completely owned, managed, and monitored by the service provider, cloud users have less control over the function and execution of services within the cloud infrastructure.
• All information stored in the cloud is shared with a third party, i.e., the cloud computing service provider.

Case study: Cloud Services Used by From Software


Introduction:
From Software, a video game developer known for titles like the Dark Souls series,
Bloodborne, and Sekiro: Shadows Die Twice, likely uses various cloud services to support
their game development and online infrastructure. While specific details about their cloud
service providers may not be publicly disclosed, we can make educated guesses based on
common industry practices. Here are some cloud services commonly used by game developers
like From Software:

Cloud-Based Services Utilized by From Software:


1. Amazon Web Services (AWS): AWS is one of the leading cloud service
providers, offering a wide range of services such as computing power (EC2), storage
(S3), databases (RDS), and more. Many game developers leverage AWS for hosting
game servers, storing game assets, and scaling their infrastructure to handle player
loads.
2. Microsoft Azure: Azure provides similar cloud services to AWS, including
virtual machines, storage solutions, and databases. Azure's integration with
Microsoft's development tools and services makes it a popular choice for game
developers, especially those targeting the Xbox platform.
3. Google Cloud Platform (GCP): GCP offers infrastructure services like
computing, storage, and networking, along with advanced data analytics and machine
learning capabilities. Game developers may use GCP for hosting multiplayer game
servers, analyzing player data, or leveraging AI for game development.
4. Game-specific services: Some cloud providers offer specialized services tailored
to the gaming industry. For example, Amazon GameLift provides dedicated game server
hosting and scaling, while Microsoft PlayFab offers backend services for managing
player accounts, matchmaking, and in-game economies.
5. Content Delivery Networks (CDNs): CDNs like Cloudflare, Akamai, and AWS
CloudFront help optimize content delivery by caching game assets (such as textures,
audio files, and updates) on servers located closer to players, reducing latency and
improving download speeds.

Benefits of Cloud-Based Services for From Software:


• Scalability: Easily scale infrastructure resources based on player demand,
ensuring optimal performance during peak gaming periods.
• Global Reach: Distribute game assets and updates efficiently to players worldwide,
reducing latency and improving overall gaming experiences.
• Cost Efficiency: Pay only for the resources consumed, avoiding upfront hardware
investments and minimizing operational costs.
• Agility: Accelerate game development cycles with managed services and
automated deployment pipelines, enabling faster time-to-market for new titles and
updates.
• Data-Driven Insights: Leverage analytics platforms to gain valuable insights into
player behavior, preferences, and performance metrics, informing future game
development decisions.

By embracing cloud-based services, From Software has strengthened its position as a leading
innovator in the gaming industry, delivering immersive and engaging experiences to millions of
players worldwide. With the scalability, reliability, and flexibility offered by cloud computing,
From Software continues to push the boundaries of interactive entertainment, setting new
standards for excellence in game development and online gaming experiences.

Conclusion: We have successfully studied the introduction and overview of cloud computing.


Experiment No. 2
Aim: To study and implement Hosted Virtualization using Virtual Box and KVM.

Theory:
Hosted virtualization refers to the practice of running multiple virtual machines (VMs) on a
single physical host operating system. Virtualization technologies like Virtual Box and KVM
enable users to create and manage VMs, each of which can run its own operating system and
applications independently.

Introduction to Virtualization: Begin by introducing the concept of virtualization and its


importance in modern computing environments. Explain how virtualization allows for better
resource utilization, improved scalability, and enhanced flexibility in managing IT
infrastructure.

Types of Virtualization: Discuss the different types of virtualization, focusing on hosted


Virtualization (Type 2 hypervisor) as opposed to bare-metal virtualization (Type 1 hypervisor).
Highlight the benefits and use cases of hosted virtualization, especially in scenarios where ease
of setup and management are priorities.

Overview of VirtualBox and KVM: Provide an overview of VirtualBox and KVM, two popular
hosted virtualization solutions. Explain their features, capabilities, and differences in terms of
architecture, performance, and supported platforms.

Installation and Setup: Detail the steps for installing and configuring VirtualBox and KVM on a
host operating system (e.g., Linux distribution). Include instructions for installing necessary
dependencies, enabling virtualization extensions in the BIOS/UEFI firmware, and setting up
networking for VM communication.

Creating Virtual Machines: Demonstrate how to create and configure VMs using VirtualBox
and KVM. Explain the process of allocating resources (CPU, memory, disk space) to VMs,
selecting the guest operating system, and configuring virtual hardware settings.

Managing Virtual Machines: Explore the management capabilities provided by VirtualBox and
KVM. Discuss tasks such as starting, stopping, pausing, cloning, and snapshotting VMs, as well
as managing virtual storage and networking.

Performance and Resource Management: Investigate the performance characteristics of


virtualized environments created with VirtualBox and KVM. Measure metrics such as CPU
utilization, memory usage, disk I/O, and network throughput under different workloads and
configurations. Discuss strategies for optimizing resource allocation and managing performance
bottlenecks.

Security Considerations: Address security concerns related to virtualization, including isolation


between VMs, securing host and guest operating systems, and protecting virtualized
infrastructure against potential vulnerabilities and attacks.
Use Cases and Applications: Explore real-world use cases and applications of hosted
virtualization in various domains, such as software development and testing, server
consolidation, cloud computing, and educational environments.

Implementation of KVM

Update and upgrade Ubuntu 22.04:
sudo apt update && sudo apt upgrade

Check whether hardware virtualization (Intel VT-x or AMD-V) is enabled:
egrep -c '(vmx|svm)' /proc/cpuinfo

Install the cpu-checker package and verify that KVM virtualization can be used:
sudo apt install -y cpu-checker
kvm-ok

Install KVM on Ubuntu 22.04:
sudo apt install -y qemu-kvm virt-manager libvirt-daemon-system virtinst libvirt-clients bridge-utils

Enable and start the virtualization daemon:
sudo systemctl enable --now libvirtd
sudo systemctl start libvirtd

Check that the virtualization daemon is running:
sudo systemctl status libvirtd

Add your user to the kvm and libvirt groups:
sudo usermod -aG kvm $USER
sudo usermod -aG libvirt $USER

Run the KVM Virtual Machine Manager (virt-manager).
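Once virt-manager is installed, a guest can also be created from the command line with virt-install. The following is only a sketch; the VM name, memory and disk sizes, and the ISO path are assumed example values, not part of the original procedure.

# Create a test VM from an installation ISO (name, sizes and ISO path are examples)
sudo virt-install \
  --name test-vm \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/ubuntu-22.04-live-server-amd64.iso \
  --os-variant ubuntu22.04 \
  --graphics spice

# List all defined VMs and their state
virsh list --all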


Implementation of VirtualBox
Configuration of guest machine:
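The guest machine can also be configured from the command line with VBoxManage instead of the graphical wizard. This is only a sketch; the VM name, memory size, disk size, and ISO file name are assumed example values.

# Create and register a new 64-bit Ubuntu guest
VBoxManage createvm --name "ubuntu-guest" --ostype Ubuntu_64 --register

# Allocate CPU, memory and NAT networking
VBoxManage modifyvm "ubuntu-guest" --memory 2048 --cpus 2 --nic1 nat

# Create a 20 GB virtual disk and attach it on a SATA controller
VBoxManage createmedium disk --filename ubuntu-guest.vdi --size 20480
VBoxManage storagectl "ubuntu-guest" --name "SATA" --add sata
VBoxManage storageattach "ubuntu-guest" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ubuntu-guest.vdi

# Attach the installation ISO and start the VM
VBoxManage storagectl "ubuntu-guest" --name "IDE" --add ide
VBoxManage storageattach "ubuntu-guest" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium ubuntu-22.04-live-server-amd64.iso
VBoxManage startvm "ubuntu-guest" --type gui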

Conclusion: We have successfully implemented hosted virtualization using VirtualBox and KVM.


Experiment No. 3

Aim: To study and implement AWS IAM.

Theory:
AWS IAM (Identity and Access Management) is a service provided by Amazon
Web Services (AWS) that helps you manage access to your AWS resources. It's
like a security system for your AWS account.
IAM allows you to create and manage users, groups, and roles. Users represent
individual people or entities who need access to your AWS resources. Groups
are collections of users with similar access requirements, making it easier to
manage permissions. Roles are used to grant temporary access to external
entities or services.
With IAM, you can control and define permissions through policies. Policies are
written in JSON format and specify what actions are allowed or denied on
specific AWS resources. These policies can be attached to IAM entities (users,
groups, or roles) to grant or restrict access to AWS services and resources.
IAM follows the principle of least privilege, meaning users and entities are
given only the necessary permissions required for their tasks, minimizing
potential security risks. IAM also provides features like multi-factor
authentication (MFA) for added security and an audit trail to track user activity
and changes to permissions.
By using AWS IAM, you can effectively manage and secure access to your
AWS resources, ensuring that only authorized individuals have appropriate
permissions and actions are logged for accountability and compliance purposes.
Overall, IAM is an essential component of AWS security, providing granular
control over access to your AWS account and resources, reducing the risk of
unauthorized access and helping maintain a secure environment.
Output:

1. Adding user

2. Assigning username and password.


3. Giving full access on EC2 instance
4. Signing in as IAM user from incognito tab

Conclusion:

We successfully implemented IAM in AWS by granting a user full access to EC2.
Experiment No. 4

Aim: To study and Implement Infrastructure as a Service using AWS/Microsoft


Azure

Theory: Infrastructure-as-a-Service (IaaS) is the foundational service model


offered by AWS. With IaaS, businesses can access virtualized computing
resources, including virtual machines (EC2 instances), storage (S3, EBS), and
networking (VPC). Key features of AWS IaaS include:

1. Scalability: AWS allows businesses to scale their infrastructure up or


down based on demand, providing flexibility and cost optimization.

2. Control and Customization: Organizations have granular control over


their infrastructure configurations, enabling them to customize their
virtual machines, networks, and storage setups.

3. Cost-Efficiency: IaaS pricing models are based on usage, eliminating the


need for upfront hardware investment. Businesses only pay for the
resources they consume.

EC2 is a service provided by AWS.

An EC2 instance is a virtual server that provides computing resources in the cloud, which we can rent from AWS to run our applications and carry out other tasks.

After launching an instance we can select the instance name, the OS (e.g., Linux), the storage, the instance type, and other networking capabilities.

Once our instance is running we can access it over the Internet using SSH and install the necessary applications.

We can host web servers and web applications, process data, perform analytics, etc.
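The same workflow can also be driven from the AWS CLI. The sketch below uses placeholder values for the AMI ID, key-pair name, and security-group ID, which would come from your own account.

# Launch a free-tier t2.micro instance (all IDs below are placeholders)
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my_web_server}]'

# Connect over SSH once the instance is running (the login user depends on the AMI)
chmod 400 my-key-pair.pem
ssh -i my-key-pair.pem ubuntu@<public-ip-of-instance>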
OUTPUT

1. Click launch instances

2. Enter the instance name, e.g., my_web_server

3. Select OS

4. Select Instance type (Only t2.micro available in free tier)


5. Create a key pair to access the instance from your local machine using SSH.
Click on Launch instance

Check EC2 dashboard

Access using SSH


Experiment No. 5
Aim: To study and Implement Platform as a Service using AWS Elastic Beanstalk

Theory:

Platform-as-a-Service (PaaS) is a cloud computing service model that provides a platform


and environment for developers to build, deploy, and manage applications without the need
to worry about the underlying infrastructure. In PaaS, the cloud provider manages the
infrastructure components, such as servers, storage, and networking, allowing developers to
focus on application development and deployment. PaaS offers a range of services and tools
that facilitate application development, testing, and deployment. It provides a complete
development and runtime environment, including development frameworks, libraries,
databases, and middleware. Some key features of PaaS include:

1. Application Deployment: PaaS simplifies the deployment process by providing


automated provisioning and configuration of the underlying infrastructure needed to
run applications.
2. Scalability and Elasticity: PaaS platforms typically offer built-in scalability features
that allow applications to scale automatically based on demand. This ensures that
applications can handle increased traffic and workload without manual intervention.
3. Development Tools and Frameworks: PaaS provides a variety of development tools,
software development kits (SDKs), and frameworks that enable developers to build
applications using their preferred programming languages and development
environments.
4. Database and Middleware Services: PaaS platforms often offer managed database
services, message queues, caching, and other middleware components to facilitate
application development and data management.
5. Collaboration and Teamwork: PaaS platforms often include collaboration features that
allow multiple developers to work together on the same project, facilitating teamwork
and version control.
6. Monitoring and Analytics: PaaS platforms typically provide monitoring and analytics
tools to track application performance, resource utilization, and user behavior. This
helps developers identify bottlenecks and optimize their applications.

PaaS provides a higher level of abstraction compared to Infrastructure-as-a-Service (IaaS). It


abstracts away the underlying infrastructure and allows developers to focus on application
development rather than infrastructure management. By leveraging PaaS, developers can
accelerate the development process, improve collaboration, and reduce the time and effort
required for application deployment.

Popular examples of PaaS offerings include AWS Elastic Beanstalk and Amazon RDS.

Elastic Beanstalk is a platform within AWS that is used for deploying and scaling web applications. In
simple terms this platform as a service (PaaS) takes your application code and deploys it while
provisioning the supporting architecture and compute resources required for your code to run. Elastic
Beanstalk also fully manages the patching and security updates for those provisioned resources.

There are many PaaS solutions in the cloud computing space, including Red Hat OpenShift, Google App Engine, Scalingo, PythonAnywhere, and Azure App Service; however, AWS Elastic Beanstalk remains one of the leading PaaS choices among app developers.

There is no charge to use Elastic Beanstalk to deploy your applications; you are only charged for the resources that are created to support your application.

If you are planning to deploy Elastic Beanstalk, you can use Hava to visualise your architecture.
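As a sketch of the deployment workflow (the application name, environment name, and platform below are example values), the EB CLI automates most of these steps:

pip install awsebcli                                    # install the Elastic Beanstalk CLI
eb init my-app --platform node.js --region us-east-1   # create the application
eb create my-app-env                                    # provision the environment: EC2, load balancer, Auto Scaling
eb deploy                                               # redeploy after code changes
eb open                                                 # open the environment URL in a browser
eb terminate my-app-env                                 # tear everything down when finished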
Output:
Experiment No. 6

Aim: To study and Implement Storage as a Service using AWS S3, Glaciers/
Azure Storage.

Theory: AWS S3 stands for Simple Storage Service, which is used to store files, objects, and folders in the cloud so that they can easily be accessed from anywhere.

It comes under Infrastructure as a Service.

It offers a durability of 99.999999999% (eleven nines), which means the probability of losing data stored in S3 is extremely small.

It provides
1. High scalability
2. High availability
3. Secure
4. Cost effective
5. High performance

According to AWS, S3 provides virtually unlimited storage capacity.
It allows us to store and retrieve any amount of data from anywhere on the web.
S3 buckets are containers used for storing objects. Each S3 bucket has a globally unique name across AWS.
Buckets are commonly used for backup and restore, data archiving, content storage for websites, and as a data source for big data analytics.
They are also commonly used for hosting static websites and maintaining user history over the years.

For Example
1. A hospital wishes to store user history of the past 30-40 years.
2. A company can store compliance files along with their reports so that
they can be accessed easily
Output:
1. Search S3 in the services and create a bucket from the right section.

2. Choose a region and give the bucket a name that is globally unique across AWS.

3. Choose Bucket ownership and allow public access.


4. Disable Bucket Versioning

5. Click on Create bucket to finally create a bucket.


6. Check out Buckets and View your newly created bucket.

7. Click on the bucket and upload a file.

8. Check public url for newly uploaded file


9. Set up a bucket policy for accessing S3 objects publicly (a CLI sketch of this workflow follows these steps).

10. Finally, access your objects on the web by pasting the public URL into an incognito window (for testing purposes).
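The same procedure can be scripted with the AWS CLI; the bucket name and file below are examples, and the JSON shown is a typical public-read bucket policy rather than the exact policy used above.

# Create a bucket with a globally unique name and upload an object
aws s3 mb s3://my-unique-bucket-name-2024 --region us-east-1
aws s3 cp report.pdf s3://my-unique-bucket-name-2024/

# policy.json: allow public read access to every object in the bucket
# { "Version": "2012-10-17",
#   "Statement": [ { "Effect": "Allow", "Principal": "*",
#                    "Action": "s3:GetObject",
#                    "Resource": "arn:aws:s3:::my-unique-bucket-name-2024/*" } ] }
aws s3api put-bucket-policy --bucket my-unique-bucket-name-2024 --policy file://policy.json

# The object is then reachable at a public URL of the form:
# https://my-unique-bucket-name-2024.s3.amazonaws.com/report.pdf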
Experiment No. 7

Aim: To study and Implement Database as a Service on SQL/NOSQL databases like AWS
RDS, AZURE SQL/ MongoDB Lab/ Firebase.

Theory: Amazon Relational Database Service (Amazon RDS) is a managed


service that you can use to launch and manage relational databases on AWS.

Amazon RDS is an Online Transaction Processing (OLTP) type of database.

The primary use case is a transactional database (rather than an analytical


database)
It aims to be a drop-in replacement for existing on-premises instances of the same databases.

Automated backups and patching are applied in customer-defined maintenance


windows.

Push-button scaling, replication, and redundancy.

Amazon RDS supports the following database engines:

1. Amazon Aurora.
2. MySQL.
3. MariaDB.
4. Oracle.
5. SQL Server.
6. PostgreSQL

Databases can also be installed on EC2 instances through the CLI. The problem is that if the EC2 instance goes down or crashes, the database on it is affected as well. Hence AWS recommends using RDS, which is a better alternative to self-managed databases.

Once we create the RDS instance on AWS, we can access it through MySQL Workbench and through programming languages like Python, Java, etc.
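For example, once the instance shows as available, it can be reached with the MySQL command-line client using the endpoint displayed on the RDS console. The endpoint below is a placeholder; the credentials match the ones configured in the steps that follow.

# Connect to the RDS endpoint from a local machine
mysql -h myfirstrds.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u admin -p

# Inside the MySQL shell, the initial database created during setup should be visible:
#   mysql> show databases;
#   mysql> use test;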
Output:

1. Check Out Amazon RDS.

2. Choose DB creation method and DBMS

3. Choose Free tier

4. Configure RDS settings for MySql.

Instance name :myfirstrds


Username : admin

Password : admin1234

5. Select instance configuration and storage specifications.

6. Select the default VPC (Virtual Private Cloud). If you want to connect to EC2, then select `Connect to an EC2 compute resource`.
7. Also select the default subnet, or else create a subnet group from the Subnet groups section.
8. Allow public access to the DB so that we can access it globally.
9. Choose the firewall, VPC security groups, and availability zone.

10. Set up the authentication type.

11. Create an initial database, e.g., test.

12. Create/launch the RDS instance.


13. Check that the instance is running.

14. Try setting up a connection to the newly created RDS instance in MySQL Workbench.

If the connection fails or times out, follow the steps below:

Click the RDS instance to connect to → VPC security group → Inbound rules → Edit inbound rules → Add rule → set "All traffic" and "Anywhere" → Done.
15. Use the database from MySQL Workbench.
Experiment No. 8

Aim: To study and implement Storage as a Service in ownCloud

Steps:
● UPDATE OS:
➢ sudo apt update && sudo apt upgrade -y

● INSTALL APACHE WEB SERVER:


➢ sudo apt install apache2

➢ systemctl start apache2


(Enter [sudo] password for authentication)
➢ systemctl enable apache2

➢ systemctl status apache2


OUTPUT:
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled;
vendor preset: enabled)
Active: active (running) since Thu 2024-04-04 12:01:17 IST;
14min ago
Docs: https://httpd.apache.org/docs/2.4/
Main PID: 26248 (apache2)
Tasks: 55 (limit: 9284)
Memory: 5.1M
CPU: 72ms
CGroup: /system.slice/apache2.service
├─26248 /usr/sbin/apache2 -k start
├─26249 /usr/sbin/apache2 -k start
└─26250 /usr/sbin/apache2 -k start

● INSTALL PHP AND REQUIRED EXTENSIONS:


➔ For required version of PHP, add Ondrej PPA repository. (PPA stands for
“Personal Package Archive” and is a kind of software repository developed
and published by application developers and Linux users to store and
distribute software packages that cannot be found in official operating
system repositories.)
➢ sudo add-apt-repository ppa:ondrej/php
➢ sudo apt update
➢ sudo apt install php7.4 php7.4-{opcache,gd,curl,mysqlnd,intl,json,ldap,mbstring,xml,zip}

● INSTALL MYSQL AND CREATE A DATABASE:


➢ sudo apt install mysql-server
➢ sudo systemctl start mysql
➢ sudo systemctl enable mysql
➢ systemctl status mysql

OUTPUT:
● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled;
vendor preset: enabled)
Active: active (running) since Thu 2024-04-04 12:45:34 IST;
51s ago
Main PID: 36775 (mysqld)
Status: "Server is operational"
Tasks: 38 (limit: 9284)
Memory: 365.6M
CPU: 1.462s
CGroup: /system.slice/mysql.service
└─36775 /usr/sbin/mysqld

➢ sudo mysql_secure_installation

OUTPUT:
Securing the MySQL server deployment.
Connecting to MySQL using a blank password.
VALIDATE PASSWORD COMPONENT can be used to test
passwords and improve security. It checks the strength of password
and allows the users to set only those passwords which are secure
enough. Would you like to setup VALIDATE PASSWORD
component?
Press y|Y for Yes, any other key for No: y
There are three levels of password validation policy:
LOW Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and
dictionary file
Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 0
Skipping password set for root as authentication with auth_socket is
used by default.
If you would like to use password authentication instead, this can be
done with the "ALTER_USER" command. See
https://dev.mysql.com/doc/refman/8.0/en/alter-user.html#alter-user-
password-management for more information.
By default, a MySQL installation has an anonymous user,allowing
anyone to log into MySQL without having to have a user account
created for them. This is intended only for testing, and to make the
installation go a bit smoother. You should remove them before
moving into a production environment.
Remove anonymous users? (Press y|Y for Yes, any other key for No) :
y
Success.
Normally, root should only be allowed to connect from 'localhost'.
This ensures that someone cannot guess at the root password from the
network.
Disallow root login remotely? (Press y|Y for Yes, any other key for
No) : y
Success.
By default, MySQL comes with a database named 'test' that anyone
can access. This is also intended only for testing, and should be
removed before moving into a production
environment.
Remove test database and access to it? (Press y|Y for Yes, any other
key for No) : y
- Dropping test database...
Success.
- Removing privileges on test database...
Success.
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? (Press y|Y for Yes, any other key for
No) : y
Success.
All done!

➢ sudo mysql -u root -p
➔ The above command allows you to log in to the MySQL shell.
➔ Follow the creation of the database in MySQL for the ownCloud installation.

OUTPUT:
hp@hp-HP-EliteBook-840-G4:~$ sudo mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 27
Server version: 8.0.36-0ubuntu0.22.04.1 (Ubuntu)

Copyright (c) 2000, 2024, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its


affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input
statement.

mysql> create database owncloud;


Query OK, 1 row affected (0.00 sec)
mysql> create user 'owncloud'@'md' identified by 'vivek@15';
Query OK, 0 rows affected (0.02 sec)
mysql> grant all privileges on owncloud.* to 'owncloud'@'md';
Query OK, 0 rows affected (0.01 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
mysql> exit
Bye

● INSTALL OWNCLOUD:
➢ Download ownCloud using the following command:
➔ sudo wget https://download.owncloud.com/server/stable/owncloud-complete-latest.zip
➢ Unzip and extract the files into /var/www/
➢ Make a directory to store user data
➔ sudo mkdir -p /var/www/owncloud/data

➢ Change the ownership of directory


➔ sudo chown -R www-data:www-data /var/www/owncloud/

● CONFIGURE APACHE FOR OWNCLOUD:


➢ Navigate:
➔ hp@hp-HP-EliteBook-840-G4:~$ cd ..
➔ hp@hp-HP-EliteBook-840-G4:/home$ cd ..
➔ hp@hp-HP-EliteBook-840-G4:/$ cd /etc/apache2/sites-available

➢ Creating a configuration file for your owncloud


sudo nano owncloud_vivek.conf
➔ <VirtualHost *:80>
ServerName cloud.vivek.com
ServerAdmin webmaster@vivek.com
DocumentRoot /var/www/owncloud

<Directory /var/www/owncloud/>
Options +FollowSymlinks
AllowOverride All
Require all granted
</Directory>

ErrorLog /var/log/apache2/cloud.vivek.com_error.log
CustomLog /var/log/apache2/cloud.vivek.com_access.log
combined

</VirtualHost>

➢ Enable the ownCloud virtual host and reload Apache.

➔ sudo a2ensite owncloud_vivek.conf
➔ sudo systemctl reload apache2
● OPEN BROWSER AND ENTER:
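If cloud.vivek.com is not a real DNS entry, a local hosts entry makes the virtual host resolvable for testing; this is an assumption for a single-machine setup, not part of the original steps. The ownCloud setup wizard then asks for an admin account and the MySQL database created earlier.

# Map the virtual host name to the local machine (single-machine test setup only)
echo "127.0.0.1 cloud.vivek.com" | sudo tee -a /etc/hosts
# Then browse to http://cloud.vivek.com and complete the setup wizard,
# pointing the database settings at the "owncloud" database and user created above.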

Conclusion: Hence, we successfully implemented Storage as a Service using ownCloud.


Experiment No. 9
Aim: To study and Implement Containerization in Docker .

Theory:
Containerization has revolutionized the way software applications are developed, deployed, and
managed. Among the leading containerization platforms, Docker stands out as a powerful tool
for creating, distributing, and running containers. This endeavor aims to delve into the realm of
containerization with Docker, providing a comprehensive study and practical implementation of
its capabilities.
Understanding Containerization: The journey begins with a thorough understanding of
containerization and its significance in modern software development practices. Containerization
enables developers to package applications and their dependencies into lightweight, portable
units called containers, ensuring consistency across different environments and simplifying
deployment processes.
Introduction to Docker: Participants are introduced to Docker, the leading containerization
platform known for its simplicity, flexibility, and scalability. They gain insights into Docker's
architecture, including Docker Engine, Docker images, and Docker containers. Through hands-
on exercises, participants learn how to install Docker, interact with Docker CLI (Command Line
Interface), and manage Docker containers.
Creating Docker Images: The course delves into the process of creating Docker images, which
serve as the blueprints for containers. Participants learn how to write Dockerfiles, which define
the configuration and dependencies of an application, and build Docker images using Docker
build commands. They explore best practices for optimizing Docker images and minimizing
image size to enhance efficiency and performance.
Orchestrating Containers with Docker Compose: Docker Compose is introduced as a tool for
orchestrating multi-container Docker applications. Participants learn how to define multi-
container environments using Docker Compose YAML files, specify dependencies between
containers, and manage complex application stacks with ease. They explore Docker Compose
commands for building, starting, stopping, and scaling application services.
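A minimal sketch of such a Compose file is shown below, assuming the Node.js image from the Dockerfile in this experiment plus a Redis cache as a second service; the service names, image tag, and ports are illustrative, not a prescribed stack.

# Write a minimal docker-compose.yml and manage the stack with Docker Compose v2
cat > docker-compose.yml <<'EOF'
services:
  web:
    build: .            # built from the Dockerfile in the current directory
    ports:
      - "3000:3000"
  cache:
    image: redis:7
EOF

docker compose up -d    # build images and start both services in the background
docker compose ps       # list the running services
docker compose down     # stop and remove the containers and the default network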
Networking and Storage in Docker: Networking and storage are crucial aspects of
containerized environments. Participants learn about Docker networking modes, including
bridge, host, and overlay networks, and how to configure network settings for containers.
Additionally, they explore Docker storage options, such as volumes and bind mounts, for
persisting data and sharing files between containers and the host system.
Security and Best Practices: Security is paramount in containerized environments, and
participants learn about Docker security features and best practices for securing Docker
containers and images. Topics include container isolation, user namespaces, Docker Content
Trust (DCT), and vulnerability scanning with Docker Security Scanning. Participants also
explore Docker best practices for optimizing performance, scalability, and resource utilization.

Output:

app.js
const http = require('http');

const server = http.createServer((req, res) => {


res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('Hello, Docker World!\n');
});

const port = process.env.PORT || 3000;


server.listen(port, () => {
console.log(`Server running on http://localhost:${port}`);
});

Dockerfile
# Use the official Node.js image from the Docker Hub
FROM node:14

# Set the working directory in the container


WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the container


COPY package*.json ./

# Install application dependencies


RUN npm install

# Copy the rest of the application source code to the container


COPY . .

# Expose the port your application will run on


EXPOSE 3000

# Define the command to run your application


CMD ["node", "app.js"]

Conclusion: Thus we have successfully implemented Containerization


using Docker.
Experiment No. 10

Aim: To study and implement container orchestration using Kubernetes

Theory: Container orchestration has become indispensable in modern software development,


especially with the proliferation of microservices architectures. Kubernetes stands out as the
de facto standard for container orchestration, offering a robust, scalable, and flexible platform
to manage containerized applications seamlessly. To study and implement container
orchestration using Kubernetes, one must first grasp the fundamental concepts and
components that constitute the Kubernetes ecosystem.

At its core, Kubernetes operates on the principles of declarative configuration and automation.
Understanding Kubernetes starts with familiarizing oneself with its key building blocks: Pods,
Services, Deployments, and StatefulSets. Pods encapsulate one or more containers and are the
basic unit of deployment. Services provide network access to a set of Pods, enabling load
balancing and service discovery. Deployments manage the lifecycle of Pods, ensuring that the
desired state is maintained and automatically handling scaling and rolling updates.
StatefulSets are similar to Deployments but are tailored for stateful applications, preserving
stable network identities and storage.

To delve deeper into Kubernetes, it's essential to grasp its architecture and components.
Kubernetes follows a master-worker architecture, where a cluster consists of a control plane
(master) and multiple nodes (workers). The control plane components include the API server,
scheduler, controller manager, and etcd, which collectively manage and orchestrate the
cluster's resources. Worker nodes host the Pods and run various Kubernetes components such
as the kubelet, kube-proxy, and container runtime (e.g., Docker or containerd).

Practical implementation of Kubernetes involves setting up a Kubernetes cluster, either


locally using tools like Minikube or Kind for development and testing or on cloud providers
like AWS, GCP, or Azure for production deployments. Once the cluster is up and running,
deploying applications involves creating Kubernetes manifests, which are YAML or JSON
files describing the desired state of the application. These manifests typically include
specifications for Pods, Services, Deployments, and other resources required by the
application.
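A minimal sketch of that workflow on a local Minikube cluster is shown below; the Deployment and Service names, the hello-docker image from the previous experiment, and the port are assumed example values rather than a prescribed setup.

minikube start --driver=docker              # start a local single-node cluster
minikube image load hello-docker:latest     # make the locally built image available inside the cluster

# Write a minimal Deployment + Service manifest
cat > hello-app.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-docker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-docker
  template:
    metadata:
      labels:
        app: hello-docker
    spec:
      containers:
      - name: hello-docker
        image: hello-docker:latest
        imagePullPolicy: Never        # use the image loaded into the cluster
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hello-docker
spec:
  type: NodePort
  selector:
    app: hello-docker
  ports:
  - port: 3000
    targetPort: 3000
EOF

kubectl apply -f hello-app.yaml
kubectl get pods                            # both replicas should reach the Running state
minikube service hello-docker --url         # prints the URL where the service is reachable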

Output:
Install the Kubernetes tooling: kubectl, Minikube, and Docker.
Create the manifest file for the pod.
Check whether Docker is enabled.

Unset the default KUBECONFIG environment variable and start Minikube.

Conclusion: We have successfully implemented container orchestration using Kubernetes.
