
Experiment No 1

Aim: Introduction and overview of cloud computing.


Theory:
Origin of cloud computing
Cloud computing has its origins in the 1960s, when computer scientists at the Massachusetts Institute of
Technology (MIT) began experimenting with ways to share resources among multiple users. The concept of
"cloud computing" as we know it today, however, began to take shape in the late 1990s and early 2000s, when
companies like Amazon, Google, and Salesforce began offering web-based services that allowed customers
to access and use resources (such as storage and computing power) over the internet.
In 2006, Amazon Web Services (AWS) launched a set of web services that allowed developers to rent
computing power, storage, and other resources on a pay-as-you-go basis. This was one of the first examples of
the modern "cloud computing" model, where companies could rent resources on-demand and pay only for
what they used.
Over the next several years, other companies, including Google and Microsoft, entered the market with their
own cloud computing platforms, and the cloud computing industry continued to grow and evolve. Today,
cloud computing is a mainstream technology that is used by organizations of all sizes to reduce costs, increase
efficiency, and improve their ability to scale their resources as needed.

NIST Model
The National Institute of Standards and Technology (NIST) has developed a cloud computing reference
architecture, known as the NIST cloud computing reference architecture (NIST SP 500-292). This reference
architecture provides a common framework for understanding, describing, and comparing cloud computing
systems.
The NIST model defines five essential characteristics of cloud computing, which are:
 On-demand self-service: Users can provision computing resources (such as servers and storage) as
needed, without requiring human interaction with a service provider.
 Broad network access: Resources can be accessed over a network, such as the internet, using
standard protocols.
 Resource pooling: Resources are dynamically assigned and reassigned according to user demand.
 Rapid elasticity: Resources can be rapidly and automatically scaled up or down as needed.
 Measured service: Cloud providers monitor and measure usage of resources, and users are billed
based on usage.
The NIST reference architecture also defines three service models:
 Infrastructure as a Service (IaaS)
 Platform as a Service (PaaS)
 Software as a Service (SaaS)
Additionally, it defines four deployment models:
 Private cloud
 Community cloud
 Public cloud
 Hybrid cloud

(NIST Visual Model of Cloud Computing)


The NIST cloud computing reference architecture is widely used as a framework for understanding,
evaluating, and comparing different cloud computing systems. It provides a common set of terms and
definitions that can be used to describe cloud computing systems and services, regardless of the specific
technology or vendor.

Characteristics of cloud
 Scalability: Cloud computing allows organizations to scale up or down their computing resources as
needed, without the need for large capital investments.
 Cost savings: Cloud computing allows organizations to pay only for the resources they use, rather
than having to make large upfront investments in hardware and software.
 High availability: Cloud providers typically offer built-in redundancy and disaster recovery options,
so that users' data is always available, even in the event of a failure.
 Flexibility: Cloud computing allows organizations to access a wide range of resources, such as servers,
storage, and software, on-demand, which can be used to support different business needs.
 Security: Cloud providers offer various security measures such as encryption, access controls, and
monitoring to protect the data and applications of their customers.
 Mobility: Cloud computing allows users to access their data and applications from anywhere with an
internet connection.
 Automation: Many cloud services are designed to automatically handle tasks such as provisioning,
scaling, and backup, reducing the need for manual intervention.

Deployment Model
In cloud computing, deployment models refer to the different ways in which cloud services can be deployed
and accessed by users. The National Institute of Standards and Technology (NIST) defines four deployment
models:
 Public cloud: Public clouds are owned and operated by a third-party provider, and resources are made
available to the public over the internet. Public clouds offer the greatest level of scalability and the most
flexibility, but they also come with the least amount of control and customization options.
 Private cloud: Private clouds are owned and operated by a single organization, and resources are made
available only to that organization. Private clouds offer more control and customization options than
public clouds, but they also come with higher costs and less scalability.
 Community cloud: Community clouds are owned and operated by a group of organizations that have
common requirements. Resources are made available only to the members of that group. Community
clouds offer a balance of control and scalability, but they also come with higher costs and less flexibility
than public clouds.
 Hybrid cloud: Hybrid clouds are a combination of two or more of the above deployment models.
Organizations can use a private cloud for sensitive workloads, and a public cloud for less critical
workloads. Hybrid clouds offer the most flexibility, but they also come with higher costs and more
complexity.
In summary, the deployment model chosen depends on the organization's specific needs, such as the level of
security, control, and scalability required, as well as cost constraints. Public clouds are typically more cost-
effective and offer more scalability, while private clouds offer more control and security at a higher cost.
Community clouds offer a balance of control, scalability, and security, but also come at a higher cost. Hybrid
clouds offer the most flexibility, but with higher costs and more complexity.

(Deployment Model)
Service Model
 Infrastructure as a Service (IaaS): IaaS provides users with access to virtualized computing resources,
such as servers, storage, and networking. IaaS providers allow users to rent these resources on a
pay-as-you-go basis, and users have full control over the operating system and software that runs on the
virtual machines. Examples of IaaS providers include Amazon Web Services (AWS), Microsoft Azure,
and Google Cloud Platform.
 Platform as a Service (PaaS): PaaS provides users with a platform for developing, testing, and
deploying their own software applications. PaaS providers handle the underlying infrastructure (such as
servers and storage), and users only need to focus on developing and deploying their applications.
Examples of PaaS providers include AWS Elastic Beanstalk, Google App Engine, and Heroku.
 Software as a Service (SaaS): SaaS provides users with access to software applications that are hosted
and maintained by the SaaS provider. Examples of SaaS include email, customer relationship
management (CRM), and office productivity software such as Microsoft Office 365, Google G Suite,
and Salesforce.

(Cloud Service Model)


Each service model has its own set of characteristics and benefits. IaaS is the most flexible and allows the
most control over the underlying infrastructure. PaaS is a middle ground between IaaS and SaaS: it allows
more control than SaaS but less than IaaS. SaaS is the least flexible, but it is also the easiest to use and
requires the least technical expertise. The choice of service model depends on the specific needs and
requirements of the organization.
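As a rough illustration of how the level of control differs across the three models, here is a minimal command-line sketch. This is not part of the original experiment; the AMI ID, instance type, and environment name are placeholder assumptions, and the SaaS tier typically needs no provisioning commands at all:

# IaaS: you provision and manage the virtual server yourself (placeholder AMI ID)
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro

# PaaS: you hand the platform your code and it manages the servers (assumes the EB CLI is set up)
eb create demo-env

# SaaS: nothing to provision; you simply sign in to the hosted application (e.g., Office 365)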

Advantages of cloud computing


 Scalability: Cloud computing allows organizations to scale up or down their computing resources as
needed, without the need for large capital investments. This enables organizations to respond quickly to
changing business conditions, without having to make significant investments in new hardware and
software.
 Cost savings: Cloud computing allows organizations to pay only for the resources they use, rather than
having to make large upfront investments in hardware and software. This can result in significant cost
savings, particularly for small and medium-sized businesses.
 High availability: Cloud providers typically offer built-in redundancy and disaster recovery options, so
that users' data is always available, even in the event of a failure. This can help organizations avoid
costly downtime and ensure continuity of operations.
 Flexibility: Cloud computing allows organizations to access a wide range of resources, such as servers,
storage, and software, on-demand, which can be used to support different business needs.
 Security: Cloud providers offer various security measures such as encryption, access controls, and
monitoring to protect the data and applications of their customers.
 Mobility: Cloud computing allows users to access their data and applications from anywhere with an
internet connection, enabling remote work and collaboration.
 Automation: Many cloud services are designed to automatically handle tasks such as
provisioning, scaling, and backup, reducing the need for manual intervention.
 Innovation: Cloud computing allows organizations to focus on their core business and leave the
underlying infrastructure to be managed by cloud providers. This allows organizations to take
advantage of new technologies and innovations faster than they could on their own.
 Global access: Cloud providers have data centres around the world, which allows organizations to
access their data and applications from anywhere in the world with low latency.
 Elasticity: Cloud computing allows organizations to scale resources up or down depending on
their usage, which lets them save on costs by paying only for the resources they use.

Disadvantages of cloud computing


 Security risks: Storing data and running applications in the cloud can increase the risk of data
breaches and cyber attacks. Organizations must ensure that their data is properly secured and
that they have adequate controls in place to prevent unauthorized access.
 Dependence on internet connectivity: Cloud computing relies on internet connectivity, so if an
organization's internet connection goes down, it loses access to its data and
applications.
 Limited control and customization: Organizations that use public cloud services have less control
over the underlying infrastructure and may have fewer options for customization.
 Vendor lock-in: Organizations that use a specific cloud provider's services may find it difficult
and costly to switch to a different provider in the future.
 Compliance and regulatory issues: Some organizations may have compliance and regulatory
requirements that prevent them from using certain types of cloud services.
 Limited visibility and transparency: Organizations may not have full visibility and transparency
into the underlying infrastructure of the cloud services they use, which can make it difficult to
troubleshoot issues and ensure compliance.
 Limited physical access to data: With cloud computing, organizations do not have physical
access to their data and applications. This can make it more difficult to conduct forensic
investigations or perform other types of security assessments.
 Latency: Depending on the location of the data center, the data might have to travel a long
distance over the internet, which can cause delays or latency issues.
 Data sovereignty: Data sovereignty laws can prevent certain types of data from being stored in
certain countries, which can limit the options for organizations that want to use cloud services.
 Limited control over upgrades: Cloud providers are in charge of upgrades and maintenance
of the cloud infrastructure, which can limit organizations' control over their own
infrastructure.
Conclusion:
In conclusion, cloud computing is a rapidly growing technology that allows individuals and
organizations to access and store data, applications, and services over the internet. This can provide
many benefits, such as cost savings, scalability, and flexibility. Cloud computing services are provided
by companies such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform, and can
be used for a wide range of purposes, from personal storage to enterprise-level data analysis. As
technology continues to advance, the use of cloud computing is likely to become even more widespread,
making it an important area for individuals and businesses to stay informed about.
Experiment No 2

Aim: To study and implement Hosted Virtualization using VirtualBox & KVM


Theory:

Virtualization:
Virtualization is the creation of a virtual version of something, such as an operating system, a server,
a storage device, or network resources. It is a technology that allows multiple systems to run on a
single physical machine, thereby maximizing hardware utilization and reducing costs.
Types of virtualization:
Virtualization can be achieved in different ways, including:

1. Operating System Virtualization: This type of virtualization allows multiple operating systems
to run on a single physical machine, each operating system running as a separate virtual
machine. This allows multiple applications to run on the same machine, each in its own
operating system environment, without interfering with each other.
2. Server Virtualization: This type of virtualization allows multiple virtual servers to run on a
single physical machine, each virtual server running its own operating system and
applications. This allows multiple servers to share the same physical resources, increasing
utilization and reducing costs.
3. Storage Virtualization: This type of virtualization allows multiple storage devices to appear
as a single, unified storage system. This makes it easier to manage storage, as well as to
allocate and use storage resources more efficiently.
4. Network Virtualization: This type of virtualization allows multiple virtual networks to run
on a single physical network, each virtual network appearing as a separate network to the
systems connected to it. This allows multiple applications to run on the same network, each
in its own virtual network environment, without interfering with each other.

(Virtualization)
Structure of virtualization:
1. Host Machine: The host machine is the physical machine that runs the virtualization
software and provides the underlying resources for the virtual machines.
2. Virtualization Software: The virtualization software creates and manages the virtual
machines, allocating and managing the underlying physical resources.
3. Virtual Machines: Virtual machines are virtual versions of a physical machine, running
their own operating systems and applications, and appearing as separate physical
machines to the systems that use them.
4. Virtual Resources: Virtual resources are the virtual versions of physical resources, such as
virtual processors, virtual memory, virtual storage devices, and virtual networks, which
are created and managed by the virtualization software.

(Structure of virtualization)

Mechanism of virtualization:
The mechanism of virtualization involves several steps, including:

1. Abstraction: The virtualization software abstracts the underlying physical resources, such
as processors, memory, storage, and networks, and presents them as virtual resources to
the virtual machines.
2. Allocation: The virtualization software allocates virtual resources to the virtual machines,
determining how much of each resource each virtual machine should receive.
3. Isolation: The virtualization software isolates the virtual machines from each other,
ensuring that each virtual machine operates as if it has its own dedicated physical
resources.
4. Emulation: The virtualization software emulates the physical resources for each virtual
machine, providing the virtual machine with a virtual version of the underlying physical
resources that it can use.
5. Management: The virtualization software manages the virtual resources, allocating and
deallocating resources as needed and monitoring the performance of the virtual machines
to ensure that they are using resources efficiently.
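Since the aim of this experiment names VirtualBox and KVM, the following is a minimal sketch of how a hosted VM might be created from the command line. The VM name, memory size, disk size, and ISO path are illustrative assumptions, and exact options can vary between versions:

# VirtualBox (VBoxManage): create, configure, and start a VM
VBoxManage createvm --name "demo-vm" --ostype Ubuntu_64 --register
VBoxManage modifyvm "demo-vm" --memory 2048 --cpus 2
VBoxManage createmedium disk --filename demo-vm.vdi --size 20480
VBoxManage storagectl "demo-vm" --name "SATA" --add sata
VBoxManage storageattach "demo-vm" --storagectl "SATA" --port 0 --device 0 --type hdd --medium demo-vm.vdi
VBoxManage startvm "demo-vm" --type gui

# KVM (virt-install, on a Linux host with libvirt installed; ISO path is assumed)
virt-install --name demo-vm --memory 2048 --vcpus 2 --disk size=20 --cdrom /path/to/ubuntu.iso --os-variant ubuntu20.04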
Advantages of virtualization:
1. Improved Resource Utilization: Virtualization allows multiple systems to share the same
physical resources, maximizing utilization and reducing waste.
2. Increased Scalability: Virtualization allows for the easy addition and removal of virtual
resources, making it easier to scale systems as needed.

3. Improved Disaster Recovery: Virtualization makes it easier to recover quickly from
disaster by allowing virtual systems to be easily moved to alternative physical resources.
4. Enhanced Security: Virtualization allows systems to be isolated from each other, reducing
the risk of security breaches and making it easier to secure systems.
5. Cost Savings: Virtualization reduces costs by allowing multiple systems to run on a single
physical machine, reducing the need for multiple physical machines and reducing the costs
associated with managing them.

Disadvantages of virtualization:
1. Complexity: Virtualization can add complexity to an IT environment, requiring
specialized knowledge and expertise to manage and maintain.
2. Overhead: Virtualization can introduce overhead, such as the need for additional
processing power, memory, and storage, which can reduce performance and increase
costs.
3. Security Risks: Virtualization can create new security risks, such as the risk of data
breaches, due to the potential for security vulnerabilities in the virtualization software or
the virtual machines.

Conclusion:
In conclusion, virtualization is a transformative technology that has the potential to significantly
improve the way organizations use and manage their IT resources. With virtualization,
organizations can maximize resource utilization, increase scalability, improve disaster recovery,
enhance security, and reduce costs. By creating virtual versions of operating systems, servers,
storage devices, and network resources, virtualization allows organizations to take advantage of
the benefits of virtualization while minimizing the costs and complexity of managing multiple
physical systems.
Experiment No 4
Step 6: Configure the security group to provide access to the VM using different protocols. In
this example, we have selected the default RDP protocol.
Step 13: Once you click on Connect, you will see the running Windows virtual
machine as shown below.
Step 15: You can delete the instance permanently by selecting Instance state followed by Terminate.
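The console steps above can also be approximated with the AWS CLI. A minimal sketch; the AMI ID, key pair name, and instance ID are placeholders, not values from this experiment:

# Create a security group and open the default RDP port (3389)
aws ec2 create-security-group --group-name rdp-sg --description "Allow RDP"
aws ec2 authorize-security-group-ingress --group-name rdp-sg --protocol tcp --port 3389 --cidr 0.0.0.0/0

# Launch a Windows instance using a hypothetical Windows AMI and key pair
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --key-name my-key --security-groups rdp-sg

# Permanently delete the instance when done
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0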
Experiment No 5

Aim: To study and implement Platform as a Service using AWS Elastic Beanstalk.

SET UP ELASTIC BEANSTALK:


Step 1: Go to https://aws.amazon.com/elasticbeanstalk/ and select “Get started with AWS
Elastic Beanstalk”.
Step 2: Create a new account and set up a billing account, which is required for the AWS
services to work.
After the setup, follow these steps to deploy your application.

DEPLOYING STEPS:
Step 1: Create Environment
Log in to the AWS console, locate Elastic Beanstalk, and select “Create a new
environment”. Make sure you select the required platform as per your project and select
“Upload your code” from local.
Step 2: After uploading the code you can “Configure more options” or “Create
Application”.
Step 3: Go to Elastic Beanstalk > Environments to view the details of the uploaded
applications along with their health.

The application can be accessed using the public URL provided by AWS.
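Alternatively, the same deployment can be scripted with the Elastic Beanstalk CLI (EB CLI). A minimal sketch, assuming the EB CLI is installed and the application code sits in the current folder; the application name, platform string, and environment name are illustrative:

eb init demo-app --platform python-3.8 --region us-east-1   # register the app and its platform
eb create demo-env                                          # create the environment and deploy the code
eb open                                                     # open the public URL in a browser
eb deploy                                                   # push subsequent code changes
eb terminate demo-env                                       # tear the environment down when done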

Reference video: https://youtu.be/aAk4lRinNu4


Experiment No 6

Aim: To study and implement Storage as a Service using ownCloud / AWS S3 / Glacier / Azure
Storage

Implementation:

Step 1: In AWS, Services->Storage-> S3

Step 2: Click on Create bucket.

Step 3: Adding Bucket name and choosing AWS Region.


Step 4: Enable Bucket Versioning

Step 5: Disable Default encryption and click Create bucket

Step 6: Bucket 'achufirst' is created


Step 7: Selecting 'achufirst' and uploading files

Step 8: Uploaded files successfully

Step 9: After uploading the same PNG file twice and clicking on 'Show versions', we can
see that the Version ID is different for each upload
Step 10: Copying ARN for 'achufirst'

Step 11: Going to Permissions -> Edit Bucket Policy -> Policy Generator. Do as
shown

Step 12: Successfully edited bucket policy


Step 13: Delete objects inside bucket

Step 14: Deleting bucket
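For reference, the whole bucket lifecycle above can be reproduced with the AWS CLI. A minimal sketch using the bucket name from this experiment; the region and file name are assumptions:

aws s3 mb s3://achufirst --region ap-south-1        # Steps 2-6: create the bucket
aws s3api put-bucket-versioning --bucket achufirst --versioning-configuration Status=Enabled   # Step 4
aws s3 cp image.png s3://achufirst/                 # Steps 7-8: upload a file
aws s3api list-object-versions --bucket achufirst   # Step 9: show the version IDs
aws s3 rm s3://achufirst --recursive                # Step 13: delete the objects
aws s3 rb s3://achufirst                            # Step 14: delete the bucket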


Experiment No 7

Aim: Steps to create and connect to a MySQL database with Amazon RDS are as
follows:

1. Setting up the RDS environment

Let us begin configuring the RDS MySQL Environment by first signing up for an AWS Account.
Once you have successfully created the AWS account, search for RDS in the Find Services bar
and hit enter.

Figure 1 – Searching for AWS RDS

Open the RDS from the drop-down menu and proceed to create the RDS MySQL Environment. On
the next page that appears, click on Create Database. This will open another page where you can
define the necessary details required to set up the MySQL Database.

Figure 2 – Create Database Button in AWS RDS


2. Creating the MySQL database

Once you click on the Create Database, a new page opens as follows, where you can define the
database creation method and other options. Let us go ahead and apply the settings as defined in
the figure below. We are going to select Standard Create as the database creation method. This
will allow us to configure all the necessary settings on our own. Next, select the Engine Type as
MySQL and the latest version. At the time of writing this article, the latest MySQL version is
8.0.16.

Figure 3 – Selecting the Database Engine

In the next step, we are going to provide the name and connection details for the MySQL Database
that we are going to create. Since we are going to create the database in the free tier, select the
Free Tier from the template and proceed. Provide a suitable name for the database instance; for
example, I'm going to name the instance “mysql-db-test01”. Similarly, provide a suitable
master username and password. These are the credentials that you will use later to connect to
this MySQL instance, so keep them safe for later use.
Figure 4 – Setting up the Instance Credentials

Now that the instance credentials have been set up, let's go ahead and set some other properties
which are essential to set up the RDS MySQL environment. Select the Database Instance Size as
“db.t2.micro” and the Storage Type as General Purpose SSD. By default, the allocated storage is
20 GB, which is fine for the moment.

Figure 5 – Specifying the DB Instance Size

In the next step, we should define the Connectivity settings for the RDS Database instance. Select
the default VPC connection that is already available within your login. For my use case, I have
already created some RDS instances previously, so I’ll be using the same VPC for this instance as
well. Additionally, we should also add a Subnet Group within the VPC connection. Since we will
be accessing the database instance from outside the AWS environment, we should set
Publicly Accessible to Yes. Finally, for VPC Security Group, select “Choose Existing” and
proceed.
Figure 6 – Configuring Connectivity Settings for AWS RDS
Now that most of the configuration is done, the final step in creating the database is to select the
Database Authentication Mode as Password Authentication. Once completed, click on Create
Database.

Figure 7 – Create Database in AWS RDS

3. Configuring the RDS MySQL Environment

Once you click on Create Database in the previous step, it might take a while for AWS to create the
RDS instance and make it available for use. After a few moments, you will receive a notification
that says the database has been created successfully.

Figure 8 – RDS Instance for MySQL Created Successfully


As you can see in the figure above, I had already created an RDS instance for SQL Server
previously; the newly added MySQL Community instance is also added to the Databases list. Go
ahead and click on the DB Identifier for the MySQL Database. A new page will open
containing more information about the MySQL database instance. The important thing to note here
is the Endpoint which is available. This endpoint information will be used later to connect to the
instance using the MySQL Workbench tool.

Figure 9 – MySQL Database Instance

The next step here is to allow connections to the instance from the public network. In order to
enable this, click on the VPC Security Groups, which opens a new
page.

Figure 10 – VPC Security Groups


On the page that appears, select the Security Group ID and open it.

Figure 11 – Security Group ID

Figure 12 – Select Edit Inbound Rules

In the Security Group page, select the Edit Inbound Rules button. This will allow us to edit the
IP addresses that will have access to the MySQL Database instance.

The Edit Inbound Rule page appears. On this page, we will add a custom rule which allows any IP
address to connect to the RDS Instance on the port 3306. The port 3306 is the default port on
which MySQL is usually configured. If you are using any other port, you should allow traffic to
that specific port instead.

Click on the Add Rule button and select the Source as Anywhere. This will allow all traffic from
outside the AWS environment to connect to the MySQL instance on the RDS. Click on Save Rules
once done.
Figure 13 – Allow Inbound Connections

You can see that the new rules have been added to the list and are now effective.

Figure 14 – Added Inbound Rules for MySQL
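For reference, the same inbound rule can be added from the AWS CLI; the security group ID below is a placeholder for the one shown in Figure 11:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3306 --cidr 0.0.0.0/0

Note that opening port 3306 to 0.0.0.0/0 is convenient for a lab exercise, but in production the source should be restricted to known IP ranges.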


4. Connecting to the RDS environment using MySQL Workbench

Once we have created the database and all the necessary configurations are done, it is now time
that we go ahead and connect to the instance. We will be using MySQL Workbench to connect to
the RDS instance. You can also choose any other tool to connect to the instance and it will work
the same.

Enter the endpoint that we copied in the previous steps as the hostname and the master username
as the username here and click on Test Connection.

Figure 15 – Connecting to the RDS Instance


You might be prompted to provide the password in the next step.

Figure 16 – Password to connect to RDS

If the connection is successful, you will receive a notification saying the connection has been
successful.

Figure 17 – MySQL Connection Successful

You can now go ahead and create your own schemas and tables in the RDS Instance.
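If you prefer the command line to MySQL Workbench, the standard mysql client connects the same way; the endpoint below is a placeholder for the one shown on your instance page:

mysql -h mysql-db-test01.abcdefgh1234.us-east-1.rds.amazonaws.com -P 3306 -u admin -p

mysql> CREATE DATABASE testdb;
mysql> SHOW DATABASES;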
Figure 18 – Connected to RDS Instance

Thus, we have created and connected to a MySQL database instance with Amazon RDS.
Experiment No 8

Aim: To study and Implement Security as a Service on AWS

Implementation:
Step 1: This is AWS Security Hub; click on “Go to Security Hub”.

Step 2: Enable the security standards; this grants Security Hub the permissions it needs to run security checks.

Step 3: This is the Security Hub dashboard, which shows the security summary and the
resources with the most failed security checks.
Step 4: Click on Security Standards to check the results of the security checks.

Step 5: Then click on “View results” to see how many checks passed or
failed.
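The same setup can also be scripted. A minimal sketch with the AWS CLI, assuming Security Hub is available in your region and your credentials have the required permissions:

# Enable Security Hub together with the default security standards
aws securityhub enable-security-hub --enable-default-standards

# List the enabled standards and fetch a few findings (the failed checks)
aws securityhub get-enabled-standards
aws securityhub get-findings --max-results 5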
Experiment No 9

Aim: To study and implement Identity and Access Management (IAM) practices on AWS/Azure
cloud.

Step 1: Selecting IAM from AWS Services

Step 2: The admin dashboard of IAM, where the admin can create users


Step 3: Adding user details while creating users

Step 4: Assigning permissions to the newly created user.

Step 5: The user is created successfully.
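The same IAM workflow can be driven from the AWS CLI. A minimal sketch; the user name, password, and attached policy are illustrative assumptions:

# Step 3: create the user
aws iam create-user --user-name demo-user

# Give the user console access with a temporary password (hypothetical password)
aws iam create-login-profile --user-name demo-user --password 'TempPass123!' --password-reset-required

# Step 4: attach a managed policy (read-only S3 access, as an example)
aws iam attach-user-policy --user-name demo-user --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# Step 5: verify the user and its permissions
aws iam list-attached-user-policies --user-name demo-user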


Experiment No 10

Aim: To study and implement Containerization using Docker


Downloading and Installing Docker

With all the theory behind us, it's time to get your hands dirty and experience for yourself
how Docker works.

Using a web browser, go to https://docs.docker.com/docker-for-windows/install/ and
download Docker for the OS you're using.

Install interactively

1. Double-click Docker Desktop Installer.exe to run the installer.


2. If you haven’t already downloaded the installer (Docker Desktop Installer.exe), you can
get it from Docker Hub. It typically downloads to your Downloads folder, or you can
run it from the recent downloads bar at the bottom of your web browser.
3. When prompted, ensure the Use WSL 2 instead of Hyper-V option on the Configuration
page is selected or not depending on your choice of backend.
4. If your system only supports one of the two options, you will not be able to select which
backend to use.
5. Follow the instructions on the installation wizard to authorize the installer and proceed
with the install.
6. When the installation is successful, click Close to complete the installation process.
7. If your admin account is different to your user account, you must add the user to the
docker-users group. Run Computer Management as an administrator and navigate to
Local Users and Groups > Groups > docker-users. Right-click to add the user to the
group. Log out and log back in for the changes to take effect.

Follow the installation instructions and when you're done, you should see the Docker
Desktop icon (the whale logo) in the system tray (for Windows). Clicking on the icon
launches Docker for Windows (see Figure 5).
Figure 5 : Docker Desktop for Windows

A number of administrative tasks in Docker can be accomplished through the Docker
Desktop app, but you can do more with the Docker CLI. For the rest of this article, I'll
demonstrate the various operations through the CLI.

Creating Your First Docker Container from a Docker Image

Figure 8 : Trying Docker for the first time


The best way to understand the difference between an image and a container is to try a very
simple example. In the command prompt, type the following command: docker run hello-world.
You should see the output as shown in Figure 8.

The docker run hello-world command downloads (or in Docker-speak, pulls) the hello-world
Docker image from Docker Hub and then creates a Docker container using this image; it
then assigns a random name to the container and starts it. Immediately, the container exits.
The hello-world image, useless as it is, allows you to understand a few important
concepts of Docker. Rest assured that you'll do something useful after this.

Viewing the Docker Container and Image

If you go to the Docker Desktop app and click on the Images item on the left (see Figure 9),
you'll see the hello-world image listed.

Figure 9 : Locating the hello-world image in the Docker Desktop app


If you now click on the Containers / Apps items on the left (see Figure 10), you should now see
a container named elated_bassi (you'll likely see a different name, as names are randomly
assigned to a container) based on the hello-world image. If you click on it, you'll be able to
see logs generated by the container, as well as inspect the environment variables associated
with the container and the statistics of the running container.
Figure 10 : Viewing the container created as well as the logs generated by the container

You can also view the Docker container and image using the command prompt. To view the
currently running container, use the docker ps command. To view all containers (including
those already exited), use the docker ps -a command (see Figure 11).

Figure 11 : Using the docker ps -a command to view all containers

To explicitly name the container when running it, use the --name option, like this:

$ docker run --name helloworld hello-world

To view all the Docker images on your computer, you can use the docker images command
(see Figure 12).
Figure 12: Viewing the Docker images on your computer

Once a Docker image is on your local computer, you can simply create another container based
on it using the same docker run command:

C:\>docker run hello-world

When you now use the docker ps -a command, you'll see a new container that ran and then
exited most recently:

C:\>docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                      PORTS     NAMES
138cd9c6bc5c   hello-world   "/hello"   24 seconds ago   Exited (0) 23 seconds ago             jovial_borg
0099984a5fc2   hello-world   "/hello"   29 minutes ago   Exited (0) 29 minutes ago             elated_bassi

Removing a Container

When a container has finished running and is no longer needed, you can delete it using
the docker rm command. To delete a container, you need to first get the container ID of the
container that you want to delete using the docker ps -a command, and then specify the
container ID with the docker rm command:

C:\>docker rm 138cd9c6bc5c
138cd9c6bc5c

If you now use the docker ps -a command to view all the containers, you should find that the specified
container no longer exists:

C:\>docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                      PORTS     NAMES
0099984a5fc2   hello-world   "/hello"   37 minutes ago   Exited (0) 37 minutes ago             elated_bassi
Removing a Docker Image

If you no longer need a particular Docker image (especially when you need to free up some space
on your computer), use the docker rmi command, like this:

C:\>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest bf756fb1ae65 12 months ago 13.3kB

C:\>docker rmi bf756fb1ae65


Error response from daemon: conflict: unable to delete bf756fb1ae65 (must be forced) -
image is being used by stopped container 0099984a5fc2

In the above commands, you first try to get the Image ID of the Docker image that you want to
delete. Then, you use the docker rmi command to try to delete the image using its Image ID.
However, notice that in the above example, you're not able to delete it because the image is in
use by another container. You can verify this by using the docker ps -a command:

C:\>docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED       STATUS                   PORTS     NAMES
0099984a5fc2   hello-world   "/hello"   2 hours ago   Exited (0) 2 hours ago             elated_bassi

True enough, there is a container (ID 0099984a5fc2) that's using the image. In this case, you
need to remove the container first before you can remove the image:

C:\>docker rm 0099984a5fc2
0099984a5fc2

C:\>docker rmi bf756fb1ae65
Untagged: hello-world:latest
Untagged: hello-world@sha256:1a523af650137b8accdaed439c17d684df61ee4d74feac151b5b337bd29e7eec
Deleted: sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b
Deleted: sha256:9c27e219663c25e0f28493790cc0b88bc973ba3b1686355f221c38a36978ac63
A Docker image can only be removed when there's no container associated with it.
Sometimes you might have a lot of images on your computer and you just want to remove all
the images that aren't used by any containers. In this case, you can use the
docker image prune -a command:

C:\>docker image prune -a


WARNING! This will remove all images without at least one container associated to them.
Are you sure you want to continue? [y/N] y
All unused Docker images will now be removed.
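As a natural next step beyond hello-world, you can build a small image of your own. A minimal sketch, assuming a file named Dockerfile with the two lines below sits in the current folder; the image and container names are arbitrary:

# Dockerfile contents (two lines)
FROM alpine:3.18
CMD ["echo", "Hello from my custom image"]

C:\>docker build -t hello-custom .
C:\>docker run --name my-first-container hello-custom
Hello from my custom image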
