
Docker

In this unit, we will cover:

 History of Virtualization
 What is Docker?
 Docker Architecture:
 Docker Engine
 Docker Images
 Docker Registries
 Docker Containers
 Deep Dive Into Docker

History of Virtualization

Earlier, the process of deploying a service was slow and painful. First, the
developers wrote the code; then the operations team deployed it on bare-metal
machines, where they had to keep track of library versions, patches, and
language compilers for the code to work. If there were bugs or errors, the whole
process would start all over again: the developers would fix them, and the
operations team would deploy once more.

Things improved with the creation of hypervisors. A hypervisor can host
multiple virtual machines (VMs) on the same physical machine, each of which may
be running or turned off. VMs greatly reduced the time needed to deploy code
and fix bugs, but the real game changer was Docker containers.

What is Docker?

Docker is software used for virtualization that lets you run multiple isolated
systems on the same host. Unlike hypervisors, which are used to create virtual
machines (VMs), virtualization in Docker is performed at the operating-system
level, in so-called Docker containers.

As you can see from the difference in the image below, Docker containers run on
top of the host's operating system. This improves efficiency. Moreover, we can
run more containers on the same infrastructure than we could run virtual
machines, because containers use fewer resources.

Unlike VMs, which can communicate with the host's hardware (for example, using
the Ethernet adapter to create more virtual adapters), Docker containers run in
an isolated environment on top of the host's OS. Even if your host runs
Windows, you can have Linux images running in containers with the help of
Hyper-V, which automatically creates a small VM to virtualize the base image,
in this case, Linux.
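
As a quick first taste, the commands below pull a small Linux image and open a
shell inside a container (a minimal sketch; the alpine image is just one common
choice from Docker Hub):

$ docker pull alpine              # download a small Linux image
$ docker run -it alpine /bin/sh   # start a container and open a shell inside it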

Docker Architecture

Let's talk about the main components of the Docker architecture.

Docker Engine

Docker is a client-server type of application, which means we have clients that
talk to a server. The server is the Docker daemon, called dockerd, which
represents the Docker Engine. The Docker daemon and the clients can run on the
same host or on remote hosts, and they communicate through the command-line
client binary, as well as a full RESTful API for interacting with the daemon.
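
You can see both sides of this client-server split from the CLI (a sketch;
remote-host is a placeholder, and the daemon must be configured to listen on
TCP for the second command to work):

$ docker version                                 # prints a Client section and a Server (dockerd) section
$ DOCKER_HOST=tcp://remote-host:2375 docker ps   # the same CLI talking to a daemon on another machine
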
Docker Images

Docker images are the "source code" of our containers; we use them to build
containers. They can have software pre-installed, which speeds up deployment.
They are portable, and we can use existing images or build our own.

Registries

Docker stores the images we build in registries. There are public and private
registries. Docker, the company, runs a public registry called Docker Hub,
where you can also store images privately. Docker Hub hosts millions of images
that you can start using right away.
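
For example, you can search Docker Hub and pull an image from it straight from
the command line (the nginx image is just an illustration):

$ docker search nginx   # find images on Docker Hub
$ docker pull nginx     # download one to the local host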

Docker Containers

Containers are the organizational units of Docker. When we build an image and
start running it, we are running a container. The container analogy is used
because of the portability of the software running inside: we can move it, or
in other words "ship" the software, modify it, manage it, create it, or destroy
it, just as cargo ships do with real containers.

In simple terms, an image is a template, and a container is a copy of that
template. You can have multiple containers (copies) of the same image.
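
For instance, starting the same image twice gives you two independent
containers (a sketch; the nginx image and the container names are
illustrative):

$ docker run -d --name web1 nginx   # first copy of the template
$ docker run -d --name web2 nginx   # second, independent copy
$ docker ps                         # both containers are listed as running
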
Below is a diagram that represents how the different components interact and
how Docker container technology works.

Docker Architecture Diagram


A Deep Dive Into Docker

Let us suppose that a company needs to develop a Java application. In order to
do so, the developer will set up an environment with a Tomcat server installed
in it. Once the application is developed, it needs to be tested by a tester.
The tester will again set up a Tomcat environment from scratch to test the
application. Once testing is done, the application is deployed on the
production server. Again, production needs an environment with Tomcat installed
on it so that it can host the Java application. As you can see, the same Tomcat
environment setup is done three times.

There are some issues with this approach:
1) There is a loss of time and effort.
2) There could be a version mismatch between the different setups, i.e. the
developer and tester may have installed one version of Tomcat, while the system
admin installed a different version on the production server.

Now, a Docker container can be used to prevent this loss. In this case, the
developer will create a Tomcat Docker image.

Note: a Docker image is nothing but a blueprint for deploying multiple
containers with the same configuration.

The image is built on a base image like Ubuntu, which already exists on Docker
Hub (Docker Hub provides a number of base images for free). Now this one image
can be used by the developer, the tester, and the system admin to deploy an
identical Tomcat environment. This is how Docker containers solve the problem.
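
A minimal sketch of such an image, assuming the official tomcat base image from
Docker Hub and a hypothetical app.war produced by the build:

# Dockerfile (app.war is a placeholder for the application archive)
FROM tomcat:9.0
COPY app.war /usr/local/tomcat/webapps/

$ docker build -t my-tomcat-app .            # the developer builds the image once
$ docker run -d -p 8080:8080 my-tomcat-app   # the tester and the admin run the exact same image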

This can be done using Virtual Machines as well. However, let's look at a
comparison between a Virtual Machine and a Docker Container to understand this
better. They are compared on the following three parameters:

 Size – the resources they utilize.
 Start-up – boot time.
 Integration – the ability to integrate with other tools with ease.

Size

The chart below explains how a Virtual Machine and a Docker Container utilize
the resources allocated to them.

Suppose we have a host system with 16 gigabytes of RAM on which we can run up
to three Virtual Machines. To run the Virtual Machines in parallel, you need to
divide the RAM among them. Suppose we allocate it as follows:

 6 GB of RAM to VM-1,
 4 GB of RAM to VM-2,
 6 GB of RAM to VM-3.

Suppose further that the applications on these VMs actually need only 4 GB,
3 GB, and 2 GB of RAM respectively. Once a chunk of memory is allocated to a
Virtual Machine, that memory is blocked and cannot be re-allocated, so we would
be wasting 7 GB (2 GB + 1 GB + 4 GB) of RAM in total, and thus we cannot set up
another Virtual Machine. Considering that RAM is costly hardware, this is a
major issue.

Using Docker, a container takes up exactly the amount of memory that it
actually requires:

 Container 1 uses 4 GB of RAM – allotted 4 GB – 0 GB unused and blocked
 Container 2 uses 3 GB of RAM – allotted 3 GB – 0 GB unused and blocked
 Container 3 uses 2 GB of RAM – allotted 2 GB – 0 GB unused and blocked

Since no allocated memory (RAM) sits unused, we save 7 GB (16 – 4 – 3 – 2) of
RAM by using Docker Containers, and we can even create additional containers
from the leftover RAM to increase our productivity. Docker Containers use
resources efficiently, as needed, far more so than Virtual Machines.
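
If you do want to cap how much memory a container may consume, Docker lets you
set a limit at run time (a sketch; the image name and the limit are
illustrative):

$ docker run -d -m 512m --name web nginx   # hard-limit this container to 512 MB of RAM
$ docker stats web                         # watch its live memory usage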

Start-Up
When it comes to start-up, a Virtual Machine takes a lot of time to boot
because the guest operating system has to start from scratch and then load all
the binaries and libraries. This is time consuming, and it proves very costly
when a quick start-up of applications is needed.

In the case of a Docker Container, since the container runs on your host OS,
you save precious boot-up time. This is a clear advantage over a Virtual
Machine.

Consider a situation where you want to run two different versions of Ruby on
your system. Using Virtual Machines, you would need to set up two different
Virtual Machines to run the two versions. Each of these would have its own set
of binaries and libraries while running on its own guest operating system. If
you use Docker Containers instead, you will still create two different
containers, each with its own set of binaries and libraries, but you will be
running them on the same host operating system. Running them straight on the
host operating system makes Docker Containers lightweight and fast.
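
You can try this directly with the official ruby images from Docker Hub (a
sketch; the version tags are just examples):

$ docker run --rm ruby:3.1 ruby -v   # one container running Ruby 3.1
$ docker run --rm ruby:3.2 ruby -v   # a second container, same host, running Ruby 3.2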

So the Docker Container clearly wins again over the Virtual Machine on the
start-up parameter.

What about Integration?

Integrating different tools using Virtual Machines may be possible, but even
that possibility comes with a lot of complications.

You can have only a limited number of tools running in a Virtual Machine, and
if you want many instances of those tools, you need to spin up many Virtual
Machines, because each VM can run only one instance of each tool. Setting up
each VM brings its own infrastructure problems.

This is where Docker comes to the rescue. Using Docker Containers, we can set
up many instances of applications and tools, all running in the same container
or in different containers that interact with one another by running just a few
commands. You can also easily scale up by creating multiple copies of these
containers, so configuring them is not a problem.

Docker is designed to benefit both developers and system administrators, making
it a part of many toolchains. Developers can write their code without worrying
about the test or production environments, and system administrators need not
worry about infrastructure, as Docker can easily scale the number of deployed
systems up and down.

What is Docker Engine?


The Docker Engine is simply the Docker application installed on a host machine,
yet it is the heart of the Docker system. It works as a client-server
application that consists of:

 A server, which is a type of long-running program called a daemon process
 A command-line interface (CLI) client
 A REST API, used for communication between the CLI client and the Docker
daemon

On a Linux operating system, there is a Docker client which can be accessed
from the terminal and a Docker host which runs the Docker daemon. We build our
Docker images and run Docker containers by passing commands from the CLI client
to the Docker daemon.
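
That flow looks like this in practice (a sketch; the image tag demo is
illustrative):

$ docker build -t demo .   # the CLI sends the build context to the daemon, which builds the image
$ docker run -d demo       # the daemon creates and starts a container from that image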

Docker Toolbox is an installer for quickly and easily setting up a Docker
environment on a host. On Windows, an additional Docker Toolbox component is
required inside the Docker host. Docker Toolbox installs the Docker Client,
Machine, Compose (for macOS only), Kitematic, and VirtualBox.

Let's now review three important terms: Docker Images, Docker Containers, and
the Docker Registry.

 Docker Image

A Docker image can be compared to a template that is used to create Docker
containers; images are the building blocks of a Docker container. Docker images
are created using the build command, and these read-only templates are then
used to create containers with the run command. We will explore Docker commands
in more depth later.

Docker lets people (or companies) create and share software through Docker
images. With an image, you generally don't have to worry about whether your
computer can run the software: a Docker container can run it wherever Docker
runs.

You can either use a ready-made Docker image from Docker Hub or create a new
image to fit your requirements.

 Docker Container

Containers are the ready applications created from Docker images. In other
words, a Docker container is a running instance of a Docker image, and it holds
the entire package needed to run the application. This is the ultimate utility
of Docker.

 Docker Registry

Finally, the Docker registry is where Docker images are stored. The registry
can be either a user's local repository or a public repository like Docker Hub,
which allows multiple users to collaborate in building an application. Multiple
teams within the same organization can also exchange or share images by
uploading them to Docker Hub. Docker Hub is Docker's very own cloud repository,
similar to GitHub.
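
Sharing an image through the registry looks like this (a sketch; your-username
and the repository name are placeholders):

$ docker login                                  # authenticate against Docker Hub
$ docker tag my-tomcat-app your-username/my-tomcat-app:1.0
$ docker push your-username/my-tomcat-app:1.0   # teammates can now docker pull it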

There are several Docker commands; the table below lists some of the most
common ones, and we will talk more about them later.

Command Description

docker info Display system-wide information

docker pull Download an image

docker run -i -t image_name /bin/bash Run an image as a container

docker start our_container Start a container

docker stop container_name Stop a container

docker ps List all running containers

docker stats Display live container resource usage

docker images List downloaded images

Reasons to use Containers

Following are the reasons to use containers:

 Containers have no guest OS and use the host's operating system, so they
share relevant libraries and resources as and when needed.
 Processing and execution of applications are very fast, since the
application-specific binaries and libraries of the containers run on the host
kernel.
 Booting up a container takes only a fraction of a second, and containers are
lighter and faster than Virtual Machines.

Now that you have understood what containerization is and the reasons to use
containers, let us cover other terms and concepts.

Docker Compose & Docker Swarm

Docker Compose is a tool built around a YAML file that contains details about
the services, networks, and volumes of a Docker application. You can use Docker
Compose to create separate containers, host them, and get them to communicate
with each other. Each container exposes a port for communicating with the other
containers.
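
A minimal sketch of such a file, docker-compose.yml, assuming a hypothetical
two-service application (the service names and images are illustrative):

version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"                  # publish the web service on the host
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example   # required by the postgres image

$ docker-compose up -d   # start both services together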

Docker Swarm is a technique for creating and maintaining a cluster of Docker
Engines. The Docker Engines can be hosted on different nodes, and these nodes,
which may be in remote locations, form a cluster when connected in swarm mode.
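
Setting up a swarm takes just a couple of commands (a sketch; the token and the
manager IP are placeholders printed by the first command):

$ docker swarm init                                     # run on the manager node; prints a join token
$ docker swarm join --token <token> <manager-ip>:2377   # run on each worker node
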
With this, we come to the end of the theory part of this unit, in which we have
covered all the basic terminology.

Summary:

 Virtual Machines are slow and take a lot of time to boot.
 Containers are fast and boot quickly, as they use the host operating system
and share the relevant libraries.
 Containers do not waste or block host resources, unlike Virtual Machines.
 Containers have isolated libraries and binaries specific to the application
they are running.
 Containers are handled by a containerization engine.
 Docker is one of the containerization platforms that can be used to create
and run containers.
