"Docker": A Technical Seminar On
"Docker": A Technical Seminar On
“DOCKER”
Submitted in Partial Fulfillment of the Requirements
For the award of the Degree of
Bachelor of Technology in
Science and Humanities Department (S&H)
By
DACHEPALLY MANJUNATH
18311A1973
APRIL 2019
CERTIFICATE
DECLARATION
This is to certify that the work reported in the present seminar titled “DOCKER” is a record
of work done by me in the Department of Electronics and Computer Engineering, Sreenidhi
Institute of Science and Technology, Yamnampet, Ghatkesar.
The report is based on seminar work done entirely by me and has not been copied from any
other source.
DACHEPALLY MANJUNATH
18311A1973
ACKNOWLEDGMENT
We convey our sincere thanks to all the faculty of the ECM department, Sreenidhi Institute
of Science and Technology, for their continuous help, co-operation, and support in
completing this seminar.
Finally, we extend our gratitude to the Almighty, our parents, and all our friends,
teaching and non-teaching staff, who directly or indirectly helped us in this endeavor.
LIST OF FIGURES
FIGURE 1: INTRODUCTION
FIGURE 2: HISTORY
FIGURE 3: OPERATIONS
FIGURE 4: POPULARITY OF DOCKER
FIGURE 5: POPULARITY OF DOCKER
FIGURE 6: DISADVANTAGES
CONTENTS
1. ABSTRACT
2. INTRODUCTION
3. HISTORY
4. TECHNOLOGY
5. OPERATIONS
6. POPULARITY
7. DISADVANTAGES
8. CONCLUSION
9. REFERENCES
ABSTRACT
Docker is an open source tool that simplifies managing Linux containers. A container is
a sandboxed environment that runs a collection of processes. Containers are lightweight,
VM-like environments that share the kernel of the host OS. Docker adds some niceties on
top of plain Linux containers, such as the AUFS layered file system, image version control,
and the Docker registry (repository). This report will serve as an introduction to working
with the Docker tools. I will cover the basic concepts behind Docker, explain the difference
between a Docker container and a VM, and show a demo of how easy it is to create a Docker
image and launch a container from it.
Docker is an open platform for developing, shipping and running applications faster.
Docker lets applications run separately from the host infrastructure and lets you treat the
infrastructure like a managed application. Docker also helps you ship code faster, test
faster, deploy faster, and shorten the cycle between writing code and running code. Docker
does this by combining a lightweight container virtualization platform with workflows and
tooling that help manage and deploy applications. Docker uses the resource isolation
features of the Linux kernel, which limit, account for and isolate resource use.
Docker also implements a high-level API that provides lightweight containers running
processes in isolation. By using these isolated containers, services can be
resource-restricted and processes provisioned with a private view of the operating system,
each with its own process space.
INTRODUCTION
Docker unlocks the potential of every organization with a container platform that brings
traditional applications and microservices built on Windows, Linux and mainframes into an
automated and secure supply chain, advancing dev-to-ops collaboration.
As a result, organizations report a 300 percent improvement in time to market, while
reducing operational costs by 50 percent. Inspired by open source innovation and a rich
ecosystem of technology and go-to-market partners, Docker’s container platform and
services are used by millions of developers and more than 650 Global 10K commercial
customers including ADP, GE, MetLife, PayPal and Societe Generale.
FIGURE 1
HISTORY
Solomon Hykes started Docker in France as an internal project within dotCloud,
a platform-as-a-service company, with initial contributions by other dotCloud engineers
including Andrea Luzzardi and Francois-Xavier Bourlet. Jeff Lindsay also became
involved as an independent collaborator. Docker represents an evolution of dotCloud's
proprietary technology, which is itself built on earlier open-source projects such
as Cloudlets. The software debuted to the public at PyCon in Santa Clara in 2013.
Docker was released as open source in March 2013. On March 13, 2014, with the
release of version 0.9, Docker dropped LXC as the default execution environment and
replaced it with its own libcontainer library, written in the Go programming language.
Adoption
FIGURE 2
TECHNOLOGY
Docker is developed primarily for Linux, where it uses the resource isolation
features of the Linux kernel, such as cgroups and kernel namespaces, and
a union-capable file system such as OverlayFS, to allow
independent containers to run within a single Linux instance, avoiding the
overhead of starting and maintaining virtual machines (VMs). The Linux kernel's
support for namespaces mostly isolates an application's view of the operating
environment, including process trees, network interfaces, user IDs and mounted file
systems, while the kernel's cgroups provide resource limiting for memory and
CPU. Since version 0.9, Docker includes the libcontainer library as its own way
to directly use the virtualization facilities provided by the Linux kernel, in addition to
using abstracted virtualization interfaces via libvirt, LXC and systemd-nspawn.
Building on top of facilities provided by the Linux kernel (primarily cgroups and
namespaces), a Docker container, unlike a virtual machine, does not require or
include a separate operating system. Instead, it relies on the kernel's functionality
and uses resource isolation for CPU and memory, and separate namespaces to
isolate the application's view of the operating system. Docker accesses the Linux
kernel's virtualization features either directly through the libcontainer library, which
is available as of Docker 0.9, or indirectly via libvirt, LXC (Linux Containers)
or systemd-nspawn.
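To make the role of cgroups and namespaces more concrete, the following minimal sketch
uses the Python Docker SDK (the docker package) to start a container with explicit memory
and CPU limits; the image name and the limit values are arbitrary choices for illustration,
and a local Docker daemon is assumed to be running.

    import docker

    # Connect to the local Docker daemon (uses environment defaults).
    client = docker.from_env()

    # Run a short-lived container with cgroup-backed resource limits:
    # 256 MB of memory and roughly half a CPU core.
    output = client.containers.run(
        "alpine:latest",
        "echo hello from an isolated container",
        mem_limit="256m",       # enforced through the memory cgroup
        nano_cpus=500_000_000,  # 0.5 CPU, enforced through the cpu cgroup
        remove=True,            # delete the container once it exits
    )
    print(output.decode())

The container receives its own process, network and mount namespaces from the kernel, while
the two limits above are applied through cgroups rather than through a separate guest
operating system.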
OPERATIONS
FIGURE 3
Docker is a tool that can package an application and its dependencies into a virtual
container that can run on any Linux server. This enables flexibility and portability in
where the application can run, whether on premises, in a public cloud, in a private cloud,
on bare metal, etc.
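As a sketch of this packaging idea, the snippet below builds an image from a directory
containing a Dockerfile and launches a container from it using the Python Docker SDK; the
path and the image tag are placeholder values, not part of any real project.

    import docker

    client = docker.from_env()

    # Build an image from the current directory, which is assumed to
    # contain a Dockerfile describing the application and its dependencies.
    image, build_logs = client.images.build(path=".", tag="myapp:1.0")

    # Launch a container from the freshly built image. The same image can
    # be shipped to and run on any other host with a Docker engine.
    container = client.containers.run("myapp:1.0", detach=True)
    print(container.short_id, container.status)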
Because Docker containers are lightweight, a single server or virtual machine can run
several containers simultaneously. A 2016 analysis found that a typical Docker use
case involves running five containers per host, but that many organizations run 10 or
more.
Using containers may simplify the creation of highly distributed systems by allowing
multiple applications, worker tasks and other processes to run autonomously on a single
physical machine or across multiple virtual machines. This allows the deployment of
nodes to be performed as the resources become available or when more nodes are
needed, allowing a platform as a service (PaaS)-style of deployment and scaling for
systems such as Apache Cassandra, MongoDB and Riak.
POPULARITY
1. Return on investment (ROI) & cost savings
The first advantage of using Docker is the return on investment. The biggest driver of most
management decisions when selecting a new product is the return on investment. The more a
solution can drive down costs while raising profits, the better the solution is, especially
for large, established companies that need to generate steady revenue over the long
term. In this sense, Docker can help facilitate this type of saving by dramatically
reducing infrastructure resources. The nature of Docker is that fewer resources are
necessary to run the same application. Because of Docker's reduced infrastructure
requirements, organizations are able to save on everything from server
costs to the employees needed to maintain them. Docker allows engineering teams to
be smaller and more effective.
2. Standardization & productivity
FIGURE 4
3. CI efficiency
Docker enables you to build a container image and use that same image across every
step of the deployment process. A huge benefit of this is the ability to separate
non-dependent steps and run them in parallel, which can notably shorten the time it takes
to go from build to production.
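The sketch below illustrates this "build once, promote everywhere" workflow with the Python
Docker SDK: the image is built a single time and then re-tagged and pushed for later
pipeline stages. The registry address and the tags are hypothetical, and pushing assumes
you are already logged in to that registry.

    import docker

    client = docker.from_env()

    # Build the image once, at the start of the pipeline.
    image, _ = client.images.build(path=".", tag="myapp:ci-build")

    # Re-tag the very same image for the staging and production stages.
    image.tag("registry.example.com/myapp", tag="staging")
    image.tag("registry.example.com/myapp", tag="prod-candidate")

    # Push both tags; every later stage pulls exactly the bytes that were tested.
    client.images.push("registry.example.com/myapp", tag="staging")
    client.images.push("registry.example.com/myapp", tag="prod-candidate")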
4. Compatibility & maintainability
Eliminate the “it works on my machine” problem once and for all. One of the benefits
that the entire team will appreciate is parity. Parity, in terms of Docker, means that your
images run the same no matter which server or whose laptop they are running on. For
your developers, this means less time spent setting up environments and debugging
environment-specific issues, and a more portable and easy-to-set-up codebase. Parity
also means your production infrastructure will be more reliable and easier to maintain.
5. Simplicity & faster configurations
One of the key benefits of Docker is the way it simplifies matters. Users can take their
own configuration, put it into code and deploy it without any problems. As Docker can
be used in a wide variety of environments, the requirements of the infrastructure are no
longer linked with the environment of the application.
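A small sketch of configuration expressed as code, again with the Python Docker SDK: the
environment variables and the port mapping below are invented example values that would
normally come from your own configuration.

    import docker

    client = docker.from_env()

    # The application's configuration lives in code, not on the host:
    # environment variables and port mappings are passed at run time.
    web = client.containers.run(
        "nginx:alpine",
        detach=True,
        environment={"APP_ENV": "production"},  # example setting
        ports={"80/tcp": 8080},                 # host port 8080 -> container port 80
    )
    print(web.name)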
6. Rapid Deployment
Docker manages to reduce deployment to seconds. This is due to the fact that it creates
a container for every process and does not boot an OS. Data can be created and
destroyed without worrying that the cost of bringing it back up would be higher than
affordable.
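One can check this claim directly by timing a container start, as in the rough sketch
below; the exact number depends on the host, and the image is assumed to be already pulled
so that no download time is included.

    import time
    import docker

    client = docker.from_env()

    start = time.perf_counter()
    # Create, start and clean up a container that runs a trivial command.
    client.containers.run("alpine:latest", "true", remove=True)
    elapsed = time.perf_counter() - start

    print(f"container created, run and removed in {elapsed:.2f} s")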
8. Multi-Cloud Platforms
This is possibly one of Docker’s greatest benefits. Over the last few years, all major
cloud computing providers, including Amazon Web Services (AWS) and Google
Cloud Platform (GCP), have embraced Docker’s availability and added individual
support. Docker containers can be run inside an Amazon EC2 instance, Google
Compute Engine instance, Rackspace server or VirtualBox, provided that the host OS
supports Docker. If this is the case, a container running on an Amazon EC2 instance
can easily be ported between environments, for example to VirtualBox, achieving similar
consistency and functionality. Docker also works very well with other providers like
Microsoft Azure and OpenStack, and can be used with various configuration managers
like Chef, Puppet, Ansible, etc.
FIGURE 5
9. Isolation
Docker makes sure each container has its own resources, isolated from other
containers. You can have various containers for separate applications running
completely different stacks. Docker also helps you ensure clean app removal, since each
application runs in its own container. If you no longer need an application, you can
simply delete its container. It won’t leave any temporary or configuration files on your
host OS.
On top of these benefits, Docker also ensures that each application only uses the resources
that have been assigned to it. A particular application won’t use all of your available
resources, which would normally lead to performance degradation or complete
downtime for other applications.
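The following sketch shows this clean life cycle; the image and the container name are
placeholders chosen for the example.

    import docker

    client = docker.from_env()

    # Run an application in its own isolated container.
    app = client.containers.run("redis:alpine", detach=True, name="demo-redis")

    # ... use the application ...

    # When it is no longer needed, stop and delete its container;
    # no temporary or configuration files remain on the host OS.
    app.stop()
    app.remove()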
10. Security
The last benefit of using Docker is security. From a security point of view, Docker
ensures that applications running in containers are completely segregated and
isolated from each other, granting you complete control over traffic flow and
management. No Docker container can look into processes running inside another
container. From an architectural point of view, each container gets its own set of
resources, ranging from processing to network stacks.
DISADVANTAGES
Containers do not run at bare-metal speeds – Containers consume resources more
efficiently than virtual machines, but they are still subject to performance overhead
from overlay networking, from interfacing between containers and the host system, and
so on. If you want 100% bare-metal performance, you need to use bare metal, not
containers.
The container ecosystem is fractured – Although the core Docker platform is open
source, some container products don’t work with other ones.
Data storage is complicated – By design, all of the data inside a container disappears
forever when the container shuts down, unless you save it somewhere else first. There are
ways to store data persistently in Docker, such as Docker Data Volumes, but
this is arguably a challenge that has yet to be addressed in a seamless manner.
Graphical applications do not work well – Docker was designed as a solution
for deploying server applications that don’t need a graphical interface. While
there are some creative approaches that one can use to run a GUI app
inside a container, these solutions are clunky at best.
Not all applications benefit from containers – In general, only the
applications that are designed to run as a set of discrete microservices
stand to gain the most from containers. Otherwise, Docker’s only real benefit is that
it can simplify application delivery by providing an easy packaging mechanism.
Those who are planning to migrate to Docker should keep these advantages and
disadvantages in mind. Docker is not always the best choice for application deployment.
In some cases, traditional virtual machines or bare-metal servers are
better solutions. Don’t let the Docker hype obscure this reality. Areas like networking,
storage and version management (for the contents of containers) are currently
underserved by the present Docker ecosystem and present opportunities for both
startups and incumbents.
Over time, it is likely that the difference between virtual machines and containers will
become less important, which will shift attention to the ‘build’ and ‘ship’ aspects. The
differences here will make the question of ‘What happens to Docker?’ less significant
than ‘What happens to the IT industry as a result of Docker?’
FIGURE 6
CONCLUSION
Containerization offers a container-based approach to virtualization. It encapsulates the
application and its dependencies into a system- and language-independent package, and
your code runs in isolation from other containers while sharing the host’s
resources. You therefore do not need a VM with an entire guest OS and the overhead of
its unused resources; you only need your container to run your app.
Docker is exactly that kind of lightweight, container-based virtualization solution.
Comparing the start and stop times of Docker to VMs shows a significant difference:
where VMs need 30-45 s to start and 5-10 s to stop, a Docker container needs only about
50 ms for both, making it roughly 600 times faster to start and about 100 times faster
to stop.
Docker’s technology is based on features of the Linux kernel, namely namespaces and
cgroups. It uses Linux Containers (LXC) or its own implementation, libcontainer, as the
container format, and is thus able to provide lightweight operating-system-level
virtualization.
The already mentioned Docker Engine is basically all of this virtualization technology and
its utilities. Docker uses a client-server architecture, in which the user interacts with
the Docker daemon (through the command-line interface on the client). The daemon is
responsible for building images, running containers and distributing them.
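A tiny sketch of that client-server interaction, using the Python Docker SDK as the client;
the client only sends API requests, while the daemon does the actual work of building and
running.

    import docker

    client = docker.from_env()            # the client side, talking to the daemon's API
    print(client.ping())                  # True if the daemon answers
    print(client.version()["Version"])    # engine version reported by the daemon
    print(len(client.containers.list()))  # containers the daemon is currently running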
REFERENCES