Docker Hands On Experience
What is Docker?
---------------
Build - image
Ship - registry
Run - container
Prerequisites:
Docker CE is supported on CentOS 7.3 64-bit.
Installation steps:
1. Set up the Docker CE repository on CentOS.
2. Install Docker CE:
$ sudo yum install -y docker-ce
3. Start Docker and verify it is running:
$ sudo systemctl start docker
$ docker ps -a
Search image
------------
Search images on repository [Docker Hub by default]
List image
-----------
List images in local repository on your host
docker images
Download image
--------------
You can also download an image with a specific tag [a tag usually refers to a version].
If no tag is specified, the image with tag 'latest' is downloaded
e.g.
docker pull ubuntu
is same as
docker pull ubuntu:latest
docker pull ubuntu:14.04
docker pull ubuntu:16.04
If you have Python installed, you can pretty-print the output
docker inspect container2 | python -m json.tool
You can also extract any specific configuration information you are looking for
docker inspect --format='{{json .NetworkSettings.Networks}}' container_name
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name
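If Docker is not at hand, the effect of these --format templates can be sketched against a captured sample of inspect output. The JSON below is a hypothetical, trimmed example (real docker inspect output has many more fields), and python3 is assumed to be installed:

```shell
# Hypothetical, trimmed sample of 'docker inspect <container>' output
SAMPLE='{"NetworkSettings":{"Networks":{"bridge":{"IPAddress":"172.17.0.2"}}}}'

# Pretty-print, as with: docker inspect container2 | python -m json.tool
echo "$SAMPLE" | python3 -m json.tool

# Extract just the IP address - the same result the Go template
# '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' produces
echo "$SAMPLE" | python3 -c 'import json,sys
d = json.load(sys.stdin)
print("".join(n["IPAddress"] for n in d["NetworkSettings"]["Networks"].values()))'
```

The last command prints 172.17.0.2, mirroring what the Go template does: iterate over every network the container is attached to and emit its IPAddress.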
List containers
---------------
List running containers
docker ps
You can combine several of these options at the same time, e.g.
docker run -it <image-name> - launch the container in interactive mode (connected to STDIN) with a TTY assigned [e.g. you need a TTY if you are running a bash shell in the container]
docker run -d <image-name> - run the container in detached mode [in the background]
#Launch a container in foreground mode, but this time in interactive mode and with
a tty assigned
docker run -it ubuntu
docker run -it httpd
To exit, type Ctrl+C, Ctrl+D or the command 'exit' - which one works depends on the image you are using.
When you exit from the container, the container also exits/stops.
If you want to detach instead of exiting from the container, type Ctrl+P followed by Ctrl+Q.
If you want to connect back to the container, you have two options:
1. docker attach <container-name>
With attach, some containers (such as those launched from the httpd image) receive a signal that exits the container; it works fine for containers launched from, say, the ubuntu or centos images.
2. docker exec -it <container-name> bash
Now connect to the container again and kill the process with PID 1
docker exec -it <container-name> bash
ps -eaf
kill 1
You will get disconnected from the container. Also you will see that the container
has Exited
docker ps -a
#Check container's IP address and access web page using that locally from the host
docker inspect <container-id> | grep IPAddress
curl <container-ip>
You can execute a command inside a running docker container using docker exec e.g.
docker exec <container-name> pwd
docker exec -it <container-name> top
docker exec -it <container-name> "/bin/ps" "-eaf"
#Run workload and stop the container after finishing the job
docker run ubuntu echo "My first container run"
docker ps -a
So far we have accessed the container from within the Docker Host. If you want to
access the container from outside the Docker Host, you need to map Host ports to
container ports
-P => map a random high-numbered port on host to the exposed port on container
-p => map a specific port on host to a particular port on container
#Check port on host mapped to container's port and access web page from outside
[web browser] using host's port
docker ps -a
From web browser do http://host-ip:port
So far, each container we launched was given a random name. To give a container a specific name, use --name
#Launch container with specific name
docker run -d -p 8081:80 --name myhttp1 httpd
curl http://host-ip:8081
Run a different command on the container than the one built-in to the image
---------------------------------------------------------------------------
#Notes: Typically, when an image is created, it is created to run some command
(e.g. httpd image runs httpd-foreground)
If we use docker run without mentioning any command to be run on launch - the
command defined at the time of image creation is run.
However if you want, you can specify a different command to be run at the time of
container launch. If done so, this will be the command that will run in the
container instead of the one defined in the image
You can inspect the image to check what command has been defined to run at
container launch by default
docker inspect <image-name>
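As an illustration (a hypothetical minimal Dockerfile, not from these notes), CMD only sets a default that docker run can override:

```dockerfile
# Hypothetical image whose default command just prints a message
FROM ubuntu
CMD ["echo", "built-in default command"]
```

Running docker run <image> prints the message; running docker run <image> date runs date instead, replacing the CMD defined in the image.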
Stop/Start container
----------------------
docker stop container_name
docker start container_name
Delete container
----------------
To delete a container you need to first stop the container
docker stop container_name
docker rm container_name
Remove the previously downloaded image and then load it from the file saved in the step above
docker rmi alpine
docker load < ./alpine.tar
Tag image
---------
docker images
docker tag myhttp:v0.1 myhttp:latest
docker images
Delete image
------------
docker images
docker rmi image_id
or
docker rmi repository_name:tag_name
ENTRYPOINT = Same as CMD but used as the main command for the image.
EXPOSE = Documents the network ports the container listens on at runtime. The
container ports are not reachable from the host by default.
ENV = Set container environment variables.
ADD = Copy resources (files, directories or files from URLs).
COPY = Copy resources (files, directories).
Each RUN executes its command on the top writable layer and commits the result to the image. Thus the number of layers keeps increasing as you add RUN instructions.
You can aggregate multiple RUN instructions using "&&", which executes all the commands in the same layer and performs only one commit to the image after all of them finish.
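As a sketch (hypothetical Dockerfile fragments, not a complete file), the two styles look like this:

```dockerfile
# Style 1: three RUN instructions -> three committed layers
RUN apt-get update -y
RUN apt-get install -y apache2
RUN apt-get clean

# Style 2: aggregated with "&&" -> a single committed layer
RUN apt-get update -y && apt-get install -y apache2 && apt-get clean
```

docker history on the resulting images (see below) shows the difference in layer count and size.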
Check history of commands run to create image. Also shows size of each layer
docker history <image-name|image-id>
docker history --no-trunc <image-name|image-id>
mkdir demoapp1
cd demoapp1
-----------------------------------------------------------------------------------
Sample code for creating an image using a Dockerfile:
#Pull base image
FROM ubuntu
#Install Apache
RUN apt-get update -y && apt-get install -y apache2 apache2-utils
#Define default port
EXPOSE 80
ENTRYPOINT [ "/usr/sbin/apache2ctl" ]
#Define default command
CMD [ "-D", "FOREGROUND" ]
-----------------------------------------------------------------------------------
#Run container
docker run demoapp1:v1.0
Launch a container with --rm which will delete the container when it exits
docker run --rm demoapp1:v1.0
Sample image creation 2
------------------------
#Using ENTRYPOINT
mkdir demoapp2
cd demoapp2
#Run container
docker run --rm demoapp2:v1.0
mkdir demoapp3
cd demoapp3
#Run container
docker run --rm demoapp3:v1.0
mkdir demoapp4
cd demoapp4
echo '<html><body><h1>Welcome to my first demo app on docker at ACTCoE Training!!!</h1></body></html>' > index.html
FROM ubuntu:14.04
RUN apt-get update -y \
&& apt-get install -y nginx
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Volumes
-------
## Data Volumes
#Create container to have a blank volume created and made available inside the
container
docker run -d -P --name myhttp1 -v /var/www/html httpd
Check volumes and see the volume used by container appears in the list of volumes
docker volume ls
Data is written directly to the volume and does not go through the overlay filesystem.
Contents of the volume are not saved into the container, and are not included if you create an image from it
docker stop myhttp1
docker commit myhttp1 testimg:v0.1
If you attach that same volume to a new container, we can see that the data in the
volume is intact (persists)
docker run -ti --name newmyhttp1 -v volume-name-from-list:/var/www/html httpd bash
Check contents of /var/www/html
#Create container and map local host directory as volume available inside the
container
mkdir -p /var/docker-volumes/vol1
docker run -d -P -v /var/docker-volumes/vol1:/vol1 httpd
#Create a volume and define this precreated volume to be used inside container
docker volume ls
docker volume create --name myvol1
docker run -d -P --name myhttp2 -v myvol1:/var/www/html httpd
#Check if index.html file exists on the volume mounted from existing container
ls /var/www/html
Docker networks
---------------
Default networks:
default bridge
none
host
Link containers
----------------
https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/#updating-the-etchosts-file
--link
Docker also exposes each Docker-originated environment variable from the source container as an environment variable in the target. For each variable Docker creates an <alias>_ENV_<name> variable in the target container. The variable's value is set to the value Docker used when it started the source container.
All environment variables originating from Docker within a container are made available to any container that links to it. This could have serious security implications if sensitive data is stored in them.
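To make the <alias>_ENV_<name> scheme concrete: if a container is linked with the alias dbsrv and the source container was started with MYSQL_DATABASE=testdb, the target container sees a variable whose name is built like this (a plain shell sketch of the naming rule, no Docker required):

```shell
# Docker upper-cases the link alias and joins it with the source variable name
alias_name="dbsrv"
src_var="MYSQL_DATABASE"
upper_alias=$(echo "$alias_name" | tr '[:lower:]' '[:upper:]')
echo "${upper_alias}_ENV_${src_var}"   # DBSRV_ENV_MYSQL_DATABASE
```

Inside the target container, DBSRV_ENV_MYSQL_DATABASE would hold the value testdb.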
In addition to the environment variables, Docker adds a host entry for the source container to the target container's /etc/hosts file.
Note: You can link multiple recipient containers to a single source. For example,
you could have multiple (differently named) web containers attached to your db
container.
If you restart the source container, the linked containers' /etc/hosts files will be automatically updated with the source container's new IP address, allowing linked communication to continue.
https://docs.docker.com/engine/userguide/networking/work-with-networks/#network-alias-scoping-example
Run the following commands to run one database and one web server container. Using
--link we can link the database container into the web server container.
docker run -d --name mydb1 -e MYSQL_DATABASE=testdb -e MYSQL_USER=dbadmin -e MYSQL_PASSWORD=dbpassword -e MYSQL_ROOT_PASSWORD=rootpass mysql
docker run -d --name myhttp1 --link mydb1:dbsrv httpd
Now you can connect from the web server container to the database server using the alias name of the database server
apt-get update -y
apt-get install -y mysql-client
ENVIRONMENT VARIABLE
---------------------
--env, -e Set environment variables
--env-file Read in a file of environment variables
By default, a container has no resource constraints and can use as much of a given resource as the host's kernel scheduler will allow. Docker provides ways to control how much memory, CPU, or block IO a container can use, by setting runtime configuration flags of the docker run command.
-m or --memory=<value> b,k,m,g
--cpus=<value>
--device-read-bps
--device-read-iops
--device-write-bps
--device-write-iops
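Putting a few of these flags together (a sketch; the container name and limit values are arbitrary):

```shell
# Cap memory at 512 MB and CPU usage at 1.5 cores
docker run -d --name limited-http -m 512m --cpus=1.5 httpd
# Inspect the limits that were applied (memory in bytes, CPU in nano-CPUs)
docker inspect --format='{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited-http
```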
Docker machine
--------------
https://github.com/docker/machine/releases/
curl -L https://github.com/docker/machine/releases/download/v0.9.0-rc2/docker-machine-`uname -s`-`uname -m` > /tmp/docker-machine &&
chmod +x /tmp/docker-machine &&
sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
docker-machine version
docker-machine version 0.10.0, build 76ed2a6
Launch a container
-------------------
$ docker run -d -p 8000:80 --name webserver kitematic/hello-world-nginx
Unable to find image 'kitematic/hello-world-nginx:latest' locally
latest: Pulling from kitematic/hello-world-nginx
77c6c00e8b61: Pull complete
9b55a9cb10b3: Pull complete
e6cdd97ba74d: Pull complete
7fecf1e9de6b: Pull complete
6b75f22d7bea: Pull complete
e8e00fb8479f: Pull complete
69fad424364c: Pull complete
b3ba6e76b671: Pull complete
a956773dd508: Pull complete
26d2b0603932: Pull complete
3cdbb221209e: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:ec0ca6dcb034916784c988b4f2432716e2e92b995ac606e080c7a54b52b87066
Status: Downloaded newer image for kitematic/hello-world-nginx:latest
6e35823e4d5e4e03a9d607a409671f3cc0536570b7d9af63b2cd9aedd6f1d805
$ docker ps -a
CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS          PORTS                  NAMES
311758b4aa18   nindate/nin-contactsapp:db    "/entrypoint.sh mysql"   7 seconds ago    Up 6 seconds    3306/tcp               mydb2
6e35823e4d5e   kitematic/hello-world-nginx   "sh /start.sh"           11 minutes ago   Up 11 minutes   0.0.0.0:8000->80/tcp   webserver
$ docker ps -a
CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS          PORTS                   NAMES
7c5b667a4bce   nindate/nin-contactsapp:web   "/root/entrypoint.sh"    6 seconds ago    Up 5 seconds    0.0.0.0:32768->80/tcp   myhttp2
311758b4aa18   nindate/nin-contactsapp:db    "/entrypoint.sh mysql"   49 seconds ago   Up 48 seconds   3306/tcp                mydb2
6e35823e4d5e   kitematic/hello-world-nginx   "sh /start.sh"           12 minutes ago   Up 12 minutes   0.0.0.0:8000->80/tcp    webserver
Stop the docker host (Stop the AWS instance running as docker host)
--------------------------------------------------------------------
$ docker-machine stop aws-sandbox
Stopping "aws-sandbox"...
Machine "aws-sandbox" was stopped.
Remove the docker host (Terminate the AWS instance running as docker host)
--------------------------------------------------------------------------
$ docker-machine rm aws-sandbox
About to remove aws-sandbox
WARNING: This action will delete both local reference and remote instance.
Are you sure? (y/n): y
Successfully removed aws-sandbox
Docker compose
---------------
Instead of running individual docker run commands, you can define all the details in a YAML file and use Docker Compose, which reads this YAML file and creates the containers with the defined parameters, volumes, links etc.
sudo su
yum install -y python-pip
pip install --upgrade pip
pip install docker-compose
docker-compose up -d
docker ps -a
docker exec -it <container-name-of-web> bash
ping dbsrv
docker-compose down
$ cat docker-compose.yml
version: '2'
services:
  mydb2:
    image: "nindate/nin-contactsapp:db"
    environment:
      - MYSQL_DATABASE=test2
      - MYSQL_USER=user1
      - MYSQL_PASSWORD=password
      - MYSQL_ROOT_PASSWORD=rootpass
    volumes:
      - mydb2-data-vol:/var/lib/mysql
      - mydb2-info:/info
  myhttp2:
    image: "nindate/nin-contactsapp:web"
    volumes:
      - mydb2-info:/info
    links:
      - mydb2:dbsrv
    ports:
      - "8081:80"
volumes:
  mydb2-data-vol:
  mydb2-info:
docker-compose up -d
docker-compose down
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
Routing mesh:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container ingress network.
To add a manager to this swarm, run 'docker swarm join-token manager' and follow
the instructions.
Run docker info
+++++++++++++++
[ninaddatecloud@test-docker1 ~]$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 17.03.1-ce
Storage Driver: overlay
Backing Filesystem: xfs
Supports d_type: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: fqt4ok51tufnqdcnt8chotd6z
Is Manager: true
ClusterID: 2hjgkcniqujrhw8jl5q4hr7i4
Managers: 1
Nodes: 1
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 10.128.0.4
Manager Addresses:
10.128.0.4:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-514.16.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 3.613 GiB
Name: test-docker1
ID: B4LW:NG45:PED5:QBVU:7VS6:3B54:PACC:CSXJ:7LFI:PPPV:ORNI:P77O
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
You can add more nodes to the swarm, either as worker nodes or as manager nodes.
On the 1st manager node, run the command below to get the token to use when joining an additional node as a manager:
docker swarm join-token manager
On the 1st manager node, run the command below to get the token to use when joining an additional node as a worker:
docker swarm join-token worker
Run the output of the above command on each additional node to be added to the swarm cluster
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
docker swarm join \
--token SWMTKN-1-1tyx8tkxhgcd91egc1uh5xheys0v0odfceivdppd4qufxhvruv-6lcg96zpg825pdd2q91k3swtl \
10.128.0.4:2377
service: an application you launch in the swarm, which launches and maintains the necessary number of containers (called tasks), and manages load balancing and connectivity to those tasks
- tasks: the individual containers of a service
A service can be one of two types:
- global
- replicated
stack: a set of services launched together
You can have HA by running multiple manager nodes. The number of manager nodes should be odd: 3, 5, 7, ...
EXPOSE 80
CMD ["/usr/local/apache2/scripts/entrypoint.sh"]
echo "Welcome to your web page served from container $(hostname)" > /usr/local/apache2/htdocs/index.html
echo "This is version 1.0 of the application"
/usr/local/apache2/bin/apachectl -D FOREGROUND
Swarm mode has two types of services, replicated and global. For replicated
services, you specify the number of replica tasks for the swarm manager to schedule
onto available nodes. For global services, the scheduler places one task on each
available node.
You control the type of service using the --mode flag. If you don't specify a mode, the service defaults to replicated. For replicated services, you specify the number of replica tasks you want to start using the --replicas flag.
docker service ls
ID             NAME           MODE         REPLICAS   IMAGE
bkkygd2ngq2f   my-test-http   replicated   2/2        nindate/nin-contactsapp:test-http
From another terminal, check the IP address of eth0 and run the loop below to continuously check the website
while true; do curl 10.128.0.6:8000; sleep 1; done
docker service scale my-test-http=5
docker service ps my-test-http
Rolling update
--------------
echo "Welcome to your web page served from container $(hostname)" > /usr/local/apache2/htdocs/index.html
echo "This is version 2.0 of the application"
/usr/local/apache2/bin/apachectl -D FOREGROUND
Rollback
--------
Swarm secrets
--------------
You can use secrets to manage any sensitive data which a container needs at runtime but which you don't want to store in the image or in source control, such as passwords, TLS certificates, or SSH keys.
Docker secrets are only available to swarm services, not to standalone containers. To use this feature, consider adapting your container to run as a service with a scale of 1.
Another use case for using secrets is to provide a layer of abstraction between the
container and a set of credentials. Consider a scenario where you have separate
development, test, and production environments for your application. Each of these
environments can have different credentials, stored in the development, test, and
production swarms with the same secret name. Your containers only need to know the
name of the secret in order to function in all three environments.
When you add a secret to the swarm, Docker sends the secret to the swarm manager
over a mutual TLS connection. The secret is stored in the Raft log, which is
encrypted. The entire Raft log is replicated across the other managers, ensuring
the same high availability guarantees for secrets as for the rest of the swarm
management data.
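A minimal sketch of creating and using a secret (the service and secret names here are arbitrary; the official mysql image reads *_FILE environment variables, which is what makes this pattern work):

```shell
# Store the password in the swarm's encrypted Raft log
echo "rootpass" | docker secret create db_root_password -
# The secret is mounted into the service at /run/secrets/db_root_password
docker service create --name mydb --secret db_root_password \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_root_password mysql
```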
Drain a node
-----------
docker node update --availability drain <node-name>
Repeat the two commands below a few times and watch the tasks move off the drained node:
docker node ls
docker service ps my-test-http
Once it reads 1/1 under REPLICAS, it's running. If it reads 0/1, it's probably still pulling the image.
Create a file called app.py in the project directory and paste this in:
from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host='redis', port=6379)

@app.route('/')
def hello():
    count = redis.incr('hits')
    return 'Hello World! I have been seen {} times.\n'.format(count)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000, debug=True)
Create a file called requirements.txt and paste these two lines in:
flask
redis
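These notes later refer to "the Dockerfile defined above", but it was not captured here. The standard Compose/stack tutorial uses one along these lines (an assumption, not the original file):

```dockerfile
FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```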
services:
  web:
    image: 127.0.0.1:5000/stackdemo
    build: .
    ports:
      - "8000:8000"
  redis:
    image: redis:alpine
Note that the image for the web app is built using the Dockerfile defined above. It's also tagged with 127.0.0.1:5000 - the address of the registry created earlier. This will be important when distributing the app to the swarm.
You will see a warning about the Engine being in swarm mode. This is because Compose doesn't take advantage of swarm mode, and deploys everything to a single node. You can safely ignore this.
$ docker-compose up -d
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in
a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
$ curl http://127.0.0.1:8000
Hello World! I have been seen 2 times.
$ curl http://127.0.0.1:8000
Hello World! I have been seen 3 times.
To distribute the web app's image across the swarm, it needs to be pushed to the registry you set up earlier. With Compose, this is very simple:
$ docker-compose push
The stack is now ready to be deployed.
The last argument is a name for the stack. Each network, volume and service name is
prefixed with the stack name.
Once it's running, you should see 1/1 under REPLICAS for both services. This might take some time if you have a multi-node swarm, as images need to be pulled.
$ curl http://127.0.0.1:8000
Hello World! I have been seen 2 times.
$ curl http://127.0.0.1:8000
Hello World! I have been seen 3 times.
Thanks to Docker's built-in routing mesh, you can access any node in the swarm on port 8000 and get routed to the app:
$ curl http://address-of-other-node:8000
Hello World! I have been seen 4 times.
Create a service to test HA of services also along with swarm manager nodes' HA
docker service create --name myhttp2 --publish 8081:80 --replicas 3 httpd
docker service ls
docker service ps myhttp2
Now stop the 1st node, which currently is the swarm manager in Leader mode.
Observe for a while: when the 1st node becomes unreachable, another manager node is elected as the Leader.
docker node ls
docker service ps myhttp2
Once the 1st node is marked as Down, you will observe that another container is launched in place of the container that was running on the node which went down.
#################
Advanced tasks
Create custom image with multiple commands using Dockerfile and supervisord
---------------------------------------------------------------------------
mkdir myapp
cd myapp
[program:sshd]
command=/usr/sbin/sshd -D

[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
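The Dockerfile that pairs with this supervisord.conf is not captured in these notes; a plausible sketch (Ubuntu package names assumed) would be:

```dockerfile
FROM ubuntu
RUN apt-get update -y && apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/run/sshd /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
# -n keeps supervisord in the foreground so the container stays running
CMD ["/usr/bin/supervisord", "-n"]
```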
docker run --name vm1 -d alpine /bin/sh -c 'while true; do echo -n "testing at "; date; sleep 5; done'
Run the following command every few seconds and you will see that the given command is running inside the container:
docker logs vm1
### Check this and find out a working method for health check
FROM ubuntu
RUN apt-get update -y \
 && apt-get install -y apache2 curl
HEALTHCHECK --interval=10s --timeout=2s --retries=2 CMD curl -f localhost/index.html || exit 1
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]