Continuous Integration and Deployment With Rancher and Docker
Contents
Introduction
Part 1: Continuous Integration
1.1 Challenges of Scaling Build Systems
1.2 Solutions and Best Practices
1.3 Leveraging Docker for Build Systems
1.3.1 Containerizing your build environment
1.3.2 Packaging your application with Docker
1.3.3 Using Docker Compose for build environments
1.4 Creating a Continuous Integration Pipeline
1.4.1 Branching Model
1.4.2 Creating a CI pipeline with Jenkins
1.5 Summary
Part 2: Continuous Deployment
2.1 Creating long running application environments
2.1.1 Creating an Integration environment in Rancher
2.1.2 Defining Compose templates
2.1.3 Creating an application stack with Rancher Compose
2.1.4 Managing DNS records
2.1.5 Enabling HTTPS
2.2 Creating a Continuous Deployment Pipeline
2.2.1 Publishing Docker images
2.2.2 Deploying to the Integration environment
2.2.3 Releasing and deploying a new version
2.3 Deployment Strategies
2.3.1 In-place updates
2.3.2 Blue-Green Deployments
2.4 Summary
Conclusion
Introduction
As the Docker tool matures, it is being used for larger scale projects. As a result, coherent processes and workflows are needed to streamline deployment for such projects. In this guide we cover a workflow for code development, continuous integration and deployment, as well as zero-downtime updates. Such workflows are fairly standard in large organizations; we show how to replicate some of them for Docker based environments and detail how you can leverage Docker and Rancher to automate them. Throughout this paper we provide detailed examples of each step necessary to implement your own CI system.
We hope that by following this guide, you will be able to apply some of these ideas and use tools such as Docker and Rancher to create continuous integration and deployment pipelines onto which you can graft custom processes as they make sense for your organization.
Before we begin, a note of caution: both Docker and Rancher are evolving rapidly, so we expect some API and implementation inconsistencies across different versions of these platforms. For reference, we're working with Docker 1.7+ and Rancher 0.44.0+ in this guide.
A related but slightly different problem is managing environment dependencies. This includes IDEs and IDE configurations, tool versions (e.g., Maven or Python versions) and configuration such as static analysis rule files and code formatting templates. Environmental dependency management can get tricky because different parts of the project sometimes have conflicting requirements. Unlike conflicting code-level dependencies, these conflicts are often not possible or easy to resolve. For example, in a recent project we used Fabric for deployment automation and s3cmd for uploading artifacts to Amazon S3. Unfortunately, the latest version of Fabric required Python 2.7 whereas s3cmd required Python 2.6. A fix required us to either switch to a beta version of s3cmd or an older version of Fabric.
Lastly, a major problem that every large project faces is build times. As projects grow in scope and complexity, more and more languages get added, and tests get added for various components which are all interdependent. For example, if you have a shared database, then tests which mutate the same data cannot run at the same time. In addition, we need to make sure that tests set up the expected state prior to execution and clean up after themselves when they finish. This leads to builds that can take anywhere from minutes to hours, which either slows down development or leads to the dangerous practice of skipping test runs.
In short, a build system for a large project must meet the following requirements:
1. Repeatability
○ We must be able to generate/create similar (or identical) build environments with the same dependencies on different developer machines and automated build servers.
2. Centralized Management
○ We must be able to control the build environment for all developers and build servers from a central code repository or server. This includes setting up the build environment as well as updating it over time.
3. Isolation
○ The various sub-components of the project must be built in isolation, apart from well-defined shared dependencies.
4. Parallelization
○ We must be able to run parallel builds for sub-components.
To support the repeatability requirement, we must use centralized dependency management. Most modern languages and development frameworks have support for automated dependency management: Maven is used extensively with Java and a few other languages, Python uses pip, and Ruby has Bundler. All these tools follow a very similar paradigm, where you commit an index file (pom.xml, requirements.txt or Gemfile) into your source control. The tool can then be run to download dependencies onto the build machine. We can manage the index files centrally after testing them and then push out changes by updating the index in source control. However, there remains the issue of managing environmental dependencies: the correct versions of Maven, Python and Ruby have to be installed, and we also need to ensure that the tools are actually run by developers. Maven automates the check for dependency updates, but for pip and Bundler we must wrap our build commands in scripts which trigger a dependency update run, as in the sketch below.
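As an illustration, a minimal wrapper script along these lines might look like the following. This is a hypothetical sketch, not a script from the original project; the run-tests.sh step stands in for whatever your actual build command is.
#!/bin/bash
set -e
# refresh Python dependencies before every build
pip install -r requirements.txt
# refresh Ruby dependencies
bundle install
# ...then run the actual build and test commands
./run-tests.sh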
To set up the dependency management tools and scripts, most small teams just rely on documentation and leave the onus on developers. This, however, does not scale to large teams, especially if the dependencies are updated over time. Further complicating matters, installation instructions for these tools can vary by platform and OS of the build machines. You can use orchestration tools such as Puppet or Chef to manage installation of dependencies and set up configuration files. Both Puppet and Chef allow for central servers or shared configuration in source control to enable centralized management. This allows you to test configuration changes ahead of time and then push them out to all developers. However, these tools have some drawbacks: installing and configuring Puppet or Chef is non-trivial, and full-featured versions of these tools are not free. In addition, each has its own language for defining tasks, which introduces another layer of management overhead for IT teams as well as developers. Lastly, orchestration tools do not provide isolation, so conflicting tool versions remain a problem and running parallel tests is still an open problem.
To ensure component isolation and reduce build times we can use an automated virtualization system such as Vagrant. Vagrant can create and run virtual machines (boxes) which isolate the builds of various components and also allow for parallel builds. The Vagrant configuration files can be committed into source control and pushed to all developers when ready, ensuring centralized management. In addition, boxes can be tested and deployed to Atlas for all developers to download. This still has the drawback that you need a further layer of configuration to set up Vagrant, and virtual machines are a very heavyweight solution for this problem: each VM runs an entire OS and network stack just to contain a test run or compiler, and memory and disk resources need to be partitioned ahead of time for each of these VMs.
Despite the caveats and drawbacks, by combining dependency management (Maven, pip, Bundler), orchestration (Puppet, Chef) and virtualization (Vagrant), we can build a stable, testable, centrally managed build system. Not every project warrants the entire stack of tools; however, any long running, large project will need this level of automation.
Our sample application, the go-messenger project, consists of two components: a RESTful authentication server written in Golang and a session manager which accepts long-running TCP connections from clients and routes messages between them. For the purposes of this paper, we will concentrate on the RESTful Authentication Service (go-auth). This sub-system consists of an array of stateless web servers and a database cluster to store user information.
FROM golang:1.4
# Install godep
RUN go get github.com/tools/godep
ADD compile.sh /tmp/compile.sh
CMD /tmp/compile.sh
We then add a compile script which puts all the steps required to build and test our code in one place. The script shown below downloads dependencies using godep restore, standardizes formatting using the go fmt command, runs tests using the go test command and then compiles the project using go build.
#!/bin/bash
set -e
# Set directory to where we expect code to be
cd /go/src/${SOURCE_PATH}
echo "Downloading dependencies"
godep restore
echo "Fix formatting"
go fmt ./...
echo "Running Tests"
go test ./...
echo "Building source"
go build
echo "Build Successful"
To ensure repeatability we can use Docker containers, packaging all the tools required to build a component into a single, versioned container image. This image can be downloaded from Docker Hub or built from a Dockerfile. All developers (and build machines) can then use the container to build any Go project using the following command:
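# Reconstructed from the description that follows; replace <package-path>
# with your project's Go import path.
docker run --rm -v $PWD:/go/src/<package-path> \
    -e SOURCE_PATH=<package-path> \
    usman/go-builder:1.4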
In the command above, we run version 1.4 of the usman/go-builder image, mount our source code into the container using the -v switch, and specify the SOURCE_PATH environment variable using the -e switch. To test go-builder on our sample project, you can use the command below to run all the steps and create an executable called go-auth in the root directory of the go-auth project.
docker run --rm \
    -v $PWD:/go/src/github.com/usmanismail/go-messenger/go-auth/ \
    -e SOURCE_PATH=github.com/usmanismail/go-messenger/go-auth/ \
    usman/go-builder:1.4
An interesting side-effect of isolating all source from build tools is that we can easily swap out build tools and configuration. For example, in the command above we have been using Golang 1.4. By changing go-builder:1.4 to go-builder:1.5, you can test the impact of using Golang 1.5 on the project. In order to centrally manage the image used by all developers, we can push the latest tested version of the builder container to a fixed tag (i.e., latest) and make sure all developers use go-builder:latest to build the source code. Similarly, if different parts of our project use different versions of the build tools, we can use different containers to build them without worrying about managing multiple language versions in a single build environment. For example, our earlier Python problem could be mitigated by using the official Python image, which supports various Python versions.
To package the compiled go-auth binary into its own Docker image, we can use a minimal Dockerfile like the one below:
FROM ubuntu
ADD ./go-auth /bin/go-auth
EXPOSE 9000
ENTRYPOINT ["/bin/go-auth","-l","debug","run","-p","9000"]
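Assuming the binary has been copied into the project's root directory, the image can then be built with a command like the following (the go-auth image name matches the one used in the Compose template later in this section; runtime flags such as --db-host are supplied when the container is launched):
docker build -t go-auth .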
A common reason that tests cannot be parallelized is shared data stores. This is especially true for integration tests, where we would not typically mock out external databases. Our sample project has a similar issue: we use a MySQL database to store users. We would like to write a test which ensures that we can register a new user, and the second time a registration is attempted for the same user we expect a conflict error. This forces us to serialize tests so that we can clean up registered users after one test completes and before the next begins.
To set up isolated parallel builds we can define a Docker Compose template (docker-compose.yml) as follows. We define a Database service which uses the official MySQL image with the required environment variables. We then define a Goauth service using the image we created to package our application, and link the database container to it.
Database:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: rootpass
    MYSQL_DATABASE: messenger
    MYSQL_USER: messenger
    MYSQL_PASSWORD: messenger
  expose:
    - "3306"
  stdin_open: true
  tty: true
Goauth:
  image: go-auth
  ports:
    - "9000:9000"
  stdin_open: true
  links:
    - Database:db
  command:
    - "--db-host"
    - "db"
  tty: true
With this docker-compose template defined, we can bring up the application environment by running docker-compose up. We can then simulate our integration tests by running the curl command shown below; it should return 200 OK the first time and 409 Conflict the second time. Lastly, after running the tests, we can run docker-compose rm to clean up the entire application environment.
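The exact request is not reproduced in the original text; based on the user registration endpoint used later in this guide, it likely looked similar to this (userid and password values are placeholders):
curl -i --silent -X PUT \
    -d userid=testuser \
    -d password=testpass \
    http://localhost:9000/user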
In order to run multiple isolated versions of the application, we need to update the docker-compose template to add the services Database1 and Goauth1 with configurations identical to their counterparts (a sketch follows below). The only change is that in Goauth1 we change the ports entry from 9000:9000 to 9001:9000, so that the publicly exposed ports of the two application instances do not conflict. The complete template is available here. When you run docker-compose up now, the two integration test runs can proceed in parallel. Something like this can effectively speed up builds for a project with multiple independent sub-components, e.g., a multi-module Maven project.
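The additional services would look something like the following sketch, mirroring the definitions above with only Goauth1's published port changed:
Database1:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: rootpass
    MYSQL_DATABASE: messenger
    MYSQL_USER: messenger
    MYSQL_PASSWORD: messenger
  expose:
    - "3306"
  stdin_open: true
  tty: true
Goauth1:
  image: go-auth
  ports:
    - "9001:9000"
  stdin_open: true
  links:
    - Database1:db
  command:
    - "--db-host"
    - "db"
  tty: true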
We are going to use the git-flow tool to help manage our git branches. To install git-flow, follow the instructions here. Once you have git-flow installed, you can configure your repository by running the git flow init command as shown below. git-flow will ask a few questions, and we recommend going with the defaults. Once you execute the command, it will create a develop branch (if it didn't already exist) and check it out as the working branch.
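git flow init
# accept the defaults when prompted; git-flow creates the 'develop'
# branch (if needed) and checks it out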
Now, let's create a new feature using git-flow by typing git flow feature start [feature-name]. It's common practice to use the ticket/issue ID as the name of the feature. For example, if you are using something like Jira, the ticket ID (e.g., MSP-123) can become the feature name. You'll notice that when you create a new feature with git-flow, it automatically switches to the feature branch.
At this point you can do all the work needed for the feature, making as many commits as you need, and then run your automated suite of tests to make sure that everything is in order. Once you are ready to ship your work, simply tell git-flow to finish the feature. For our purposes, we'll just update the README file and finish off the feature by typing git flow feature finish MSP-123, as in the sequence sketched below.
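git flow feature start MSP-123
# ...edit files, e.g. the README...
git add README.md
git commit -m "MSP-123: update README"
git flow feature finish MSP-123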
Note that git-flow merges the feature into develop, deletes the feature branch, and takes you back to the develop branch. At this point you can push your develop branch to the remote repository (git push origin develop:develop). Once you commit to the develop branch, the CI server takes over and runs the continuous integration pipeline. Note that for a larger team, an alternative and more suitable model would be to push feature branches to the remote before finishing them, getting them reviewed, and using pull requests to merge them into develop.
For our CI pipeline we need the following installed on the Jenkins server:
● Jenkins Plugins
○ Build Pipeline Plugin
○ Copy Artifact Plugin
○ Parameterized Trigger Plugin
○ Git Parameter Plugin
○ Mask Passwords Plugin
● Docker 1.7+
● Docker Compose
Once you have set up the requisite plugins, we can create the first three jobs in our build pipeline: compile, package and integration test. These will serve as the starting point of our continuous integration and deployment system.
The first job in the sequence will check out the latest code from source control after each commit and ensure that it compiles; it will also run unit tests. To set up the first job for our sample project, select New Item > Freestyle Project. Check "This build is parameterized" and add a "Git Parameter" called GO_AUTH_VERSION as shown below. Next, configure the parameter to pick up any tags matching "v*" (e.g., v2.0) and default to develop (the branch) if no value is specified. This is quite useful for getting a list of version tags from Git and populating a selection menu for the job. If the job is automatically triggered and no value is specified, GO_AUTH_VERSION defaults to develop.
Next, in the Source Code Management section, add the repository URL, specify the branch as */develop, and set a poll interval, e.g., 5 minutes. With this, Jenkins will keep tracking our develop branch for changes and automatically trigger the first job in our CI (and CD) pipeline.
Now, in the Build section, select Add Build Step > Execute Shell and copy the docker run command from earlier in the chapter. This will get the latest code from GitHub and build it into the go-auth executable.
Following the build step we need to add two post-build steps: Archive the Artifacts, to archive the go-auth binary that we build in this job, and Trigger parameterized builds, to kick off the next job in the pipeline, as shown below. When adding the Trigger parameterized build action, make sure to add Current build parameters from Add Parameters. This makes all the parameters of the current job (e.g., GO_AUTH_VERSION) available to the next job. Note the name you use for the downstream job in the trigger parameterized build section, as we'll need it in the following step.
The log output from the build job should look something like the following. You can see that we use a Docker container to run the build. The build uses go fmt to fix any formatting inconsistencies in our code and also runs our unit tests. If any tests fail or if there are compilation failures, Jenkins will detect the failure. Furthermore, you should configure notifications via email or chat integrations (e.g., HipChat or Slack) to notify your team if the build fails so that it can be fixed quickly.
As before, specify the GitHub project in the Source Code Management section and add an Execute Shell build step:
echo ${GO_AUTH_VERSION}
cd go-auth
chmod +x go-auth
chmod +x run-go-auth.sh
chmod +x integration-test.sh
docker build -t usman/go-auth:${GO_AUTH_VERSION} .
In order to build the Docker container we also need the executable we built in the previous step, so we add a build step to copy artifacts from the upstream build. This ensures that the executable is available for the docker build command to package into a container. Note that we're using the GO_AUTH_VERSION variable to tag the image we're building. By default, for changes in the develop branch, this will always build usman/go-auth:develop and overwrite the existing image. In the next chapter, we'll revisit this pipeline for releasing new versions of our application.
As before, use the Trigger parameterized builds post-build action (with Current build parameters) to trigger the next job in the pipeline, which will run integration tests using the Docker container we just built and the Docker Compose template detailed earlier in the chapter.
echo ${GO_AUTH_VERSION}
cd go-auth
chmod +x integration-test.sh
./integration-test.sh
The contents of the script are available here. We use Docker Compose to bring up our environment and then use curl to send HTTP requests to the container we brought up. The logs for the job will be similar to those shown below. Compose launches a database container and links it to the goauth container. Once the database is connected, you should see a series of "Pass: ..." lines as the various tests are run and verified. After the tests are run, the compose template cleans up after itself by deleting the database and go-auth containers.
Creating goauth_Database_1...
Creating goauth_Goauth_1...
04:02:52.122 app.go:34 NewApplication DEBUG Connecting to database db:3306
04:02:53.131 app.go:37 NewApplication DEBUG Unable to connect to database: dial tcp 10.0.0.28:3306: connection refused. Retrying...
04:02:58.131 app.go:34 NewApplication DEBUG Connecting to database db:3306
04:02:58.132 app.go:37 NewApplication DEBUG Unable to connect to database: dial tcp 10.0.0.28:3306: connection refused. Retrying...
04:03:03.132 app.go:34 NewApplication DEBUG Connecting to database db:3306
04:03:03.133 common.go:21 Connect DEBUG Connected to DB db:3306/messenger
With the three jobs now set up, you can create a new Build Pipeline view by selecting the + tab in the Jenkins view and choosing the Build Pipeline view. In the configuration screen that pops up, select your compile/build job as the initial job and click OK. You should now see your CI pipeline take shape, giving a visual indication of how each commit progresses through your build and deployment pipeline.
When you make changes to the develop branch, you'll notice that the pipeline is automatically triggered by Jenkins. To trigger the pipeline manually, select your first (build) job and run it. Jenkins will ask you to select the value of the git parameter (GO_AUTH_VERSION); if you don't specify one, the default is used and the CI pipeline runs against the latest commit on the develop branch. You can also just click Run in the pipeline view; however, at the time of writing, there is an open bug in Jenkins which prevents it from starting the pipeline if the first job is a parameterized build.
Let's quickly review what we've done so far. We created a CI pipeline for our application with the following steps:
1. Use git-flow to add new features and merge them into develop
2. Track changes on the develop branch and build our application in a containerized environment
3. Package the application into a Docker image and run automated integration tests using Docker Compose
1.5 Summary
In this chapter we've seen how to leverage Docker to create a continuous integration pipeline for our project which is centrally managed, testable, and repeatable across machines and over time. We were able to isolate the environmental dependencies of the various components as needed. This forms the starting point of a longer Docker based build and deployment pipeline which we'll continue to build and document in the next chapter. The next step in our pipeline is to set up continuous deployment: we will show how to use Rancher to deploy an entire server environment to run our code, and cover best practices for setting up a long running testing environment and deployment pipeline for large scale projects.
We'll continue using the go-messenger project to demonstrate how to create a test environment, going through the steps below for creating our integration environment:
Once you have your environment set up, select the Integration environment from the drop-down in the top left corner of the screen. We can now create the application stack for the integration environment. Next, from the menu in the top right corner, select API & Keys and Add API Key. This will load a pop-up screen which allows you to create a named API key pair. We need the key in subsequent steps to use Rancher Compose to create our test environments. Create a key pair named JenkinsKey to run Rancher Compose from our Jenkins instance, and copy the key and secret for use later, as you will not be shown these values again. Note that API keys are specific to an environment, so you will have to create a new key for each environment.
Next, we define the Compose template (docker-compose.yml) for our application stack:
mysql-master:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: rootpass
    MYSQL_DATABASE: messenger
    MYSQL_USER: messenger
    MYSQL_PASSWORD: messenger
  expose:
    - "3306"
  stdin_open: true
  tty: true
auth-service:
  tty: true
  command:
    - --db-host
    - mysql-master
    - -p
    - '9000'
  image: usman/go-auth:${auth_version}
  links:
    - mysql-master:mysql-master
  stdin_open: true
auth-lb:
  ports:
    - '9000'
  expose:
    - 9090:9000
  tty: true
  image: rancher/load-balancer-service
  links:
    - auth-service:auth-service
  stdin_open: true
We are using Rancher Compose to launch the environment on a multi-host setup; this more closely mirrors production and also allows us to test integration with various services, e.g., Rancher, Docker Hub, etc. This is unlike our previous Docker Compose based environment, which was explicitly designed to be independent of external services and was launched on the CI server itself without pushing images to Docker Hub.
Since we are going to use Rancher Compose rather than Docker Compose to launch a multi-host test environment, we also need to define a Rancher Compose template. Create a file called rancher-compose.yml and add the following content. In this file we specify that we need two containers of the auth service, one container running the database, and another running the load balancer.
auth-service:
  scale: 2
mysql-master:
  scale: 1
auth-lb:
  scale: 1
Next we will add a health check to the auth-service to make sure that we detect when containers
are up and able to respond to requests. For this we will use the /health URI of the go-auth service.
The auth-service section of rancher-compose.yml should now look something like this:
auth-service:
  scale: 1
  health_check:
    port: 9000
    interval: 2000
    unhealthy_threshold: 3
    request_line: GET /health HTTP/1.0
    healthy_threshold: 2
    response_timeout: 2000
We are defining a health check on port 9000 of the service container which runs every 2 seconds (2000 milliseconds). The check makes an HTTP GET request to the /health URI; 3 consecutive failed checks mark a container as unhealthy, whereas 2 consecutive successes mark it as healthy.
To install rancher-compose, follow the instructions here. Once you have rancher-compose set up, you can use the create command shown below to set up your integration environment.
#replace rancher-compose with the latest version you downloaded from rancher UI
./rancher-compose --project-name messenger-int \
--url http://YOUR_RANCHER_SERVER:PORT/v1/ \
--access-key <API_KEY> \
--secret-key <SECRET_KEY> \
--verbose create
In the UI, you should now be able to see the stack and services for your project. Note that the create command only creates the stack and doesn't start the services. You can either start the services from the UI or use the rancher-compose start command to start all of them.
To make sure everything is working, head over to the public IP of the host running the auth-lb service and create a user using the command shown below. You should get a 200 OK; repeating the request should return a 409 error, indicating a conflict with an existing user in the database. At this point we have a basic integration environment for our application, intended to be a long running environment.
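The original command is not reproduced here; based on the registration requests used elsewhere in this guide, it was presumably similar to:
curl -i --silent -X PUT \
    -d userid=<TEST_USERNAME> \
    -d password=<TEST_PASS> \
    http://<AUTH_LB_HOST_IP>:9000/user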
Now that we have our Hosted Zone and IAM user set up, we can add the Route53 integration to our Rancher server. Detailed instructions can be found here. In short, browse to Applications > Catalog on your Rancher server and select Route 53 DNS. You will be asked to specify the Hosted Zone that you set up earlier, as well as the AWS access and secret keys for your Rancher IAM user with Route53 access. Once you enter the required information and click Create, you should see a new stack created in your environment with a service called route53.
This service listens for Rancher events and catches any load balancer instance launches and terminations. Using this information, it automatically creates DNS entries for all the hosts on which your load balancer containers are running. The DNS entries are of the form [Loadbalancer].[stack].[env].[domain], e.g., goauth.integration.testing.gomessenger.com. As containers are launched and taken down on your various Rancher compute nodes, the Route53 service keeps your DNS records consistent. This is essential for our integration test environments because, as we will see later, we need to relaunch the environment containers in order to push updates as part of continuous deployment. With Route53 DNS integration we do not have to worry about getting the latest hostnames to our clients and testers.
If you are using a self-signed certificate, note its fingerprint so that you can verify the certificate presented to you when making HTTPS requests. In the absence of a trusted certificate, manually matching fingerprints is the only way to ensure that there isn't a man-in-the-middle attack.
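If you need to generate a self-signed certificate for testing, a standard openssl invocation such as the following produces the key and certificate files referenced in the next paragraph (the exact command is not given in the original text):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=integration.gomessenger.com" \
    -keyout integration.gomessenger.com.key \
    -out integration.gomessenger.com.crt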
Now that you have the certificate and the private key file, we need to upload them into Rancher. We can upload certificates by clicking the Add Certificate button in the Certificates section of the Infrastructure tab in the Rancher UI. You need to specify a meaningful name for your certificate and, optionally, a description. Copy the contents of integration.gomessenger.com.key and integration.gomessenger.com.crt into the Private Key and Certificate fields respectively (or select Read from File and choose the respective files). Once you have completed the form, click Save and wait a few moments for the certificate to become active.
Once the certificate is active we can add the HTTPS endpoint to our environment. To do so, we modify our docker-compose file to include the SSL port configuration. We add a second port (9001) to the ports section to make it accessible outside the load balancer container, and we use the io.rancher.loadbalancer.ssl.ports label to specify that 9001 will be the public load balancer port with SSL termination. Furthermore, since we are terminating SSL at the load balancer, we can route requests to our actual service container using plain HTTP over the original port 9000. We specify this mapping from 9001 to 9000 using the io.rancher.loadbalancer.target.auth-service label.
auth-lb:
  ports:
    - '9000'
    - '9001'
  labels:
    io.rancher.loadbalancer.ssl.ports: '9001'
    io.rancher.loadbalancer.target.auth-service: 9000=9000,9001=9000
  tty: true
  image: rancher/load-balancer-service
  links:
    - auth-service:auth-service
  stdin_open: true
mysql-master:
  environment:
    ...
  ...
We also need to update the rancher-compose file to specify the SSL certificate we should use in
the load balancer service for SSL termination. Add the default_cert parameter with the name of
the certificate we uploaded earlier. After these changes you will need to delete and recreate your
stack as there is currently no way to add these properties to a deployed stack.
auth-lb:
  scale: 1
  default_cert: integration.gomessenger.com_selfsigned
  load_balancer_config:
    name: auth-lb config
mysql-master:
  scale: 1
auth-service:
  scale: 1
Now, to make sure everything is working, you can use the following curl command. When you try the same command with the https protocol specifier and port 9001, you should see a failure complaining about the use of an untrusted certificate. You can add the --insecure switch to turn off certificate verification and use HTTPS despite the self-signed certificate.
# HTTP request
curl -i --silent -X PUT \
    -d userid=<TEST_USERNAME> \
    -d password=<TEST_PASS> \
    http://integration.gomessenger.com:9000/user
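The HTTPS variant (not shown in the original text) would look like:
# HTTPS request (--insecure is needed because the certificate is self-signed)
curl -i --silent --insecure -X PUT \
    -d userid=<TEST_USERNAME> \
    -d password=<TEST_PASS> \
    https://integration.gomessenger.com:9001/user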
Since this is a continuation of the pipeline we built in the previous chapter, the job will have a similar configuration to the go-auth-integration-test job. The first setting you need is to make it a parameterized build and add the GO_AUTH_VERSION parameter.
To actually push the image, select the Add build step drop-down and then the Execute shell option. In the resulting text box, add the commands shown below, which log in to DockerHub and push the image we built earlier. We're pushing to the usman/go-auth repository; you will need to push to your own DockerHub repository.
As covered in the previous chapter, we're using the git-flow branching model, where all feature branches are merged into the develop branch. To continuously deploy changes to our integration environment, we need a simple mechanism to generate the latest image based off of develop. In our package job we tagged the Docker image using GO_AUTH_VERSION (e.g., docker build -t usman/go-auth:${GO_AUTH_VERSION} ...). By default the version will be develop; however, later in this chapter we'll create new releases of our application and use the CI/CD pipeline to build, package, test and deploy them to our integration environment. Note that with this scheme we're always overwriting the image for our develop branch (usman/go-auth:develop), which prevents us from referencing historical builds and doing rollbacks. One simple change you can make to the pipeline is to attach the Jenkins build number to the version itself, e.g., usman/go-auth:develop-14.
Note that you will need to specify your DockerHub username, password and email. You can either use a parameterized build to specify these for each run, or use the Jenkins Mask Passwords Plugin to define them securely once in the main Jenkins configuration and inject them into the build. Make sure to enable 'Mask passwords (and enable global passwords)' under Build Environment for your job.
echo ${GO_AUTH_VERSION}
docker login -u ${DOCKERHUB_USERNAME} -p ${DOCKERHUB_PASSWORD} -e ${DOCKERHUB_EMAIL}
docker push usman/go-auth:${GO_AUTH_VERSION}
Now we have to make sure that this job is triggered after our integration test job. To do that, we update the integration test job to trigger a parameterized build with current build parameters. This means that after each successful run of the integration test job, we push the tested image up to DockerHub.
Lastly, we need to trigger the deployment job once the image is successfully pushed to
DockerHub. Again, we can do that by adding a post-build action as we did for other jobs.
A simple approach would be to stop all services (auth service, load balancer and MySQL), pull the latest images and start all services again. This, however, would be less than ideal for long running environments where we only want to update the application. To update just the application, we first stop auth-service using the stop command of Rancher Compose (a sketch of these steps follows below). This stops all containers running for auth-service, which you can verify by opening the stack in the Rancher UI and checking that the status of the service is Inactive. Next, we tell Rancher to pull the image version we want to deploy. Note that the version we specify here will be substituted into our docker-compose file for the auth service (image: usman/go-auth:${auth_version}).
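A sketch of the three steps, reusing the connection flags from the create command earlier; whether your rancher-compose version exposes a separate pull command may vary, so treat this as illustrative:
./rancher-compose --project-name messenger-int \
    --url http://YOUR_RANCHER_SERVER:PORT/v1/ \
    --access-key <API_KEY> --secret-key <SECRET_KEY> \
    stop auth-service
auth_version=develop ./rancher-compose --project-name messenger-int \
    --url http://YOUR_RANCHER_SERVER:PORT/v1/ \
    --access-key <API_KEY> --secret-key <SECRET_KEY> \
    pull auth-service
auth_version=develop ./rancher-compose --project-name messenger-int \
    --url http://YOUR_RANCHER_SERVER:PORT/v1/ \
    --access-key <API_KEY> --secret-key <SECRET_KEY> \
    start auth-service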
Now that we have pulled the image we want, all that is needed is to start the application.
As of Rancher release version 0.44.0, the three steps listed above can be run by a single up
command using the --force-upgrade switch as follows:
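The command itself is not reproduced in the original text; based on the flags used earlier in this chapter, it presumably resembles:
auth_version=${GO_AUTH_VERSION} ./rancher-compose --project-name messenger-int \
    --url http://YOUR_RANCHER_SERVER:PORT/v1/ \
    --access-key <API_KEY> --secret-key <SECRET_KEY> \
    up -d --force-upgrade auth-service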
Now that we know how to run our update, let's create a Jenkins job in our pipeline to do so. As before, create a new freestyle project and name it deploy-integration. Like all the other jobs, this will be a parameterized build with GO_AUTH_VERSION as a string parameter. Next, we need to copy over artifacts from the upstream build-go-auth job.
Lastly, we need to add an Execute Shell build step with the rancher-compose up command we specified earlier. Note that you will also need to set up rancher-compose on the Jenkins host ahead of time and make it available to your build on the system path; we are setting up our job to reinstall compose every time for the sake of simplicity. You will need to specify the Rancher API key, Rancher API secret and your Rancher server URL as part of the execution script. As before, you may use the parameterized build option or the Mask Passwords plugin to avoid exposing your secret or having to enter it every time. The complete contents of the Execute Shell step look like the snippet shown below. Note that if you have multiple Rancher compute nodes, the load balancer containers may launch on different hosts, and hence your Route 53 record set may need to be updated.
cd deploy
With our two new Jenkins jobs, the pipeline we started in the previous chapter now looks like the image shown below. Every check-in to our sample application now gets compiled to make sure there are no syntax errors and that the automated tests pass. The change then gets packaged, tested with integration tests, and finally deployed for manual testing. These five steps (compile, package, integration test, publish and deploy) provide a good baseline template for any build pipeline and help predictably move code from development to testing and deployment stages. Having a continuous deployment pipeline ensures that all code is not only tested by automated systems but also made available to human testers quickly. It also serves as a model for production deployment automation and continually exercises the operations tooling and code used to deploy your application.
Once done, we can run the release finish command to merge the release branch into the master branch. This way master always reflects the latest released code, and each release is tagged so that we have a historical record of what went into it. Since we don't want any other changes to go in, let's finalize the release, as sketched below (using v1.1, the release version referenced later, as an example).
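git flow release finish v1.1
# push the merged branches and the release tag to the remote
git push origin master develop --tags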
If you're using GitHub to host your git repository, you should now have a new release. It is also a good idea to push images to DockerHub with a version matching the release name. To do so, let's trigger our CD pipeline by running the first job. If you recall, we set up the Git Parameter plugin for our CI pipeline to fetch all the tags matching our filter from Git. This normally defaults to develop; however, when we trigger the pipeline manually, we can choose from the git tags. For example, in the section below we have two releases of our application. Let's select one of them and kick off the integration and deployment pipeline.
This will go through the following steps and deploy our application with version 1.1 to our long
running integration environment all with a couple of clicks:
Behind the scenes, the Rancher agent fetches the new image on each host running an auth-service container. It then stops the old containers and launches new containers in batches. You can control the size of the batch using the --batch flag; additionally, you can specify a pause interval (--interval) between batch updates. A large enough interval allows you to verify that the new containers are behaving as expected and that, on the whole, the service is healthy. By default, old containers are terminated and new ones are launched in their place. Alternatively, you can tell Rancher to start the new containers before stopping the old ones by setting the start_first flag in your rancher-compose.yml:
auth-service:
  upgrade_strategy:
    start_first: true
If you are not happy with the update and want to roll back, you can do so with the rollback flag for the upgrade command. Alternatively, if you want to proceed with the update, tell Rancher to complete it by specifying the confirm-upgrade flag; both are sketched below.
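The flag names here assume rancher-compose's up command options; treat the exact invocations as illustrative:
# undo the upgrade and restore the old containers
./rancher-compose --project-name messenger-int \
    --url http://YOUR_RANCHER_SERVER:PORT/v1/ \
    --access-key <API_KEY> --secret-key <SECRET_KEY> \
    up -d --rollback auth-service
# or, once verified, confirm the upgrade (old containers are removed)
./rancher-compose --project-name messenger-int \
    --url http://YOUR_RANCHER_SERVER:PORT/v1/ \
    --access-key <API_KEY> --secret-key <SECRET_KEY> \
    up -d --confirm-upgrade auth-service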
You can also perform these updates using the Rancher UI, by selecting "Upgrade" from a service's menu (shown below).
In-place updates are quite simple to perform and don't require the additional investment of managing multiple stacks. There are, however, downsides to this approach for production environments. First, it is typically difficult to have fine-grained control over rolling updates; they tend to be unpredictable under failure scenarios. For example, dealing with partial failures and rolling back a rolling update can get quite messy: you have to know which nodes were deployed to, which failed to deploy and which are still running the previous revision. Second, you have to make sure all updates are not only backwards compatible but also forward compatible, because old and new versions of your application run concurrently in the same environment. Last, depending on the use case, in-place updates might not be practical, for example if legacy clients need to continue using the old environment while newer clients roll forward. In this case, separating client requests is much easier with some of the other approaches we cover here.
Next, we specify a port for the load balancer, configure SSL, and pick the load balancer of the active stack as the target service from the drop-down menu to create the external load balancer. Essentially, we are load balancing to a load balancer, which then routes traffic to the actual service nodes. With the external load balancer, you don't need to update the DNS records for each release; instead, you can simply update the external load balancer to point to the updated stack.
2.4 Summary
In this chapter we covered creating a continuous deployment pipeline which deploys our sample application to an integration environment. We also looked at integrating DNS and HTTPS support to create a more secure and usable environment for clients to integrate with. In subsequent chapters we'll look at running production environments. Deploying to production presents its own set of challenges, as we will be expected to deploy under load with little (ideally zero) downtime. Furthermore, production environments have to scale out to meet load while also scaling back to control cost. Lastly, we will take a more comprehensive look at DNS management in order to provide automatic failover and high availability. We'll also look at operations management of Docker environments in production, as well as different types of workloads, for example stateful connected services.
Conclusion
This document is necessarily a limited look at a few approaches for implementing a complete turnkey CI/CD pipeline using containers. We've tried to cover common use cases, provide detailed examples and share some of the best practices we've learned from years of working in DevOps at web services companies. We hope to follow up this e-book with a companion volume that looks in more depth at running services in production using containers. As always, we will be posting our latest work on the Rancher blog, and we welcome any feedback you may have about this paper at info@rancher.com.