CICD Jenkins
Continuous Deployment
Module # 4
Continuous Integration Fundamentals
• Continuous Integration involves a tool that monitors your version control system for changes.
• Whenever a change is detected, this tool automatically compiles and tests your application.
• If something goes wrong, the tool immediately notifies the developers so that they can fix the issue right away.
• CI can also help you keep tabs on the health of your code base, automatically monitoring code
quality and code coverage metrics, and helping keep technical debt down and maintenance costs
low.
• The publicly-visible code quality metrics can also encourage developers to take pride in the quality
of their code and strive to improve it.
• Combined with automated end-to-end acceptance tests, CI can also act as a communication tool,
publishing a clear picture of the current state of development efforts.
• And it can simplify and accelerate delivery by helping you automate the deployment process, letting
you deploy the latest version of your application either automatically or as a one-click process.
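The monitor–build–test–notify loop described above can be sketched in a few lines. The helper functions here are hypothetical stand-ins for a real VCS poller, build tool, test runner, and notifier; none of them are an actual Jenkins API:

```python
# Minimal sketch of one CI cycle. The injected helpers are hypothetical
# stand-ins for the version control poller, compiler, test suite, and notifier.

def ci_cycle(poll_for_changes, run_build, run_tests, notify):
    """Run one CI cycle and return a status string."""
    change = poll_for_changes()          # e.g. a new commit hash, or None
    if change is None:
        return "idle"
    if not run_build(change):            # compile the application
        notify(f"build failed for {change}")
        return "build_failed"
    if not run_tests(change):            # run the automated test suite
        notify(f"tests failed for {change}")
        return "tests_failed"
    return "success"

# Usage with stubbed helpers: a commit arrives, the build passes, a test fails.
alerts = []
status = ci_cycle(
    poll_for_changes=lambda: "abc123",
    run_build=lambda c: True,
    run_tests=lambda c: False,
    notify=alerts.append,
)
print(status, alerts)   # tests_failed ['tests failed for abc123']
```

In a real CI server this cycle runs continuously, triggered by polling or by webhooks from the version control system.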
Continuous Integration Fundamentals
• Developers commit changes to the source code repository; the Jenkins CI server checks the repository and pulls the newly changed code at regular intervals.
• The build server builds the pulled code into an executable file. Feedback is
sent to the developers in case of failure of the build.
• Jenkins deploys the build application on the test server. The developers are
alerted if it fails.
• If the tests are successful and the code is error-free, the tested application is
deployed on the production server.
• In some cases, different projects require different build environments, and a single Jenkins server cannot run every build simultaneously. To handle this, the Jenkins Master distributes the workload across multiple machines, each called a Slave (now usually called an agent), allowing different builds to run in parallel on different environments.
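The master-to-agent workload distribution can be modeled very roughly as round-robin assignment. This is only a sketch: real Jenkins also matches node labels, executor counts, and current load.

```python
from itertools import cycle

def assign_builds(builds, agents):
    """Round-robin assignment of builds to agents: a simplified model of a
    Jenkins master distributing work across its build agents."""
    rotation = cycle(agents)               # endlessly repeat the agent list
    return {build: next(rotation) for build in builds}

# Three queued builds spread over two agents:
plan = assign_builds(["linux-build", "windows-build", "docs-build"],
                     ["agent-1", "agent-2"])
print(plan)
# {'linux-build': 'agent-1', 'windows-build': 'agent-2', 'docs-build': 'agent-1'}
```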
Continuous delivery (CD)
• Source. The first step in the pipeline is where developers write and commit the smallest distributable units of code. The code gets tested by reviews, unit tests and static analysis. Tests on
small amounts of code can be more efficient than end-to-end tests.
• Build. The source code is collected from the repository, linked to libraries, modules and dependencies, and compiled into an executable file. Tools log the process and denote errors to fix.
Some builds may employ scripts to translate the executable file into a packaged or deployable execution environment, such as a VM or a Docker container.
• Test. At this point, the code should be executable. It is tested at the subsystem level, including functional, performance and security tests. These ensure that the developed code meets the
quality standards of an end user and the specifications of the project. Integrated subsystems require UI and network tests, and may still need other functional, performance and security
reviews. Automate these tests wherever possible. At this phase in the pipeline, the application components can be kept separate to form microservices or integrated together as one
complete system.
• Deploy. Finally, the software should be ready to be deployed into the production environment, but first, the code is manually checked one last time. Blue/green deployment is a common
method of deployment for continuous delivery, where two environments are configured identically -- one environment serves end users, while the other is ready for new code to deploy
and undergo testing, such as a load test to measure performance at a high capacity. When the released code is deemed ready, the environments switch traffic sources, and the new code
becomes available to users. Any issues discovered after the switch are rerouted back to the previous environment where the software team addresses the problem.
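The blue/green cutover and rollback described above can be sketched as a toy traffic router. Environment names and version strings are illustrative:

```python
class BlueGreenRouter:
    """Toy model of blue/green deployment: two identical environments,
    traffic points at one, new code stages on the other, cutover is a flip."""

    def __init__(self):
        self.live, self.idle = "blue", "green"
        self.versions = {"blue": "v1.0", "green": "v1.0"}

    def stage(self, version):
        # Deploy and test the new version on the idle environment.
        self.versions[self.idle] = version

    def cutover(self):
        # Switch traffic to the newly staged environment.
        self.live, self.idle = self.idle, self.live

    def rollback(self):
        # Rolling back is just switching traffic to the previous environment.
        self.cutover()

router = BlueGreenRouter()
router.stage("v2.0")
router.cutover()
print(router.live, router.versions[router.live])   # green v2.0
router.rollback()
print(router.live, router.versions[router.live])   # blue v1.0
```

The instant-rollback property falls out of the design: the previous environment is left untouched, so recovery is just another pointer flip.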
CI/CD pipeline
Benefits of continuous delivery
• Simplicity. A development team spends less time preparing a code base for release and doesn't bundle
multiple individual changes together for a large release. Instead, developers continuously update and release
code in small increments.
• Faster debugging. Small releases reveal bugs in new code quickly. For example, if a bug is found in code
deployed into the production environment, developers can isolate the offending code to one of the last
incremental updates, fix the problem, test the code, redeploy it and receive new feedback.
• Faster development cycles. Continuous delivery enables faster application iterations, as it enables many
developers to collaborate on and create code at different times without harming other projects. If an iterative
process becomes unwieldy due to increasing project complexity, continuous delivery offers developers a way
to get back to smaller, more frequent releases that are more reliable, predictable and manageable.
Continuous Deployment
• Continuous Deployment (CD) is an aspect of the Continuous Delivery Pipeline that automates the migration of new functionality
from a staging environment to production, where it is made available for release.
• Continuous deployment (CD) is a software development approach in which code changes to an application are deployed into the
production environment automatically.
• Every time the codebase changes, the change runs through a set of tests before it goes live, and an automated quality-check gate validates the changes.
• Because the process is automated, changes in the production environment are visible to developers in real time, allowing the team(s) to inspect and adjust quickly.
• A feedback loop is included in the Continuous Deployment pipeline, which necessitates continuous monitoring of the process in
order to spot issues and take corrective action.
• The Business gains a competitive advantage by automating the entire process (from code contribution to go live) because
their Time to Market (TTM) is always shorter.
Process of Continuous Deployment
• The continuous deployment pipeline is made up of a set of processes that take the code from development to
production.
• All of the steps are automated, and one step's output feeds into the next.
• Continuous monitoring ensures that the code is bug-free and meets the pre-approved quality check gates.
• Verify – the practices needed to ensure solution changes operate in production as intended before releasing
them to customers
• Monitor – the practices to monitor and report on any issues that may arise in production
• Respond – the practices to address any problems rapidly which may occur during deployment
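The idea that "one step's output feeds into the next" can be sketched as a chain of stage functions. The stage names and payloads below are illustrative, not a real pipeline definition:

```python
def run_pipeline(stages, payload):
    """Run stages in order; each stage's output becomes the next stage's
    input. A stage raising an exception would halt the pipeline, acting
    as the quality-check gate."""
    for name, stage in stages:
        payload = stage(payload)
    return payload

# Illustrative three-stage pipeline: build an artifact, test it, deploy it.
stages = [
    ("build",  lambda src: {"artifact": f"{src}.bin"}),
    ("test",   lambda a: {**a, "tests_passed": True}),
    ("deploy", lambda a: {**a, "deployed": a["tests_passed"]}),
]
result = run_pipeline(stages, "app-src")
print(result)
# {'artifact': 'app-src.bin', 'tests_passed': True, 'deployed': True}
```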
Step 1: Deploy
• When developers release a new piece of code to the production environment, the deployment process is triggered
automatically, and we refer to this as a "build."
• In continuous deployment, the entire process from code commit to final deployment is ideally a one-click
procedure.
Application Deployment Strategies Considerations:
• Minimum application downtime, if any
• Manage and resolve incidents with minimal impact on users
• Address failed deployments in a reliable, effective way
• Minimize people and process errors to achieve predictable, repeatable deployments
Deployment Patterns
• Recreate Deployment Patterns
• Rolling update Deployment Patterns
• Blue/green Deployment Patterns
• Canary Deployment Patterns
• A/B Deployment Patterns
• Shadow Deployment Patterns
Recreate Deployment Pattern
• Key Benefits
• Simplicity
• Avoid backward compatibility challenges
• Considerations
• Downtime while the old version is torn down and the new one starts
• Suitable only if the application is not mission-critical
Rolling update deployment Pattern
• Key Benefits
• No downtime
• Reduced deployment risk
• Considerations
• Slow rollback
• Backward Compatibility
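A rolling update can be sketched as upgrading the fleet one batch at a time, so part of the fleet keeps serving traffic throughout. Instance names and the batch size are illustrative:

```python
def rolling_update(instances, new_version, batch_size):
    """Upgrade instances a batch at a time, yielding the fleet state after
    each batch. At every point some instances are still serving traffic,
    which is why there is no downtime but also why rollback is slow."""
    fleet = dict.fromkeys(instances, "v1")
    for i in range(0, len(instances), batch_size):
        for inst in instances[i:i + batch_size]:
            fleet[inst] = new_version
        yield dict(fleet)   # snapshot after this batch

states = list(rolling_update(["a", "b", "c", "d"], "v2", batch_size=2))
print(states[0])   # {'a': 'v2', 'b': 'v2', 'c': 'v1', 'd': 'v1'}
print(states[-1])  # {'a': 'v2', 'b': 'v2', 'c': 'v2', 'd': 'v2'}
```

The intermediate states are exactly why backward compatibility matters: v1 and v2 instances serve side by side until the rollout completes.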
Blue/green deployment pattern
• Key Benefits
• No downtime
• Instant rollback
• Environment separation
• Considerations
• Cost and operational overhead
• Backward Compatibility
• Cutover
Canary Deployment Pattern
• Key Benefits
• Zero downtime
• Ability to test live traffic
• Fast rollback
• Considerations
• Slow rollout
• Observability
• Backward compatibility
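Canary routing is often a deterministic, hash-based traffic split. The sketch below assumes a simple 100-bucket scheme; it is not any particular tool's implementation:

```python
import hashlib

def route_request(user_id, canary_percent):
    """Sticky canary routing (a sketch): hash the user id into 100 buckets
    and send the first canary_percent buckets to the canary release."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# The same user always lands on the same side, so their experience is stable.
sticky = route_request("user-42", 10) == route_request("user-42", 10)

# Roughly canary_percent of a large population hits the canary.
hits = sum(route_request(f"user-{i}", 10) == "canary" for i in range(10_000))
print(sticky, round(hits / 10_000, 2))
```

Raising `canary_percent` in small steps is what makes the rollout slow but safe, and good observability on the canary slice is what makes the test of live traffic meaningful.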
A/B Deployment Pattern
• Key Benefits
• Measure the effect of a change on real user behavior
• Target a specific audience and monitor the results
• Considerations
• Complex setup
• Validity of result
• Observability
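A/B deployment also splits users, but its goal is measurement. This sketch assumes a 50/50 hash split and a toy event log; the helpers are hypothetical, not a real experimentation platform:

```python
import zlib
from statistics import mean

def assign_variant(user_id):
    """Sticky 50/50 split by hashing the user id. A real A/B platform would
    also handle exposure logging and statistical significance testing."""
    return "A" if zlib.crc32(user_id.encode()) % 2 == 0 else "B"

def conversion_rate(events, variant):
    """Fraction of events in the given variant that converted."""
    outcomes = [e["converted"] for e in events if e["variant"] == variant]
    return mean(outcomes) if outcomes else 0.0

# Illustrative event log: which variant each user saw, and the outcome.
events = [
    {"variant": "A", "converted": 1},
    {"variant": "A", "converted": 0},
    {"variant": "B", "converted": 1},
    {"variant": "B", "converted": 1},
]
print(conversion_rate(events, "A"), conversion_rate(events, "B"))
```

The "validity of result" consideration shows up here: with only a handful of events per variant, the measured difference is noise, which is why real A/B tests need large samples and significance testing.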
Summary of Deployment Patterns
Step 2: Verification
• In Continuous Deployment, it's critical to test the build/application before releasing it.
• Last-minute verifications frequently reveal flaws that were overlooked earlier in the process and must be
addressed right away to have the greatest impact.
• Various tests are carried out to ensure that the application performs as expected and that the production
environment is not contaminated.
• The verification process involves five practices:
• Test Automation – The ability to automate testing on a regular basis
• Production Testing – The ability to test solutions in production while they are still 'dark' (not yet visible to end users)
• Test Data Management – In order to ensure consistency in automatic testing, test data should be kept in
version control
• Federated Monitoring – In the system, unified monitoring across applications gives a comprehensive view
of problems and performance
• User-behavior Tracking – Tracking the user's experience to collect qualitative and quantitative data and
analyze the interactions between the user and the application
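The test-automation gate described above can be sketched as running every automated check and blocking the deployment on any failure. The check names are illustrative:

```python
def verification_gate(checks):
    """Run every automated check and collect failures; allow deployment
    only if all checks pass. `checks` is a list of (name, callable) pairs."""
    failures = [name for name, check in checks if not check()]
    return {"deploy": not failures, "failures": failures}

# Illustrative gate: two checks pass, the performance check fails.
result = verification_gate([
    ("unit_tests",        lambda: True),
    ("security_scan",     lambda: True),
    ("performance_check", lambda: False),
])
print(result)   # {'deploy': False, 'failures': ['performance_check']}
```

Reporting which checks failed, rather than a bare yes/no, is what gives developers the fast feedback the pipeline depends on.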
Step 4: Response and Recovery
• For business agility, the ability to respond is crucial.
• The Mean Time to Resolve (MTTR) is a metric that measures your team's preparedness to respond to and resolve issues.
• In agile software development, MTTR and MTTF (Mean Time to Failure) are essential, decision-influencing indicators. When an issue or incident is detected, immediate action should be taken; otherwise, business operations may be harmed or, worse, the brand value may be jeopardized. The problem's root cause must be identified and resolved.
• Five practices that can help your team be more responsive:
• Proactive Detection – A practice of proactively introducing flaws into a solution in order to detect prospective problems and scenarios before they occur in production
• Collaboration between Teams – A collaborative mentality across the Value Stream to identify and resolve issues as they emerge
• Session Replay – The capacity to replay end-user sessions in order to investigate occurrences and pinpoint issues
• Rollback and Fix Forward – The ability to easily rewind a solution to a previous environment, as well as to swiftly correct a problem
through the pipeline without having to rollback
• Immutable infrastructure – This concept suggests that teams never make uncontrolled changes to the production environment, but
rather manage infrastructure modifications through the Continuous Delivery pipeline
Benefits of Continuous Deployment
• Continuous Deployment and DevOps can result in lightning-fast releases, but speed isn't the only benefit businesses find in CD adoption. Here are a few of the benefits of continuous deployment:
• Quick New Releases
The most important feature of continuous deployment is that it allows development teams to deliver new
releases into production as soon as feasible.
• Secure
Releases are less risky because they are thoroughly tested and all bugs are addressed prior to release.
• Rapid Feedback Loop with Customers
A shorter feedback loop with the consumer equals more frequent updates to your application. Developer
teams can examine the impact of a new modification on user behaviour or engagement using cutting-edge
monitoring tools and make adjustments as needed.
• Continuous Improvements
Customers can see continual improvements due to Continuous Deployment.
• Increases Workforce Productivity
Continuous deployment automates the majority of software development tasks. The developer does not need
to be concerned with the integration, deployment, or testing of the code.
Challenges in Continuous Deployment
• Resistance
Employees may find it difficult to transition from traditional methods to Continuous Deployment. Developers are expected to enhance their knowledge of operations tasks, and Ops are expected to keep up with the pace that the Devs and the Business demand. This internal resistance becomes a problem for the organization.
• Structure
Some may believe that having more developers and larger teams would speed up software development.
Using larger teams, on the other hand, can stifle the transformation. Deployment cycles will be delayed by
larger teams with weak communication channels. This could cause issues with coordination, creativity, and
version control.
• Tools
Continuous Deployment is not a new process, but its tooling landscape is large and constantly changing. Knowing too many tools superficially, or not understanding any of them, can both be problematic. For a successful implementation, the teams must have sufficient knowledge of the
tools, their setups, and combinations. Finding and implementing the proper tool with the exact configurations
and combinations could be tricky.
• Architecture
Monolithic architecture is used by some organizations, which is a major hurdle to achieving Continuous
Deployment. It will be difficult to work on a feature without affecting others because the code and database of
monolithic systems are often intertwined. It's not always easy to recognize the impact until it's too late. It will
be difficult to change the architecture.
Tools for Continuous Deployment
• DeployBot – Without the requirement for a full Continuous Integration system, it makes deployment
simple, fast, and easy.
• GitLab – It's made for Continuous Deployment, and it maintains projects using a user-friendly
interface.
• AWS CodeDeploy – A completely managed software deployment service for computing services
like AWS Lambda, Amazon EC2, AWS Fargate, and servers.
• Octopus Deploy – It's a tool for managing and automating code releases and deployments.
• Bamboo – A tool that assists in integrating and configuring automated builds, testing, and releases
into a single DevOps pipeline.
• Jenkins – It's an open-source automation server that lets developers build, test, and deploy
applications with confidence.
• TeamCity – This tool supports VE and runs on all operating systems, and it works with all known
testing frameworks and code QA tools.
Docker Overview
• Docker is a software containerization platform: you can build your applications, package them along with their dependencies into containers, and then easily ship these containers to run on other machines.
• Docker is an open platform for developing, shipping, and
running applications.
• Docker enables you to separate your applications from
your infrastructure so you can deliver software quickly.
• With Docker, you can manage your infrastructure in the
same ways you manage your applications.
• Sysadmins need not worry about infrastructure as Docker
can easily scale up and scale down the number of systems.
• Docker comes into play at the deployment stage of the
software development cycle.
Docker Overview
• For example:
• Let's consider a Linux-based application written in both Ruby and Python. This application requires specific versions of Linux, Ruby, and Python.
• To avoid version conflicts on the user's end, a Linux Docker container can be created with the required versions of Ruby and Python installed along with the application.
• End users can then run the application easily via this container, without worrying about dependencies or version conflicts.
What is Virtualization?
• Virtualization is the technique of importing a Guest operating system on top
of a Host operating system.
Components of Docker
2. Docker Images–
• Docker images are read-only templates used to build Docker containers.
• The foundation of every image is a base image, e.g. ubuntu:14.04 LTS or fedora:20.
• Base images can also be created from scratch; required applications can then be added by modifying the base image, and committing those modifications produces a new image.
3. Docker File–
• Dockerfile is a text file that contains a series of instructions on how to build your Docker image.
• This image contains all the project code and its dependencies.
• The same Docker image can be used to spin up 'n' number of containers, each of which can layer its own modifications on top of the underlying image.
• The final image can be uploaded to Docker Hub and shared among various collaborators for testing and deployment.
• The set of commands that you need to use in your Docker File is FROM, CMD, ENTRYPOINT, VOLUME, ENV, and many more.
Components of Docker
4. Docker Registries–
• We can store the images in either public/private repositories so that multiple users can collaborate in building the application.
• Docker Hub is Docker’s cloud repository. Docker Hub is called a public registry where everyone can pull available images and push
their images without creating an image from scratch.
5. Docker Containers–
• Containers contain the whole kit required for an application, so the application can be run in an isolated way.
• For example, suppose there is an image of Ubuntu OS with an NGINX server. When this image is run with the docker run command, a container is created with the NGINX server running on Ubuntu OS.
Anatomy of a Docker Image
• A Docker image is made up of a collection of files that bundle together all the essentials –
such as installations, application code, and dependencies – required to configure a fully
operational container environment.
Container Registries
• Docker Hub: Docker’s own, official image resource where you can access more than 100,000 container images shared by software vendors, open-source projects, and Docker’s community of users. You can also use the service to host and manage your own private images.
• Third-party registry services: Fully managed offerings that serve as a central point of access to your own container images, providing a way to
store, manage, and secure them without the operational headache of running your own on-premises registry. Examples of third-party registry
offerings that support Docker images include Red Hat Quay, Amazon ECR, Azure Container Registry, Google Container Registry, and the JFrog
Container Registry.
• Self-hosted registries: A registry model favored by organizations that prefer to host container images on their own on-premises infrastructure –
typically due to security, compliance concerns or lower latency requirements. To run your own self-hosted registry, you’ll need to deploy a
registry server. Alternatively, you can set up your own private, remote, and virtual Docker registry.
Container Repositories
• Container repositories are the specific physical locations where your Docker images are actually
stored, whereby each repository comprises a collection of related images with the same name.
• Each of the images within a repository is referenced individually by a different tag and represents a
different version of fundamentally the same container deployment.
• For example, on Docker Hub, mysql is the name of the repository that contains different versions
of the Docker image for the popular, open-source DBMS, MySQL.
How to Create a Docker Image
• Interactive Method
• Advantages: Quickest and simplest way to create Docker images. Ideal for testing, troubleshooting, determining dependencies, and validating processes.
• Disadvantages: Difficult lifecycle management, requiring error-prone manual reconfiguration of live interactive processes. It is also easier to end up with unoptimized images.
• Use the following docker run command to start an interactive shell session with a container launched from the image specified by image_name:tag_name:
docker run -it image_name:tag_name /bin/bash
• If you omit the tag name, then Docker automatically pulls the most recent image version, which is identified by the latest tag. If Docker cannot find the image locally, it will pull what it needs to build the container from the appropriate repository on Docker Hub.
• In our example, we’ll launch a container environment based on the latest version of Ubuntu:
docker run -it ubuntu /bin/bash
• Dockerfile Method
• Advantages: Clean, compact and repeatable recipe-based images. Easier lifecycle management and easier integration into continuous integration (CI) and continuous delivery (CD) processes. Clear, self-documented record of steps taken to assemble the image.
• Disadvantages: More difficult for beginners and more time consuming to create from scratch.
• The Dockerfile approach is the method of choice for real-world, enterprise-grade container deployments. It’s a more systematic,
flexible, and efficient way to build Docker images and the key to compact, reliable, and secure container environments.
• In short, the Dockerfile method is a three-step process: create the Dockerfile, add the commands needed to assemble the image, and run the docker build command to produce the image.
• The following table shows you those Dockerfile statements you’re most likely to use:
• WORKDIR – Sets the working directory for any commands that follow in the Dockerfile.
• RUN – Installs any applications and packages required for your container.
• ADD – As COPY, but also able to handle remote URLs and unpack compressed files.
• ENTRYPOINT – The command that will always be executed when the container starts. If not specified, the default is /bin/sh -c.
• CMD – Arguments passed to the entrypoint. If ENTRYPOINT is not set (it defaults to /bin/sh -c), the CMD is the command the container executes.
• EXPOSE – Defines the port through which to access your container application.
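To make these statements concrete, here is an illustrative Dockerfile for a hypothetical Python application. The base image, port, and file names are assumptions for the example, not part of the source:

```dockerfile
# Illustrative Dockerfile for a hypothetical Python application.
# Base image to build on
FROM python:3.11-slim
# Working directory for all following commands
WORKDIR /app
# Copy the dependency list and install packages first, so this layer is cached
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code into the image
COPY . .
# Port the application listens on (illustrative)
EXPOSE 8000
# Process that always runs when the container starts
ENTRYPOINT ["python"]
# Default argument passed to the entrypoint
CMD ["app.py"]
```

Building and running this image would look like `docker build -t myapp:1.0 .` followed by `docker run -p 8000:8000 myapp:1.0` (image name and port mapping are illustrative).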
Advantages of Docker
• Speed – Docker containers are very fast compared to virtual machines. Because containers are tiny and lightweight, they take little time to build, and development, testing, and deployment can all be done faster. Once built, containers can be pushed to testing and from there on to the production environment.
• Portability – The applications that are built inside docker containers are extremely portable. These portable applications can easily
be moved anywhere as a single element and their performance also remains the same.
• Scalability – Docker can be deployed on several physical servers, data servers, and cloud platforms, and it runs on any Linux machine. Containers can quickly be moved from a cloud environment to a local host and from there back to the cloud again.
• Density – Docker uses available resources more efficiently because it does not use a hypervisor, so more containers can run on a single host than virtual machines. Docker containers achieve higher performance because of their high density and lack of resource overhead.