DEVOPS Training
Gopal
Developers were responsible for building applications, and operations had the
responsibility of implementing those solutions.
There was no formal method of communication between the two teams other
than when problems arose during deployment.
Around 2007, a movement started in Europe that was based upon the idea
that there needed to be a direct connection between developers and
operations, not as a specific position, but as a meld of the two into a
connected process flow.
“The emerging professional movement that advocates a collaborative working
relationship between Development and IT Operations, resulting in the fast
flow of planned work (i.e., high deploy rates), while simultaneously
increasing the reliability, stability, resilience and security of the production
environment. Either way, these are people who have pushed beyond their
defined areas of competence and who have a more holistic view
of their technical environments.”
Chef
Jenkins
Git
Nexus
Docker
Puppet
Nagios
Vagrant, etc.
Continuous integration (CI) is a software engineering practice in which
isolated changes are immediately tested and reported on when they are
added to a larger code base. The goal of CI is to provide rapid feedback so
that if a defect is introduced into the code base, it can be identified and
corrected as soon as possible.
Time frames are crucial. Integration should be divided into three steps:
commit new functionality and build the application
run unit tests
run Integration/System tests
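The three steps above can be sketched as a single script; the use of Maven here is an assumption for illustration (any build tool fits the same shape):

```shell
#!/bin/sh
# Minimal CI sketch: stop at the first failing step so feedback is immediate
set -e

# Step 1: fetch the newly committed code and build the application
git pull --rebase
mvn -q compile

# Step 2: run unit tests
mvn -q test

# Step 3: run integration/system tests (bound to Maven's verify phase)
mvn -q verify
```

A CI server runs a script like this on every commit, so a broken step is reported within minutes of the change.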
The relevant terms here are “Continuous Integration” and “Continuous
Delivery”, often used together and abbreviated as CI/CD. Continuous
Integration means that you run your “integration tests” at every code
change, while Continuous Delivery means that every change that passes your
tests is always ready to be deployed; Continuous Deployment goes one step
further and deploys every passing change automatically.
Success Metrics. Set up dashboards and reports, and pinpoint the metrics
that will enable you to track the success of your implementation.
Virtualization means to create a virtual version of a device or
resource, such as a server, storage device, network, or even
an operating system, where the framework divides the resource into
one or more execution environments.
Every virtual machine is a fully functioning virtual computer, where you can
install a guest operating system of your choice, with network configuration,
and a full suite of PC software. Once you install the software and reboot your
computer, you will begin your quest of transforming your IT infrastructure
into a virtual infrastructure.
A boot image is a type of disk image (a computer file containing the
complete contents and structure of a computer storage medium).
When it is transferred onto a boot device, it allows the associated
hardware to boot.
Cloud Computing is Internet-based computing, whereby shared resources,
software, and information are provided to computers and other devices
on demand, like a public utility.
Azure
How to create Data Bags using Knife
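A minimal knife workflow for data bags could look like this; the bag name `users` and the item file are made-up examples:

```shell
# Create a JSON item for the data bag (hypothetical user record)
cat > alice.json <<'EOF'
{ "id": "alice", "shell": "/bin/bash" }
EOF

# Create the data bag on the Chef server and load the item into it
knife data bag create users
knife data bag from file users alice.json

# Check what was uploaded
knife data bag show users alice
```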
https://docs.chef.io/foodcritic.html
Test Kitchen is a test harness tool to execute
your configured code on one or more
platforms in isolation. In the context
of Chef, Test Kitchen helps you run your
cookbooks in the various deployment
environments you are targeting.
https://docs.chef.io/kitchen.html
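A typical Test Kitchen cycle, run from inside a cookbook directory, might look like this (the instances come from the cookbook's .kitchen.yml):

```shell
kitchen list      # show the suite/platform instances defined in .kitchen.yml
kitchen converge  # create the instances and apply the cookbook to them
kitchen verify    # run the tests against the converged instances
kitchen destroy   # tear the instances down again
kitchen test      # or: do all of the above in one shot
```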
Chapter 4
DEVOPS TOOLS: VAGRANT
◦ What is Vagrant
Vagrant - http://www.vagrantup.com/downloads
Virtual Box -
https://www.virtualbox.org/wiki/Downloads
Once the downloads complete, install both
applications on your workstation.
vagrant ssh-config
Use Putty to log in to the CentOS VM.
The default credentials for login to VM are
Login id – vagrant
Password - vagrant
Please go to the URL below to get the required
images.
https://atlas.hashicorp.com/boxes/search
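Once Vagrant and VirtualBox are installed, a first VM can be brought up roughly as follows; the box name centos/7 is just an example:

```shell
# Create a Vagrantfile for a CentOS box (box name is an example)
vagrant init centos/7

# Download the box if needed, then create and boot the VM
vagrant up

# Log in over SSH; on Windows, feed the output of
# "vagrant ssh-config" into Putty instead
vagrant ssh

# Stop or delete the VM when finished
vagrant halt
vagrant destroy
```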
http://sourceforge.net/projects/kdiff3/files/kdiff3
/0.9.96/
Execute the commands below once you have installed the kdiff3 tool:
git config --global merge.tool kdiff3
git config --global mergetool.kdiff3.cmd '"C:\\Program Files
(x86)\\KDiff3\\kdiff3" $BASE $LOCAL $REMOTE -o $MERGED'
PFA Document for GIT Installation.
There are three levels of git config: local (project), global, and
system.
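The three levels can be exercised directly with `git config`; the values below are placeholders, and this sketch works in a throwaway repository:

```shell
# Work in a throwaway repository for demonstration
git init -q demo && cd demo

# Local (project) level: stored in .git/config of this repository
git config --local user.email "dev@example.com"

# Global (user) level: stored in ~/.gitconfig
git config --global core.editor vim

# System level: stored in /etc/gitconfig (usually needs root), e.g.:
#   sudo git config --system core.autocrlf input

# Show every effective setting together with the file it came from
git config --list --show-origin
```

More specific levels override broader ones: local beats global, which beats system.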
Version          Required Java
2.0 – 2.5        6 or 7
2.7.x – 2.9.x    7u45+ (8+ may work but is not thoroughly tested)
2.10.x+          7u45+, 8u25+
PFA Document for Installation
Maven is a build automation tool used primarily
for Java projects. The word maven means
'accumulator of knowledge' in
Yiddish. Maven addresses two aspects of building
software: First, it describes how software is built,
and second, it describes its dependencies.
Maven checks for compile-time and run-time errors.
It downloads all of a project's dependency jars from a
repository and builds the project; if there is any
compilation error in the project, the build fails.
There are several things about Maven that are very nice.
Maven will (after you declare which ones you are using) download all
the libraries that you use, and the libraries that they use, for
you automatically. This is very nice and makes dealing with lots of
libraries ridiculously easy. This lets you avoid "dependency hell".
Maven also has lots of nice plug-ins that you can install that will
handle many routine tasks, from generating Java classes from an XSD
schema using JAXB to measuring test coverage with Cobertura. Just
add them to your pom.xml and they will integrate with everything
else you want to do.
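In practice the Maven cycle looks like this from the command line; the groupId/artifactId values are invented for the example:

```shell
# Generate a minimal Java project from an archetype (names are examples)
mvn archetype:generate -DgroupId=com.example -DartifactId=demo \
    -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
cd demo

# Download dependencies, compile, run unit tests, and package the jar;
# the build fails on any compilation or test error
mvn clean package

# Show the resolved dependency tree declared in pom.xml
mvn dependency:tree
```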
Chapter 6
CONTINUOUS INTEGRATION
Jenkins is an open-source continuous
integration software tool written in
the Java programming language for testing
and reporting on isolated changes in a larger
code base in real time.
Building/testing software projects
continuously. In a nutshell, Jenkins provides
an easy-to-use continuous integration
system, making it easier for developers to
integrate changes to the project and for
users to obtain a fresh build. The automated,
continuous build increases productivity.
Monitoring executions of externally run jobs,
such as cron jobs and procmail jobs, even
those that are run on a remote machine. For
example, with cron, all you receive is regular
e-mails that capture the output, and it is up
to you to look at them diligently and notice
when something broke. Jenkins keeps those
outputs and makes it easy for you to notice
when something is wrong.
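As a rough sketch, Jenkins can be launched straight from its WAR file, and builds can later be triggered from its CLI; the port and job name here are assumptions:

```shell
# Run Jenkins directly from the downloaded WAR (port is an example)
java -jar jenkins.war --httpPort=8080

# Once a job exists, trigger it from the command line;
# "demo-build" is a hypothetical job name
java -jar jenkins-cli.jar -s http://localhost:8080/ build demo-build
```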
PFA doc for it.
Chapter 7
DEVOPS: MONITORING Tools
PFA Document for Nagios
PFA Document for Zenoss
Chapter 8
CONFIGURATION MANAGEMENT
Configuration management (CM) is a systems
engineering process for establishing and maintaining
consistency of a product's performance, functional and
physical attributes with its requirements, design and
operational information throughout its life.[1][2] The CM
process is widely used by military engineering organizations
to manage complex systems, such as weapon systems,
vehicles, and information systems. Outside the military, the
CM process is also used with IT service management as
defined by ITIL and ISO/IEC 20000.
Configuration control is the activity of managing the
product and related documents throughout the lifecycle
of the product. An effective configuration
control system ensures that:
The latest approved version of the product
and its components is used at all times.
No change is made to the product baselines
without authorization.
A clear audit trail of all proposed, approved,
or implemented changes exists.
Incident management (IcM) is an area of IT Service Management
(ITSM) that involves returning service to normal as quickly as
possible after an incident, in a way that has little to no negative
impact on the business. In practice, incident management often relies
upon temporary workarounds to ensure services are up and running
while the incident is investigated, the root problem identified, and a
permanent fix put in place.
It has orchestration capabilities.
Summary:
Puppet uses its own domain-specific language
(DSL) to describe machine configurations. Code
in this language is saved in files called manifests.
The Puppet Language:
Puppet configurations are written in the Puppet
language, a DSL built to declaratively model
resources.
Manifests
Manifests are files containing Puppet code. They
are standard text files saved with the .pp
extension.
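A minimal manifest can be tried locally without a master; the ntp package/service pair is only an illustration:

```shell
# Write a small manifest; Puppet's DSL declares desired state,
# and Puppet works out how to reach it on each platform
cat > site.pp <<'EOF'
package { 'ntp':
  ensure => installed,
}
service { 'ntpd':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}
EOF

# Apply the manifest on the local machine, no puppet master needed
puppet apply site.pp
```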
How Does Puppet Work?
It works like this: the Puppet agent is a daemon
that runs on all the client servers (the servers
that require some configuration, or the
servers that are going to be managed using
Puppet).
All the clients that are to be managed will
have the Puppet agent installed on them; they
are called nodes in Puppet.
The Puppet master server contains all the
configuration for the different hosts. The Puppet
master runs as a daemon on this master server.
The Puppet agent is the daemon that runs on all the servers
that are to be managed using Puppet. At a specific time
interval, the Puppet agent asks the Puppet master server for
its own configuration.
The connection between the Puppet agent and the master is made
over a secure, encrypted channel with the help of SSL.
If the Puppet agent has already applied the required
configuration and there are no new changes, it will do
nothing.
Note: You can also manually ask the Puppet agent to fetch
the configuration from the Puppet master server whenever
required. Some people schedule the agent's fetches through
a cron job, but running the agent as a daemon on every
node, so that it fetches data automatically, is a good
idea.
30 minutes is the default interval at which the Puppet agent
daemon fetches configuration data from the Puppet master.
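The manual fetch and the interval mentioned above map to these commands (the interval value is the 30-minute default expressed in seconds):

```shell
# Ask the agent to fetch and apply its catalog right now,
# printing what it does
puppet agent --test

# Change the daemon's polling interval (seconds; 1800 = 30 minutes)
puppet config set runinterval 1800 --section agent
```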
Step 1: Whenever a client node connects to the
master, the master server analyzes the
configuration to be applied to the node, and how
to apply that configuration on the node.
Step 2: The Puppet master server collects all the
resources and configurations to be applied to the
node and compiles them into a catalog. This
catalog is given to the node's Puppet agent.
Step 3: The Puppet agent applies the configuration
on the node according to the catalog, and then
reports the applied configuration back to the
Puppet master server.
This is possible with the help of a tool
called Facter.
Whenever the agent connects to the Puppet
master server for configuration data, the Facter
tool is used to give complete details
about the node (agent) to the Puppet master.
Facter provides almost all information
about the agent node, and the information is
very detailed. See an example of Facter
output below.
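A sketch of querying Facter; exact fact names and values vary by Facter version and machine:

```shell
# Print every fact Facter knows about this node
facter

# Query individual facts (names shown are common but version-dependent)
facter os.family      # e.g. RedHat or Debian
facter ipaddress
facter memorysize
```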
Puppet master gets the complete information
about the node, and takes a decision with the
help of that information on how to apply the
configuration.
If the node is Debian, for example, then to install a
package Puppet will use apt-get instead of yum.
You can do things like: if the IP address is this,
then apply this gateway to the server. And you
can also add custom-made facts to a node, and
do configuration based on those facts (this makes
Puppet much more customizable).
The Facter tool must be installed on all
the nodes where you want to apply
configuration using Puppet. Without Facter,
there is no way for the Puppet server
to get all the information about the agent.
Puppet Chef