
 PRESENTED BY

 Gopal

"Dev Ops is the key to unlock the


automation around the world"
 You can ask 10 people for a definition of DevOps and likely get 10 different
answers. If you want to find a definition of your own, your research will
probably begin by asking Google, “what is DevOps”. Naturally, Wikipedia is
one of the first results, so that is where we will begin. The first sentence on
Wikipedia defines DevOps as “a software development method that stresses
communication, collaboration and integration between software developers
and information technology (IT) professionals.” Well, that’s a fairly dense
definition.
 So let’s delve into how DevOps works and how it can bring together two
traditionally opposing departments.
 Before the current evolution of DevOps, there was no direct connection
between development and operations.

 Developers were responsible for building applications and operations had the
responsibility of implementing those solutions.

 There was no formal method of communication between the two teams other
than when problems arose during deployment.

 Around 2007, a movement started in Europe that was based upon the idea
that there needed to be a direct connection between developers and
operations, not as a specific position, but as a meld of the two into a
connected process flow.
 The emerging professional movement that advocates a collaborative working
relationship between Development and IT Operations, resulting in the fast
flow of planned work (i.e., high deploy rates), while simultaneously
increasing the reliability, stability, resilience and security of the production
environment.

 DevOps is the philosophy of unifying Development and Operations at the
culture, practice, and tool levels, to achieve accelerated and more frequent
deployment of changes to Production.
Culture=behaviour, teamwork, responsibility/accountability, trust...
Practice=policy, roles, processes/procedures, metrics/reporting...
Tools=shared skills, toolmaking for each other, common technology
platforms...
 The DevOps Lifecycle Looks Like This:
 Check in code
 Pull code changes for build
 Run tests (continuous integration server to generate builds &
arrange releases): Test individual modules, run integration tests,
and run user acceptance tests.
 Store artifacts and build repository (repository for storing
artifacts, results & releases)
 Deploy and release (release automation product to deploy apps)
 Configure environment
 Update databases
 Update apps
 Push to users – who receive tested app updates frequently and
without interruption
 Application & Network Performance Monitoring (preventive
safeguard)
 Rinse and repeat
 Utilizing a DevOps lifecycle, products can be continuously deployed
in a feedback loop through:
 Infrastructure Automation
 Configuration Management
 Deployment Automation
 Infrastructure Monitoring
 Log Management
 Application & Performance Management
 Installation of server hardware and OS
 Configuration of servers, networks, storage,
etc…
 Monitoring of servers
 Respond to outages
 IT security
 Managing phone systems, network
 Change control
 Backup and disaster recovery planning
 Manage active directory
 Asset tracking
 With the arrival of tools like Puppet, and
Chef, the concept of Infrastructure as Code
was born
 Infrastructure as code, or programmable
infrastructure, means writing code (which can
be done using a high level language or any
descriptive language) to manage
configurations and automate provisioning of
infrastructure in addition to deployments.
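 As a toy illustration of the idea (a hand-rolled shell sketch; real IaC tools
such as Puppet or Chef express the same "desired state" declaratively, and the
package name here is only an assumption), the code checks the current state and
acts only when it differs from the desired one:

  #!/bin/sh
  # Desired state: nginx installed and running (Debian-style tooling assumed).
  if ! dpkg -s nginx >/dev/null 2>&1; then
    apt-get install -y nginx            # only act when the state differs
  fi
  service nginx status >/dev/null 2>&1 || service nginx start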
 What is the relationship between Cloud
Computing and DevOps? Is DevOps really just
“IT for the Cloud”? Can you only do DevOps in
the cloud? Can you only do cloud using
DevOps? The answer to all three questions is
“no”. Cloud and DevOps are independent but
mutually reinforcing strategies for delivering
business value through IT
 There’s no formal career track for becoming a DevOps engineer.
They are either developers who get interested in deployment and
network operations, or sysadmins who have a passion for
scripting and coding, and move into the development side where
they can improve the planning of test and deployment.

 Either way, these are people who have pushed beyond their
defined areas of competence and who have a more holistic view
of their technical environments.
 Chef
 Jenkins
 Git
 Nexus
 Docker
 Puppet
 Nagios
 Vagrant, etc.
 Continuous integration (CI) is a software engineering practice in which
isolated changes are immediately tested and reported on when they are
added to a larger code base. The goal of CI is to provide rapid feedback so
that if a defect is introduced into the code base, it can be identified and
corrected as soon as possible.

 Time frames are crucial. Integration should be divided into three steps:
 commit new functionality and build new application
 run unit tests
 run Integration/System tests
 The relevant terms here are “Continuous Integration” and “Continuous
Delivery”, often used together and abbreviated as CI/CD. Originally,
Continuous Integration means that you run your “integration tests” at every
code change, while Continuous Delivery means that you automatically deploy
every change that passes your tests.

 The Software Development Pipeline


 From a high level, a CI/CD pipeline usually consists of the following discrete
steps:
 Commit. When a developer finishes a change to an application, he or she
commits it to a central source code repository.
 Build. The change is checked out from the repository and the software is
built so that it can be run by a computer. This step depends a lot on what
language is used, and for interpreted languages this step can even be absent.
 Automated tests. This is where the meat of the CI/CD pipeline is. The change
is tested from multiple angles to ensure it works and that it doesn’t break
anything else.
 Deploy. The built version is deployed to production.
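 A minimal sketch of these steps as a shell script (repository, artifact and
host names are hypothetical; in practice a CI server such as Jenkins runs
them for you on every commit):

  #!/bin/sh
  set -e                                        # stop at the first failing step
  git pull                                      # Commit: pick up the latest change
  mvn clean package -DskipTests                 # Build: compile and package
  mvn verify                                    # Automated tests: unit + integration
  scp target/myapp.jar deploy@prod:/opt/app/    # Deploy: ship the built version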
 Application monitoring and feedback solutions
enable you to:
 Help steer projects toward successful completion with
better application visibility.
 Manage and optimize application and infrastructure
performance in traditional IT, virtualized, cloud and
hybrid environments.
 Receive customer feedback in both pre- and post-production
phases, resulting in lower costs of errors and changes.
 Maximize the value of every customer visit, helping to
ensure that more transactions are completed
successfully.
 Gain immediate visibility into the sources of customer
issues that may affect their behavior and impact
business.
 After some time, around 1970, the concept of virtual machines
(VMs) was created.
 Using virtualization software like VMware, it became possible to
execute one or more operating systems simultaneously in an
isolated environment. Complete computers (virtual) could be
executed inside one physical hardware which in turn can run a
completely different operating system.
 The VM operating system took the 1950s’ shared access mainframe
to the next level, permitting multiple distinct computing
environments to reside on one physical environment. Virtualization
came to drive the technology, and was an important catalyst in the
communication and information evolution.
 In the 1990s, telecommunications companies started offering
virtualized private network connections.
 Historically, telecommunications companies only offered single
dedicated point–to-point data connections. The newly offered
virtualized private network connections had the same service quality
as their dedicated services at a reduced cost. Instead of building out
physical infrastructure to allow for more users to have their own
connections, telecommunications companies were now able to
provide users with shared access to the same physical infrastructure.
 Cloud Computing Models – IaaS, PaaS, SaaS

 Understanding Public and Private Clouds – on-premise private cloud,
externally hosted private cloud

 Cloud Computing Benefits – reduced cost, increased storage, flexibility

 Cloud Computing Challenges – data protection
 Grid computing: the collection of computer resources from multiple
locations to reach a common goal; solving large problems with
parallel computing.

 Cloud computing: anytime, anywhere access to IT resources delivered
dynamically as a service.
 Lower Costs
 Cap-Ex Free Computing
 Deploy Projects Faster
 Scale as Needed
 Lower Maintenance Costs
 Disaster recovery
 Automatic software updates
 Flexibility
 Increased collaboration
 Cloud Providers offer services that can be grouped into three categories.

 1. Software as a Service (SaaS): In this model, a complete application is
offered to the customer, as a service on demand. A single instance of the
service runs on the cloud & multiple end users are serviced. On the
customers' side, there is no need for upfront investment in servers or
software licenses, while for the provider, the costs are lowered, since only a
single application needs to be hosted & maintained. Today SaaS is offered by
companies such as Google, Salesforce, Microsoft, Zoho, etc.

 2. Platform as a Service (PaaS): Here, a layer of software, or development
environment, is encapsulated & offered as a service, upon which other higher
levels of service can be built. The customer has the freedom to build his own
applications, which run on the provider's infrastructure. To meet
manageability and scalability requirements of the applications, PaaS
providers offer a predefined combination of OS and application servers, such
as the LAMP platform (Linux, Apache, MySQL and PHP), restricted J2EE, Ruby, etc.
Google's App Engine, Force.com, etc. are some of the popular PaaS
examples.

 3. Infrastructure as a Service (IaaS): IaaS provides basic storage and
computing capabilities as standardized services over the network. Servers,
storage systems, networking equipment, data center space, etc. are pooled
and made available to handle workloads. The customer would typically
deploy his own software on the infrastructure. Some common examples are
Amazon, GoGrid, 3Tera, etc.
 Solution Design and Deployment Planning. Guidelines for the setup and
configuration of your Salesforce Service Cloud instance, as well as a
timeline for deployment and milestones to help track the success of the
overall program.

 Setup, Configuration and Custom Development. Testing and production
environments, in addition to any configurations and custom development
(including integration) as necessary for each environment.

 Success Metrics. Set up dashboards and reports and pinpoint the metrics
that will enable you to track the success of your implementation
 Virtualization means creating a virtual version of a device or
resource, such as a server, storage device, network or even
an operating system, where the framework divides the resource into
one or more execution environments.

 Run multiple operating systems and applications on a single server

 Consolidate hardware to get vastly higher productivity from fewer servers.

 Save 50 percent or more on overall IT costs.

 Speed up and simplify IT management, maintenance, and the deployment of
new applications.
 A virtual machine (VM) is a software program or operating system that not
only exhibits the behavior of a separate computer, but is also capable of
performing tasks such as running applications and programs like a separate
computer. A virtual machine, usually known as a guest, is created within
another computing environment referred to as a "host." Multiple virtual
machines can exist within a single host at one time.

 Every virtual machine is a fully functioning virtual computer, where you can
install a guest operating system of your choice, with network configuration,
and a full suite of PC software. Once you install the software and reboot your
computer, you will begin your quest of transforming your IT infrastructure
into a virtual infrastructure.
 A boot image is a type of disk image (a computer file containing the
complete contents and structure of a computer storage medium).
When it is transferred onto a boot device, it allows the associated
hardware to boot.

 One goal of boot image control is to minimize the number of boot
images used by an organization to reduce support costs.

 Specifying the machine hardware to minimize unneeded machine
diversity and minimize the resultant number of boot images.
 Cloud storage is a service model in which data is maintained,
managed and backed up remotely and made available to users over
a network
 Service Oriented Architecture (SOA): a flexible set of design principles
used during the phases of systems development and integration. A deployed
SOA-based architecture will provide a loosely-integrated suite of services
that can be used within multiple business domains.

 Cloud Computing: Internet-based computing, whereby shared resources,
software and information are provided to computers and other devices
on-demand, like a public utility.

 As we’ve seen in various articles and books, SOA is mainly for the
enterprise, while the cloud computing definition suggests it’s for
Internet-based services.
 A virtual private cloud (VPC) is the logical division of a public cloud
service provider's multi-tenant architecture to support private cloud
computing in a public cloud environment.
 Cloud vendor outages leave you high and dry
 US government can see it all, with the right
warrant
 Critical business data at risk
 Authentication, authorization, and access
control
 Availability
 Amazon Web Services (AWS) is a collection
of remote computing services, also
called web services, that make up a cloud
computing platform offered by Amazon.com

 Azure
 How to create Data Bags using knife

 knife data bag create DATA_BAG_NAME

 knife data bag delete DATA_BAG_NAME

 knife data bag edit DATA_BAG_NAME

 knife data bag from file BAG_NAME ITEM_NAME.json
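 For example, with a hypothetical data bag "users" holding an item file
admins.json whose content is {"id": "admins", "members": ["alice", "bob"]}:

  knife data bag create users
  knife data bag from file users admins.json
  knife data bag show users admins    # display the stored item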
 How to Add a Run List to a Node (see the run_list example below)
 Use the commands below to check node details:

 knife node show <nodename>

 knife status name:<nodename>


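 To actually add a run list entry, knife provides a run_list subcommand;
the node and recipe names below are illustrative:

  knife node run_list add web01 'recipe[nginx]'
  knife node show web01 -r    # show only the node's run-list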
 How to create Environments

 Login to Chef Manage Console


 Go to Policy Tab and you can see
Environment Section
 knife environment compare development

 knife environment compare development staging

 knife environment compare --all
 knife environment create
ENVIRONMENT_NAME -d DESCRIPTION
 knife environment delete
ENVIRONMENT_NAME
 Create roles
 Set EDITOR=notepad

 knife role show ROLE_NAME

 knife role list

 knife role edit role1

 knife role delete devops
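 A typical create flow (the role name is illustrative); knife role create
opens the role JSON in the editor configured above:

  knife role create webserver
  # in the editor, set e.g. "run_list": ["recipe[nginx]"]
  knife role show webserver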


 What is Foodcritic?

 We use Foodcritic to test the code quality of our
cookbooks. More subtle than outright functional
tests, Foodcritic checks your cookbooks against a set of
leading best-practice rules to ensure they are
squeaky clean.

 https://docs.chef.io/foodcritic.html
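 A minimal invocation, with a hypothetical cookbook path:

  foodcritic cookbooks/apache    # prints any violated rules with their FC numbers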
 Test Kitchen is a test harness tool to execute
your configured code on one or more
platforms in isolation. In the context
of Chef, Test Kitchen helps you run your
cookbooks in the various deployment
environments you are targeting.

 https://docs.chef.io/kitchen.html
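 Typical Test Kitchen commands, run from a cookbook directory:

  kitchen list       # show the configured suites and platforms
  kitchen converge   # create the instance(s) and apply the cookbook
  kitchen verify     # run the tests against them
  kitchen destroy    # tear everything down
  kitchen test       # destroy, converge, verify, destroy in one shot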

 Chapter 4
DEVOPS TOOLS: VAGRANT
◦ What is Vagrant

◦ Vagrant, an open-source product released in 2010, is best
described as a VM manager. It allows you to script and
package the VM config and the provisioning setup. It is
designed to run on top of almost any VM tool – VirtualBox,
VMware, AWS, etc. However, default support is only
included for VirtualBox; for the other providers you must
first install their plugins.
 Vagrant is able to define and control multiple guest
machines per Vagrantfile. This is known as a "multi-
machine" environment.
 These machines are generally able to work together or are
somehow associated with each other. Here are some use cases
people are using multi-machine environments for today:
 Accurately modeling a multi-server production topology,
such as separating a web and database server.
 Modeling a distributed system and how its parts interact with
each other.
 Testing an interface, such as an API to a service
component.
 Disaster-case testing: machines dying, network partitions,
slow networks, inconsistent world views, etc.
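 A minimal two-machine setup (the box name and IPs are illustrative), written
here as a shell heredoc so the Vagrantfile and the commands stay in one snippet:

  cat > Vagrantfile <<'EOF'
  Vagrant.configure("2") do |config|
    config.vm.box = "ubuntu/trusty64"
    config.vm.define "web" do |web|
      web.vm.network "private_network", ip: "192.168.33.10"
    end
    config.vm.define "db" do |db|
      db.vm.network "private_network", ip: "192.168.33.11"
    end
  end
  EOF
  vagrant up          # boots both machines
  vagrant ssh web     # log in to one specific machine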
 How to install Vagrant in Windows and
Linux

 Use below link to download the Vagrant and VirtualBox

 Vagrant - http://www.vagrantup.com/downloads

 Virtual Box -
https://www.virtualbox.org/wiki/Downloads
 Once the downloads complete, install both applications on your
workstation.

 First, install the VirtualBox application.
 Next, install the Vagrant application.
Please use the steps below to configure Vagrant.

Go to any path on your system and execute the command below as
administrator:
- vagrant init ubuntu/trusty64 (PFB screenshot for reference)
 How to use Vagrant to create a small virtual machine
 Use the command below to create a small VM:
 - vagrant up
 Once Vagrant creates the VM, use the command below to check the
VM details:

 vagrant ssh-config
 Use PuTTY to log in to the VM.
 The default credentials for logging in to the VM are:
 Login id – vagrant
 Password – vagrant
 Go to the URL below to find the required box images:
 https://atlas.hashicorp.com/boxes/search

 Once you have chosen an image, initialize it with the command below:
 - vagrant init <user>/<boxname>
 Register the newly provisioned VM with Chef using knife, and start
testing the VM with Chef by executing cookbooks, roles, etc.
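 A sketch of that registration step (the IP and node name are illustrative;
the credentials are the Vagrant defaults mentioned earlier):

  knife bootstrap 192.168.33.10 -x vagrant -P vagrant --sudo -N web01
  knife node show web01    # confirm the node is registered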
 Chapter 5
 SOURCE CODE MANAGEMENT
 What is a version control system?
 A version control system (also known as
a Revision Control System) is a repository of
files, often the files for the source code of
computer programs, with monitored access.
Every change made to the source is tracked,
along with who made the change, why they
made it, and references to problems fixed, or
enhancements introduced, by the change.
 Distributed revision control system with an
emphasis on being fast

 Every GIT working directory is a full-fledged repository
-> Includes complete history
-> Full revision tracking capabilities

 Not dependent on network access and/or a central server
 Master Branch
-> Coding never occurs in the master
branch
 Branches
-> Development is done in branches
 Clone Repository To Computer
 Create a local branch of the master branch
 Switch to that branch (local branch )
 Make changes to files in local branch
 Commit changes to local branch
 Switch back to master
 Perform a pull operation
 Accept latest changes
 Merge local branch to local master
 If successful, push changes back to remote
master
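 The same workflow expressed as git commands (the repository URL and branch
name are illustrative):

  git clone git@example.com:project.git && cd project
  git checkout -b feature-x             # create and switch to a local branch
  # ...edit files...
  git add . && git commit -m "describe the change"
  git checkout master                   # switch back to master
  git pull                              # accept the latest changes
  git merge feature-x                   # merge local branch into local master
  git push origin master                # push changes back to remote master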
 Download Git for Windows from the URL below and install it:
https://help.github.com/articles/set-up-git#platform-windows

 Download the diff tool from the URL below:
http://sourceforge.net/projects/kdiff3/files/kdiff3/0.9.96/
Execute the commands below once you have installed the kdiff3 tool:
git config --global merge.tool kdiff3
git config --global mergetool.kdiff3.cmd '"C:\\Program Files
(x86)\\KDiff3\\kdiff3" $BASE $LOCAL $REMOTE -o $MERGED'
 PFA Document for GIT Installation.
 There are 3 levels of git config: project, global, and system.
 project: Project configs are only available for the current project
and stored in .git/config in the project's directory.

 global: Global configs are available for all projects for the
current user and stored in ~/.gitconfig.

 system: System configs are available for all the users/projects
and stored in /etc/gitconfig.
 To create a project-specific config, execute this in the
project's directory:
 $ git config user.name "John Doe"

 Create a global config:
 $ git config --global user.name "John Doe"

 Create a system config:
 $ git config --system user.name "John Doe"

 And as you may guess, project overrides global, and global overrides
system.
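 To see which value wins (and, on newer Git versions, which file it came
from):

  git config user.name                  # the effective value
  git config --show-origin user.name    # value plus the file that set it (Git 2.8+)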
 NEXUS
 Nexus is a repository manager; it stores “artifacts”. In the software
development life cycle (SDLC), an artifact usually refers to "things" that are
produced by people involved in the process.
Examples would be design documents, data
models, workflow diagrams, test matrices and
plans, setup scripts... like an archaeological
site, anything that is created could be an artifact.

 In most software development cycles, there's usually a list of specific
required artifacts that someone must produce and put on a shared
drive or document repository for other people to view and share.
 Release Artifacts
 A release artifact is an artifact which was created by a specific, versioned
release. For example, consider the 1.2.0 release of the commons-lang
library stored in the Central Maven repository. This release artifact,
commons-lang-1.2.0.jar, and the associated POM,
commons-lang-1.2.0.pom, are static objects which will never change in the
Central Maven repository. Released artifacts are considered to be solid,
stable, and perpetual in order to guarantee that builds which depend upon
them are solid and repeatable over time. The released JAR artifact is
associated with a PGP signature and MD5 and SHA checksums which can be used
to verify both the authenticity and integrity of the binary software artifact.
 Snapshot Artifacts
 Snapshot artifacts are artifacts generated during the development of a
software project. A snapshot artifact has both a version number, such as
“1.3.0”, and a timestamp in its name. For example, a snapshot
artifact for commons-lang 1.3.0 might have the name
commons-lang-1.3.0-20090314.182342-1.jar; the associated POM, MD5 and SHA
hashes would have similar names. To facilitate collaboration during
the development of software components, Maven and other clients
which know how to consume snapshot artifacts from a repository can
interrogate the metadata associated with a snapshot artifact and
always retrieve the latest version of a snapshot dependency from a
repository.
 Java

 Sonatype Nexus is a Java server application that requires specific
versions to operate:

Nexus Version      Supported Sun/Oracle JRE Version
1.9 and earlier    5 or 6
2.0–2.5            6 or 7
2.6.x              7u45+ only; 8+ will not work
2.7.x–2.9.x        7u45+; 8+ may work but is not thoroughly tested
2.10.x+            7u45+, 8u25+
 PFA Document for Installation
 Maven is a build automation tool used primarily
for Java projects. The word maven means
'accumulator of knowledge' in
Yiddish. Maven addresses two aspects of building
software: First, it describes how software is built,
and second, it describes its dependencies.
 It checks for compilation and runtime errors, downloads all of a
project's dependency jars from a repository, and builds the project.
If there are any compilation errors in the project, the build will fail.
 There are three things about Maven that are very nice.

 Maven will (after you declare which ones you are using) download all
the libraries that you use, and the libraries that they use, for
you automatically. This is very nice, and makes dealing with lots of
libraries ridiculously easy. This lets you avoid "dependency hell".

 It uses "Convention over Configuration" so that by default you don't


need to define the tasks you want to do. You don't need to write a
"compile", "test", "package", or "clean" step like you would have to in
Ant or a Makefile. Just put the files in the places Maven expects
them and it should work off of the bat.

 Maven also has lots of nice plug-ins that you can install that will
handle many routine tasks, from generating Java classes from an XSD
schema using JAXB to measuring test coverage with Cobertura. Just
add them to your pom.xml and they will integrate with everything
else you want to do.
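 A few everyday Maven commands illustrating the convention-based lifecycle:

  mvn compile          # compile, downloading declared dependencies as needed
  mvn test             # run the unit tests
  mvn package          # build the jar/war into target/
  mvn clean install    # clean, build, and install into the local repository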
 Chapter 6

 CONTINUOUS INTEGRATION
 Jenkins is an open-source continuous
integration software tool written in
the Java programming language for testing
and reporting on isolated changes in a larger
code base in real time.
 Building/testing software projects
continuously. In a nutshell, Jenkins provides
an easy-to-use so-called continuous
integration system, making it easier for
developers to integrate changes to the
project, and making it easier for users to
obtain a fresh build. The automated,
continuous build increases productivity.
 Monitoring executions of externally-run jobs,
such as cron jobs and procmail jobs, even
those that are run on a remote machine. For
example, with cron, all you receive is regular
e-mails that capture the output, and it is up
to you to look at them diligently and notice
when it broke. Jenkins keeps those outputs
and makes it easy for you to notice when
something is wrong.
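 Jenkins also exposes a remote API, so a build can be triggered from a script
(the host, job name and credentials below are hypothetical; the API token
comes from the user's Jenkins configuration page):

  curl -X POST "http://jenkins.example.com:8080/job/my-app/build" \
       --user alice:API_TOKEN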
 PFA doc for it.
Chapter 7
DEVOPS: MONITORING Tools
 PFA Document for Nagios
 PFA Document for Zenoss
Chapter 8
CONFIGURATION MANAGEMENT
 Configuration management (CM) is a systems
engineering process for establishing and maintaining
consistency of a product's performance, functional and
physical attributes with its requirements, design and
operational information throughout its life.[1][2] The CM
process is widely used by military engineering organizations
to manage complex systems, such as weapon systems,
vehicles, and information systems. Outside the military, the
CM process is also used with IT service management as
defined by ITIL and by ISO/IEC 20000.
 Configuration control is the activity of managing the product and
related documents throughout the lifecycle
of the product. An effective configuration
control system ensures that:
 The latest approved version of the product
and its components are used at all times.
 No change is made to the product baselines
without authorization.
 A clear audit trail of all proposed, approved
or implemented changes exists.
 Incident management (IcM) is an area of IT Service Management
(ITSM) that involves returning service to normal as quickly as
possible after an incident, in a way that has little to no negative
impact on the business. In practice, incident management often relies
upon temporary workarounds to ensure services are up and running
while the incident is investigated, the root problem identified and a
permanent fix put in place.

 The first goal of the incident management process is to restore a


normal service operation as quickly as possible and to minimize the
impact on business operations, thus ensuring that the best possible
levels of service quality and availability are maintained
 The change management process is the sequence of steps or
activities that a change management team or project leader
would follow to apply change management to a project or
change. Based on Prosci's research into the most effective and
commonly applied change management practices, they have created a change
management process that contains the following three phases:

 Phase 1 - Preparing for change (Preparation, assessment and


strategy development)
 Phase 2 - Managing change (Detailed planning and change
management implementation)
 Phase 3 - Reinforcing change™ (Data gathering, corrective action
and recognition)
 The goal of problem management is to minimise both the
number and severity of incidents and potential problems to the
business/organisation.
 Problem management should aim to reduce the adverse impact
of incidents and problems that are caused by errors within the IT
infrastructure, and to prevent recurrence of incidents related to
these errors.
 Problems should be addressed in priority order with higher
priority given to the resolution of problems that can cause
serious disruption to critical IT services.
 Problem management’s responsibility is to ensure that incident
information is documented in such a way that it is readily
available to support all problem management activities.
 Problem management has reactive and proactive aspects:
reactive – problem solving when one or more incidents occur;
proactive – identifying and solving problems and known errors
before incidents occur in the first place.
Chapter 9
GENERAL ENVIRONMENT SETUP STEPS IN AWS
and Azure
 Creating Servers and Networks in Cloud
Setting up rules and Application
 Difficult Scenarios in environments.
 Scaling
◦ Increase of RAM, CPU and Storage
◦ Checking logs of server
 Login page -
https://console.aws.amazon.com/console/ho
me?region=us-east-1
 We can see login page -
https://cloudadmin.nttamerica.com/cloudui/
ms/login.htm?forwardAction=%2Fapp%2Fclou
ds.htm
Chapter 10
MISCELLANEOUS
 Other Tools such as Puppet
 Docker
 Puppet Enterprise is a comprehensive tool
for managing the configuration of
enterprise systems. Specifically, PE offers:

 Configuration management tools that let


you define a desired state for your
infrastructure and then automatically
enforce that state.

 It also has orchestration capabilities.
Summary:
Puppet uses its own domain-specific language
(DSL) to describe machine configurations. Code
in this language is saved in files called manifests.
The Puppet Language:
Puppet configurations are written in the Puppet
language, a DSL built to declaratively model
resources.
Manifests
Manifests are files containing Puppet code. They
are standard text files saved with the .pp
extension.
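An illustrative manifest (the package and service names are an assumption),
written via a shell heredoc so the file and the apply command stay together:

  cat > ntp.pp <<'EOF'
  package { 'ntp':
    ensure => installed,
  }
  service { 'ntp':
    ensure  => running,
    enable  => true,
    require => Package['ntp'],
  }
  EOF
  puppet apply ntp.pp    # apply the manifest locally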
 How Does Puppet Work?
 It works like this: the puppet agent is a daemon
that runs on all the client servers (the servers
where you require some configuration, or the
servers which are going to be managed using
puppet).
 All the clients which are to be managed will
have the puppet agent installed on them, and are
called nodes in puppet.
 The puppet master machine contains all the configuration
for different hosts. The puppet master will run as
a daemon on this master server.
 This is the daemon that runs on all the servers which are to
be managed using puppet. At a specific time interval, the puppet
agent asks the puppet master server for its own configuration.
 The connection between puppet agent and master is made in a
secure encrypted channel with the help of SSL.
 If suppose the puppet agent has already applied the required
configuration and there are no new changes then it will do
nothing.
 Note: You can also manually ask the puppet agent to fetch
the configuration from the puppet master server whenever
required (see the example below). Some people have the puppet agent
fetch configuration through a cron job, but running it as a daemon
on every node so that it fetches data automatically is a good idea.
 30 minutes is the default interval at which the puppet agent daemon
fetches config data from the puppet master.
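 The manual fetch mentioned in the note above is a single command:

  puppet agent --test           # fetch and apply the catalog once, in the foreground
  puppet agent --test --noop    # same, but only report what would change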
 Step 1: Whenever a client node connects to the
master, the master server analyzes the
configuration to be applied to the node, and how
to apply that config on the node.
 Step 2: The puppet master server takes and collects
all the resources and configurations to be applied
to the node, compiles them into a catalog, and
gives this catalog to the puppet agent on the node.
 Step 3: The puppet agent applies the configuration
on the node according to the catalog, then
replies back and submits a report of the
configuration applied to the puppet master
server.
 This is possible with the help of a tool called
Facter.
 Whenever the agent connects to the puppet
master server for configuration data, Facter
tool is used to give the complete details
about the node(agent) to the puppet master.
 Facter will provide almost all information
about the agent node. The information is very
much detailed. See an example output of
Facter below.
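 An abbreviated, illustrative sample (the fact names are real Facter facts;
the values are made up):

  facter
  architecture => x86_64
  ipaddress => 192.168.33.10
  memorysize => 2.00 GB
  operatingsystem => Ubuntu
  operatingsystemrelease => 14.04

  facter ipaddress    # query a single fact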
 Puppet master gets the complete information
about the node, and takes a decision with the
help of that information on how to apply the
configuration.
 If the node is Debian, for example, then to install a
package puppet will use apt-get instead of yum.
 You can do things like: if the IP address is this,
then apply this gateway to the server. You
can also add custom-made facts to a node, and
do configuration based on those facts (this makes
puppet much more customizable).
 The Facter tool must be installed on all
the nodes where you want to apply
configuration using puppet. Without Facter,
there is no way for the puppet master to
get all the information about the agent.
                   Puppet                            Chef

Basic version      $99 per node per year (annual     $120 per month for 20 nodes;
                   term license), with the first     $300 per month for 50 nodes
                   10 nodes free; discounts kick
                   in for larger installations

Premium version    The Premium version of Puppet     $600 per month for 100 nodes
                   Enterprise is priced by the
                   sales team.
Docker is an open-source project that
automates the deployment of applications
inside software containers, by providing an
additional layer of abstraction and automation
of operating-system-level virtualization on
Linux, Mac OS and Windows
 Faster delivery of your applications

 Deploying and scaling more easily

 Achieving higher density and running more workloads
 Docker has two major components:

 Docker: the open source container virtualization platform.
 Docker Hub: a Software-as-a-Service platform for sharing and
managing Docker containers.
 Docker does the following:
 Pulls the ubuntu image: Docker checks for the presence of the
ubuntu image and, if it doesn’t exist locally on the host, then
Docker downloads it from Docker hub. If the image already
exists, then Docker uses it for the new container.
 Creates a new container: Once Docker has the image, it uses it to
create a container.
 Allocates a filesystem and mounts a read-write layer: The
container is created in the file system and a read-write layer is
added to the image.
 Allocates a network / bridge interface: Creates a network
interface that allows the Docker container to talk to the local
host.
 Sets up an IP address: Finds and attaches an available IP address
from a pool.
 Executes a process that you specify: Runs your application.
 Captures and provides application output: Connects and logs
standard input, output and errors for you to see how your
application is running.
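 For example, a single run command triggers all of the steps above (the
image name comes from Docker Hub):

  docker run -i -t ubuntu /bin/bash    # pull, create, mount, network, then run bash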
 Use Docker to run your code on your laptop
in the same environment as you have on your
server (try the building tool)
 Use Docker whenever your app needs to go
through multiple phases of development
(dev/test/qa/prod, try Drone or Shippable,
both do Docker CI/CD)
 Use Docker with your Chef Cookbooks and
Puppet Manifests (remember, Docker doesn't
do configuration management)
 END
