
DevOps-Trinity

Keywords, working, commands

*************************************Terraform************************************
Keywords: main.tf, variable.tf, variable.tfvars, workspace, provisioner, backup, state, loops, conditions
State locking and remote state

Working: The first step in the Terraform workflow is to write the configuration. Then initialize the Terraform working directory, validate the configuration, create an execution plan, and apply the changes. Finally, destroy the infrastructure when it is no longer needed.

Commands: init, plan, apply, destroy, taint, untaint, refresh, workspace, state, import
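
A minimal sketch of that workflow on the command line (assuming a configuration already exists in the current directory; the workspace name dev is only an illustration):

terraform init                 # download providers and initialize the backend/state
terraform validate             # check the configuration for syntax errors
terraform plan -out=tfplan     # create an execution plan
terraform apply tfplan         # apply the planned changes
terraform workspace new dev    # optional: create and switch to a separate workspace
terraform destroy              # tear the infrastructure down when finished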

Loops: the count meta-argument loops over resources; the for_each meta-argument loops over resources and inline blocks within a resource, and works with lists and maps.

Splat expressions; conditional expressions: condition ? true_value : false_value

Built-in functions: there are functions for numbers, strings, collections, the file system, date and time, IP networks, type conversions, and more. There are no user-defined functions.

***********************************Ansible****************************************
Keywords: Inventory file, Playbook, roles, loops, conditions
Working
Ansible works by connecting to your nodes and pushing out small programs, called "Ansible modules", to them. Ansible then executes these modules (over SSH by default) and removes them when finished. Your library of modules can reside on any machine, and there are no servers, daemons, or databases required. The management (controlling) node is the node that controls the entire execution of the playbook; it is the node from which you run the installation. The inventory file provides the list of hosts where the Ansible modules need to be run. The management node makes an SSH connection, executes the small modules on the host machines, and installs the product/software. The beauty of Ansible is that it removes the modules once they have run: it connects to the host machine, executes the instructions, and, if the installation is successful, removes the code that was copied to and executed on the host machine.

Ansible ad hoc commands are used to perform quick, one-off actions. They are mainly used for parallelism, running shell commands, file transfer, and managing packages and services.
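
A few illustrative ad hoc commands (the group name webservers and the inventory file name are placeholders):

ansible webservers -i inventory.ini -m ping                                        # check connectivity
ansible webservers -i inventory.ini -m shell -a "uptime"                           # run a shell command
ansible webservers -i inventory.ini -m copy -a "src=app.conf dest=/tmp/app.conf"   # transfer a file
ansible webservers -i inventory.ini -m apt -a "name=nginx state=present" -b        # install a package
ansible webservers -i inventory.ini -m service -a "name=nginx state=started" -b    # manage a service
ansible all -i inventory.ini -m ping -f 10                                         # parallelism: run with 10 forks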

*******************************Jenkins****************************************

Keywords: environment (login and PATH), backup, security, secrets, users

Working
CI/CD Jenkins pipeline:
Checkout --> Build and test --> Build and push the Docker image --> Update the deployment file
Checkout --> Build the Docker image --> Push the artifacts --> Checkout the K8s manifest from SCM --> Update the K8s manifest and push it to the repo
post - defines actions to be taken when the pipeline or a stage completes, based on its outcome.
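
A Jenkinsfile itself is written in Groovy, but the stages above boil down to shell steps roughly like this sketch (the registry, image name, and manifest repo URL are placeholders; BUILD_NUMBER is the variable Jenkins provides to each build):

git checkout main                                               # Checkout
mvn clean package                                               # Build and test
docker build -t registry.example.com/myapp:${BUILD_NUMBER} .    # Build the Docker image
docker push registry.example.com/myapp:${BUILD_NUMBER}          # Push the image
git clone https://example.com/org/k8s-manifests.git             # Checkout the K8s manifest repo
sed -i "s|image:.*|image: registry.example.com/myapp:${BUILD_NUMBER}|" k8s-manifests/deployment.yaml
cd k8s-manifests && git commit -am "Update image tag" && git push   # Update the manifest and push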

***********************************GIT******************************************
Keywords: staging and commits, undoing changes, branching and merging, and finally collaboration with a remote repository

Working: Set up a GitHub organization, fork the organization repository to your personal GitHub, clone the repository to your local machine, and create a branch for your working files.
Set the remote repository to the GitHub organization, get coding, pull the most recent files from the organization repo, merge the master branch into the feature branch, push your code to your GitHub repo, and make a pull request to the organization repo.
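
The same flow as commands (the user, organization, repository, and branch names are placeholders):

git clone https://github.com/your-user/project.git               # clone your fork locally
cd project
git remote add upstream https://github.com/the-org/project.git   # point at the organization repo
git checkout -b feature-branch                                    # create a branch for your work
# ...get coding...
git fetch upstream                                                # pull the most recent files from the org repo
git merge upstream/master                                         # merge master into the feature branch
git push origin feature-branch                                    # push to your repo, then open a pull request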

Commands
Staging and commits: init, add, commit, clone, stash save, ignore, fork, repository, index, HEAD, origin, master, tags, upstream and downstream
Undoing changes: checkout, revert, reset, rm, cherry-pick
Branching & merging: branch, merge (and merge conflicts), rebase, squash
Collaborating: fetch, pull, push

********************************Maven**************************************

Keywords: Maven working, components, commands, goals

Maven Command Structure


Build Life Cycles, Phases and Goals
Executing Build Life Cycles, Phases and Goals
Executing the Default Life Cycle
Executing Build Phases
Maven contains a wide set of commands which you can execute. Maven commands are a mix of
build life cycles, build phases and build goals, and can thus be a bit confusing. Therefore I will
describe the common Maven commands in this tutorial, as well as explain which build life cycles,
build phases and build goals they are executing.

However, I will first list some common Maven commands with a brief explanation of what they do.
After this list of common Maven commands I have a description of the Maven command structure.

Common Maven Commands


Here is a list of common Maven commands plus a description of what they do. Please note that even if a Maven command is shown on multiple lines in the table below, it is to be considered a single command line when typed into a Windows command line or Linux shell.

Maven Command - Description

mvn --version - Prints out the version of Maven you are running.
mvn clean - Clears the target directory into which Maven normally builds your project.
mvn package - Builds the project and packages the resulting JAR file into the target directory.
mvn package -Dmaven.test.skip=true - Builds the project and packages the resulting JAR file into the target directory, without running the unit tests during the build.
mvn clean package - Clears the target directory, then builds the project and packages the resulting JAR file into the target directory.
mvn clean package -Dmaven.test.skip=true - Clears the target directory, then builds the project and packages the resulting JAR file into the target directory, without running the unit tests during the build.
mvn verify - Runs all integration tests found in the project.
mvn clean verify - Cleans the target directory, then runs all integration tests found in the project.
mvn install - Builds the project described by your Maven POM file and installs the resulting artifact (JAR) into your local Maven repository.
mvn install -Dmaven.test.skip=true - Builds the project described by your Maven POM file without running unit tests, and installs the resulting artifact (JAR) into your local Maven repository.
mvn clean install - Clears the target directory, builds the project described by your Maven POM file, and installs the resulting artifact (JAR) into your local Maven repository.
mvn clean install -Dmaven.test.skip=true - Clears the target directory, builds the project described by your Maven POM file without running unit tests, and installs the resulting artifact (JAR) into your local Maven repository.
mvn dependency:copy-dependencies - Copies dependencies from remote Maven repositories to your local Maven repository.
mvn clean dependency:copy-dependencies - Cleans the project and copies dependencies from remote Maven repositories to your local Maven repository.
mvn clean dependency:copy-dependencies package - Cleans the project, copies dependencies from remote Maven repositories to your local Maven repository, and packages your project.
mvn dependency:tree - Prints out the dependency tree for your project, based on the dependencies configured in the pom.xml file.
mvn dependency:tree -Dverbose - Prints out the dependency tree for your project, based on the dependencies configured in the pom.xml file. Includes repeated, transitive dependencies.
mvn dependency:tree -Dincludes=com.fasterxml.jackson.core - Prints out the dependencies from your project which depend on the com.fasterxml.jackson.core artifact.
mvn dependency:tree -Dverbose -Dincludes=com.fasterxml.jackson.core - Prints out the dependencies from your project which depend on the com.fasterxml.jackson.core artifact. Includes repeated, transitive dependencies.
mvn dependency:build-classpath - Prints out the classpath needed to run your project (application), based on the dependencies configured in the pom.xml file.
Keep in mind that when you execute the clean goal of Maven, the target directory is removed, meaning you lose all compiled classes from previous builds. Maven will then have to build your whole project again from scratch, rather than just compiling the classes that changed since the last build, which slows your build down. However, sometimes it is nice to have a clean, fresh build, e.g. before releasing your product to the world, mostly for the reassurance of knowing that everything was built from scratch and works.

Maven Command Structure


A Maven command consists of two elements:

mvn
One or more build life cycles, build phases or build goals
Here is a Maven command example:

mvn clean
This command consists of the mvn command which executes Maven, and the build life cycle named
clean.

Here is another Maven command example:


mvn clean install
This maven command executes the clean build life cycle and the install build phase in the default
build life cycle.

You might wonder how you see the difference between a build life cycle, build phase and build goal.
I will get back to that later.

Build Life Cycles, Phases and Goals


As mentioned in the introduction in the section about Build life cycles, build phases and build goals,
Maven contains three major build life cycles:

Clean
Default
Site
Inside each build life cycle there are build phases, and inside each build phase there are build goals.

You can execute either a build life cycle, build phase or build goal. When executing a build life cycle
you execute all build phases (and thus build goals) inside that build life cycle.

When executing a build phase you execute all build goals within that build phase. Maven also
executes all build phases earlier in the build life cycle of the desired build phase.

Build goals are assigned to one or more build phases. When the build phases are executed, so are all
the goals in that build phase. You can also execute a build goal directly.

Executing Build Life Cycles, Phases and Goals


When you run the mvn command you pass one or more arguments to it. These arguments specify
either a build life cycle, build phase or build goal. For instance to execute the clean build life cycle
you execute this command:

mvn clean
To execute the site build life cycle you execute this command:

mvn site
Executing the Default Life Cycle
The default life cycle is the build life cycle which generates, compiles, packages etc. your source
code.

You cannot execute the default build life cycle directly, as is possible with the clean and site life cycles. Instead you have to execute a specific build phase within the default build life cycle.

The most commonly used build phases in the default build life cycle are:

Build Phase - Description

validate - Validates that the project is correct and all necessary information is available. This also makes sure the dependencies are downloaded.
compile - Compiles the source code of the project.
test - Runs the tests against the compiled source code using a suitable unit testing framework. These tests should not require the code to be packaged or deployed.
package - Packs the compiled code into its distributable format, such as a JAR.
install - Installs the package into the local repository, for use as a dependency in other projects locally.
deploy - Copies the final package to the remote repository for sharing with other developers and projects.
Executing one of these build phases is done by simply adding the build phase after the mvn
command, like this:

mvn compile
This example Maven command executes the compile build phase of the default build life cycle. This
Maven command also executes all earlier build phases in the default build life cycle, meaning the
validate build phase.

Executing Build Phases


You can execute a build phase located inside a build life cycle by passing the name of the build phase
to the Maven command. Here are a few build phase command examples:

mvn pre-clean

mvn compile

mvn package
Maven will find out what build life cycle the specified build phase belongs to, so you don't need to explicitly specify which build life cycle the build phase belongs to.

*********************SonarQube analysis**********************


Keywords: SonarQube working, components, commands, rules

Working
Running SonarQube locally, generating a token in SonarQube, analyzing the source code, reviewing the analysis result, and the SonarQube quality gate.

Components: code scanner, analyser, database
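
A rough sketch of running it locally and scanning a Maven project (the Docker image tag, token, and project key are placeholders; sonar:sonar and the -Dsonar.* properties are common defaults, stated here as assumptions):

docker run -d --name sonarqube -p 9000:9000 sonarqube:lts      # run SonarQube locally
# generate a token in the SonarQube UI, then analyze the code:
mvn clean verify sonar:sonar \
  -Dsonar.host.url=http://localhost:9000 \
  -Dsonar.login=<generated-token> \
  -Dsonar.projectKey=my-project
# review the result and the quality gate status on the SonarQube dashboard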

There are four types of rules:


Code smell (maintainability domain)
Bug (reliability domain)
Vulnerability (security domain)
Security hotspot (security domain)

************************Trivy*****************************************************
Working
Trivy is an open-source vulnerability scanner used for scanning container images, file systems, and git repositories. It can run in standalone or client/server mode.
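
Typical invocations (the image name, path, and repository URL are only examples):

trivy image nginx:1.25                              # scan a container image
trivy fs .                                          # scan the current file system / project directory
trivy repo https://github.com/example/repo          # scan a git repository
trivy image --severity HIGH,CRITICAL nginx:1.25     # report only high and critical findings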

************************************Docker****************************************

Keywords: host, Daemon, Client, Registry, volume, network


Working
Docker is an open-source centralized platform designed to create, deploy, and run applications. Docker uses containers on the host's operating system to run applications. It allows applications to use the same Linux kernel as the host computer, rather than creating a whole virtual operating system. Containers ensure that our application works in any environment, such as development, test, or production. The Docker host provides the environment to execute and run applications. The Docker client uses commands and REST APIs to communicate with the Docker daemon (server). When a client runs any docker command in the client terminal, the terminal sends the command to the Docker daemon. The Docker daemon receives these commands from the client in the form of commands and REST API requests. The Docker daemon runs on the host.
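
For example, each of these client commands is sent to the daemon over its REST API (the image and container names are placeholders):

docker build -t myapp:1.0 .                         # ask the daemon to build an image from the Dockerfile
docker run -d -p 8080:80 --name web myapp:1.0       # the daemon creates and starts a container
docker ps                                           # the daemon returns the list of running containers
docker logs web                                     # fetch the container's logs from the daemon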

Commands (Dockerfile instructions)
ADD - Add local or remote files and directories.
ARG - Use build-time variables.
CMD - Specify default commands.
COPY - Copy files and directories.
ENTRYPOINT - Specify the default executable.
ENV - Set environment variables.
EXPOSE - Describe which ports your application is listening on.
FROM - Create a new build stage from a base image.
HEALTHCHECK - Check a container's health on startup.
LABEL - Add metadata to an image.
MAINTAINER - Specify the author of an image.
ONBUILD - Specify instructions for when the image is used in a build.
RUN - Execute build commands.
SHELL - Set the default shell of an image.

*********************************K8S*************************************

Keywords
k8s working, components, commands, pods, config-maps and secrets. Volumes, security

Working
A user deploys a Kubernetes manifest specifying the desired pod configuration. The manifest is submitted to the API server. The API server stores the configuration in etcd. The Controller Manager detects the desired state and instructs the kubelet on the worker nodes to start or stop containers. The kubelet communicates with the container runtime to execute the desired state. kube-proxy manages network rules, enabling communication between pods.
Commands
create, expose, run, set, get, explain, edit, delete, describe, logs, attach, exec, port-forward, proxy, cp
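
A few of these commands in use (the deployment name and image are illustrative; <pod-name> is a placeholder):

kubectl create deployment web --image=nginx:1.25            # create a deployment
kubectl expose deployment web --port=80 --type=ClusterIP    # expose it as a service
kubectl get pods                                             # list pods
kubectl describe pod <pod-name>                              # inspect a pod
kubectl logs <pod-name>                                      # view container logs
kubectl exec -it <pod-name> -- sh                            # open a shell inside the container
kubectl port-forward svc/web 8080:80                         # forward a local port to the service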

*********************************************************************
Helm Chart - Working
Helm charts are collections of pre-configured Kubernetes resources. They can be thought of as
packages that contain everything needed to deploy an application or service on Kubernetes. A Helm
chart includes detailed information about the application structure, its dependencies, and the
necessary configuration to run it on Kubernetes. Essentially, Helm charts standardize and simplify
the deployment process, making it easy to share and reuse configurations across different
environments or among different teams.
Chart.yaml: This is where you put the information related to your chart. That includes the chart version, name, and description so you can find it if you publish it to an open repository. In this file you can also set external dependencies using the dependencies key.
values.yaml: As we saw before, this is the file that contains the defaults for variables.
templates (dir): This is where you put all your manifest files. Everything in here will be passed on and created in Kubernetes.
charts (dir): If your chart depends on another chart you own, or if you don't want to rely on Helm's default library (the default registry Helm pulls charts from), you can bring this same structure inside this directory. Chart dependencies are installed from the bottom to the top, which means if chart A depends on chart B, and B depends on C, the installation order will be C -> B -> A.
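
A basic chart workflow on the command line (the chart and release names are placeholders):

helm create mychart                                       # scaffold Chart.yaml, values.yaml and templates/
helm lint mychart                                         # validate the chart
helm install my-release ./mychart                         # render the templates and deploy to Kubernetes
helm upgrade my-release ./mychart -f prod-values.yaml     # upgrade the release with overridden values
helm uninstall my-release                                 # remove the release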

Argo CD - Working: Argo CD is a Kubernetes-native continuous deployment (CD) tool. Unlike external CD tools that only enable push-based deployments, Argo CD can pull updated code from Git repositories and deploy it directly to Kubernetes resources.

GitOps - Working: GitOps is a way of implementing Continuous Deployment for cloud-native applications. It focuses on a developer-centric experience when operating infrastructure, by using tools developers are already familiar with, including Git and Continuous Deployment tools. A pull request is reviewed and the changes are merged to the main branch. This triggers a webhook which tells Argo CD a change was made. Argo CD clones the repo and compares the application state with the current state of the Kubernetes cluster.
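
A sketch of registering and syncing an application with the argocd CLI (the repo URL, path, and namespace are placeholders; check the exact flags against your Argo CD version):

argocd app create myapp \
  --repo https://github.com/example/k8s-manifests.git \
  --path overlays/prod \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace myapp
argocd app sync myapp        # compare the Git state with the cluster and apply the difference
argocd app get myapp         # show sync and health status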

*************************Prometheus Grafana*********************
Working
Prometheus/Grafana working, components, files

1. Scraping metrics: Prometheus scrapes metrics from each microservice every 15 seconds.
2. Storing metrics: These metrics are stored in Prometheus's time series database.
3. Querying metrics: The DevOps team queries these metrics to create dashboards that show the health of each microservice.
4. Alerting: Prometheus is configured to alert the team if the error rate of any microservice exceeds 5% over 5 minutes.
5. Notification: When an alert is triggered, Alertmanager sends a notification to the team's Slack channel.

Setting up the YAML File


The prometheus.yml file is written in YAML format, a human-readable data serialization language. It consists of three main sections:
- global: defines global settings for Prometheus, such as the scrape interval, evaluation interval, and retention period for metrics data.
- scrape_configs: defines one or more scraping jobs. Each job specifies a group of targets to scrape metrics from and how to scrape them.
- rule_files: (optional) specifies one or more files containing alerting rules. These rules define conditions that trigger alerts based on the collected metrics.
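
A minimal prometheus.yml written from the shell (the job name, target address, rule file name, and intervals are illustrative):

cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s        # how often to scrape targets
  evaluation_interval: 15s    # how often to evaluate rules
rule_files:
  - alert_rules.yml           # optional alerting rules
scrape_configs:
  - job_name: my-service
    static_configs:
      - targets: ['localhost:8080']
EOF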

For Grafana, first install it, then add Prometheus as a data source and import a dashboard.

*************************Splunk ***********************************
Working

Processing components: forwarders, indexers, search heads

Management components: deployment server, indexer cluster master node, search head deployer, license master, monitoring console

************************************Network***************************************
Keywords
Working: communication protocols, identification of source and destination, logical addressing of the host and process, connection handling
OSI layers: application, presentation, session, transport, network, data link, physical
TCP/IP layers: application, transport, network, network interface
Protocols: User Datagram Protocol (UDP), HTTP, FTP, NFS, SMTP

Commands: traceroute, netstat, ifconfig/hostname, route (routing table), nslookup, host, arp (Address Resolution Protocol), dig, ethtool
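
Quick examples (the hostnames and interface name are placeholders):

traceroute example.com          # show the hops from source to destination
netstat -tulnp                  # list listening ports and the owning processes
ifconfig                        # show interface addresses (or: ip addr)
route -n                        # print the routing table
nslookup example.com            # resolve a name via DNS
dig example.com +short          # DNS lookup with terse output
arp -a                          # show the ARP (address resolution) cache
ethtool eth0                    # query NIC settings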

********************************Linux********************************
Keywords: files, users, system, process, services

Working

Commands
File: ls, cd, pwd, mkdir, rm, cp, touch, cat, grep, find, tar, gzip, gunzip, zip, chmod, chown, chgrp, tail, sort, awk, sed, cut, tr, export, search, history
User: init, useradd, usermod, userdel, groupadd, groupmod, groupdel, su, su -, top (-o mem), kill, ifconfig, ping, ssh, scp, wget, curl, diff, head, chroot, file, hexdump, wc, tee, script, keys
System: which, whereis, locate, date, cal, start/stop/restart services, shutdown, ps, crontab, at, nohup, bg, fg, apropos, info, alias, uname, df, du, mount, ln
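
A few of these commands combined (the file names and paths are illustrative):

find /var/log -name "*.log" -mtime -1                  # log files modified in the last day
grep -i "error" /var/log/syslog | tail -n 20           # last 20 error lines
tar -czvf backup.tar.gz /etc/nginx                     # create a compressed archive
chmod 750 deploy.sh && chown root:devops deploy.sh     # adjust permissions and ownership
ps aux | sort -nrk 3 | head                            # top CPU-consuming processes
df -h && du -sh /var/log                               # disk usage summaries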

***************************************Shell*******************************
To write a shell script, first create the file and give it execution permissions, then start it with a shebang (#!/bin/bash). The read command reads user input (i.e. a line of text) from standard input (stdin). Command substitution enables the output of a command to be substituted as the value of a variable. It can be done in two ways: using backticks (`) or using the dollar sign ($(...)).
Argument passing
$0 - returns the file name of the shell script.
$@ - returns all arguments passed from the CLI.
$# - returns the number of arguments passed from the CLI.
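
A small sketch pulling these pieces together (greet.sh is a hypothetical script name):

#!/bin/bash
# usage: ./greet.sh Alice Bob
echo "Script name: $0"
echo "All arguments: $@"
echo "Argument count: $#"
read -p "Enter your team name: " team     # read user input from stdin
today=$(date +%F)                         # command substitution with $()
host=`hostname`                           # command substitution with backticks
echo "Hello $team, running on $host on $today"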
Arithmetic operations: (( )). We can also use the test command to work with arithmetic and string operations, which provides more flexibility along with unary operators.
Conditions [[]]
[[ -z STRING ]] - Empty string
[[ -n STRING ]] - Not empty string
[[ STRING == STRING ]] - Equal
[[ STRING != STRING ]] - Not equal
[[ NUM -eq NUM ]] - Equal
[[ NUM -ne NUM ]] - Not equal
[[ NUM -lt NUM ]] - Less than
[[ NUM -le NUM ]] - Less than or equal
[[ NUM -gt NUM ]] - Greater than
[[ NUM -ge NUM ]] - Greater than or equal
[[ ! EXPR ]] - Not
[[ X && Y ]] - And
[[ X || Y ]] - Or
File Conditions
[[ -e FILE ]] - Exists
[[ -r FILE ]] - Readable
[[ -h FILE ]] - Symbolic link
[[ -d FILE ]] - Directory
[[ -w FILE ]] - Writable file
[[ -s FILE ]] - File size is > 0 bytes
[[ -f FILE ]] - File
[[ -x FILE ]] - Executable file
Conditionals: if / elif with [[ ]]
The if else statement is a conditional that allows executing different commands based on whether a condition is true or false. Square brackets [[ ]] are used to evaluate the condition.
elif is a combination of else and if. It is used to create multiple conditional statements and must always be used in conjunction with an if else statement.

for while until break and continue


The for loop is used to iterate over a sequence of values (see the combined example after this block).
The while loop is used to execute a set of commands repeatedly as long as a certain condition is true. The loop continues until the condition becomes false.
The until loop is used to execute a block of code repeatedly until a certain condition is met.
break is a control statement used to exit a loop (for, while, or until) when a certain condition is met. Control is transferred outside the loop and resumes with the next set of lines in the script.

continue is a keyword used inside loops (for, while, and until) to skip the current iteration and move on to the next one. When continue is encountered, the remaining lines in that loop iteration are not executed and execution moves to the next iteration.
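
A combined sketch of for, while, until, break, and continue:

#!/bin/bash
for i in 1 2 3 4 5; do
  [[ $i -eq 3 ]] && continue     # skip this iteration when i is 3
  [[ $i -eq 5 ]] && break        # leave the loop entirely when i is 5
  echo "for: $i"
done

count=0
while [[ $count -lt 3 ]]; do     # runs while the condition is true
  echo "while: $count"
  ((count++))
done

until [[ $count -eq 0 ]]; do     # runs until the condition becomes true
  echo "until: $count"
  ((count--))
done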

Functions are a block of code which can be used again and again to perform a specific task, providing code reusability. Functions with return values: to access the return value of a function we use $?.
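
For example (the function name and host are illustrative):

#!/bin/bash
is_host_up() {
  ping -c 1 "$1" > /dev/null 2>&1   # reuse the same check for any host
  return $?                         # pass the exit status back to the caller
}
is_host_up "localhost"
echo "Return value: $?"             # $? holds the function's return value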
Arrays
An array is a variable that can hold multiple values under a single name

${arrayVarName[*]} - displays all the values of the array.


${#arrayVarName[@]} - displays the length of the array.
${arrayVarName[0]} - displays the first element of the array
${arrayVarName[-1]} - displays the last element of the array
unset arrayVarName[2] - deletes the element at index 2
Variables: A variable is a placeholder for saving a value which can later be accessed using that name. There are two types of variables:
Global - defined outside a function and accessible throughout the script
Local - defined inside a function and accessible only within it
Dictionaries In shell scripting, dictionaries are implemented using associative arrays. An associative
array is an array that uses a string as an index instead of an integer
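
A short sketch of an associative array (requires bash 4+; the keys and URLs are examples):

#!/bin/bash
declare -A env_urls                          # declare an associative array
env_urls[dev]="https://dev.example.com"
env_urls[prod]="https://prod.example.com"
echo "${env_urls[prod]}"                     # access a value by string key
for env in "${!env_urls[@]}"; do             # iterate over the keys
  echo "$env -> ${env_urls[$env]}"
done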
Set options
set -x - debug mode; prints each command before executing it
set -e - exits the script immediately when a command fails
set -o pipefail - makes a pipeline fail if any command in it fails, not just the last one

****************************************AWS**********************************

EC2 launch steps: choose an AMI, choose an instance type, configure the instance, add storage, add tags, configure the security group (SG).
An Amazon Machine Image (AMI) is used to create virtual servers (Amazon Elastic Compute Cloud, or EC2, instances) in the Amazon Web Services (AWS) environment. Different types of instances can be launched from a single AMI, matched to the hardware of the host computer used for the instance.
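
The same launch steps with the AWS CLI (the AMI ID, key pair, security group ID, and tag value are placeholders):

aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key \
  --security-group-ids sg-0123456789abcdef0 \
  --block-device-mappings 'DeviceName=/dev/xvda,Ebs={VolumeSize=8}' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=web-01}]'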

Template creation: an AWS CloudFormation template is a formatted text file in JSON or YAML that describes your AWS infrastructure. To create, view, and modify templates, you can use AWS CloudFormation Designer or any text editor. The template serves as the pattern from which stacks of resources are created.
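
Deploying a template with the AWS CLI (the file and stack names are placeholders):

aws cloudformation validate-template --template-body file://template.yaml   # sanity-check the template
aws cloudformation deploy --template-file template.yaml --stack-name my-stack
aws cloudformation describe-stacks --stack-name my-stack                    # inspect the created stack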
EBS (8 GB default root volume): AWS EBS stands for Amazon Elastic Block Store. EBS is a service that provides storage volumes that you can attach to Amazon EC2 instances. EBS volumes are used for data that needs to persist.
Elastic IP an Elastic IP address is a static IP address designed for dynamic cloud computing. An Elastic
IP address is associated with your AWS account. It's a public IP address, which is reachable from the
internet.
Load Balancing A load balancer serves as the single point of contact for clients. The load balancer
distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple
Availability Zones. This increases the availability of your application. You add one or more listeners
to your load balancer.
Route traffic,
Health check, SSL termination
IAM: with AWS Identity and Access Management (IAM), you can specify who or what can access services and resources in AWS, centrally manage fine-grained permissions, and analyze access to refine permissions across AWS.
SNS Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up,
operate, and send notifications from the cloud.
CloudWatch: CloudWatch enables you to monitor your complete stack (applications, infrastructure, network, and services) and use alarms, logs, and events data to take automated actions and reduce mean time to resolution (MTTR). This frees up important resources and allows you to focus on building applications and business value.
Lambda: AWS Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you.
Auto Scaling: AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to set up application scaling for multiple resources across multiple services in minutes.
RDS Amazon Relational Database Service (Amazon RDS) is a collection of managed services that
makes it simple to set up, operate, and scale databases in the cloud.
CloudFormation: AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS and third-party resources, and provision and manage them in an orderly and predictable fashion.
EFS: Amazon Elastic File System (Amazon EFS) is a simple, serverless, set-and-forget, elastic file system. There is no minimum fee or setup charge. You pay only for the storage you use, for read and write access to data stored in Infrequent Access storage classes, and for any provisioned throughput.
S3 Buckets A bucket is a container for objects. To store your data in Amazon S3, you first create a
bucket and specify a bucket name and AWS Region. Then, you upload your data to that bucket as
objects in Amazon S3. Each object has a key (or key name), which is the unique identifier for the
object within the bucket.
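
For example (the bucket name, region, and file are placeholders; bucket names must be globally unique):

aws s3 mb s3://my-unique-bucket-name --region us-east-1                 # create the bucket
aws s3 cp report.csv s3://my-unique-bucket-name/reports/report.csv     # upload an object; its key is reports/report.csv
aws s3 ls s3://my-unique-bucket-name/reports/                           # list objects under a prefix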
VPC A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically
isolated from other virtual networks in the AWS Cloud. You can specify an IP address range for the
VPC, add subnets, add gateways, and associate security groups.
Route 53: with Amazon Route 53, you can create and manage your public DNS records. Like a phone book, Route 53 lets you manage the IP addresses listed for your domain names in the Internet's DNS phone book. Route 53 also answers requests to translate specific domain names into their corresponding IP addresses.
