*************************************Terraform************************************
Keywords main.tf, variables.tf, terraform.tfvars, workspace, provisioner, backend and state, loops, conditions
State locking and remote state
Working The first step in the Terraform workflow is to write the configuration. Then initialize the
Terraform working directory, validate the configuration, create an execution plan, and apply the
changes. When the infrastructure is no longer needed, destroy it.
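A minimal sketch of that workflow on the command line, assuming the current directory holds the *.tf files:
terraform init        # initialize the working directory and download providers
terraform validate    # check the configuration for errors
terraform plan        # create an execution plan
terraform apply       # apply the changes
terraform destroy     # tear the infrastructure down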
Commands init, plan, apply, destroy, taint, untaint, refresh, workspace, state, import
Loops: the count meta-argument loops over resources; the for_each meta-argument loops over
resources and over inline blocks within a resource, and works with lists and maps.
Built-in functions: there are functions for numbers, strings, collections, the file system, date and time, IP
networks, type conversion, and more. There are NO user-defined functions.
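The built-in functions can be tried interactively with terraform console; a small sketch:
echo 'max(5, 12, 9)' | terraform console                # -> 12
echo 'upper("hello")' | terraform console               # -> "HELLO"
echo 'cidrhost("10.0.0.0/16", 5)' | terraform console   # -> "10.0.0.5"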
***********************************Ansible****************************************
Keywords Inventory file, Playbook, roles, loops, conditions
Working
Ansible works by connecting to your nodes and pushing out small programs, called "Ansible
modules," to them. Ansible then executes these modules (over SSH by default) and removes them
when finished. Your library of modules can reside on any machine, and there are no servers,
daemons, or databases required. The management node is the controlling node (managing node),
which controls the entire execution of the playbook; it is the node from which you run the
installation. The inventory file provides the list of hosts where the Ansible modules need to be run;
the management node makes an SSH connection, executes the small modules on the host
machines, and installs the product/software. The beauty of Ansible is that it removes the modules
once they have run: it connects to the host machine, executes the instructions, and, if the
installation succeeds, removes the code that was copied to and executed on the host machine.
Ansible ad hoc commands are used to perform quick one-off actions. They are mainly used for
parallelism, shell commands, file transfer, and managing packages and services.
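A minimal sketch of ad hoc usage; the inventory file name and the "web" host group are examples:
ansible web -i hosts -m ping                               # check connectivity over SSH
ansible web -i hosts -m copy -a "src=app.conf dest=/tmp/"  # file transfer
ansible all -i hosts -a "uptime" -f 10                     # run in parallel across 10 forks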
*******************************Jenkins****************************************
Working
CI/CD Jenkins pipeline
CI/CD: Checkout --> Build and Test --> Build and push Docker image --> Update the deployment file
Checkout --> Build Docker image --> Push the artifacts --> Check out the K8s manifest from SCM -->
Update the K8s manifest and push it to the repo
Post - defines actions to be taken when the pipeline or a stage completes, based on the outcome.
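A rough sketch of what those stages run underneath, written as plain shell; the repo URL, registry, and file paths are placeholders:
git clone https://github.com/example/app.git && cd app     # Checkout
mvn test                                                   # Build and Test
docker build -t registry.example.com/app:v1 .              # Build Docker image
docker push registry.example.com/app:v1                    # Push the artifact
sed -i 's|image:.*|image: registry.example.com/app:v1|' k8s/deployment.yaml   # update the K8s manifest
git commit -am "Bump image tag" && git push                # push the manifest change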
***********************************GIT******************************************
Keywords staging and commits, undoing changes, branching and merging, and finally collaboration
with a remote repository
Working Set up a GitHub organization, fork the organization repository to your personal GitHub,
clone the repository to your local machine, and create a branch for your working files.
Set the remote repository to the GitHub organization, get coding, pull the most recent files from the
organization repo, merge the master branch into the feature branch, push your code to your
GitHub repo, and make a pull request to the organization repo.
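A minimal sketch of that workflow; the user, organization, and repo names are placeholders:
git clone https://github.com/<your-user>/example-repo.git
cd example-repo
git remote add upstream https://github.com/<org>/example-repo.git
git checkout -b feature-branch          # branch for your working files
# ...edit and test, then stage and commit:
git add .
git commit -m "Describe the change"
git fetch upstream                      # pull the most recent files from the org repo
git merge upstream/master               # merge master into the feature branch
git push origin feature-branch          # then open a pull request on GitHub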
Commands
Staging and commits: init, add, commit, clone, stash save, ignore, fork repository
Concepts: index, HEAD, origin, master, tags, upstream and downstream
Undoing changes: checkout, revert, reset, rm, cherry-pick
Branching & merging: branch, merge (and merge conflicts), rebase, squash
Collaborating: fetch, pull, push
********************************Maven**************************************
First I will list some common Maven commands with a brief explanation of what they do; after that
comes a description of the Maven command structure.
A Maven command consists of the mvn command followed by one or more build life cycles, build
phases or build goals:
mvn [build life cycle | build phase | build goal]
Here is a Maven command example:
mvn clean
This command consists of the mvn command, which executes Maven, and the build life cycle named
clean.
You might wonder how you can see the difference between a build life cycle, a build phase and a
build goal. I will get back to that later.
Maven has three major build life cycles: clean, default and site.
Inside each build life cycle there are build phases, and inside each build phase there are build goals.
You can execute either a build life cycle, build phase or build goal. When executing a build life cycle
you execute all build phases (and thus build goals) inside that build life cycle.
When executing a build phase you execute all build goals within that build phase. Maven also
executes all build phases that come earlier in the build life cycle than the desired build phase.
Build goals are assigned to one or more build phases. When the build phases are executed, so are all
the goals in that build phase. You can also execute a build goal directly.
To execute the clean build life cycle you execute this command:
mvn clean
To execute the site build life cycle you execute this command:
mvn site
Executing the Default Life Cycle
The default life cycle is the build life cycle which generates, compiles, packages etc. your source
code.
You cannot execute the default build life cycle directly, as is possible with the clean and site life
cycles. Instead you have to execute a specific build phase within the default build life cycle.
The most commonly used build phases in the default build life cycle are validate, compile, test,
package, install and deploy. For example:
mvn compile
This example Maven command executes the compile build phase of the default build life cycle. This
Maven command also executes all earlier build phases in the default build life cycle, meaning the
validate build phase.
You can also execute a build phase directly by passing its name to Maven, for example:
mvn pre-clean
mvn compile
mvn package
Maven will find out which build life cycle the specified build phase belongs to, so you don't need to
explicitly specify which build life cycle the build phase belongs to.
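Life cycles and phases can be combined in a single invocation; a common sketch:
mvn clean package    # run the clean life cycle, then the default life cycle up to the package phase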
***************************SonarQube***************************
Working
Running SonarQube locally, generating a token in SonarQube, analyzing source code, analyzing the
result, and the SonarQube quality gate.
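A minimal sketch of running an analysis from Maven; the host URL and token are placeholders:
mvn sonar:sonar -Dsonar.host.url=http://localhost:9000 -Dsonar.login=<generated-token>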
************************Trivy*****************************************************
Working
Trivy is an open-source vulnerability scanner used for scanning container images, file systems, and
Git repositories. It can run in standalone mode or in client/server mode.
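A minimal sketch of each scan target; the image and repo names are examples:
trivy image nginx:latest                         # scan a container image
trivy fs ./project                               # scan a file system path
trivy repo https://github.com/example/example    # scan a remote Git repository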
************************************Docker****************************************
*********************************K8S*********************************
Keywords
k8s working, components, commands, pods, ConfigMaps and Secrets, volumes, security
Working
A user deploys a Kubernetes manifest specifying the desired pod configuration. The manifest is
submitted to the API server, which stores the configuration in etcd. The Controller Manager
detects the desired state and instructs the kubelet on the worker nodes to start or stop
containers. The kubelet communicates with the container runtime to execute the desired
state. kube-proxy manages network rules, enabling communication between pods.
Commands
create, expose, run, set, get, explain, edit, delete
describe, logs, attach, exec, port-forward, proxy, cp
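A minimal sketch of these commands; the deployment and pod names are examples:
kubectl create deployment web --image=nginx:latest
kubectl expose deployment web --port=80
kubectl get pods -o wide
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl exec -it <pod-name> -- /bin/sh
kubectl port-forward <pod-name> 8080:80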
*********************************************************************
Helm Chart - Working
Helm charts are collections of pre-configured Kubernetes resources. They can be thought of as
packages that contain everything needed to deploy an application or service on Kubernetes. A Helm
chart includes detailed information about the application structure, its dependencies, and the
necessary configuration to run it on Kubernetes. Essentially, Helm charts standardize and simplify
the deployment process, making it easy to share and reuse configurations across different
environments or among different teams.
Chart.yaml: This is where you put the information related to your chart. That includes the chart
version, name, and description so you can find it if you publish it on an open repository. In this
file you can also set external dependencies using the dependencies key.
values.yaml: This is the file that contains the defaults for variables.
templates (dir): This is the place where you put all your manifest files. Everything in here will be
passed on and created in Kubernetes.
charts (dir): If your chart depends on another chart you own, or if you don't want to rely on Helm's
default library (the default registry Helm pulls charts from), you can bring this same structure
inside this directory. Chart dependencies are installed from the bottom to the top, which means if
chart A depends on chart B, and B depends on C, the installation order will be C -> B -> A.
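A minimal sketch of working with a chart; the chart and release names are examples:
helm create mychart                     # scaffold Chart.yaml, values.yaml, templates/, charts/
helm install my-release ./mychart       # render the templates and deploy to Kubernetes
helm upgrade my-release ./mychart --set replicaCount=3   # override a values.yaml default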
Argo CD - Working: Argo CD is a Kubernetes-native continuous deployment (CD) tool. Unlike external
CD tools that only enable push-based deployments, Argo CD can pull updated code from Git
repositories and deploy it directly to Kubernetes resources.
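A minimal sketch with the argocd CLI; the repo URL, path, and names are placeholders:
argocd app create example-app \
  --repo https://github.com/example/manifests.git \
  --path overlays/prod \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default
argocd app sync example-app     # pull the desired state from Git and apply it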
*************************Prometheus Grafana*********************
Working
Prometheus and Grafana: working, components, files
1. Scraping metrics: Prometheus scrapes metrics from each microservice every 15 seconds.
2. Storing metrics: these metrics are stored in Prometheus's time-series database.
3. Querying metrics: the DevOps team queries these metrics to create dashboards that show
the health of each microservice.
4. Alerting: Prometheus is configured to alert the team if the error rate of any microservice
exceeds 5% over 5 minutes.
5. Notification: when an alert is triggered, Alertmanager sends a notification to the team's
Slack channel.
For Grafana, first install it, then add Prometheus as a data source and import a dashboard.
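The stored metrics can also be queried directly through the Prometheus HTTP API; a small sketch assuming Prometheus on localhost:9090 (the metric name is hypothetical):
curl 'http://localhost:9090/api/v1/query?query=up'
curl -G 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=rate(http_requests_total{status="500"}[5m])'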
*************************Splunk ***********************************
Working
************************************Network***************************************
Keywords
Working: communication protocols; identifying source to destination; logical addressing of host and
process; connection handling.
OSI: application, presentation, session, transport, network, data link, physical
TCP/IP: application, transport, network, network interface
Protocols: User Datagram Protocol (UDP), HTTP, FTP, NFS, SMTP
Commands: traceroute, netstat, ifconfig/hostname, route (routing table), nslookup, host, arp
(Address Resolution Protocol), dig, ethtool
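A minimal sketch of the common ones; hostnames and the interface name are examples:
traceroute example.com     # hops from source to destination
netstat -tulpn             # listening sockets and owning processes
nslookup example.com       # DNS lookup
dig example.com +short     # concise DNS answer
arp -n                     # ARP cache: IP-to-MAC resolution
ethtool eth0               # NIC settings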
********************************Linux********************************
Keywords: files, users, system, process, services
Working
Commands
File: ls, cd, pwd, mkdir, rm, cp, touch, cat, grep, find, tar, gzip, gunzip, zip, chmod, chown, chgrp,
tail, sort, awk, sed, cut, tr, export, history
User: init, useradd, usermod, userdel, groupadd, groupmod, groupdel, su, su -, top (-o mem), kill,
ifconfig, ping, ssh, scp, wget, curl, diff, head, chroot, file, hexdump, wc, tee, script, keys
System: which, whereis, locate, date, cal, start/stop/restart services, shutdown, ps, crontab, at,
nohup, bg, fg, apropos, info, alias, uname, df, du, mount, ln
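A minimal sketch combining a few of these; paths are examples:
find /var/log -name "*.log" -mtime -1     # log files modified in the last day
tar -czf backup.tar.gz /etc/nginx         # create a gzip-compressed archive
chmod 755 deploy.sh; chown root:root deploy.sh
ps aux | grep nginx                       # inspect running processes
df -h; du -sh /var/log                    # free disk space and directory size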
***************************************Shell*******************************
To write a shell script, first create the file and give it execute permissions, then write the
shebang (#!/bin/bash). The read command reads user input (i.e. a line of text) from the
standard input stdin. Command substitution enables the output of a command to be
substituted as the value of a variable. It can be done in two ways: using backticks (`) or using the
dollar sign ($(...)).
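A minimal sketch putting those pieces together (save as greet.sh, then chmod +x greet.sh && ./greet.sh):
#!/bin/bash
read -p "Enter your name: " name    # read user input from stdin
today=`date +%F`                    # command substitution with backticks
host=$(hostname)                    # command substitution with $()
echo "Hello $name, it is $today on $host"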
Arguments passing
$0 - returns the file name of the shell script
$@ - returns all arguments passed from the CLI
$# - returns the number of arguments passed from the CLI
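A small sketch (run e.g. as ./args.sh one two three):
#!/bin/bash
echo "Script name: $0"
echo "All arguments: $@"
echo "Argument count: $#"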
Arithmetic operations use (( )). We can also use the test command to work with arithmetic and string
operations, which provides more flexibility along with unary operators.
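A minimal sketch of both forms:
a=7; b=3
sum=$((a + b))                             # arithmetic expansion
echo "sum=$sum"
if (( a > b )); then echo "a is larger"; fi
test "$a" -gt "$b" && echo "test agrees"   # the test command equivalent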
Conditions [[ ]]
[[ -z STRING ]] - Empty string
[[ -n STRING ]] - Not empty string
[[ STRING == STRING ]] - Equal
[[ STRING != STRING ]] - Not equal
[[ NUM -eq NUM ]] - Equal
[[ NUM -ne NUM ]] - Not equal
[[ NUM -lt NUM ]] - Less than
[[ NUM -le NUM ]] - Less than or equal
[[ NUM -gt NUM ]] - Greater than
[[ NUM -ge NUM ]] - Greater than or equal
[[ ! EXPR ]] - Not
[[ X && Y ]] - And
[[ X || Y ]] - Or
File Conditions
[[ -e FILE ]] - Exists
[[ -r FILE ]] - Readable
[[ -h FILE ]] - Symbolic link
[[ -d FILE ]] - Directory
[[ -w FILE ]] - Writable file
[[ -s FILE ]] - File size is > 0 bytes
[[ -f FILE ]] - File
[[ -x FILE ]] - Executable file
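A small sketch combining the string and file tests above:
file="/etc/passwd"
if [[ -f $file && -r $file ]]; then
  echo "$file exists and is readable"
fi
name=""
[[ -z $name ]] && echo "name is empty"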
Conditionals and loops: if, elif, [[ ]]
if-else is a conditional statement that allows executing different commands based on whether a
condition is true or false. Here square brackets [[ ]] are used to evaluate the condition.
elif is a combination of else and if. It is used to create multiple conditional statements, and it
must always be used in conjunction with an if-else statement.
continue is a keyword used inside loops (such as for, while, and until) to skip the current
iteration of the loop and move on to the next iteration. When the continue keyword is
encountered while executing a loop, the remaining lines in that iteration are not executed and
control moves to the next iteration.
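A minimal sketch of if/elif/else with continue inside a for loop:
for n in 1 2 3 4 5; do
  if (( n == 3 )); then
    continue               # skip this iteration
  elif (( n > 4 )); then
    echo "$n is large"
  else
    echo "$n"
  fi
done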
Functions are a block of code that can be used again and again for a specific task, thus
providing code reusability. For functions with return values, we need to use $? after the call
to access that value.
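A small sketch of a function whose return status is read via $?:
is_even() {
  (( $1 % 2 == 0 ))        # the exit status of (( )) becomes the return value
}
is_even 4
echo "return status: $?"   # 0 means true/success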
Arrays
An array is a variable that can hold multiple values under a single name
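A minimal sketch of bash arrays:
tools=("terraform" "ansible" "docker")
echo "${tools[0]}"      # first element
echo "${tools[@]}"      # all elements
echo "${#tools[@]}"     # number of elements
tools+=("kubectl")      # append an element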
****************************************AWS**********************************
EC2 launch steps: choose an AMI, choose an instance type, configure the instance, add storage, add
tags, and configure the security group (SG).
An Amazon Machine Image (AMI) is used to create virtual servers (Amazon Elastic Compute Cloud, or
EC2, instances) in the Amazon Web Services (AWS) environment. Different instance types can be
launched from a single AMI; the instance type determines the hardware of the host computer used
for the instance.
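A minimal sketch of launching an instance with the AWS CLI; the AMI ID, key pair, and security group are placeholders:
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key \
  --security-group-ids sg-0123456789abcdef0 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=web}]'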
Template creation: an AWS CloudFormation template is a formatted text file in JSON or YAML that
describes your AWS infrastructure. To create, view, and modify templates, you can use AWS
CloudFormation Designer or any text editor.
EBS (8 GB default root volume): AWS EBS is also called Amazon Elastic Block Store. EBS is a service
that provides storage volumes. You can use the provided storage volumes in Amazon EC2 instances.
EBS volumes are used for data that needs to persist.
Elastic IP: an Elastic IP address is a static IP address designed for dynamic cloud computing. An Elastic
IP address is associated with your AWS account. It's a public IP address, which is reachable from the
internet.
Load balancing: a load balancer serves as the single point of contact for clients. The load balancer
distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple
Availability Zones. This increases the availability of your application. You add one or more listeners
to your load balancer.
Key functions: routing traffic, health checks, SSL termination.
IAM: with AWS Identity and Access Management (IAM), you can specify who or what can access
services and resources in AWS, centrally manage fine-grained permissions, and analyze access to
refine permissions across AWS.
SNS: Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up,
operate, and send notifications from the cloud.
CloudWatch: CloudWatch enables you to monitor your complete stack (applications, infrastructure,
network, and services) and use alarms, logs, and events data to take automated actions and reduce
mean time to resolution (MTTR). This frees up important resources and allows you to focus on
building applications and business value.
Lambda: AWS Lambda is a serverless compute service that runs your code in response to events and
automatically manages the underlying compute resources for you.
Auto Scaling: AWS Auto Scaling monitors your applications and automatically adjusts capacity to
maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's
easy to set up application scaling for multiple resources across multiple services in minutes.
RDS: Amazon Relational Database Service (Amazon RDS) is a collection of managed services that
makes it simple to set up, operate, and scale databases in the cloud.
CloudFormation: AWS CloudFormation is a service that gives developers and businesses an easy way
to create a collection of related AWS and third-party resources, and provision and manage them in
an orderly and predictable fashion.
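A minimal sketch with the AWS CLI; the stack and template names are examples:
aws cloudformation create-stack --stack-name example-stack --template-body file://template.yaml
aws cloudformation describe-stacks --stack-name example-stack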
EFS: Amazon Elastic File System (Amazon EFS) is a simple, serverless, set-and-forget, elastic file
system. There is no minimum fee or setup charge. You pay only for the storage you use, for read and
write access to data stored in Infrequent Access storage classes, and for any provisioned throughput.
S3 buckets: a bucket is a container for objects. To store your data in Amazon S3, you first create a
bucket and specify a bucket name and AWS Region. Then you upload your data to that bucket as
objects in Amazon S3. Each object has a key (or key name), which is the unique identifier for the
object within the bucket.
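A minimal sketch; the bucket name is a placeholder:
aws s3 mb s3://example-bucket-name --region us-east-1    # create a bucket
aws s3 cp ./data.csv s3://example-bucket-name/data.csv   # upload an object with key "data.csv"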
VPC: a virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically
isolated from other virtual networks in the AWS Cloud. You can specify an IP address range for the
VPC, add subnets, add gateways, and associate security groups.
Route 53: with Amazon Route 53, you can create and manage your public DNS records. Like a phone
book, Route 53 lets you manage the IP addresses listed for your domain names in the internet's DNS
phone book. Route 53 also answers requests to translate specific domain names (for example,
www.example.com) into their corresponding IP addresses (for example, 192.0.2.1).