
Terraform: AWS region, CIDR, VPC, subnet, route table, route table association, EC2 AMI,
subnet ID, key

An Ansible playbook is used to install Jenkins on the launched EC2 instance
Jenkins CI/CD pipeline: environment defined; how Git, Maven (goals), and SonarQube (rules)
work; how the Trivy image scanner works
Backup, security, secrets, user management
How Docker works: container creation and exposing a port
Kubernetes: Pods, Deployments, Services, Helm, and Argo CD
Monitoring: Prometheus & Grafana
Logging: Splunk
************************************Cloud*****************************************
IAM: users, groups, policies, roles, cross-region replication, MFA
EC2: choose AMI, instance type, configure instance, add storage, add tags, security group, bootstrap
EC2 instance placement groups, shutdown behaviour
Troubleshooting: AMI, CloudWatch metrics, public IP vs Elastic IP
VPC: route table, NACL, security group
Internet gateway, route table, public subnet
Internet gateway, bastion host, NAT gateway, private subnet
Creating an AWS VPC

Public subnet vs private subnet, VPC peering, NACL vs SG, VPN

S3 in depth: all you need to know about S3:

MFA, default encryption, policies, access logs, Athena, Glacier, CloudFront, Snowball,
Storage Gateway, bucket object storage classes, lifecycle management

Databases
Databases: RDS, Aurora & ElastiCache; security, backup, snapshots, performance metrics,
troubleshooting, scalability

Route 53: how it works, hosted zones, record types, health checks

************************************Network***************************************
Communication protocols: identifying the source-to-destination path, the logical host and process,
and the connection
Network
A network connects two or more computers: DNS, IP address, protocol
Workstations and servers; hub, switch, bridge, router, brouter, gateway

OSI model: application, presentation, session, transport, network, data link, physical


TCP/IP model: application, transport, network, network interface
User Datagram Protocol (UDP), HTTP, FTP, NFS, SMTP
ping - check whether a remote system is running or not
traceroute - show the sequence of networks on the way to a destination
netstat - status of ports
ifconfig/hostname - find the IP address and netmask
route - display the route table
nslookup - name server lookup; information about the DNS server
host - network address information about a remote system on your network
arp - Address Resolution Protocol
dig - Domain Information Groper
ethtool - display and modify network interface controller (NIC) parameters and device driver
software
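
A few of these in action; the host name and interface are illustrative:

# Check whether a remote system is reachable (4 probes)
ping -c 4 example.com
# Trace the sequence of networks to the host
traceroute example.com
# Show listening TCP/UDP ports numerically
netstat -tuln
# Query DNS records for a name
dig example.com
# Show NIC parameters for interface eth0
ethtool eth0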

********************************Linux********************************
File verbs: create, delete, move, copy, paste, open, read, write; text processing; listing

grep - globally search for a regular expression pattern and print


Users: useradd, groups, size, permissions
System: cpuinfo, executable path, display date, locate
Processes: scheduling, running jobs
Services: start, stop

Users, files and partitions, LVM and RAID, network, security, booting
Job automation, administration, remote access, memory and swap
Software management, backup and restore
Services, processes
Server performance
Samba, DNS, DHCP, web server, mail server, MySQL, log server
Virtualization, Red Hat cluster, kickstart installation
Remote storage
Linux stack: user / apps / shell / kernel / hardware
Directories: bin, etc, home, opt, tmp, usr, var
/bin binaries, /etc system configuration files, /home home directories, /opt optional software,
/tmp temporary (cleared on reboot), /usr user programs, /var log files
Commands about files (manipulation, operation, and compression), text processing and manipulation,
system administration, processes and services, and finally network commands
LVM, memory, backup
Protocols
Servers

*************************************Terraform************************************
Terraform: how it works, components, commands, .tf files, state, provisioners, state backups, loops,
conditions
State locking and remote state

Commands:
init → backend, reconfigure, migrate-state, upgrade
plan → out, input, var, target
validate
apply → auto-approve, replace, var, parallelism
destroy → auto-approve, target
taint / untaint / refresh
workspace → delete, list, new, select, show
state → list, show, mv, rm, pull
output
import
Loops: the count meta-argument loops over resources; the for_each meta-argument loops over
resources and over inline blocks within a resource, for lists and maps

Splat expressions

Conditional expressions: condition ? true_val : false_val

Built-in functions: there are functions for numbers, strings, collections, the file system, date and
time, IP networks, type conversion, and more.
There are NO user-defined functions.
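
A small sketch of these loop, splat, and conditional expressions; the resource types and values are
illustrative assumptions:

variable "users" {
  type    = list(string)
  default = ["dev", "ops"]
}

variable "env" {
  default = "dev"
}

# count meta-argument: one IAM user per list element (length is a built-in function)
resource "aws_iam_user" "example" {
  count = length(var.users)
  name  = var.users[count.index]
}

# for_each meta-argument over a map (bucket names are hypothetical)
resource "aws_s3_bucket" "buckets" {
  for_each = { dev = "demo-dev-bucket", prod = "demo-prod-bucket" }
  bucket   = each.value
}

# splat expression: collect one attribute from all count instances
output "user_arns" {
  value = aws_iam_user.example[*].arn
}

# conditional expression
locals {
  instance_type = var.env == "prod" ? "t3.large" : "t3.micro"
}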

State: state locking, remote state

The backend determines how operations are executed and where the state is stored.
A Terraform module is a set of Terraform configuration files in a single directory.
When you run Terraform commands like terraform plan or terraform apply directly from
such a directory, then that directory will be considered the root module.
To create a resource on AWS, first write main.tf, where we specify the provider with the AWS region
and availability zone, then define the CIDR variable; the first resource is an AWS key pair.
The next thing is to write the VPC and its CIDR.
The next thing is the AWS subnet, whose components are VPC ID, CIDR block, availability zone, and
map public IP on launch. Provide an internet gateway with the VPC ID, then an AWS route table with
VPC ID, route CIDR, and gateway ID, and also provide the route table association (subnet ID with
route table ID).
Then provide the security group for the VPC for ingress and egress traffic, which means specifying
the from-port, to-port, protocol, and CIDR blocks, and provide a tag.
Last but not least is the AWS instance (server) with AMI, instance type, key name, VPC security group
ID, and subnet ID; provide the connection (type, user, and private key with host) if you want to
execute commands afterwards, and then define the provisioner (source and destination, or inline
details). To create an S3 bucket we need to provide the bucket name with a key and the region it will
be created in, encrypt it, and pair it with DynamoDB (for state locking).
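
A condensed main.tf sketch of that flow; the region, CIDRs, AMI ID, and key name are illustrative
assumptions, not values from these notes:

provider "aws" {
  region = "us-east-1"
}

variable "cidr" {
  default = "10.0.0.0/16"
}

resource "aws_vpc" "main" {
  cidr_block = var.cidr
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}

resource "aws_security_group" "web" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "server" {
  ami                    = "ami-0123456789abcdef0" # hypothetical AMI ID
  instance_type          = "t2.micro"
  key_name               = "my-key" # assumed existing key pair
  vpc_security_group_ids = [aws_security_group.web.id]
  subnet_id              = aws_subnet.public.id
}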
Application Load Balancer: name, internal, type, SG, subnets, enable deletion protection, access logs
(bucket, prefix, enabled), tag (environment)
Network Load Balancer: name, internal, type, SG, subnets, enable deletion protection, tag
Elastic IP: name, load balancer type, subnet mapping

***********************************Ansible*****************************************
Ansible: how it works, components, commands, files, roles, loops, conditions

Ansible on EC2: set up the hostname, create ansadmin users, add the users to the sudoers file,
generate SSH keys, enable password-based login, install Ansible, integrate Ansible with Jenkins

Ansible works by connecting to your nodes and pushing out small programs, called "Ansible modules",
to them. Ansible then executes these modules (over SSH by default) and removes them when
finished. Your library of modules can reside on any machine, and there are no servers, daemons, or
databases required. The management node is the controlling (managing) node, which controls the
entire execution of the playbook; it is the node from which you run the installation. The inventory
file provides the list of hosts where the Ansible modules need to be run; the management node makes
an SSH connection, executes the small modules on the host machines, and installs the
product/software. The beauty of Ansible is that it removes the modules once they have run:
effectively it connects to the host machine, executes the instructions, and, if the installation
succeeds, removes the code that was copied to the host machine and executed.

After installing Ansible, the next step is SSH key generation to connect with the nodes. To connect,
we write the inventory file with the private IP addresses of the nodes. To achieve passwordless
authentication between Ansible and a node, copy the Ansible server's public key into the node's
authorized_keys file.

Ansible ad hoc commands are used to perform quick actions. These commands are mainly used for
parallelism, shell commands, file transfer, packages, and services.
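
For example; the inventory path and group name are illustrative:

ansible all -i inventory -m ping                                                    # check connectivity in parallel
ansible webservers -i inventory -m copy -a "src=/tmp/app.conf dest=/etc/app.conf"   # file transfer
ansible webservers -i inventory -b -m yum -a "name=httpd state=present"             # install a package with sudo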

Playbooks
Roles
Variables
Error handling: block, rescue, and always
Loops: with_items and {{ item }}
Conditions: when, with logical OR and logical AND
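
A short playbook sketch tying these together; the host group, package names, and command path are
illustrative assumptions:

---
- name: Demo of loops, conditions, and error handling
  hosts: webservers            # hypothetical inventory group
  become: true
  tasks:
    - name: Install packages in a loop
      yum:
        name: "{{ item }}"
        state: present
      with_items:
        - git
        - java-17-openjdk      # assumed package name

    - name: Run only on RedHat-family hosts
      debug:
        msg: "RedHat-based host"
      when: ansible_os_family == "RedHat" or ansible_distribution == "Amazon"

    - name: Error handling
      block:
        - command: /usr/bin/might-fail   # hypothetical command
      rescue:
        - debug:
            msg: "Task failed, recovering"
      always:
        - debug:
            msg: "This always runs"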

*******************************Jenkins****************************************

Jenkins: how it works, components, commands, environment (login and PATH), backup, security,
secrets, users

Jenkins is an open-source continuous integration/continuous delivery and deployment (CI/CD)
automation DevOps tool written in the Java programming language. It is used to implement CI/CD
workflows, called pipelines.

Backup and restore Jenkins configurations:
Creating a backup. Filesystem snapshots. Plugins for backup. Writing a shell script for backups.
Back up the controller key separately.
Which files should be backed up? $JENKINS_HOME. Configuration files. The ./jobs subdirectory. ...
Validating a backup. Summary. Going further.

How do you handle secrets and credentials in Jenkins?


To do this, navigate to the "Credentials" page in the Jenkins settings. This is Jenkins' official
credential management tool. You can add, modify, and delete secrets as needed. You can also
specify which jobs or projects a secret is available for, and assign permissions to specific users or
groups.

How do you secure Jenkins?


Implement secure configurations for Jenkins, such as enforcing strong passwords, enabling two-
factor authentication, and configuring secure communication channels (e.g., SSL/TLS) to protect
sensitive data.

How do you manage users and roles in Jenkins? Managing user roles in Jenkins:
click on Manage Jenkins, then select Manage and Assign Roles. Note that Manage and Assign Roles is
only visible if you have installed the Role Strategy plugin.

CI/CD Jenkins pipeline


CI/CD: Checkout -> Build and Test -> Build and push Docker image -> Update the deployment file
Checkout -> Build Docker image -> Push the artifacts -> Check out the K8s manifests from SCM ->
update the K8s manifests and push to the repo
Pipeline: stages, stage, steps
Any build step or build wrapper defined in Pipeline can be used, e.g. sh, bat, powershell, timeout,
retry, echo, archive, junit, etc. script - execute a Scripted Pipeline block; when - execute a stage
conditionally

branch - expression - anyOf - allOf - not
parallel stage - stages are executed in parallel, but agent, environment, tools, and post may also
optionally be defined in a stage
environment - a sequence of "key = value" pairs to define environment variables; credentials('<id>')
(optional) - bind credentials to a variable
libraries - load shared libraries from an SCM; lib - the name of the shared library to load
options - options for the entire Pipeline:
skipDefaultCheckout - disable the automatic checkout scm; timeout - set a timeout for the entire
Pipeline; buildDiscarder - discard old builds; disableConcurrentBuilds - disable concurrent Pipeline
runs; ansiColor - color the log file output
tools - install predefined tools to be available on PATH
triggers - triggers to launch the Pipeline based on a schedule, etc.
parameters - parameters that are prompted for at run time

post - defines actions to be taken when pipeline or stage completes based on outcome.
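
A minimal declarative Jenkinsfile sketch of the stages above; the image name, Maven build, and
manifest path are illustrative assumptions, not the notes' actual pipeline:

pipeline {
    agent any
    environment {
        IMAGE = "myorg/myapp:${BUILD_NUMBER}" // hypothetical image tag
    }
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build and Test') {
            steps { sh 'mvn clean verify' }
        }
        stage('Build and Push Docker Image') {
            steps {
                sh 'docker build -t $IMAGE .'
                sh 'docker push $IMAGE'
            }
        }
        stage('Update Deployment File') {
            steps {
                // point the K8s manifest at the new image tag
                sh "sed -i 's|image:.*|image: ${IMAGE}|' k8s/deployment.yaml"
            }
        }
    }
    post {
        success { echo 'Pipeline succeeded' }
        failure { echo 'Pipeline failed' }
    }
}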

***********************************GIT******************************************
Staging and commits: init, add, commit, clone, stash save, ignore, fork, repository
Index, HEAD, origin, master, tags, upstream and downstream
Undoing changes: checkout, revert, reset, rm, cherry-pick
Branching & merging: branch, merge and merge conflicts, rebase, squash
Collaborating: fetch, pull, push
git branch / git switch - create a branch
git checkout <branch-name> - switch to that branch
git checkout -b <branch-name> / git switch -c <branch-name> - create and switch in one step
git merge - combine another branch into the current branch
git revert - undo changes in a repository's commit history
git reset - undo the changes in your working directory and get back to a specific commit while
discarding all the commits made after that one
git stash - acts as a version control tool that lets developers work on other activities or switch
branches in Git without having to discard or commit changes that aren't ready
git reflog - recover lost commits or branches in the repository
git rebase - the process of moving or combining a sequence of commits onto a new base commit
git cherry-pick - choose a commit from one branch and apply it to another branch
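
A typical flow with these commands; the branch name, remote, and commit hash are illustrative:

git switch -c feature/login         # create and switch to a new branch
git add . && git commit -m "Add login"
git fetch origin                    # update remote-tracking branches
git rebase origin/main              # replay commits onto the latest main
git push -u origin feature/login    # publish the branch
git cherry-pick 1a2b3c4             # apply a single commit from another branch (hypothetical hash)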

******************Maven**************************************

Maven: how it works, components, commands, goals

1. validate: Validates the project configuration.


2. compile: Compiles the source code into bytecode.
3. test: Runs the tests for the project.
4. package: Packages the compiled code and resources into an artifact (e.g., JAR, WAR).
5. install: Installs the artifact in the local repository.
6. deploy: Copies the artifact to a remote repository.
POM: an XML file listing dependencies, sources, libraries, plugins, and goals
Dependencies are external Java libraries.
Repositories are directories of packaged JAR files.
Build life cycles, phases, and goals: a build life cycle consists of a sequence of build phases, and
each build phase consists of a sequence of goals.
Build profiles: a build profile is a set of configuration values that allows you to build your project
using different configurations.
Build plugins: build plugins are used to perform a specific goal.
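
A minimal pom.xml sketch of that structure; the coordinates and dependency are illustrative:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>      <!-- hypothetical coordinates -->
  <artifactId>demo-app</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>

  <dependencies>
    <!-- external Java library pulled from a repository -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <!-- build plugin whose goals are bound to lifecycle phases -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.11.0</version>
      </plugin>
    </plugins>
  </build>
</project>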

Maven repositories:
Local = the developer's machine
Central = the Maven community
Remote = on a web server

*********************SonarQube analysis**********************


SonarQube: how it works, components, commands, rules

Running SonarQube locally, generating a token in SonarQube, analyzing source code, reviewing the
analysis results, and the SonarQube quality gate

Code scan → analyzer → database
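
For a Maven project the analysis step looks roughly like this; the URL and token are placeholders:

mvn sonar:sonar \
  -Dsonar.host.url=http://localhost:9000 \
  -Dsonar.token=<generated-token>   # older SonarQube versions use -Dsonar.login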

************************Trivy*****************************************************
Trivy: how it works, components, commands

Trivy is an open-source vulnerability scanner used for scanning container images, file systems, and
git repositories.

Deployment modes: standalone and client/server
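
Typical invocations; the scan targets are illustrative:

trivy image nginx:latest                      # scan a container image
trivy fs ./project                            # scan a local file system path
trivy repo https://github.com/example/repo    # scan a remote git repository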

************************************Docker****************************************
Docker: how it works, components, commands, Dockerfile, volumes, networking

Docker is an open-source centralized platform designed to create, deploy, and run applications.
Docker uses containers on the host's operating system to run applications. It allows applications to
use the same Linux kernel as the host computer, rather than creating a whole virtual operating
system. Containers ensure that our application works in any environment: development, test, or
production. The Docker host provides an environment to execute and run applications.

The Docker client uses commands and REST APIs to communicate with the Docker daemon (server).
When a client runs a docker command in the Docker client terminal, the terminal sends the command
to the Docker daemon as a command and REST API request. The Docker daemon runs on the host.
The Docker registry manages and stores Docker images. Docker images are the read-only binary
templates used to create Docker containers; they are the structural units of Docker and hold the
entire package needed to run an application. The advantage of containers is that they require very
few resources. In other words, the image is a template, and the container is a copy of that template.
Bridge - the default network driver for containers. It is used when multiple containers communicate
on the same Docker host.

Host - used when we don't need network isolation between the container and the host.

None - disables all networking.

Overlay - allows Swarm services to communicate with each other. It enables containers to run on
different Docker hosts.

Macvlan - used when we want to assign MAC addresses to the containers.

Docker storage is used to store data in the container.

Docker offers the following options for storage:

Data Volume - provides the ability to create persistent storage. It also allows us to name volumes,
list volumes, and list the containers associated with the volumes.
Directory Mounts - one of the best options for Docker storage; it mounts a host's directory into a
container.

Storage Plugins - provide the ability to connect to external storage platforms.

Docker Compose is a tool used to create and start a Docker application with a single command. We
use its file to configure our application's services. It is a great tool for development, testing, and
staging environments. Put application environment variables inside the Dockerfile to access them
publicly. Provide the service names in the docker-compose.yml file so they can be run together in an
isolated environment; run docker-compose up and Compose will start and run your entire app.
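
A minimal docker-compose.yml sketch along those lines; the images, port, and volume are
illustrative:

version: "3.8"
services:
  web:
    build: .                 # build from the local Dockerfile
    ports:
      - "8080:80"            # host:container port mapping
    depends_on:
      - db
  db:
    image: postgres:16       # assumed database image
    environment:
      POSTGRES_PASSWORD: example           # placeholder secret
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume for persistence
volumes:
  db-data: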

*********************************K8S*********************************
K8s: how it works, components, commands, pods, ConfigMaps and Secrets, volumes, security

A user deploys a Kubernetes manifest specifying the desired pod configuration. The manifest is
submitted to the API server. The API server stores the configuration in etcd. The Controller Manager
detects the desired state and instructs the kubelet on worker nodes to start or stop containers. The
kubelet communicates with the container runtime to execute the desired state. kube-proxy manages
network rules, enabling communication between pods.

Node (worker machine)

Namespace (virtual cluster)
Cluster (set of worker machines), Cluster Autoscaler
Control plane (collection of all control processes)
kubectl (command-line tool)
API server (front end)
etcd (cluster data)
Cloud controller manager
Scheduler
kube-proxy (network rules)
kubelet (an agent)
Container runtime
Pod (smallest unit of deployment): quotas, annotations, labels and selectors, liveness and readiness
probes, HPA, VPA, pod priority and preemption, taints and tolerations, node affinity, pod presets
Init containers must run to completion before the Pod can be ready; sidecar containers continue
running during a Pod's lifetime, and do support some probes.

Workloads and controllers

Deployment - updates to applications
Replicas - same as pods
ReplicaSets - a specific set of pods
StatefulSets - used for databases; StatefulSets can only be scaled up, scaled down, or deleted
DaemonSets - run in the background of the cluster to collect info such as logs; DaemonSets are
designed to run one Pod per Node
Deployments manage the rollout and rollback of application updates, providing fault tolerance and
minimal downtime
Jobs - a one-off task
CronJobs - automate Jobs on a schedule

Services and networking

Services are used to communicate with the outside world
Ingress is the entry point for traffic to a Pod
A load balancer is used to control traffic to a Pod
Service discovery: Kubernetes service discovery is an abstraction that allows an application
running on a set of Pods to be exposed as a network service. This enables a set of Pods to run using a
single DNS name, and allows Kubernetes load balancing across them all.
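
A small Deployment plus Service sketch; the names, image, and ports are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # desired pod count, reconciled by the controller
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # assumed image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP             # expose the pods behind one virtual IP / DNS name
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80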
Configuration and secrets

ConfigMaps decouple environment-specific configuration from your container images.
ConfigMaps are typically used for non-sensitive configuration data, while Secrets are used for storing
sensitive information. ConfigMaps store data as key-value pairs, whereas Secrets store data
base64-encoded, thereby ensuring an additional layer of security. Once we have created a Secret, it
can be consumed in a pod or a replication controller as:
an environment variable
a volume
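
A pod spec consuming a Secret both ways; the Secret and key names are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      env:
        - name: DB_PASSWORD          # consumed as an environment variable
          valueFrom:
            secretKeyRef:
              name: db-secret        # hypothetical Secret name
              key: password
      volumeMounts:
        - name: secret-vol           # consumed as a mounted volume
          mountPath: /etc/secret
          readOnly: true
  volumes:
    - name: secret-vol
      secret:
        secretName: db-secret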

Security and authorization

RBAC - role-based access control, using verbs, subjects, and objects
Security policies need to be applied to Pods
TLS certificates
A service account is used to access services from cloud management

Storage: PV (PersistentVolume). A PersistentVolume supports three types of reclaim policy:
Retain, Delete, Recycle
A PersistentVolume supports three types of access modes: ReadWriteOnce, ReadOnlyMany,
ReadWriteMany. ENV: environment
PVC (PersistentVolumeClaim)
A StorageClass provides a way for administrators to describe the classes of storage they offer.
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy
Add-ons: DNS, Web UI, container

********************commands***************************************
Create, read, update, and delete (CRUD)
Create, expose, run, set, get, explain, edit, delete
Describe, logs, attach, exec, port-forward, proxy, cp
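
A few of these commands in use; the resource names are illustrative:

kubectl run nginx --image=nginx            # create a pod
kubectl expose pod nginx --port=80         # create a service for it
kubectl get pods -o wide                   # read
kubectl edit deployment web                # update
kubectl delete pod nginx                   # delete
kubectl logs nginx                         # container logs
kubectl exec -it nginx -- sh               # shell into the container
kubectl port-forward pod/nginx 8080:80     # forward a local port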
*********************************************************************

Helm charts are collections of pre-configured Kubernetes resources. They can be thought of as
packages that contain everything needed to deploy an application or service on Kubernetes. A Helm
chart includes detailed information about the application structure, its dependencies, and the
necessary configuration to run it on Kubernetes. Essentially, Helm charts standardize and simplify
the deployment process, making it easy to share and reuse configurations across different
environments or among different teams.
Chart.yaml: This is where you'll put the information related to your chart. That includes the chart
version, name, and description so you can find it if you publish it on an open repository. Also in this
file you'll be able to set external dependencies using the dependencies key.
values.yaml: Like we saw before, this is the file that contains defaults for variables.
templates (dir): This is the place where you'll put all your manifest files. Everything in here will be
passed on and created in Kubernetes.
charts: If your chart depends on another chart you own, or if you don't want to rely on Helm's
default library (the default registry Helm pulls charts from), you can bring this same structure
inside this directory. Chart dependencies are installed from the bottom to the top, which means if
chart A depends on chart B, and B depends on C, the installation order will be C -> B -> A.
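
The resulting chart layout and a typical install; the chart and release names are illustrative:

mychart/
├── Chart.yaml        # chart name, version, description, dependencies
├── values.yaml       # default variable values
├── charts/           # dependency charts
└── templates/        # templated Kubernetes manifests

helm install my-release ./mychart --values custom-values.yaml
helm upgrade my-release ./mychart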

Argo CD is a Kubernetes-native continuous deployment (CD) tool. Unlike external CD tools that only
enable push-based deployments, Argo CD can pull updated code from Git repositories and deploy it
directly to Kubernetes resources.

What is GitOps?
GitOps is a way of implementing Continuous Deployment for cloud-native applications. It focuses on
a developer-centric experience when operating infrastructure, by using tools developers are already
familiar with, including Git and Continuous Deployment tools.

The pull request is reviewed and changes are merged to the main branch. This triggers a webhook
which tells Argo CD a change was made. Argo CD clones the repo and compares the application state
with the current state of the Kubernetes cluster.
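
A sketch of the Argo CD Application resource that points at such a repo; the URL, path, and
namespaces are illustrative:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/myapp-config   # hypothetical repo
    targetRevision: main
    path: k8s                                          # directory of manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:          # pull-based: keep the cluster synced to the Git state
      prune: true
      selfHeal: true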

*************************Prometheus Grafana*********************

Prometheus & Grafana: how they work, components, files

1. Scraping metrics: Prometheus scrapes metrics from each microservice every 15 seconds.
2. Storing metrics: these metrics are stored in Prometheus's time-series database.
3. Querying metrics: the DevOps team queries these metrics to create dashboards that show
the health of each microservice.
4. Alerting: Prometheus is configured to alert the team if the error rate of any microservice
exceeds 5% over 5 minutes.
5. Notification: when an alert is triggered, the Alertmanager sends a notification to the team's
Slack channel.

Setting up the YAML File


The prometheus.yml file is written in YAML format, a human-readable data serialization language. It
consists of three main sections:
- global: defines global settings for Prometheus, such as the scrape interval, evaluation interval,
and the retention period for metrics data.
- scrape_configs: defines one or more scraping jobs. Each job specifies a group of targets to scrape
metrics from and how to scrape them.
- rule_files: (optional) specifies one or more files containing alerting rules. These rules define
conditions that trigger alerts based on the collected metrics.
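
A minimal prometheus.yml along those lines; the job name, target, and rule file are illustrative:

global:
  scrape_interval: 15s        # how often to scrape targets
  evaluation_interval: 15s    # how often to evaluate rules

rule_files:
  - alert-rules.yml           # hypothetical alerting-rules file

scrape_configs:
  - job_name: myservice       # assumed microservice job
    static_configs:
      - targets: ["localhost:8080"]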

Grafana: first install it, then add Prometheus as a data source and import a dashboard.
*************************Splunk ***********************************
Splunk: how it works, components

Processing components: forwarders, indexers, search heads

Management components: deployment server, indexer cluster master node, search head deployer,
license master, monitoring console
***********************************Shellscript*************************************
Shebang
Permissions and execution
Read input from user
Command Substitution
Arguments passing
Arithmetic operations: (( ))
Conditions: [[ ]] with if and elif
Loops: for, while, until; break and continue
Functions with return values
Arrays
Variables
Dictionaries (associative arrays)
Set options
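
A compact script touching most of these constructs; the values and prompts are illustrative:

#!/usr/bin/env bash           # shebang
set -euo pipefail             # set options: fail fast on errors and unset variables

name="world"                  # variable
today=$(date +%F)             # command substitution
echo "arg1 is: ${1:-none}"    # argument passing with a default

read -r -p "Enter a number: " n   # read input from user
sum=$(( n + 10 ))                 # arithmetic

if [[ $sum -gt 10 ]]; then        # condition
  echo "sum=$sum"
elif [[ $sum -eq 10 ]]; then
  echo "exactly ten"
fi

fruits=(apple banana)             # array
declare -A ages=([bob]=30)        # dictionary (associative array)
for f in "${fruits[@]}"; do       # for loop; break and continue also work here
  echo "$f on $today, hello $name"
done

double() {                        # function returning a value via stdout
  echo $(( $1 * 2 ))
}
echo "double(4) = $(double 4)"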

Python literals: list [], tuple (), set {}

You might also like