Databases: RDS, Aurora & ElastiCache - security, backup, snapshots, performance metrics,
troubleshooting, scalability
************************************Network***************************************
Communication protocol
Identification: an IP address identifies the logical host (source to destination) and a port
identifies the process
Connection
Network: a network connects two or more computers (workstations and servers); DNS, IP addresses
and protocols make the connection work
Devices: hub, switch, bridge, router, brouter, gateway
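A quick shell sketch of those pieces in action (example.com stands in for any host):

# DNS: resolve a name to an IP address
nslookup example.com
# IP: test reachability of the destination host
ping -c 3 example.com
# protocol + port: talk to a specific process on the host (HTTP on port 80)
curl -v http://example.com:80/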
********************************Linux********************************
File verbs: create, delete, move, copy, paste, open, read, write; text processing; listing
Users, files and partitions, LVM and RAID, network, security, booting
Job automation, administration, remote access, memory and swap
Software management, backup and restore
Services, processes
Server performance
Samba, DNS, DHCP, web server, mail server, MySQL, log server
Virtualization, Red Hat cluster, kickstart installation
Remote storage
Linux stack: user / apps / shell / kernel / hardware
Directories: bin, etc, home, opt, tmp, usr, var
bin = binaries, etc = system configuration files, home = user home directories, opt = optional
software, tmp = temporary (cleared on reboot), usr = user programs, var = log files
Commands: files (manipulation, operation and compression), text processing and manipulation,
system administration, processes and services, and finally network commands (see the sketch
after this list)
LVM, memory, backup
Protocol
Servers
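A few representative commands for those categories (file names are placeholders):

# file manipulation and compression
cp notes.txt /tmp/ && mv /tmp/notes.txt /tmp/notes.bak
tar -czf notes.tar.gz notes.txt
# text processing
grep -i error /var/log/syslog | awk '{print $1, $2, $3}' | sort | uniq -c
# system administration: processes and services
ps aux | grep sshd
systemctl status sshd
# network
ip addr show
ss -tulpn
# LVM and memory
lvs && vgs && pvs
free -h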
*************************************Terraform************************************
Terraform: working, components, commands, .tf files, state, provisioners, state backup, loops,
conditions
State locking and remote state
Splat expressions (e.g. aws_instance.example[*].id)
Built-in functions: there are functions for numbers, strings, collections, the filesystem, date
and time, IP networks, type conversions, and more.
No user-defined functions.
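A minimal sketch of remote state with locking, plus a built-in function and a loop; the S3 bucket
and DynamoDB table names are placeholders you would create first:

cat > main.tf <<'EOF'
terraform {
  backend "s3" {
    bucket         = "example-tf-state"     # remote state storage (assumed name)
    key            = "demo/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks"     # enables state locking (assumed name)
  }
}

variable "names" {
  default = ["web", "db", "cache"]
}

output "upper_names" {
  value = [for n in var.names : upper(n)]   # for expression + upper() built-in
}
EOF
terraform init && terraform apply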
***********************************Ansible*****************************************
Ansible: working, components, commands, files, roles, loops, conditions
Ansible with EC2: set the hostname, create ansadmin users, add users to the sudoers file,
generate SSH keys, enable password-based login, install Ansible, integrate Ansible with Jenkins
Ansible works by connecting to your nodes and pushing out small programs, called "Ansible
modules", to them. Ansible then executes these modules (over SSH by default) and removes them
when finished. Your library of modules can reside on any machine, and no servers, daemons, or
databases are required. The management node (managing node) is the controlling node that drives
the entire execution of the playbook; it is the node from which you run the installation. The
inventory file provides the list of hosts where the Ansible modules need to run. The management
node makes an SSH connection to each host, executes the small modules there, and installs the
product or software. The beauty of Ansible is that it removes the modules once they have run:
effectively it connects to the host machine, executes the instructions, and, on success, removes
the code that was copied to the host.
After installing Ansible, the next step is SSH key generation so the control node can connect to
the managed nodes. Write the private IP addresses of the nodes into the inventory file. To
achieve passwordless authentication from Ansible to a node, copy the public key of the Ansible
server into the authorized_keys file on the node.
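A minimal sketch of that key setup; the user name and IP addresses are placeholders:

ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519   # generate a key pair on the Ansible server
ssh-copy-id user@10.0.1.10                         # append the public key to the node's authorized_keys
cat > inventory.ini <<'EOF'
[web]
10.0.1.10
10.0.1.11
EOF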
Ansible ad hoc commands are used to perform quick one-off actions. They are mainly used for
parallelism, shell commands, file transfer, packages and services.
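Hedged examples against the inventory above (module arguments are placeholders):

ansible -i inventory.ini all -m ping                                            # connectivity check
ansible -i inventory.ini all -a "uptime" -f 10                                  # shell, 10 parallel forks
ansible -i inventory.ini web -m copy -a "src=app.conf dest=/tmp/app.conf"       # file transfer
ansible -i inventory.ini web -m yum -a "name=httpd state=present" --become      # package
ansible -i inventory.ini web -m service -a "name=httpd state=started" --become  # service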
Playbooks
Roles
Variables
Error handling: block, rescue and always
Loops: with_items and {{ item }}
Conditions: when with logical OR and logical AND
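A small playbook sketch tying those pieces together (package names and the host group are
placeholders):

cat > site.yml <<'EOF'
- hosts: web
  become: true
  vars:
    pkgs: [httpd, git]
  tasks:
    - name: install packages                 # loop over a variable
      yum:
        name: "{{ item }}"
        state: present
      with_items: "{{ pkgs }}"
      when: ansible_distribution == "CentOS" or ansible_distribution == "RedHat"  # condition
    - block:                                 # error handling
        - name: start the web server
          service:
            name: httpd
            state: started
      rescue:
        - debug:
            msg: "start failed"
      always:
        - debug:
            msg: "done"
EOF
ansible-playbook -i inventory.ini site.yml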
*******************************Jenkins****************************************
Jenkins: working, components, commands, environment (login and PATH), backup, security, secrets,
users
How do you manage users and roles in Jenkins? Click on Manage Jenkins, then select Manage and
Assign Roles. Note that Manage and Assign Roles is only visible if you have installed the Role
Strategy plugin.
Declarative pipeline directives:
when - branch, expression, anyOf, allOf, not
parallel - stages executed in parallel; agent, environment, tools and post may also optionally
be defined per stage
environment - a sequence of "key = value" pairs to define environment variables;
credentials('<id>') (optional) binds credentials to a variable
libraries - load shared libraries from an SCM; lib is the name of the shared library to load
options - options for the entire pipeline: skipDefaultCheckout (disable auto checkout from SCM),
timeout (sets a timeout for the entire pipeline), buildDiscarder (discard old builds),
disableConcurrentBuilds (disable concurrent pipeline runs), ansiColor (color the log file output)
tools - installs predefined tools to be available on PATH
triggers - triggers to launch the pipeline based on a schedule, etc.
parameters - parameters that are prompted for at run time
post - defines actions to be taken when the pipeline or a stage completes, based on outcome
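A minimal declarative Jenkinsfile sketch using those directives; the credential id is a
placeholder:

cat > Jenkinsfile <<'EOF'
pipeline {
  agent any
  options {
    timeout(time: 30, unit: 'MINUTES')              // timeout for the entire pipeline
    disableConcurrentBuilds()                       // no concurrent runs
    buildDiscarder(logRotator(numToKeepStr: '10'))  // discard old builds
  }
  environment {
    APP_ENV = 'dev'                                 // key = value pair
    TOKEN   = credentials('example-token-id')       // bind a credential (assumed id)
  }
  triggers { cron('H 2 * * *') }                    // scheduled trigger
  parameters { string(name: 'TARGET', defaultValue: 'staging') }
  stages {
    stage('Build') {
      when { branch 'main' }                        // when condition
      steps { sh 'echo building for $TARGET' }
    }
  }
  post { always { echo 'pipeline finished' } }      // post actions based on outcome
}
EOF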
***********************************GIT******************************************
Staging and commits: init, add, commit, clone, stash save, ignore, fork, repository
Index, HEAD, origin, master, tags, upstream and downstream
Undoing changes: checkout, revert, reset, rm, cherry-pick
Branching & merging: branch, merge and merge conflicts, rebase, squash
Collaborating: fetch, pull, push
git branch <name> - create a branch
git checkout <name> - switch to that branch
git checkout -b <name> or git switch -c <name> - create the branch and switch to it in one step
git merge <branch> - merge another branch into the current branch
git revert - undoes changes by creating a new commit that reverses an existing commit
git reset - undoes the changes in your working directory and gets back to a specific commit
while discarding all the commits made after that one
git stash acts as a version control tool and lets developers work on other activities or switch
branches in Git without having to discard or commit changes that aren't ready.
git reflog recovers lost commits or branches in the repository
git rebase moves or combines a sequence of commits onto a new base commit
git cherry-pick choosing a commit from one branch and applying it to another branch.
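A short sketch of that flow (branch and file names are placeholders):

git switch -c feature/login            # create and switch to a branch
echo "login page" > login.html
git add login.html && git commit -m "add login page"
git stash                              # shelve uncommitted work without committing
git stash pop                          # bring it back
git switch main
git merge feature/login                # merge the feature branch into main
git revert HEAD                        # undo the last commit with a new reversing commit
git reflog                             # inspect where HEAD has been, to recover lost commits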
******************Maven**************************************
Maven repositories:
Local = the developer's machine
Central = the Maven community
Remote = on a web server
Running SonarQube locally, generating a token in SonarQube, analyzing source code, analyzing the
result, and the SonarQube quality gate
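A hedged sketch of a local analysis with Maven; the URL and token are placeholders:

mvn clean verify sonar:sonar \
  -Dsonar.host.url=http://localhost:9000 \
  -Dsonar.token=<generated-token>        # older scanner versions use -Dsonar.login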
************************Trivy*****************************************************
Trivy: working, components, commands
Trivy is an open-source vulnerability scanner used for scanning container images, file systems,
and Git repositories.
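Hedged examples (image, path and repository are placeholders):

trivy image nginx:1.25                        # scan a container image
trivy fs ./myproject                          # scan a local file system
trivy repo https://github.com/example/app     # scan a remote Git repository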
************************************Docker****************************************
Docker: working, components, commands, Dockerfile, volumes, networking
Docker is an open-source centralized platform designed to create, deploy, and run applications.
Docker uses containers on the host's operating system to run applications. It allows applications
to use the same Linux kernel as the host computer, rather than creating a whole virtual operating
system. Containers ensure that our application works in any environment: development, test, or
production. The Docker host provides an environment to execute and run applications.
The Docker client uses commands and REST APIs to communicate with the Docker daemon (server).
When a client runs any docker command on the client terminal, the terminal sends these commands
to the Docker daemon, which receives them in the form of commands and REST API requests. The
Docker daemon runs on the host.
The Docker registry manages and stores Docker images. Docker images are the read-only binary
templates used to create Docker containers; they are the structural units of Docker and hold the
entire package needed to run the application. The advantage of containers is that they require
very few resources. In other words, the image is a template, and the container is a copy of that
template.
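A quick sketch of that client/daemon/registry flow (nginx is a public image; other names are
placeholders):

docker pull nginx:1.25                             # daemon fetches an image from the registry
docker run -d --name web -p 8080:80 nginx:1.25     # create a container from the image template
docker ps                                          # list running containers
docker logs web
docker stop web && docker rm web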
Bridge - the default network driver for containers. It is used when multiple containers
communicate on the same Docker host.
Host - used when we don't need network isolation between the container and the host.
Overlay - allows Swarm services to communicate with each other. It enables containers to run on
different Docker hosts.
Macvlan - used when we want to assign MAC addresses to the containers.
Docker storage is used to store data in the container.
Data volume - provides the ability to create persistent storage. It also allows us to name
volumes, list volumes, and see which containers are associated with the volumes.
Directory mounts (bind mounts) - one of the best options for Docker storage. A directory mount
mounts a host directory into a container.
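Hedged examples of those drivers and storage options (names are placeholders):

docker network create --driver bridge appnet            # user-defined bridge network
docker run -d --name db --network appnet \
  -e MYSQL_ROOT_PASSWORD=example \
  -v dbdata:/var/lib/mysql mysql:8                      # named data volume
docker volume ls
docker run -d --name web2 \
  -v /srv/site:/usr/share/nginx/html nginx:1.25         # directory (bind) mount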
Docker Compose is a tool used to create and start a Docker application with a single command. We
use a file to configure our application's services. It is a great tool for development, testing,
and staging environments. Put application environment variables inside the Dockerfile to access
them publicly. Provide service names in the docker-compose.yml file so they can be run together
in an isolated environment; run docker-compose up and Compose will start and run your entire app.
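A minimal docker-compose.yml sketch (service and volume names are placeholders):

cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:1.25
    ports:
      - "8080:80"
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
EOF
docker-compose up -d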
*********************************K8S*****************************
K8s: working, components, commands, pods, ConfigMaps and Secrets, volumes, security
Storage: PV (persistent volume). A persistent volume supports three types of reclaim policy:
Retain, Delete, Recycle
A persistent volume supports three types of access modes: ReadWriteOnce, ReadOnlyMany,
ReadWriteMany
ENV - environment variables
PVC (persistent volume claim)
A Storage Class provides a way for administrators to describe the classes of storage they offer.
Each Storage Class contains the fields provisioner, parameters, and reclaimPolicy
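A minimal StorageClass + PVC sketch; the names and provisioner are placeholders:

cat > pvc.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs      # assumed provisioner
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: fast
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
EOF
kubectl apply -f pvc.yaml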
Add-ons: DNS, Web UI (Dashboard), container monitoring
********************commands***************************************
Create, Read, Update and Delete (CRUD)
Create: create, expose, run, set; Read: get, explain; Update: edit; Delete: delete
Debugging: describe, logs, attach, exec, port-forward, proxy, cp
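Hedged examples of those verbs (the pod name is a placeholder):

kubectl run web --image=nginx:1.25         # create a pod
kubectl expose pod web --port=80           # create a service for it
kubectl get pods -o wide                   # read
kubectl describe pod web                   # debug
kubectl logs web
kubectl exec -it web -- /bin/sh
kubectl port-forward pod/web 8080:80
kubectl delete pod web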
*********************************************************************
Helm charts are collections of pre-configured Kubernetes resources. They can be thought of as
packages that contain everything needed to deploy an application or service on Kubernetes. A Helm
chart includes detailed information about the application structure, its dependencies, and the
necessary configuration to run it on Kubernetes. Essentially, Helm charts standardize and simplify
the deployment process, making it easy to share and reuse configurations across different
environments or among different teams.
Chart.yaml: This is where you'll put the information related to your chart. That includes the
chart version, name, and description so you can find it if you publish it on an open repository.
Also in this file you'll be able to set external dependencies using the dependencies key.
values.yaml: Like we saw before, this is the file that contains defaults for variables.
templates (dir): This is the place where you’ll put all your manifest files. Everything in here will be
passed on and created in Kubernetes.
charts: If your chart depends on another chart you own, or if you don't want to rely on Helm's
default library (the default registry Helm pulls charts from), you can bring this same structure
inside this directory. Chart dependencies are installed from the bottom up, which means if chart
A depends on chart B, and B depends on C, the installation order will be C -> B -> A.
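A quick sketch of creating and installing a chart (names are placeholders):

helm create mychart                                 # scaffolds Chart.yaml, values.yaml, templates/, charts/
helm lint mychart
helm install demo ./mychart --set replicaCount=2    # override a values.yaml default
helm upgrade demo ./mychart
helm uninstall demo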
Argo CD is a Kubernetes-native continuous deployment (CD) tool. Unlike external CD tools that only
enable push-based deployments, Argo CD can pull updated code from Git repositories and deploy it
directly to Kubernetes resources.
What is GitOps?
GitOps is a way of implementing Continuous Deployment for cloud-native applications. It focuses on
a developer-centric experience when operating infrastructure, by using tools developers are already
familiar with, including Git and Continuous Deployment tools.
The pull request is reviewed and changes are merged to the main branch. This triggers a webhook
which tells Argo CD a change was made. Argo CD clones the repo and compares the application state
with the current state of the Kubernetes cluster.
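A minimal Argo CD Application sketch matching that flow; the repo URL, path and namespaces are
placeholders:

cat > app.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-manifests
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated: {}      # pull-based: Argo CD keeps the cluster in sync with Git
EOF
kubectl apply -f app.yaml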
*************************Prometheus Grafana*********************
1. Scraping metrics: Prometheus scrapes metrics from each microservice every 15 seconds.
2. Storing metrics: these metrics are stored in Prometheus's time series database.
3. Querying metrics: the DevOps team queries these metrics to create dashboards that show the
health of each microservice.
4. Alerting: Prometheus is configured to alert the team if the error rate of any microservice
exceeds 5% over 5 minutes.
5. Notification: when an alert is triggered, Alertmanager sends a notification to the team's
Slack channel.
For Grafana, first install it, then add Prometheus as a data source and import a dashboard.
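A minimal prometheus.yml plus alert rule sketch matching steps 1 and 4; the job name, target and
metric name are placeholders:

cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s                    # scrape each microservice every 15 seconds
scrape_configs:
  - job_name: orders-service
    static_configs:
      - targets: ["orders:8080"]
rule_files:
  - alerts.yml
EOF
cat > alerts.yml <<'EOF'
groups:
  - name: errors
    rules:
      - alert: HighErrorRate              # fires when the error rate exceeds 5% over 5 minutes
        expr: >
          rate(http_requests_total{status=~"5.."}[5m])
          / rate(http_requests_total[5m]) > 0.05
        for: 5m
EOF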
*************************Splunk ***********************************
Splunk: working, components