Cmds DEVOPS


name: Deploy to Amazon EKS

on:
  push:
    branches: [ "master" ]

env:
  AWS_REGION: ap-south-1
  ECR_REPOSITORY: alpha-bucket

permissions:
  contents: read

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.run_number }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT
      - name: Checkout code from GitHub to runner
        uses: actions/checkout@v2
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Substitute Values In Deployment Files
        uses: cschleiden/replace-tokens@v1
        with:
          tokenPrefix: '${'
          tokenSuffix: '}'
          files: '["bucketdeploy.yml"]'
      - name: Install kubectl
        uses: azure/setup-kubectl@v3
      - name: Update kube config
        run: |
          aws eks update-kubeconfig --name mycluster --region ${{ env.AWS_REGION }}
      - name: Deploy application images to EKS cluster using manifest
        run: |
          kubectl apply -f bucketdeploy.yml
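
==== a rough sketch of what bucketdeploy.yml could look like for this workflow (names, namespace, port and the ${IMAGE_TAG} token are assumptions, not the real file; the replace-tokens step fills anything wrapped in ${ } before kubectl apply):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpha-bucket
  namespace: alpha
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpha-bucket
  template:
    metadata:
      labels:
        app: alpha-bucket
    spec:
      containers:
        - name: alpha-bucket
          # token substituted by the replace-tokens step of the workflow above
          image: <account-id>.dkr.ecr.ap-south-1.amazonaws.com/alpha-bucket:${IMAGE_TAG}
          ports:
            - containerPort: 8080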

https://sonarqube.thewayfarers.in/ ======sonarqube dashboard url

11-05-24
ARCH
uname

10-05-24 =============== git bash for all, don't install ubuntu

git bash ===admin ==


mv /c/Users/Admin/Downloads/kubectl.exe /mingw64/bin
====set system path in env == /c/Users/Admin/Downloads

kubectl version --client --output=yaml


aws --version
eksctl version
aws configure ==access keys of wayfarers==check region

56 kubectl --kubeconfig=/c/Users/Admin/Downloads/uxdl/wayfarers/kube/config.wayfarers get pods --all-namespaces
58 kubectl --kubeconfig=/c/Users/Admin/Downloads/uxdl/wayfarers/kube/config.wayfarers get pv
59 kubectl --kubeconfig=/c/Users/Admin/Downloads/uxdl/wayfarers/kube/config.wayfarers get pvc
60 kubectl --kubeconfig=/c/Users/Admin/Downloads/uxdl/wayfarers/kube/config.wayfarers get ns
61 kubectl --kubeconfig=/c/Users/Admin/Downloads/uxdl/wayfarers/kube/config.wayfarers get pods -n sonarqube
62 kubectl --kubeconfig=/c/Users/Admin/Downloads/uxdl/wayfarers/kube/config.wayfarers get deploy -n sonarqube
63 kubectl --kubeconfig=/c/Users/Admin/Downloads/uxdl/wayfarers/kube/config.wayfarers get svc -n sonarqube
64 kubectl --kubeconfig=/c/Users/Admin/Downloads/uxdl/wayfarers/kube/config.wayfarers get ingress -n sonarqube
65 kubectl --kubeconfig=/c/Users/Admin/Downloads/uxdl/wayfarers/kube/config.wayfarers get pv -n sonarqube
66 kubectl --kubeconfig=/c/Users/Admin/Downloads/uxdl/wayfarers/kube/config.wayfarers get pvc -n sonarqube

aws configure ==access keys of workspace ==check region

kubectl --kubeconfig=/c/Users/Admin/Downloads/uxdl/workspace/config get pods -n alpha
kubectl --kubeconfig=/c/Users/Admin/Downloads/uxdl/workspace/config get deploy -n alpha

=============================
8-5-24

07-05-24
====== if the ubuntu app installed from the Microsoft Store gives an error and cannot find the file ext4.vhdx

C:\Users\Admin\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu_79rhkp1fndgsc\LocalState\ext4.vhdx

then

windows+x (admin )

C:\Windows\System32\wsl.exe --unregister ubuntu

C:\Windows\System32\wsl.exe --install -d ubuntu

suresh
6268

========06-05-24

curl -Lv https://alpha.the-workspace.in -H "Cache-Control: no-cache, no-store, must-revalidate"
curl -Lv https://app.the-workspace.in -H "Cache-Control: no-cache, no-store, must-revalidate"
curl -Lv https://thewayfarers.in -H "Cache-Control: no-cache, no-store, must-revalidate"
curl -Lv https://uxdl.in -H "Cache-Control: no-cache, no-store, must-revalidate"
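
==== the same four checks as a small loop (a sketch; the URL list and header simply mirror the commands above):

for url in https://alpha.the-workspace.in https://app.the-workspace.in https://thewayfarers.in https://uxdl.in; do
  echo "== checking $url"
  # same curl call as above, run once per site
  curl -Lv "$url" -H "Cache-Control: no-cache, no-store, must-revalidate"
done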
******************
aws configure ====workspace
workspace
**************
kubectl --kubeconfig=C:\Users\Admin\Downloads\uxdl\workspace\config get pods -n alpha
kubectl --kubeconfig=C:\Users\Admin\Downloads\uxdl\workspace\config get deploy -n alpha

******************
aws configure ====wayfarers
wayfarers
**************
kubectl --kubeconfig=C:\Users\Admin\Downloads\kube\config.wayfarers get pods --all-namespaces
**************

kubectl --kubeconfig=C:\Users\Admin\Downloads\kube\config.wayfarers get ingress -n alpha
kubectl --kubeconfig=C:\Users\Admin\Downloads\kube\config.wayfarers get ingress -n sonarqube
kubectl --kubeconfig=C:\Users\Admin\Downloads\kube\config.wayfarers get pods -n sonarqube
kubectl --kubeconfig=C:\Users\Admin\Downloads\kube\config.wayfarers get svc -n sonarqube

kubectl --kubeconfig=C:\Users\Admin\Downloads\kube\config.wayfarers get deploy -n alpha
kubectl --kubeconfig=C:\Users\Admin\Downloads\kube\config.wayfarers get svc -n alpha
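
==== instead of repeating --kubeconfig on every command, the path can be exported once per shell session (a sketch; shown in the git bash path form, which is an assumption about the shell being used):

export KUBECONFIG=/c/Users/Admin/Downloads/kube/config.wayfarers   # picked up by every kubectl call in this shell
kubectl get pods --all-namespaces
kubectl get ingress -n sonarqube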

to read up on from AWS (per Mahesh)

aws certificate manager


route 53
cognito
sns ses use cases

iam role policy users


eks
alb
asg
VPC

03-05-24 ====== setting up SonarQube for wayfarers

==== installing the SonarQube dashboard using helm

check aws-cli, kubectl, eksctl

====install helm repo

helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube


helm repo update

kubectl --kubeconfig=.\config.wayfarers create ns sonarqube

aws configure ===root keys (mahesh) ==for eks admin


helm repo list

helm upgrade --install -n sonarqube sonarqube sonarqube/sonarqube

eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster thewayfarerscluster --role-name AmazonEKS_EBS_CSI_DriverRole --role-only --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy --approve

===create kms from aws a/c


===copy arn

aws iam create-policy \
  --policy-name KMS_Key_For_Encryption_On_EBS_Policy \
  --policy-document file://kms-key-for-encryption-on-ebs.json
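
==== a sketch of what kms-key-for-encryption-on-ebs.json typically contains (based on the AWS EBS CSI driver docs; the key ARN below is a placeholder for the ARN copied above, and the exact actions may differ in the real file):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant"],
      "Resource": ["arn:aws:kms:ap-south-1:111122223333:key/<key-id>"],
      "Condition": { "Bool": { "kms:GrantIsForAWSResource": "true" } }
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey"],
      "Resource": ["arn:aws:kms:ap-south-1:111122223333:key/<key-id>"]
    }
  ]
}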

aws iam attach-role-policy \
  --policy-arn arn:aws:iam::111122223333:policy/KMS_Key_For_Encryption_On_EBS_Policy \
  --role-name AmazonEKS_EBS_CSI_DriverRole

eksctl create addon --name aws-ebs-csi-driver --cluster thewayfarerscluster --service-account-role-arn arn:aws:iam::406600804856:role/AmazonEKS_EBS_CSI_DriverRole --force

==== check sonarqube pods, svc, pvc

======================================04-05-24

=====git clone wayfarers yaml files

=== change namespace, arn, host name, svc name, port no in the sonarqube-nginx-ingress.yml file (see the sketch below)
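
==== a rough sketch of the kind of values edited in sonarqube-nginx-ingress.yml (the annotation name, ingress class, service name sonarqube-sonarqube and port 9000 are assumptions, not the real file; only the marked fields matter here):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sonarqube-ingress
  namespace: sonarqube                       # namespace to change
  annotations:
    # the arn copied from certificate manager is pasted into the matching annotation in the real file
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-south-1:<account-id>:certificate/<cert-id>
spec:
  rules:
    - host: sonarqube.thewayfarers.in        # host name to change
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sonarqube-sonarqube    # svc name from 'kubectl get svc -n sonarqube'
                port:
                  number: 9000               # port no to change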

=== kubectl --kubeconfig=C:\Users\Admin\Downloads\kube\config.wayfarers get svc -n sonarqube

kubectl --kubeconfig=C:\Users\Admin\Downloads\kube\config.wayfarers create -f sonarqube-nginx-ingress.yml -n sonarqube

kubectl --kubeconfig=C:\Users\Admin\Downloads\kube\config.wayfarers get ingress -n sonarqube
=== check address created or not
=== aws === certificate manager == copy arn from certificate manager
=== route53 == hosted zones == thewayfarers.in ==== Create record == sonarqube.thewayfarers.in == add cname === enter value (copy ingress address) == create records

kubectl --kubeconfig=C:\Users\Admin\Downloads\kube\config.wayfarers get ingress -n sonarqube

==== open browser === paste host name == default login for SonarQube == admin, p: admin === create a 16-char strong password (generator from google) ====== save host name and password in drive === first time

https://sonarqube.thewayfarers.in/ ====== second time onwards === certificate manager already created === sonarqube dashboard url

user = admin ==== password: check drive

===== import from github

====== create a GitHub App for the organisation; a personal one is not needed


==============23-5-24

create a GitHub App for the organisation

integrate the SonarQube dashboard with the organisation's GitHub App

import GitHub repos into the SonarQube dashboard

click on Configure analysis

create SONAR_TOKEN (a global token) and SONAR_HOST_URL

add SONAR_TOKEN and SONAR_HOST_URL in the GitHub repo settings === secrets

Create the Workflow YAML file and paste it in the GitHub repo === commit and push (see the sketch below)
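
==== a sketch of the kind of workflow YAML SonarQube's "Configure analysis" generates for GitHub Actions (the action name, branch and fetch-depth are assumptions; SONAR_TOKEN and SONAR_HOST_URL are the repo secrets added above):

name: SonarQube analysis

on:
  push:
    branches: [ "main" ]
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  sonarqube:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0           # full history gives better analysis results
      - name: SonarQube scan
        uses: sonarsource/sonarqube-scan-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}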

===========================================
05-02-24
===windows + x
===powershell admin
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

choco

choco install eksctl


eksctl version

choco install -y aws-iam-authenticator


aws-iam-authenticator help

install aws-cli

choco install kubectl

choco install kubernetes-helm

choco install git


git version

20-04-2024
=================================

helm --kubeconfig=/Users/office/documents/kube/config.oros install beta-postgresql oci://registry-1.docker.io/bitnamicharts/postgresql -n beta

WARNING: Kubernetes configuration file is group-readable. This is insecure.
Location: /Users/office/documents/kube/config.oros

WARNING: Kubernetes configuration file is world-readable. This is insecure.
Location: /Users/office/documents/kube/config.oros

Pulled: registry-1.docker.io/bitnamicharts/postgresql:15.2.5

Digest: sha256:cfe4da64afd9c72c06f718efe41de5dde0f68c86fab2e562069147bfb488279e

NAME: beta-postgresql

LAST DEPLOYED: Sat Apr 20 11:19:11 2024

NAMESPACE: beta

STATUS: deployed

REVISION: 1

TEST SUITE: None

NOTES:

CHART NAME: postgresql

CHART VERSION: 15.2.5

APP VERSION: 16.2.0

** Please be patient while the chart is being deployed **

PostgreSQL can be accessed via port 5432 on the following DNS names from within
your cluster:

beta-postgresql.beta.svc.cluster.local - Read/Write connection

To get the password for "postgres" run:

export POSTGRES_PASSWORD=$(kubectl get secret --namespace beta beta-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)

To connect to your database run the following command:


kubectl run beta-postgresql-client --rm --tty -i --restart='Never' --namespace beta --image docker.io/bitnami/postgresql:16.2.0-debian-12-r15 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
      --command -- psql --host beta-postgresql -U postgres -d postgres -p 5432

> NOTE: If you access the container using bash, make sure that you execute
"/opt/bitnami/scripts/postgresql/entrypoint.sh /bin/bash" in order to avoid the
error "psql: local user with ID 1001} does not exist"

To connect to your database from outside the cluster execute the following
commands:

kubectl port-forward --namespace beta svc/beta-postgresql 5432:5432 &

PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432

WARNING: The configured password will be ignored on new installation in case when
previous PostgreSQL release was deleted through the helm command. In that case, old
PVC will have an old password, and setting it through helm won't take effect.
Deleting persistent volumes (PVs) will solve the issue.

WARNING: There are "resources" sections in the chart not set. Using
"resourcesPreset" is not recommended for production. For production installations,
please set the following values according to your workload needs:

- primary.resources

- readReplicas.resources

+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

=======================
mkdir oros

eval $(ssh-agent)
ssh-add ./oros (works fine) or ssh-add /Users/office/.ssh/uxdl

git clone (works fine)

===================
15-04-2024
git bash commands

go to github and create a private repo

===on terminal

git config --global user.email "sureshmanapadu999@gmail.com"


git config --global user.name "suresh999333"

ssh-keygen and copy pub key place in github

git init
git add -A
git commit -m "first commit"
git branch -M main
git status
git remote add origin https://github.com/suresh999333/work.git
git pull -v origin main
git push -u origin main

ls -la
102 cat askmesvc.yml === create a file
103 git add .
104 git add -A
105 git commit -m "push ask me"
106 git push -u origin main == works fine

==================

install zsh and ohmyzsh from youtube ========== https://www.youtube.com/watch?v=SE1UtrtH9mo&t=313s

go to Installing Zsh in Git Bash


click on the MSYS2 package repository
click on file
search and download PeaZip for windows and install
click on the MSYS2 package, select PeaZip and extract here
copy and paste the etc and usr folders into C:/Program Files/Git
refresh and restart git

check zsh --version


=========================================
export PATH=$PATH:/c/Users/Admin/bin/oc.exe

05-04-24

kubectl --kubeconfig=/mnt/c/users/Admin/Downloads/uxdl/workspace/config get deployment -n alpha
kubectl --kubeconfig=/mnt/c/users/Admin/Downloads/uxdl/workspace/config get pods -n alpha

curl -Lv https://alpha.the-workspace.in -H "Cache-Control: no-cache, no-store, must-revalidate"
curl -Lv https://app.the-workspace.in -H "Cache-Control: no-cache, no-store, must-revalidate"
curl -Lv https://thewayfarers.in -H "Cache-Control: no-cache, no-store, must-revalidate"
curl -Lv https://uxdl.in -H "Cache-Control: no-cache, no-store, must-revalidate"

03-04-2024
install ubuntu from microsoft store

sudo apt update


sudo apt install zsh
===install oh my zsh

==== INSTALL AWS IAM AUTHENTICATOR === from https://weaveworks-gitops.awsworkshop.io/

=====set path cd /usr/local/bin

=====install kubectl (1.28) aws docs

uname === to check whether the terminal is linux or windows


Linux

dpkg --print-architecture
amd64

====set path cd /usr/local/bin

ls -la
where kubectl
echo $PATH
sudo mv /home/suresh/bin/kubectl /usr/local/bin =====( path===/usr/local/bin )

===== install aws cli from aws docs === did not work; instead: sudo snap install aws-cli --classic

sudo apt update


sudo apt list --upgradable
sudo apt install unzip
export PATH="/usr/local/bin:$PATH"
unzip --version
kubectl version --client --output=yaml
sudo rm -rf kubectl

sudo aws configure

==== open the file where the config exists

==== windows format == see in the system file manager == double tap on the scroll-down bar === or to know the windows path

===== place env in the system configuration PATH == C:\Users\Admin\Downloads\uxdl\workspace (== use backslash)

=== check the cluster config file in github repos or bitbucket repos

aws eks update-kubeconfig --name workspacecluster ====== updates === /home/suresh/.kube/config

====== updates a specific path:

aws eks update-kubeconfig --name workspacecluster --kubeconfig /mnt/c/Users/Admin/Downloads/uxdl/workspace/config

kubectl version --client --output=yaml


kubectl config view

kubectl config get-contexts


kubectl --kubeconfig=/mnt/c/users/Admin/Downloads/uxdl/workspace/config get deployment -n alpha
kubectl --kubeconfig=/mnt/c/users/Admin/Downloads/uxdl/workspace/config get pods -n alpha

01-04-2024

In Linux terminals, when files are displayed in green, it typically indicates that those files are executable (or, depending on the colour scheme, configuration files).

~ uname === to check whether the terminal is linux or windows


Linux

dpkg --print-architecture
amd64

install kubectl

========= Linux is one of the first things you need to learn when starting DevOps. Here are some useful Linux commands:

⭐ Find and Delete Old Files:


find /path/to/files -type f -mtime +30 -exec rm {} \;
This command finds files older than 30 days in the specified path and deletes them.

⭐Search for a String Recursively in Files:


grep -r "search_string" /path/to/search
Recursively searches for a specific string in all files within the specified path.

⭐Create a Tar Archive and Compress it:


tar -czvf archive_name.tar.gz /path/to/directory
Creates a compressed tar archive of a directory.

⭐SSH Tunneling for Remote Access:


ssh -L local_port:destination_host:destination_port user@remote_host
Sets up local port forwarding, allowing you to access a service on a remote machine
securely.

⭐Awk for Text Processing:


awk '/pattern/ {print $2}' file.txt
Uses Awk to search for a pattern and prints the second field of matching lines in a
text file.

⭐Monitoring System Resources with Sar:


sar -u 1 10
Utilizes the System Activity Reporter (Sar) to display CPU utilization every second
for 10 iterations.

⭐Run a Command in the Background and Log Output:


nohup command > output.log 2>&1 &
Executes a command in the background, detaching it from the current session and
redirecting output to a log file.

⭐Using xargs to Parallelize Commands:


find /path -type f -print | xargs -n 1 -P 4 command
Finds files in a directory and executes a command on each file, running up to 4
commands in parallel.

⭐Monitoring Disk Space with df and awk:


df -h | awk '$5 > 90 {print $1, $5}'
Uses df to display disk space information and Awk to filter and print filesystems
with usage above 90%.

⭐Securely Copy Files Between Hosts with rsync over SSH:


rsync -avz -e ssh /local/path user@remote:/remote/path
Uses rsync to synchronize files between local and remote hosts over SSH, preserving
permissions and compression.

===================================================================================
======================================

17-12-23 === push an image to ECR in the aws a/c from the terminal

3 sudo apt update


4 sudo apt list --upgradable

6 sudo snap install aws-cli --classic


7 aws --version
8 aws configure

11 sudo apt install docker.io -y

15 docker run hello-world


16 sudo docker run hello-world
17 sudo systemctl status docker
18 sudo usermod -aG docker ubuntu
19 logout
20 docker images
21 aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin 233783755790.dkr.ecr.ap-south-1.amazonaws.com

76 vi Dockerfile
FROM ubuntu:latest

77 docker build -t demo-pro-ecr .


78 docker tag demo-pro-ecr:latest 233783755790.dkr.ecr.ap-south-1.amazonaws.com/demo-pro-ecr:latest
79 docker push 233783755790.dkr.ecr.ap-south-1.amazonaws.com/demo-pro-ecr:latest

=======================================
24-11-23 ==full red ubuntu app on ur pc

sudo chmod 400 aws-key-jagan.pem ===chmod 600 by abhishek veeramalla


sudo ssh -i "aws-key-jagan.pem" ubuntu@ec2-3-110-103-6.ap-south-1.compute.amazonaws.com

sudo apt update


sudo apt install fontconfig openjdk-17-jre
java -version

sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
  https://pkg.jenkins.io/debian/jenkins.io-2023.key
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
  https://pkg.jenkins.io/debian binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt update
sudo apt install jenkins

sudo systemctl enable jenkins


sudo systemctl start jenkins
sudo systemctl status jenkins

aws== security groups= custom 8080==save

====google==public ip of ec2 :8080

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

cat>index.html
<!DOCTYPE html>
<html>
<body>

<h1>My First Heading</h1>


<p>My first paragraph.</p>

</body>
</html>

3 vi basic.html
4 python3
5 python3 -m http.server 8000

aws== sg= custom 8000==save


====google==public ip of ec2 instance:8000

=======================================================
20-11-23

to install docker desktop on windows pc (windows 11)

follow git hub open remote repo


16-11-23
==============search == ubuntu full red colour inside ubuntu logo == open
pwd
cd /mnt/c/users/admin/Downloads
sudo apt update
sudo apt install zsh
sh -c "$(curl -fsSL
https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

===== connecting ubuntu on a windows laptop with openshift

google == openshift cli == copy link address of the windows client == curl -LO (oc.tar.gz == link) == curl -O oc.tar.gz
pc == file explorer == download == extract here ==
echo $PATH
mv oc /usr/local/bin

export PATH=$PATH:( /path/to/directory == echo $PATH )

sudo oc
oc
oc status

oc login --token=sha256~mVKvf13FbZhLfk0ClSNSVgV_6CodBw5R7VnZjncPMlw --server=https://api.sandbox-m4.g2pi.p1.openshiftapps.com:6443

oc get svc
oc get deploy
oc get ns
oc get pods -A
===================
google ==download kubectl == extract here
echo $PATH
mv kubectl /usr/local/bin
export PATH=$PATH:/usr/local/bin

sudo kubectl get svc


ubuntu password =6268

kubectl get svc


kubectl get pods -A
kubectl get ns
kubectl get ing
kubectl get deploy
kubectl get ns

15-11-23

clear cache from cmd prompt as Administrator


ipconfig /flushdns

14-11-23

download

https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/cli_tools/openshift-cli-oc

google == openshift cli ======= Installing the CLI by downloading the binary === Installing the CLI on Windows ===

==== Infrastructure Provider == OpenShift Container Platform downloads page == download

file explorer == extract here ==

search env == edit system env var === env var == path == edit == new == (git bash == pwd) C:\Users\Admin\Downloads (use /) == save == ok

git bash open == oc == oc login == (openshift sandbox === copy login command ==== dev sandbox == display token)

4-11-23
============================================

----- run the docker django application from abhishek's github

---------- run deploy, svc, pods

git remote -v

vi cm.yml
kubectl create -f cm.yml
kubectl get cm

431 kubectl describe cm test-cm


432 kubectl get pods

vi env-deploy.yml
kubectl create -f env-deploy.yml
kubectl get pods
kubectl exec -it my-deploy-66975698d4-2zwqd -- /bin/bash
env | grep DB

-- or --- kubectl exec my-deploy-66975698d4-2zwqd -- env | grep DB
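
==== a sketch of what cm.yml and the env section of env-deploy.yml could look like here (the key name db-port and the DB_PORT variable are guesses based on the commands above; the real files may differ):

# cm.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-cm
data:
  db-port: "3306"
---
# env section inside the container spec of env-deploy.yml
env:
  - name: DB_PORT
    valueFrom:
      configMapKeyRef:
        name: test-cm      # the ConfigMap created above
        key: db-port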

477 vi vol-mount-deploy.yml
478 kubectl apply -f vol-mount-deploy.yml

483 kubectl get pods


484 kubectl exec -it my-deploy-66975698d4-p8wj8 -- /bin/bash

env | grep DB
ls /opt
cat /opt/db-port | more
kubectl exec -it my-deploy-66975698d4-p8wj8 -- env | grep DB
kubectl exec -it my-deploy-66975698d4-p8wj8 -- ls /opt
kubectl exec -it my-deploy-66975698d4-p8wj8 -- cat /opt/db-port | more

469 kubectl create secret generic test-secret --from-literal=db-port="3306"


kubectl get secret

470 kubectl describe secret test-secret


471 kubectl edit secret test-secret
472 echo MzMwNg== | base64 --decode
473 kubectl create secret generic test-secret1 --from-literal=db-password="suresh"
474 kubectl edit secret test-secret
475 kubectl edit secret test-secret1
476 echo c3VyZXNo | base64 --decode

=============================================================
03-11-23

==== run abhishek's python project in docker files

==== run docker, minikube, deploy, svc

minikube addons enable ingress


kubectl get pods -n ingress-nginx
kubectl get svc
--------------------
vi ingress.yml

service
name: (svc name)
-----------------------
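==== a sketch of the ingress.yml referred to above (foo.bar.com matches the /etc/hosts entry below; the ingress class, service name and port are placeholders for the svc created earlier, not the real file):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-python-svc   # the svc name from kubectl get svc
                port:
                  number: 80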
kubectl get ingress

-------(ingress address created)

sudo vi /etc/hosts

(ingress address) foo.bar.com

ping foo.bar.com

====================================================================
31-10-23
sudo snap install kubectl --classic
sudo snap install amazon-ssm-agent --classic
sudo snap install helm --classic
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

helm install ingress-nginx/ingress-nginx --generate-name


kubectl --namespace default get services -o wide -w ingress-nginx-1698912784-controller
kubectl get svc
kubectl get pods -o wide

30-10-23
===================
installing on git bash
google installing zsh on git bash
https://dominikrys.com/posts/zsh-in-git-bash-on-windows/
the MSYS2 package repository
https://packages.msys2.org/package/zsh?repo=msys&variant=x86_64
press on getmysys2
https://www.msys2.org/
msys2-x86_64-20231026.exe
install app from downloads
restart
search === UCRT64
pacman -S mingw-w64-ucrt-x86_64-gcc

================================================================

== windows laptop ==git bash == ec2- ubuntu instance

sudo apt install zsh

sh -c "$(curl -fsSL
https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

=================================================

29-10-2023
======================================

sudo ufw status ---- check for firewall block
------ it should be inactive

sudo netstat -tuln | grep 8899 ==== try on git bash terminal

==========================================
4 ls -la
5 sudo apt update
6 sudo apt list --upgradable
7 sudo apt install docker.io -y
8 docker --version
9 sudo usermod -aG docker ubuntu
10 logout
11 docker run hello-world
12 docker images
13 docker ps
14 docker ps -a
15 sudo systemctl docker status
16 sudo systemctl status docker
17 clear
18 git clone https://github.com/iam-veeramalla/Docker-Zero-to-Hero

20 ls -la
21 cd Docker-Zero-to-Hero/examples/python-web-app/
22 ls -la

minikube What you’ll need

2 CPUs or more
2GB of free memory
20GB of free disk space
install docker.io
Internet connection

23 curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
24 sudo install minikube-linux-amd64 /usr/local/bin/minikube
25 minikube start
26 minikube status
27 ls -la
28 minikube status
29 minikube ip
30 sudo snap install kubectl --classic
31 kubectl get all
32 kubectl get nodes

34 vi deploy.yml

36 vi np-svc.yml
37 ls -la
38 cat Dockerfile
39 cat np-svc.yml
40 cat deploy.yml
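
==== a sketch of what deploy.yml and np-svc.yml might contain for this exercise (image name, labels and nodePort 30001 are assumptions based on the commands here and the curl to port 30001 further down, not the real files):

# deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-py
  template:
    metadata:
      labels:
        app: sample-py
    spec:
      containers:
        - name: sample-py
          image: projectpythondemo:v11
          ports:
            - containerPort: 8000
---
# np-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: np-svc
spec:
  type: NodePort
  selector:
    app: sample-py
  ports:
    - port: 80          # cluster-ip port
      targetPort: 8000  # container port
      nodePort: 30001   # reachable on the minikube ip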

46 ls -la
47 docker build -t 0428a861224e/demopyproject:v1 .

=============== or == docker build -t projectpythondemo:v11 .

48 ls -la
49 docker images

eval $(minikube docker-env)


docker images
docker ps
docker login
docker tag 0428a861224e/demopyproject:v1 0428a861224e/demopyproject:v1
docker push 0428a861224e/demopyproject:v1

51 kubectl get nodes


52 kubectl create -f np-svc.yml
53 kubectl get svc
54 kubectl create -f deploy.yml
55 kubectl get deploy
56 kubectl get deploy -o wide

kubectl get pods -o wide ======if pods not running

minikube image ls --format table

minikube image load projectpythondemo:v1 (image name & tag)


================ ImagePullBackOff is fixed by the above cmd in minikube

minikube image load projectpythondemo:v11

kubectl get pods -o wide === shows running


58 kubectl logs sample-py-project ========= kubectl logs my-deploy-795bbd6797-hbw7z (pod name)

kubectl describe pod podname ===== kubectl describe pod my-deploy-795bbd6797-hbw7z

59 kubectl config view

minikube ip
curl -L http://192.168.49.2:30001/demo

kubectl get svc -o wide


minikube ssh
curl -L http://10.99.85.129:80/demo ========(nodeport cluster ip =10.99.85.129 )

kubectl get pods -o wide


minikube ssh
ping -v (pod ip)

=========================================================

25-10-23

======================================================

cat>lb-svc.yml ---------- works for short line edits

nano lb-svc.yml ---------- works for long line edits

vi or vim lb-svc.yml ------- edited lines go in a zig-zag manner

=============================------================
-------------------

kubectl get all ========= shows all resources in the namespace


kubectl get pods -o wide -==== shows pods ip address
kubectl get deploy
kubectl get svc
kubectl edit svc (svc pod name) or vi (yaml file)

kubectl delete svc (svc pod name) ====for deleting pods ==delete with any of (svc
or deploy or nodes or all)

docker build -t 0428a861224e/python-project:v1 . ===== try docker scan once after the build

kubectl create -f np-svc.yml


kubectl apply -f np-svc.yml

minikube ssh
curl -L http:// pods ip:30,000 to 32,767/demo === depend on (defining in the
code)
==above on terminal
http:// pods ip:30,000 to 32,767/demo ====on browser

kubectl create -f lb-svc.yml === (only for clouds like aks, eks, gcp) == external ip addresses are created
kubectl apply -f lb-svc.yml === (for others) == external ip shows pending

kubectl get svc


kubectl edit svc (svc pod name)

=== or

vi or vim lb-svc.yml

kubectl get deployment or deploy

==========================================
minikube installation for learning purpose

sudo apt update


sudo apt install docker.io -y
docker --version
sudo usermod -aG docker ubuntu
===logout

sudo systemctl status docker


sudo systemctl start docker
docker run hello-world
docker images
docker ps -a

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

minikube start
minikube status

----------minikube only one master or control plane

===========================================
--------------------------

docker build -t
docker images
kubectl get pods -o wide
minikube ssh
curl -L http:// pods ip1 /8000/demo ====== if deploy pods are 2

curl -L http:// pods ip2 /8000/demo

22-10-23 -- if you want to delete a pod permanently then delete the deployment


eval $(minikube docker-env)

kubectl get nodes

vi simplepod.yml
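
==== a minimal sketch of simplepod.yml, assuming it is the nginx pod that the describe/delete commands in the history below refer to:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80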

18-10-23

=============================
1 ls -la
2 clear
3 curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
4 sudo install minikube-linux-amd64 /usr/local/bin/minikube
5 minikube start
6 sudo apt update
7 sudo snap install kubectl --classic
8 kubectl version --client
9 kubectl get nodes
10 kubectl get po -A
11 kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
12 kubectl expose deployment hello-minikube --type=NodePort --port=8080
13 kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
14 kubectl expose deployment hello-minikube --type=NodePort --port=8000
15 kubectl get services hello-minikube
16 minikube service hello-minikube
17 sudo apt update
18 sudo apt install docker.io -y
19 sudo systemctl status docker
20 sudo usermod -aG docker ubuntu
21 exit
22 docker run hello-world
23 docker images
24 minikube start
25 kubectl get nodes
26 vi simplepod.yml
27 kubectl create -f simplepod.yml
28 kubectl get pods
29 kubectl get pods -o wide
30 kubectl describe pod nginx
31 kubectl delete pod nginx
32 kubectl get pods -o wide
33 kubectl get pods
34 kubectl get nodes
35 kubectl apply -f simplepod.yml
36 kubectl get nodes
37 kubectl get pods -o wide
38 minikube ssh
39 kubectl describe pod nginx
40 kubectl logs nginx
41 cat>podtemplate.yml
42 cat podtemplate.yml
43 kubectl create -f pod podtemplate.yml
44 vi podtemplate.yml
45 kubectl create -f pod podtemplate.yml
46 cat>simple.yml
47 kubectl create -f simple.yml
48 kubectl get pods -o wide
49 kubectl describe pod hello-m7c4t
50 kubectl logs hello-m7c4t
51 kubectl logs nginx
52 cat>controllers/nginx-deployment.yaml
53 cat>nginx-deployment.yaml
54 kubectl create -f nginx-deployment.yaml
55 kubectl get pods -o wide
kubectl get pods -w === to see how pods die and new pods start (check with 2 terminal tabs)

56 kubectl get rs
57 kubectl describe deployments
58 kubectl logs deployments
59 kubectl set image deployment/nginx-deployment nginx=nginx:1.161
60 kubectl rollout status deployment/nginx-deployment
61 kubectl get rs
62 kubectl get pods -o wide
63 kubectl describe deployment
64 kubectl get rs
65 kubectl rollout history deployment/nginx-deployment
66 kubectl rollout history deployment/nginx-deployment --revision=2
67 kubectl rollout undo deployment/nginx-deployment
68 kubectl rollout undo deployment/nginx-deployment --to-revision=2
69 kubectl get rs
70 kubectl get deployment nginx-deployment
71 kubectl describe deployment nginx-deployment
72 history
=================================
2 ls -la
3 sudo apt update

6 sudo apt list --upgradable


7 sudo apt install python3 python3-pip -y

9 python3 --version
10 sudo snap install aws-cli --classic
11 aws --version
12 sudo snap install kubectl --classic

15 kubectl version --client

17 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
18 echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list

20 sudo apt install -y python3-pip apt-transport-https kubectl

22 pip3 install awscli --upgrade


23 export PATH="$PATH:/home/ubuntu/.local/bin/"

24 curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
25 chmod +x kops-linux-amd64
26 sudo mv kops-linux-amd64 /usr/local/bin/kops
27 exit
28 aws configure

35 aws s3api create-bucket --bucket kops-abhi1-storage --region us-east-1


36 kops create cluster --name=demok8scluster1.k8s.local --state=s3://kops-abhi1-storage --zones=us-east-1a --node-count=1 --node-size=t2.micro --control-plane-size=t2.micro --control-plane-volume-size=8 --node-volume-size=8

39 kops update cluster --name demok8scluster1.k8s.local --state=s3://kops-abhi1-storage --yes --admin

51 kops validate cluster demok8scluster1.k8s.local --state=s3://kops-abhi1-storage --wait 10m --count 3

62 kops delete cluster demok8scluster1.k8s.local --yes --state=s3://kops-abhi1-storage

16-10-23
----------------------
Check Disk Space on ec2 instance
df -h

----------------------
=====porting the container
python3 -m http.server 8000

==shows dockerfile and python code

-------------------------------------
===open ubuntu and connect to instance
git clone
cd Docker/example/first-docker
---------------------
vi Dockerfile

apt update && apt install python3


---------------------------------
vi app.py === so the container will not exit
----------------------------------
while True:
    print("Hello, world!")

----------------------------------

docker build -t 0428a861224e/suresh:1 .


docker images
docker volume create testv1

docker run -d --mount source=testv1,target=/app 0428a861224e/suresh:1 (docker image name)
docker run -it 0428a861224e/suresh:1

==================== shows continuous printing of hello-world

===open 2 nd tab git bash and connect instance

docker ps ===shows container is running

sudo su root === on ec2 instance


ls -la
cd /var/lib/docker/volumes/testv1/_data === python code available here
exit

docker ps
docker exec -it (cont id) /bin/bash

ls -la

/app ==== python code available here

12-10-23
================================
ls -la

218 docker network ls


219 docker run -d --name login nginx:latest
220 docker network ls
221 docker ps
222 docker exec -it login /bin/bash
# apt update
# apt install iputils-ping -y
# ping -V
223 docker network ls
224 docker ps
225 docker run -d --name logout nginx:latest
226 docker ps
227 docker network ls
228 docker inspect login
229 docker inspect logout
230 docker exec -it login /bin/bash
231 docker exec -it logout /bin/bash

docker exec -it login /bin/bash


#ping 172.17.0.2
#ping 172.17.0.3

232 docker network ls


233 docker network create secure-network
234 docker network ls
235 docker run -d --name finance --network=secure-network nginx:latest
236 docker network ls
237 docker ps
238 docker inspect finance
239 docker exec -it login /bin/bash
#ping 172.17.0.2 === response comes
#ping 172.17.0.3 === response comes
#ping 172.19.0.2 === no response === that means the custom network is isolated (highly secured)

================================================

2 ls -la
3 git --version
4 docker --version
5 sudo apt update
6 sudo apt list --upgradable
7 sudo apt install docker.io -y
8 sudo usermod -aG docker ubuntu
9 exit
10 ls -la
11 docker ps
12 docker run hello-world
13 docker ps
14 docker images
15 docker volume ls
16 docker network ls

19 git clone https://github.com/iam-veeramalla/Docker-Zero-to-Hero


20 ls
21 cd Docker-Zero-to-Hero/
22 ls
23 ce examples/
24 ls
25 cd examples/
26 ls
27 cd first-docker-file/
28 ls

60 docker run -d --mount source=testv1,target=/app nginx:latest

88 docker system prune


89 docker system prune -a
docker image prune
docker image prune -a

95 docker stop 5d388b969437


96 docker ps

109 docker images

117 docker rmi nginx httpd tomcat


118 docker images
119 docker volume ls
120 docker volume rm be99a473283e testv1 testv2
121 docker volume ls

129 docker build -t suresh3 .


130 docker images
131 docker run hello-world
132 docker images
133 docker ps
134 docker volume create testv9
135 docker volume ls
136 docker run -d --mount source=testv9
137 docker run -d --mount source=testv9,target=/app nginx:latest
138 docker ps
139 docker inspect 49aee75550ee

172 docker run -d --mount source=testv9,target=/app,readonly tomcat:latest


173 docker ps
174 docker inspect 83bc0ba208e8

08-10-23
Kubernetes Cluster Installation

aws s3api create-bucket --bucket kops-suresh1-storage --region us-east-1

kops create cluster --name=myfirstcluster.k8s.local --state=s3://kops-suresh1-storage --zones=us-east-1a --node-count=1 --node-size=t2.micro --master-size=t2.micro --master-volume-size=8 --node-volume-size=8
kops update cluster --name myfirstcluster.k8s.local --yes --admin
export KOPS_STATE_STORE=s3://kops-suresh1-storage

kops validate cluster myfirstcluster.k8s.local


kops validate cluster --name myfirstcluster.k8s.local

nslookup api-myfirstcluster-k8s-lo-hqulii-3f5f7f4d6f65f5a4.elb.us-east-1.amazonaws.com

kops delete cluster --name ${myfirstcluster.k8s.local} --yes

07-10-23

sudo apt update && sudo apt list --upgradable


sudo apt install docker.io -y
sudo systemctl status docker

sudo usermod -aG docker ubuntu


docker run hello-world
docker login

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

minikube start

kubectl get po -A
kubectl get deployment
kubectl get svc
kubectl get pods
kubectl describe pod pod name
kubectl get nodes
kubectl get pods -o wide | grep minikube
kubectl get pods -o wide | grep pod name

minikube stop
minikube start
minikube config set memory 9001
minikube config view
minikube logs
minikube logs --problems

26-9-23
docker volume create suresh
23 docker volume ls
24 docker volume inspect suresh
25 docker volume rm suresh

docker volume create suresh


docker run -it dockervolume
46 docker ps
47 docker run -d --mount source=suresh,target=/app nginx:latest
48 docker ps
49 docker inspect ec8fa11bc519
50 docker ps
51 docker images

23-9-23
========================
pwd
whoami
git --version

sudo apt update


sudo apt install docker.io -y

docker --version

docker run hello-world


88 sudo systemctl status docker
89 sudo usermod -aG docker ubuntu
90 logout
91 exit
92 ls -la
94 cd pythonproject2
95 ls -la
96 vi Dockerfile
97 vi requirements.txt
98 cd src/
99 ls -la
100 vi server.py
101 cd ..
102 ls -la
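
==== a sketch of what the Dockerfile for this pythonproject2 layout could look like (base image, port and entrypoint are assumptions based on the src/server.py file and the curl to localhost:5000 below, not the real file):

FROM python:3.9-slim
WORKDIR /app
# install dependencies first so they are cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy the application source
COPY src/ ./src
EXPOSE 5000
CMD ["python", "src/server.py"]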

docker login
docker build -t suresh:1 .
docker images
103 docker ps
docker run -it myfirstimage:1
docker run -it a0fcacea8d01 (cont id)
docker run -p 8000:8000 -it a0fcacea8d01 (cont id)
curl http://localhost:5000

====== google chrome: public ip :5000 === go to instance == security group == edit inbound rules == custom tcp == port 5000 == custom cidr block == 0.0.0.0/0 == save rules == hello-world == appears

108 docker exec -it 7e869a269819 sh


109 ls -la

=========================
to host on the public ip of an instance
security group inbound rules edit custom tcp 5000 custom 0.0.0.0/0
======================

=== To connect with WinSCP

public ip
ubuntu advanced settings == advanced == ssh authentication == private key == attach .pem key

== ok === login == connected

=========================

22-9-23
docker rm -rf images

to change to root user on ubuntu terminal

sudo -i
sudo apt-get -y update && sudo apt install docker.io
service docker status
service docker start
sudo dockerd === docker daemon will start

19-9-23

1 ls -la
2 sudo apt update
3 sudo apt list --upgradable
4 sudo apt install docker.io -y
5 docker --version
6 docker run hello-world
7 sudo systemctl status docker
8 sudo usermod -aG docker ubuntu
9 logout
10 docker run hello-world
11 git clone https://github.com/iam-veeramalla/Docker-Zero-to-Hero
12 cd examples
13 ls -la
14 cd Docker-Zero-to-Hero/
15 cd examples/
16 ls -la
17 cd python-web-app/
18 ls -la

20 docker login
21 docker build -t abhishekf5/my-first-docker-image:latest .
22 docker images
23 docker run -it abhishekf5/my-first-docker-image
24 docker run python manage.py migrate
25 pwd
26 ls -la
27 docker push abhishekf5/my-first-docker-image
28 docker images
29 docker tag abhishekf5/my-first-docker-image 0428a861224e/suresh1:latest1
30 docker push 0428a861224e/suresh1:latest1
31 ls -la

103 ls -la
104 cd examples/
105 exit
106 cd Docker-Zero-to-Hero/
107 cd examples/
108 cd python-web-app/

130 docker ps -a
131 docker images -a
132 vi Dockerfile
133 cat Dockerfile
134 ls -la
-
137 docker build -t pyproject1 .
138 docker images -a
139 docker run -it dfb9d20b5ee1
140 docker run -p 8080:8080 -it dfb9d20b5ee1 94d1ce4037b3

docker system prune

docker system prune -a

docker logs container ID

14-9-23 ==ubuntu instance ==pushing to docker

sudo apt update


sudo apt install docker.io -y
docker run hello-world
sudo systemctl status docker
sudo systemctl start docker
sudo usermod -aG docker ubuntu
git clone https://github.com/iam-veeramalla/Docker-Zero-to-Hero
cd examples
docker login
docker build -t my-first-docker-image .
docker tag my-first-docker-image 0428a861224e/suresh1:latest
docker push 0428a861224e/suresh1:latest

docker run -d -p 81:80 --name my-first-docker-image nginx


docker ps
curl http://localhost:81
public ip:81 =====shows nginx on google chrome
( my-first-docker-image==local code name)
(0428a861224e/suresh1:latest ==docker hub details)

1 ls -la
2 git --version
3 git clone https://github.com/iam-veeramalla/Docker-Zero-to-Hero
4 ls -la
5 cd Docker-Zero-to-Hero/
6 ls -la
7 cd examples/
8 ls -la
9 cd first-docker-file/
10 ls -la
11 cat Dockerfile
12 cd ..
13 ls -la
14 cd golang-multi-stage-docker-build/
15 ls -la
16 cd dockerfile-without-multistage/
17 ls -la
18 cat Dockerfile
19 go run calculator.go
20 sudo apt-get update -y && sudo apt install golang-gosudo apt install
golang-go -y
21 sudo apt-get update -y && sudo apt install golang-go -y
22 go run calculator.go
23 ls -la
24 cat Dockerfile
25 docker build -t simplecalculator .
26 sudo apt-get update -y && sudo apt install docker.io -y
27 docker images
Docker image ls -a

11-9-23 ==== to know the wifi password of a network already logged in from your computer

======== Run command prompt as administrator

netsh wlan show profile name="wifi user name" key=clear

==== to clear viruses from your computer

Windows+R, type mrt ==== next ==== yes

==== to run a python file, create a folder and save as filename.py == run it
Download plugins from vs code, max downloads, check downloads

9-9-23

=== To identify the local key while doing git push, enter the following cmds (in case of bitbucket)

1. ssh-add ~/.ssh/file name

2. eval $(ssh-agent)

== to install aws cli

1. download python from the website and install manually, not from the terminal
2. restart your computer
3. check python --version
4. pip install awscli
5. aws --version

== in git bash always == use commands like

git add .
git commit -m " "

===Ubuntu terminal to find downloads


cd /mnt/c/users/Admin/Downloads

7-9-23
tar --use-compress-program=unzstd -xvf zsh-5.9-2-x86_64.pkg.tar.zst

scp myfile.txt ubuntu@your_server_ip:/home/ubuntu/

cp suresh.pem /c/Users/User/Downloads /home/ubuntu/flujo

scp suresh.pem ubuntu@3.85.6.16:/home/ubuntu/flujo

== how to switch from root to another user in the ubuntu terminal

whoami

su - suresh (user) or sudo su suresh or exit

======= ubuntu username suresh password 6268

== how to switch from normal user to root user in the ubuntu terminal

sudo -i or sudo su

==================to install ubuntu software

Set-VMProcessor -VMName MyWSL -ExposeVirtualizationExtensions $true

==run as a administrater on command prompt

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart

dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

to download the wsl_update_x64.msi package

=== go folder download install above package


wsl --set-default-version 2

==go to Microsoft store install ubuntu latest version


06-09-23
=== Ubuntu terminal to find downloads == pwd == /mnt/c/users/user/Downloads

cd /mnt
cd c; cd users; cd user; ls -la; cd Downloads

=== how to connect to an ubuntu instance from the git bash terminal

Open aws account == instance == key pair name == create key pair == download ==
Open git bash == ls -la == chmod 400 key pair name == copy aws, select instance == connect == ssh-client == copy example == paste in git bash terminal == instance connected from terminal

17-11-22

which helm == to know the path

helm version

==== kibana === logs full

check
kubectl --kubeconfig=/users/Suresh/Downloads/config get svc -n board2

Kibana pod load balancer: 5601 (port)

===== To check the dashboard of ELK, which is kibana

17-10-22 ===========works Daily


===for working of kubectl enter below cmds

mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH

kubectl --kubeconfig=/users/Suresh/Downloads/config get pods


kubectl --kubeconfig=/users/Suresh/Downloads/config get pods -n alpha
kubectl --kubeconfig=/users/Suresh/Downloads/config get pods -n beta

13-10-22

====for connecting to terminal to wayfr aws account


==========Kubectl file has to maintain in local (check path with pwd)

curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.7/2022-06-29/bin/darwin/amd64/kubectl

chmod +x ./kubectl

mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH

echo 'export PATH=$PATH:$HOME/bin' >> ~/.bash_profile


kubectl version --client --output=yaml

====== maintain wayfr config file and kubectl file in one location

aws configure

==== to know wayfr cluster connected or not

kubectl --kubeconfig=/users/Suresh/Downloads/config config get-contexts

kubectl --kubeconfig=/users/Suresh/Downloads/config get pods

kubectl --kubeconfig=/users/Suresh/Downloads/config get pods -n beta

kubectl --kubeconfig=/users/Suresh/Downloads/config get pods -n alpha

kubectl --kubeconfig=/users/Suresh/Downloads/config get ns

====================================================================

======to delete file on mac terminal

rm -i file name
===============

12-10-22

testing all sites in the terminal every day at 10am, 2 or 3pm, and 5 to 7pm

================================IMPORTANT

curl -Lv http://messengerapiprod.wayfr.co -H "Cache-Control: no-cache, no-store, must-revalidate"

curl -Lv http://admin.carrierconnects.com -H "Cache-Control: no-cache, no-store, must-revalidate"

curl -Lv http://usersapiprod.wayfr.co -H "Cache-Control: no-cache, no-store, must-revalidate"

curl -Lv http://app.carrierconnects.com -H "Cache-Control: no-cache, no-store, must-revalidate"

curl -Lv http://carrierconnects.com -H "Cache-Control: no-cache, no-store, must-revalidate"

==========================

curl -Lv http://alpha.carrierconnects.com -H "Cache-Control: no-cache, no-store, must-revalidate"
curl -Lv http://betaenv.carrierconnects.com -H "Cache-Control: no-cache, no-store, must-revalidate"
28-9-22 === to check from the terminal whether any website is working or not

curl -Lv http://usersapiprod.wayfr.co -H "Cache-Control: no-cache, no-store, must-revalidate"

18-8-22 === ci /cd setup ==== db migration

mkdir wayfr
cd master

git clone https://wayfrdev3@bitbucket.org/wayfr/wayfr-messengerbe.git

git clone https://wayfrdev3@bitbucket.org/wayfr/store-link.git

git clone https://wayfrdev3@bitbucket.org/wayfr/wayfr-be.git

git checkout main


cd storelink
git pull origin develop

git checkout master


cd way-be
git pull origin beta

git checkout master


cd wayfr-messenger
git pull origin beta

cd wayfr-be
npm i
for line in $(cat .env); do export $line; done
sequelize db:migrate

cd storelink
git push

cd wayfr-messenger
git push

cd wayfr-be
git push

12-7-22 ========== ci/cd set up

bitbucket ==== repository settings ==== repository variables

DOCKER_IMAGE_NAME === suresh ==doc hub private repo name only and no tag

k8s_SERVER_URL === kubectl cluster-info === control plane https == answer

ca.crt === decode it === kubectl get secret default-token-rzmxv -o jsonpath="{['data']['ca\.crt']}" | base64 -d

k8s_USERNAME == kubectl get sa == or == kubectl get all --all-namespaces

k8s_USER_TOKEN = kubectl get secret default-token-rzmxv --namespace={default} -o yaml

k8s_NAMESPACE ====default

k8s_USERNAME_PROD ====default

k8s_DEPLOYMENT_NAME_PROD === landingpagesuresh

cd bitbucketcicd ==== roles and role binding for ci /cd

kubectl apply -f saprod.yaml

kubectl apply -f rolesuresh.yaml

kubectl apply -f rolebindingsuresh.yaml
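
==== a sketch of what saprod.yaml, rolesuresh.yaml and rolebindingsuresh.yaml could contain so the pipeline's service account can update deployments (names, namespace and verbs are assumptions, not the real files; the token of this service account is what goes into k8s_USER_TOKEN above):

# saprod.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bitbucket-deployer
  namespace: default
---
# rolesuresh.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deploy-role
  namespace: default
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "patch", "update"]
---
# rolebindingsuresh.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-rolebinding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: bitbucket-deployer
    namespace: default
roleRef:
  kind: Role
  name: deploy-role
  apiGroup: rbac.authorization.k8s.io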

29-7-22

=============================================================================

==========To deploy the AWS Load Balancer Controller to an Amazon EKS cluster

====Create an IAM OIDC provider for your cluster


aws eks describe-cluster --name suresh-cluster --query "cluster.identity.oidc.issuer" --output text

======https://oidc.eks.us-east-1.amazonaws.com/id/6B95E28ED0CB6EE8079614A183BB6B28

aws iam list-open-id-connect-providers | grep 6B95E28ED0CB6EE8079614A183BB6B28

====="Arn": "arn:aws:iam::185106119508:oidc-provider/oidc.eks.us-east-
1.amazonaws.com/id/6B95E28ED0CB6EE8079614A183BB6B28"

eksctl utils associate-iam-oidc-provider --cluster suresh-cluster --approve

curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.2/docs/install/iam_policy.json

aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json

eksctl create iamserviceaccount \
  --cluster=suresh-cluster \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name "AmazonEKSLoadBalancerControllerRole" \
  --attach-policy-arn=arn:aws:iam::185106119508:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=suresh-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set image.repository=602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon/aws-load-balancer-controller

kubectl get pods -n kube-system ======= check describe and logs of the pod if the pod is not running
kubectl get nodes -n kube-system
kubectl get sa -n kube-system

kubectl get deployment -n kube-system aws-load-balancer-controller

============================================================================
26-7-22

cp -rf ~/.kube ~/Desktop/.kube ===== copy a folder or file to another folder locally

mv -f ~/Desktop/s ~/Documents ==== move a folder or file to another folder locally

~ if you use it then there is no need to type the full home path

====================================

while creating IAM roles and users == a csv file is generated with the aws login access key and value

create user and attach policies and download csv file(root key of aws login from
terminal)

ssh-keygen ===from terminal ==.ssh folder created with public and private keys

move .ssh directory to instance == to interact with terminal

scp -i "~/.ssh/eks" -r /Users/Suresh/.ssh/bitbucketsuresh.pub ec2-user@ec2-107-22-


124-54.compute-1.amazonaws.com:/home/ec2-user/.ssh

to connect with instance ===move .ssh keys to instance


to connect with bitbucket ===add .ssh keys to bitbucket

====bit bucket keys and .ssh keys of local and .ssh keys of instance

==== bitbucket login password from terminal == personal settings == app passwords == create == name and permissions == done ==

==== bitbucket personal settings === ssh == add keys == cat ~/.ssh/id_rsa.pub from terminal == Lakeland key == add key

===================================================================================
===========

====== pushing to docker hub private repo and hiding the password

docker hub == Account Settings == Security == New Access Token == name it == generate == copy secret

kubectl create secret docker-registry sureshdockerkey -n patch --docker-server=https://index.docker.io/v2/ --docker-username=0428a861224e --docker-password=c76d1745-6c13-45a0-a16d-c1654385c6ad --docker-email=m.suresh404@gmail.com

===================================================================================
=============

=======18-7-22
==========================================================
=====check for cluster connected

kubectl config get-contexts


==========================================================

==========creating alpha and beta

crate svc in local 4 pods fe ,be , messenger, redis

aws configure === aws login

crate cluster .yaml define one node group min 1 max 1 desired 1 once instance
region

setup ssh keys aws keys bitbucket keys ==== always disable root aws keys

take instance login using public ssh key

sudo yum install git


git version
git clone https from bitbucket and enter bitbucket login password

git branch
git checkout alpha beta

change environments files in fe ,be ,messenger

use http for alpha beta, check port no in docker file and svc pods port should
be same

docker build
docker push
create pods === change image in pods

svc public ip:port no (4444) in the google

register in site
login
create load
check chart service

drive upload a file

move created load to transit, delivery ,-----

check data inserted in pgadmin

check no. of managed node groups created in aws a/c

check no. of load balancers created in aws a/c === limitation on the number of load balancers

=======creating a private repo as wayfr in docker hub and pushing public repo in
local to docker hub private repo

docker login -u 0428a861224e ===== p:


docker pull 0428a861224e/bemessengercc43
docker image ls
docker tag 0428a861224e/bemessengercc43:latest 0428a861224e/wayfr:messenger
docker push 0428a861224e/wayfr:messenger

=======14-7-22

pods created with type=loadbalancer

===== change the environments file in the develop branch before doing docker build

docker build --network=host -t 0428a861224e/bemessagercarrierconnects33 .

============================
==== pgadmin ====== for checking registered data on the website through pgadmin

servers --- register -- server --- general name=any --- connection -- host -- username --- port -- password (in branch develop (alpha wayfr) --- in bitbucket)

-- localhost -- wayfr-users --- schemas --- tables --- columns --- users -- select * from users -- press play button

kubectl config get-contexts ====check for connected to cluster

===========================

====13-7-22
login into instance by ssh
cd storelink
git branch
git checkout dev01

vi environments.ts

=====netstat
=======--network=host == used for instance connected to internet

docker build --network=host -t 0428a861224e/festorelinkcarrierconnects24 .

===========================

***************
cd redis
kubectl kustomize ./
vi redis-deploymentalpha.yaml
kubectl apply -k ./
kubectl get pods

=====redis created in my cluster ===yaml file in terminal

change .env redis


docker build
docker push to hub

************************

======

rm -rf store-link/ ======to delete directory on instance

============ 12-7-22 === do not install postgres with helm

helm ls --namespace default


helm list -aq

helm uninstall postgresql-1657560294

kubectl get secret --namespace default samplepostgres-postgresql -o jsonpath="{.data.postgres-password}"

TlRPRjhVMWtCaw==%

==================to delete helm created postgres

helm ls -A

kubectl delete pod postgresql-1657560294-0 =======delete pod first

kubectl get statefulset =======delete statefulset

kubectl delete statefulset postgresql-1657560294


===================================================================================
====================

➜ code1 git:(master) ✗ helm install samplepostgres bitnami/postgresql


NAME: samplepostgres
LAST DEPLOYED: Tue Jul 12 17:57:39 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: postgresql
CHART VERSION: 11.6.15
APP VERSION: 14.4.0

** Please be patient while the chart is being deployed **

PostgreSQL can be accessed via port 5432 on the following DNS names from within
your cluster:

samplepostgres-postgresql.default.svc.cluster.local - Read/Write connection

To get the password for "postgres" run:

export POSTGRES_PASSWORD=$(kubectl get secret --namespace default samplepostgres-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)

To connect to your database run the following command:

kubectl run samplepostgres-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:14.4.0-debian-11-r7 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
      --command -- psql --host samplepostgres-postgresql -U postgres -d postgres -p 5432

> NOTE: If you access the container using bash, make sure that you execute
"/opt/bitnami/scripts/postgresql/entrypoint.sh /bin/bash" in order to avoid the
error "psql: local user with ID 1001} does not exist"

To connect to your database from outside the cluster execute the following
commands:

kubectl port-forward --namespace default svc/samplepostgres-postgresql 5432:5432 &
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432
===================================================================================
=============================

============11-7-22

====== to install postgres using helm charts on k8s cluster

helm install bitnami/postgresql --generate-name

========to increase instances from terminal


eksctl scale nodegroup --cluster=sample-cluster --nodes=3 ng-1
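====to confirm the nodegroup name and the new size after scaling (cluster/nodegroup names taken from the command above):

eksctl get nodegroup --cluster=sample-cluster
kubectl get nodes                 # should show 3 Ready nodes once scaling finishes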

=========================================8-7-22

=========login in to instance

ssh -i "eks" ec2-user@ec2-34-210-61-20.us-west-2.compute.amazonaws.com

scp -i "~/.ssh/eks" -r /Users/Suresh/.ssh/ ec2-user@ec2-34-210-61-20.compute-


1.amazonaws.com:/home/ec2-user/.ssh

sudo yum install git


git --version
sudo yum install docker
docker --version
sudo service docker status
sudo service docker restart

git clone https://wayfrdev3@bitbucket.org/wayfr/wayfr-messengerbe.git

ls -la
git checkout develop / dev01

------change passwords in .env

docker build -t betest:201 .


docker run -d -p 3000:3000 betest:201 ====to host locally

cd deployment

kubectl apply -f UsersDeployment.yaml ==== set the image in the manifest to betest:201 first

cd services

kubectl apply -f UserService.yaml ==== update it to match the betest:201 deployment above
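====a sketch of pointing the image field in UsersDeployment.yaml at the tag just built (assumes a single container entry in the manifest and GNU sed on the instance):

grep -n "image:" UsersDeployment.yaml                           # find the current image line
sed -i 's|image: .*|image: betest:201|' UsersDeployment.yaml    # on macOS use: sed -i '' ...
kubectl apply -f UsersDeployment.yaml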

============ copy from local to remote

➜ .ssh scp -r -i "eks" /Users/Suresh/.ssh/bitbucketsuresh.pub ec2-user@ec2-34-210-


61-20.us-west-2.compute.amazonaws.com:~/.ssh
bitbucketsuresh.pub
100% 586 2.3KB/s 00:00
➜ .ssh scp -r -i "eks" /Users/Suresh/.ssh/bitbucketsuresh ec2-user@ec2-34-210-61-
20.us-west-2.compute.amazonaws.com:~/.ssh
bitbucketsuresh

=======================

scp -r -i "id_rsa" /Users/Suresh/Downloads/code1 ec2-user@ec2-54-200-241-97.us-


west-2.compute.amazonaws.com:~/code

=========== 8-7-22
aws configure

AWSAccessKeyId=AKIASWGJPHNKMWFOT7WV
AWSSecretKey=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

security credentials
access key
download root key ====a root key has all permissions and is not required; a user key with scoped permissions is enough

========To create your Amazon EKS cluster role in the IAM console

Open the IAM console at https://console.aws.amazon.com/iam/.

Choose Roles, then Create role.

Under Trusted entity type, select AWS service.

From the Use cases for other AWS services dropdown list, choose EKS.

Choose EKS - Cluster for your use case, and then choose Next.

On the Add permissions tab, choose Next.

For Role name, enter a unique name for your role, such as eksClusterRole.

For Description, enter descriptive text such as Amazon EKS - Cluster role.

Choose Create role.
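====the same role can also be created from the CLI, a sketch using the standard EKS trust policy and managed policy:

cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name eksClusterRole --assume-role-policy-document file://eks-trust-policy.json
aws iam attach-role-policy --role-name eksClusterRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy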

========= 7-7-22

aws configure ==== details found in the file called (aws k8s cluster_user_credentials.csv)

AWS Access Key ID [None]: AKIAR2ATEQGMBDAAAZON


AWS Secret Access Key [None]: 5mPM2lfyg5oe594cY7PrBX0WbOkRpxmZFpR4wLBi
Default region name [None]: us-west-2

eksctl create cluster -f cluster.yaml
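====a minimal cluster.yaml sketch for the command above (cluster name, region, and node sizes are assumptions):

cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: sample-cluster        # assumption
  region: us-west-2           # matches the aws configure region above
nodeGroups:
  - name: ng-1
    instanceType: t3.medium   # assumption
    desiredCapacity: 2
EOF
eksctl create cluster -f cluster.yaml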

=====6-7-22

====

ec2-instance mac login


cd .ssh
ssh -i "eks" ec2-user@ec2-35-87-34-82.us-west-2.compute.amazonaws.com

ssh -i "id_rsa" ec2-user@ec2-52-38-35-48.us-west-2.compute.amazonaws.com

=====
Can't open file for writing (vim error)

:w !sudo tee % > /dev/null

if the file still cannot be modified, vim prompts to add "!", so quit without saving:

:q!

=============== 5-7-22

how to delete a cluster

check the node group under Compute for the cluster in the AWS console (or with eksctl)

delete the node group

to stop the cluster instances from the AWS console:

search for the auto scaling group and set
desired = 0
min = 0
max = 2

IAM USERS

users --add permission --Attach existing policies directly

1)AdministratorAccess
2)select all eks services

============28-6-22 ci cd setup ,bitbucket

# Create Public Node Group


eksctl create nodegroup --cluster=eksdemo1 \
--region=us-east-1 \
--name=eksdemo1-ng-public1 \
--node-type=t2.micro

=========
git pull origin master --allow-unrelated-histories
git pull origin master
git pull --rebase origin master
git push -u origin master

=== git pull


====git push -u origin master

=========
git init
git add .
git commit -m "first commit"
git status
git remote add origin https://wayfrdev3@bitbucket.org/wayfrdev3/fe-carrier-connects.git
git push origin master --force
============ 15-6-22

brew install jmeter


brew upgrade jmeter
jmeter

========7-6-22 azure web app cmds

open terminal
open the backend or frontend code
git add -A
git commit -m "cr azure"

open vs code
install the Azure App Service extension
click the Azure symbol
at Resources click the + symbol
create web app
sign in to azure
u: m.suresh404@gmail.com
p: MANUsuri@999

==== select browse the code

select app service

app name
right click and deploy
a pop-up window opens the browser
see the output
================

======== 1-6-22

eb --version
aws --version

aws configure
AWS Access Key ID [None]: AKIAR2ATEQGMBDAAAZON
AWS Secret Access Key [None]: 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
Default region name [None]:
Default output format [None]:

eb init --platform node.js --region us-east-2

eb create carrierconnectsalpha-env

eb create/status/health/events/logs/open/deploy/config/terminate

======27-5-22 ====aws elastic bean stalk ===sample application

==== aws cli install manually


cd node-express

brew install python


brew reinstall python@3.9

export PATH="/Library/Frameworks/Python.framework/Versions/3.5/bin:/Users/
hedongfang/.rvm/gems/ruby-2.0.0-p643/bin:/Users/hedongfang/.rvm/gems/ruby-2.0.0-
p643@global/bin:/Users/hedongfang/.rvm/rubies/ruby-2.0.0-p643/bin:/usr/local/bin:/
usr/bin:/bin:/usr/sbin:/sbin:/Users/hedongfang/.rvm/bin:/System/Library/
Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages"
export MANPATH="/usr/local/man:$MANPATH"

======check python version

export PATH=/usr/local/lib/python3.9/site-packages:$PATH

pip3 install --upgrade awscli

export PATH=LOCAL_PATH:$PATH
source ~/PROFILE_SCRIPT
eb --version

aws --version

aws configure
AWS Access Key ID [None]: AKIAR2ATEQGMHDMDAEHM
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:

mkdir node-express
cd node-express
express && npm install

git init
vi .gitignore ==== add node_modules/ so it is not committed

brew install awsebcli


eb --version
eb init --platform node.js --region us-east-2
eb create --sample node-express-env
eb open

eb terminate
====eb create asks for an environment name; the environment (node-express-env) can then be found by searching Elastic Beanstalk on the AWS site ==

==========26-5-22

=======aws (eks) kubernetes direct install

open aws == kubernetes cluster == create iam role ==
select == AWS service, EC2, next, search eks and select all, next, role name = cluster_admin, create role

brew install python


brew reinstall python@3.9
pip3 install awscli --upgrade --user

export PATH=/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages:$PATH or

export PATH=/Users/Suresh/Library/Python/3.9/bin:$PATH ===path changes depending on the installation
aws --version

aws configure ==== details found in the file called (aws k8s cluster_user_credentials.csv)

AWS Access Key ID [None]: AKIAR2ATEQGMHDMDAEHM


AWS Secret Access Key [None]: hF6f67Ksllz/LZCZ8nrmqTkF0oItDb4zkuMApjcH
Default region name [None]: us-west-2
Default output format [None]:

brew tap weaveworks/tap


brew install weaveworks/tap/eksctl
eksctl create cluster

kubectl config get-contexts


kubectl get pods -n kube-system
kubectl cluster-info
eksctl delete cluster --name=unique-gopher-1653575991

=======================
ls -a
vi .bash_history

site ==== https://ohmyz.sh/

sh -c "$(curl -fsSL
https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

==========================

========= 25-5-22 ====== Mongo

docker exec -it cb38b19f79eb sh ===container id===log in to the container and check the database data

use DATABASE_NAME

use mydb
db
show dbs

db.movie.insert({"name":"tutorials point"})

show collections
db.inventory.find( {} ) ====equivalent of SELECT * FROM inventory in SQL

========== 25-5-22 docker,kubernetes,postgres

cd FEcode
docker build -t fetest:201 .
docker run -d -p 8080:8080 fetest:201 or

cd deployment

kubectl apply -f landingpage.yaml

cd services

kubectl apply -f landingpageservice.yaml

docker ps
docker ps -a

kubectl get pods


kubectl get svc
kubectl get deployment
kubectl get nodes

=======localhost:8080 ===check in the browser

cd BEcode
docker build -t betest:201 .
docker run -d -p 3000:3000 betest:201 or

cd deployment

kubectl apply -f UsersDeployment.yaml

cd services

kubectl apply -f UserService.yaml

docker ps
docker ps -a

kubectl get pods


kubectl get svc
kubectl get deployment
kubectl get nodes

=======localhost:3000 ===check in the browser


docker run -d -p 5432:5432 -e POSTGRES_PASSWORD='1234' postgres

docker ps

====open new terminal

=====login into postgres container

docker exec -it cb38b19f79eb sh ===container id===

psql -U postgres
\l
\d
\c subscribers

CREATE DATABASE subscribers;

select * from subscribers; === to show the data inserted in the database

cd BEcode

vi .env
DB_HOST=localhost
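====a sketch of the remaining .env entries the sequelize migration below might need (variable names are assumptions, match them to the project's config; the password matches the docker run above):

DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=1234
DB_NAME=subscribers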

sequelize db:migrate

=================
docker ps -a
docker container start <container-id> ====if your container has exited

kubectl describe pod <pod-name> ==e.g. deployment-landingpage-5df8b594b4-ghknm== if the pod is not ready

kubectl delete deployment <deployment-name> ======if you want to delete the pods

=================== 24-5-22

docker exec -it <container-name> psql -U <username> <database>

you can run any psql query you like. To list all the tables in your database use

\dt

After you have identified the table you want to query, you can run any select, update, or delete statement, e.g.:

SELECT * FROM <tablename>;

docker logs (container id) ==== to check the container status

docker container start (container id) ========to start an exited container

===========================
minikube update-check
kubectl cluster-info =====check whether kubectl is installed and connected to a cluster
kubectl cluster-info dump ======to further debug and diagnose cluster problems

docker ps -a -q
docker container ls

docker image ls

docker stop (container id)


docker rm -f (container id)
docker rmi -f (image id)

or

docker container ls -a
docker image ls
docker container rm <container_id>
docker image rm <image_id>

=================================
docker system prune -a
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all unused images
  - all build cache
============================

18-4-22

search option ===Services === stop/start/restart the Docker service there

=====docker cmds not working in command prompt

use this cmd === RMDIR /S %USERPROFILE%\AppData\Roaming\Docker

start the Docker service
now click on "Docker Desktop"

docker build --network=host -t betest:4 . ===on the instance ===use --network=host when the build container cannot reach the internet (check with netstat)

cd deployment
notepad UsersDeployment.yaml ===== set the image to betest:4

kubectl apply -f UsersDeployment.yaml

doskey /history ===to check the command history of the current session

========13-4-2022

=======creating an nginx pod and exposing it as a NodePort service to the local host
kubectl run nginx --image=nginx --port=80 --restart=Never
kubectl expose pod nginx --type=NodePort
kubectl get nodes
kubectl get pods
kubectl get svc
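====to reach the exposed pod, look up the assigned NodePort (on minikube, minikube service nginx --url prints a reachable URL directly):

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
curl http://$NODE_IP:$NODE_PORT    # assumption: the node IP is reachable from where this is run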

docker images
docker pull ubuntu
docker images

docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres


===image &container created

docker ps
docker exec -it some-postgres bash ===login to container

su postgres
psql
\conninfo
\q
exit
exit

=======12-4-2022

docker run -itd --name <container-name> -p 60:80 alpine ====alpine is the image name

============================
code Dockerfile
notepad Dockerfile === to check the file
minikube update-check
cls ==== to clear the screen

=============================errors

============ Elastic Beanstalk errors 3-6-22

ERROR: ServiceError - Create environment operation is complete, but with errors.
For more information, see troubleshooting documentation

AWS Elastic Beanstalk error: Failed to deploy application

Instance has failed at least the Unhealthy Threshold number of health checks consecutively

[Instance: i-06efc42e41c3e1100] Command failed on instance. Return code: 1 Output:
Engine execution has encountered an error..

Instance deployment failed to build the Docker image. The deployment failed.

During an aborted deployment, some instances may have deployed the new application version

Instance ELB health state has been "OutOfService" for 3 days: Instance has failed at least the UnhealthyThreshold number of health checks consecutively.

===6-7-22
npm ERR! code EAI_AGAIN
npm ERR! errno EAI_AGAIN
npm ERR! request to https://registry.npmjs.org/npm failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org

npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2022-07-06T15_08_15_877Z-debug.log
