
APPLICATION: a collection of services.

MONOLITHIC: multiple services are deployed on a single server with a single database.
MICROSERVICES: multiple services are deployed on multiple servers with multiple databases.

Based on the number of users and the app complexity we need to select the architecture.

FACTORS AFFECTING THE CHOICE OF MICROSERVICES:
F-1: COST
F-2: MAINTENANCE

CONTAINERS:
A container is similar to a server/VM.
It does not have its own operating system; the OS comes from the image.
(SERVER = AMI, CONTAINER = IMAGE)
Containers are free of cost and we can create multiple containers.

DOCKER:
It is a free and open-source tool.
It is platform independent.
Used to create, run, and deploy applications in containers.
It was introduced in 2013 by Solomon Hykes and Sebastien Pahl.
Docker is written in the Go language.
Here we write configuration files in YAML.
Before Docker users faced a lot of problems; with Docker there are far fewer issues with the application.
Docker uses the host resources (CPU, memory, network, OS).
Docker can run on any OS but it natively supports Linux distributions.

CONTAINERIZATION:
The process of packing an application with its dependencies.
ex: PUBG
APP = PUBG & DEPENDENCY = MAPS
APP = CAKE & DEPENDENCY = KNIFE

It is OS-level virtualization.

VIRTUALIZATION:
Able to create resources with our hardware properties.

ARCHITECTURE & COMPONENTS:
client   : interacts with the user; the user gives commands and they are executed by the Docker client.
daemon   : manages the Docker components (images, containers, volumes).
host     : where we install Docker (ex: Linux, Windows, macOS).
registry : manages the images.

ARCHITECTURE OF DOCKER:
yum install docker -y        #client
systemctl start docker       #client, engine
systemctl status docker

COMMANDS:
docker pull ubuntu                   : pull the ubuntu image
docker images                        : to see the list of images
docker run -it --name cont1 ubuntu   : to create a container
  -it (interactive)                  : to go inside the container
cat /etc/os-release                  : to see the OS flavour

apt update -y : to update the package index
redhat = yum
ubuntu = apt
Without an update we can't install any package in Ubuntu.

apt install git -y
apt install apache2 -y
service apache2 start
service apache2 status

ctrl+p, ctrl+q            : to exit a container without stopping it
docker ps -a              : to list all containers
docker attach cont_name   : to go inside a container
docker stop cont_name     : to stop a container
docker start cont_name    : to start a container
docker pause cont_name    : to pause a container
docker unpause cont_name  : to unpause a container
docker inspect cont_name  : to get complete info of a container
docker rm cont_name       : to delete a container

STOP: waits for the processes running inside the container to finish.
KILL: does not wait for the processes running inside the container to finish.
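The STOP/KILL difference above comes down to signals; as a small sketch (assuming a running container named cont1, which is not from the notes):

docker stop -t 30 cont1              # sends SIGTERM, waits up to 30 seconds, then sends SIGKILL
docker kill cont1                    # sends SIGKILL immediately, no grace period
docker kill --signal=SIGHUP cont1    # kill can also deliver a custom signal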
============================================

OS-LEVEL VIRTUALIZATION:

docker pull ubuntu
docker run -it --name cont1 ubuntu
apt update -y
apt install mysql-server apache2 python3 -y
touch file{1..5}
apache2 -v
mysql --version
python3 --version
ls

ctrl+p, ctrl+q

docker commit cont1 raham:v1
docker run -it --name cont2 raham:v1
apache2 -v
mysql --version
python3 --version
ls

DOCKERFILE:
It is an automated way to create an image.
Here we use components (instructions) to build the image.
In "Dockerfile" the D must be capital; the components are also written in capitals.
A Dockerfile is reusable.
Here we can create an image directly, without the help of a container.
Name: Dockerfile

docker kill $(docker ps -qa)
docker rm $(docker ps -qa)
docker rmi -f $(docker images -qa)

COMPONENTS:
FROM       : sets the base image
RUN        : runs Linux commands (during image creation)
CMD        : runs Linux commands (after container creation)
ENTRYPOINT : higher priority than CMD
COPY       : copies local files into the container
ADD        : copies files from the internet into the container
WORKDIR    : sets the required working directory
LABEL      : adds labels to the Docker image
ENV        : sets environment variables (inside the container)
ARG        : passes build-time variables (outside the container)
EXPOSE     : documents the port number

EX-1:
FROM ubuntu
RUN apt update -y
RUN apt install apache2 -y

docker build -t raham:v1 .
docker run -it --name cont1 raham:v1

EX-2:
FROM ubuntu
RUN apt update -y
RUN apt install apache2 -y
RUN apt install python3 -y
CMD apt install mysql-server -y

docker build -t raham:v2 .
docker run -it --name cont2 raham:v2

EX-3:
FROM ubuntu
COPY index.html /tmp
ADD http://dlcdn.apache.org/tomcat/tomcat-9/v9.0.89/bin/apache-tomcat-9.0.89.tar.gz /tmp

docker build -t raham:v3 .
docker run -it --name cont3 raham:v3

EX-4:
FROM ubuntu
COPY index.html /tmp
ADD http://dlcdn.apache.org/tomcat/tomcat-9/v9.0.89/bin/apache-tomcat-9.0.89.tar.gz /tmp
WORKDIR /tmp
LABEL author rahamshaik

docker build -t raham:v4 .
docker run -it --name cont4 raham:v4

EX-5:
FROM ubuntu
LABEL author rahamshaik
ENV client swiggy
ENV server appserver

docker build -t raham:v5 .
docker run -it --name cont5 raham:v5
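A small sketch of how ARG and the CMD/ENTRYPOINT priority described above behave; the image name demo:v1 and the echo example are illustrative assumptions, not from the notes:

FROM ubuntu
ARG pkg=apache2                      # build-time variable, can be overridden with --build-arg
RUN apt update -y && apt install -y $pkg
ENTRYPOINT ["echo"]                  # fixed part of the command
CMD ["hello from CMD"]               # default argument, replaced by anything passed to docker run

docker build -t demo:v1 --build-arg pkg=apache2 .
docker run demo:v1                      # prints: hello from CMD
docker run demo:v1 "overridden text"    # CMD is replaced, ENTRYPOINT still runs echo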
NETFLIX DEPLOYMENT:

yum install git -y
git clone https://github.com/RAHAMSHAIK007/netflix-clone.git
mv netflix-clone/* .

Dockerfile:

FROM ubuntu
RUN apt update
RUN apt install apache2 -y
COPY * /var/www/html/
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]

docker build -t netflix:v1 .
docker run -it --name netflix1 -p 80:80 netflix:v1

MULTI-STAGE BUILD:
We build image-1 from a Dockerfile and then use that image-1 to build another image.

Dockerfile --> image1

Dockerfile
FROM image1 --> multi-stage build
-----
-------

ADV:
less time
less work
less complexity
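A minimal multi-stage sketch of the idea above, assuming a C source file hello.c; the stage name builder and the file are illustrative, not from the notes:

# stage 1: build stage with the heavy tooling
FROM ubuntu AS builder
RUN apt update -y && apt install -y gcc make
COPY hello.c /src/hello.c
RUN gcc -o /src/hello /src/hello.c

# stage 2: final image, copies only the built artifact from stage 1
FROM ubuntu
COPY --from=builder /src/hello /usr/local/bin/hello
CMD ["hello"]

The final image does not carry the compiler or the source, which is one way the "less time / less complexity" advantages show up in practice.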

============================================

VOLUMES:
A volume is used to store the data of a container.
A volume is a simple directory inside the container.
Containers use host resources (CPU, RAM, ROM).
A single volume can be shared with multiple containers.
ex: cont-1 (vol1) --> cont2 (vol1) & cont3 (vol1) & cont4 (vol1)
At creation time we attach a volume to a single container only.
Every volume is stored under /var/lib/docker/volumes.

METHOD-1: DOCKERFILE

FROM ubuntu
VOLUME ["/volume1"]

docker build -t raham:v1 .
docker run -it --name cont1 raham:v1
cd volume1/
touch file{1..5}
cat > file1
ctrl+p, ctrl+q

docker run -it --name cont2 --volumes-from cont1 --privileged=true ubuntu
docker run -it --name cont3 --volumes-from cont1 --privileged=true ubuntu

METHOD-2: FROM THE CLI

docker run -it --name cont4 -v volume2 ubuntu
cd volume2/
touch java{1..5}
ctrl+p, ctrl+q

docker run -it --name cont5 --volumes-from cont4 ubuntu
cd volume2
ll
touch java{6..10}
ctrl+p, ctrl+q
docker attach cont4
ls

METHOD-3: VOLUME MOUNTING

docker volume ls               : to list volumes
docker volume create name      : to create a volume
docker volume inspect volume3  : to get info of volume3
cd /var/lib/docker/volumes/volume3/_data
touch python{1..5}
docker run -it --name cont5 --mount source=volume3,destination=/volume3 ubuntu
docker volume rm               : to delete volumes
docker volume prune            : to delete unused volumes

HOST --> CONTAINER:

cd /root
touch raham{1..5}
docker volume inspect volume4
cp * /var/lib/docker/volumes/volume4/_data
docker attach cont5
ls /volume4
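The HOST --> CONTAINER copy above goes through /var/lib/docker/volumes by hand; a bind mount does the same sharing directly. A small sketch, assuming a host directory /root/data and a container path /data (both illustrative, not from the notes):

mkdir -p /root/data
touch /root/data/raham{1..5}
docker run -it --name cont6 -v /root/data:/data ubuntu   # host directory mounted into the container
ls /data                                                 # files created on the host are visible here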
RESOURCE MANAGEMENT:
By default, Docker containers have no limits on resources like CPU and memory, so we need to restrict the resource usage of a container.
By default, Docker containers use the host resources (CPU, RAM, ROM).
The resource limits of a Docker container should not exceed the limits of the Docker host.

docker stats   --> to check live CPU and memory usage

docker run -it --name cont7 --cpus="0.1" --memory="300mb" ubuntu
docker update cont7 --cpus="0.7" --memory="300mb"

JENKINS SETUP BY DOCKER:

docker run -it --name jenkins -p 8080:8080 jenkins/jenkins:lts

------------------------------------------------------------------------
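To finish the Jenkins setup in the browser you need the initial admin password; as a hedged sketch (the secrets path below is the one used by the jenkins/jenkins image):

docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword   # print the initial admin password
docker logs jenkins                                                      # the same password appears in the first-start logs
# then open http://<server-ip>:8080 and paste it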
vim Dockerfile

FROM ubuntu
RUN apt update -y
RUN apt install apache2 -y
COPY index.html /var/www/html
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]

index.html: take it from w3schools.

docker build -t movies:v1 .
docker run -itd --name movies -p 81:80 movies:v1

docker build -t train:v1 .
docker run -itd --name train -p 82:80 train:v1

docker build -t dth:v1 .
docker run -itd --name dth -p 83:80 dth:v1

docker build -t recharge:v1 .
docker run -itd --name recharge -p 84:80 recharge:v1

docker ps -a -q                  : to list container ids
docker kill $(docker ps -a -q)   : to kill all containers
docker rm $(docker ps -a -q)     : to remove all containers

Note: in the above process all the containers are created and managed one by one. In real time we manage all the containers at the same time, and for that purpose we use the concept called Docker Compose.

DOCKER COMPOSE:
It's a tool used to manage multiple containers on a single host.
We can create, start, stop, and delete all containers together.
We write the container information in a file called a compose file.
The compose file is in YAML format.
Inside the compose file we give the images, ports, and volumes info of the containers.
We need to download this tool and use it.

INSTALLATION:

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
ls /usr/local/bin/
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose version

In Linux you mainly have two types of commands:
the first type is inbuilt commands, which come with the operating system by default;
the second type is downloaded commands, which we download with the help of yum, apt, or Amazon Linux extras.
Some commands can be downloaded as binary files.

NOTE: Linux does not ship some commands, so to use them we need to download them separately.
Once a command is downloaded we need to move it to /usr/local/bin,
because all the user-executed commands in Linux are stored in /usr/local/bin, and the binary needs executable permission to work as a command.

vim docker-compose.yml

version: '3.8'
services:
  movies:
    image: movies:v1
    ports:
      - "81:80"
  train:
    image: train:v1
    ports:
      - "82:80"
  dth:
    image: dth:v1
    ports:
      - "83:80"
  recharge:
    image: recharge:v1
    ports:
      - "84:80"

COMMANDS:
docker-compose up -d          : to create and start all containers
docker-compose stop           : to stop all containers
docker-compose start          : to start all containers
docker-compose kill           : to kill all containers
docker-compose rm             : to delete all containers
docker-compose down           : to stop and delete all containers
docker-compose pause          : to pause all containers
docker-compose unpause        : to unpause all containers
docker-compose ps -a          : to list the containers managed by the compose file
docker-compose images         : to list the images managed by the compose file
docker-compose logs           : to show the logs of docker compose
docker-compose top            : to show the processes of the compose containers
docker-compose restart        : to restart all the compose containers
docker-compose scale train=10 : to scale the service

CHANGING THE DEFAULT FILE:
By default docker-compose supports the following file names:
docker-compose.yml, docker-compose.yaml, compose.yml, compose.yaml

mv docker-compose.yml raham.yml

docker-compose up -d                : throws an error
docker-compose -f raham.yml up -d
docker-compose -f raham.yml ps
docker-compose -f raham.yml down

DOCKERHUB:
The images we create stay on the server; these images will work only on this server.

git (local)   --> github (internet)    = accessible by others
image (local) --> dockerhub (internet) = accessible by others

Replace your username.

STEPS:
create a dockerhub account
create a repo

docker tag movies:v1 vijaykumar444p/movies
docker login                      --> username and password
docker push vijaykumar444p/movies

docker tag train:v1 vijaykumar444p/train
docker push vijaykumar444p/train

docker tag dth:v1 vijaykumar444p/dth
docker push vijaykumar444p/dth

docker tag recharge:v1 vijaykumar444p/recharge
docker push vijaykumar444p/recharge

docker rmi -f $(docker images -q)
docker pull vijaykumar444p/movies:latest

DOCKER SAVE:
docker image save swiggy:v1 > swiggy:v1.tar   : convert an image to a file
docker image history swiggy:v1
docker rmi swiggy:v1
docker images
docker image load < swiggy:v1.tar

COMMAND TO ZIP        : gzip dummy:v5.tar       (creates dummy:v5.tar.gz)
DECOMPRESS COMMAND    : gzip -d movies:latest.gz

COMPRESSING DOCKER IMAGE SIZE:
1. push to dockerhub
2. use a multi-stage docker build
3. reduce layers
4. use tar balls

============================================

HIGH AVAILABILITY: more than one server.
Why: if one server gets deleted, the other server still serves the app.

DOCKER SWARM:
It's an orchestration tool for containers.
Used to manage multiple containers on multiple servers.
Here we create a cluster (a group of servers).
In that cluster, we can create the same container on multiple servers.
Here we have a manager node and worker nodes.
The manager node creates and distributes the containers to the worker nodes.
The worker node's main purpose is to maintain the containers.
Without the Docker engine we can't create the cluster.
Port: 2377
A worker node joins the cluster by using a token; the manager node gives the token.

SETUP:
create 3 servers
install docker and start the service
hostnamectl set-hostname manager/worker-1/worker-2
enable port 2377

docker swarm init (manager) --> copy-paste the token to the worker nodes
docker node ls

Note: individual containers are not going to replicate.
Only if we create a service will the containers be distributed.

SERVICE: it's a way of exposing and managing multiple containers.
With a service we can create copies of containers, and those copies are distributed to all the nodes.

service --> containers --> distributed to nodes

docker service create --name movies --replicas 3 -p 81:80 vijaykumar444p/movies:latest
docker service ls               : to list services
docker service inspect movies   : to get complete info of the service
docker service ps movies        : to list the containers of movies
docker service scale movies=10  : to scale out the containers
docker service scale movies=3   : to scale in the containers
docker service rollback movies  : to go to the previous state
docker service logs movies      : to see the logs
docker service rm movies        : to delete the service

When we scale down it follows the LIFO pattern.
LIFO MEANS LAST-IN FIRST-OUT.

Note: if we delete a container it is recreated automatically.
This is called self-healing.

CLUSTER ACTIVITIES:
docker swarm leave (worker)       : to make a node inactive in the cluster
To activate the node again, copy the token.
docker node rm node-id (manager)  : to delete a worker node which is in the down state
docker node inspect node_id       : to get complete info of a worker node
docker swarm join-token manager   : to generate the token to join as a manager

Note: we can't delete a node which is in the ready state.
If we want to join the node to the cluster again, we need to paste the token on the worker node.
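The rollback command above implies an earlier update; a hedged sketch of a rolling update of the movies service (the :v2 tag is an assumed new image tag, not from the notes):

docker service update --image vijaykumar444p/movies:v2 movies   # rolling update, one task at a time by default
docker service ps movies                                        # shows old and new tasks
docker service rollback movies                                  # go back to the previous image if needed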
DOCKER NETWORKING:
Docker networks are used to enable communication between multiple containers running on the same or different Docker hosts.

We have different types of Docker networks:
Bridge network  : SAME HOST
Overlay network : DIFFERENT HOSTS
Host network
None network

BRIDGE NETWORK: the default network; containers communicate with each other within the same host.

OVERLAY NETWORK: used so containers can communicate with each other across multiple Docker hosts.

HOST NETWORK: when you want your container IP and the EC2 instance IP to be the same, you use the host network.

NONE NETWORK: when you don't want the container to be exposed to the world, we use the none network. It does not provide any network to the container.

To create a network                    : docker network create network_name
To see the list                        : docker network ls
To delete a network                    : docker network rm network_name
To inspect                             : docker network inspect network_name
To connect a container to the network  : docker network connect network_name container_id/name

docker exec -it cont1 /bin/bash
apt update
apt install iputils-ping -y     : to install the ping command
ping <ip-address of cont2>      : to check connectivity

To disconnect the container  : docker network disconnect network_name container_name
To prune                     : docker network prune

PROJECT REPO:
https://github.com/devopsbyraham/dockermsproject.git

============================================

K8S:

LIMITATIONS OF DOCKER SWARM:
1. CAN'T DO AUTO-SCALING AUTOMATICALLY
2. CAN'T DO LOAD BALANCING AUTOMATICALLY
3. NO DEFAULT DASHBOARD
4. CAN'T PLACE A CONTAINER ON A REQUIRED SERVER
5. USED FOR SIMPLE APPS

DOCKER ALTERNATIVES: containerd, rocket (rkt), cri-o

HISTORY:
Initially Google created an internal system called Borg (later called Omega) to manage its thousands of applications; later they donated the Borg system to the CNCF and it was made open source.
The initial name was Borg, but later the CNCF renamed it to Kubernetes.
The word Kubernetes originates from a Greek word meaning pilot or helmsman.
Borg: 2014
The first version of K8s came in 2015.

INTRO:
It is an open-source container orchestration platform.
It is used to automate many of the manual processes like deploying, managing, and scaling containerized applications.
Kubernetes was developed by GOOGLE using the Go language.
MEM --> GOOGLE --> CLUSTER --> MULTIPLE APPS OF GOOGLE --> BORG -->
Google donated Borg to the CNCF in 2014.
The 1st version was released in 2015.

ARCHITECTURE:

DOCKER : C-N-C-A
K8S    : C-N-P-C-A

C : CLUSTER
N : NODE
P : POD
C : CONTAINER
A : APPLICATION

COMPONENTS:

MASTER:
1. API SERVER : communicates with the user (takes the command, executes it & gives the output)
2. ETCD       : database of the cluster (stores complete info of the cluster as KEY-VALUE pairs)
3. SCHEDULER  : selects the worker node on which to schedule pods (depends on the hardware of the node)
4. CONTROLLER : controls the k8s objects (network, service, node)

WORKER:
1. KUBELET    : it's an agent (it reports all activities to the master)
2. KUBEPROXY  : it deals with networking (IPs, networks, ports)
3. POD        : a group of containers (inside the pod we have the app)

Note: all components of the cluster are themselves created as pods.

CLUSTER TYPES:

1. SELF-MANAGED: WE NEED TO CREATE & MANAGE THEM

minikube = single-node cluster
kubeadm  = multi-node cluster (manual)
kops     = multi-node cluster (automation)

2. CLOUD-BASED: CLOUD PROVIDERS WILL MANAGE THEM

AWS    = EKS = ELASTIC KUBERNETES SERVICE
AZURE  = AKS = AZURE KUBERNETES SERVICE
GOOGLE = GKE = GOOGLE KUBERNETES ENGINE

MINIKUBE:
It is a tool used to set up a single-node K8s cluster.
Here the master and worker run on the same machine.
It contains the API server, the etcd database, and a container runtime.
It is used for development, testing, and experimentation purposes on a local machine.
It is platform independent.
Installing Minikube is simple compared to other tools.

NOTE: we don't implement this in real-time production.

REQUIREMENTS:
2 CPUs or more
2 GB of free memory
20 GB of free disk space
Internet connection
Container or virtual machine manager, such as Docker.

Kubectl is the command-line tool for K8s.
If we want to execute commands we need to use kubectl.

SETUP:
sudo apt update -y
sudo apt upgrade -y
sudo apt install curl wget apt-transport-https -y
sudo curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo mv minikube-linux-amd64 /usr/local/bin/minikube
sudo chmod +x /usr/local/bin/minikube
sudo minikube version
sudo curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
sudo echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
sudo minikube start --driver=docker --force

NOTE: when you download a command as a binary file it needs to be placed in /usr/local/bin,
because all the commands in Linux live in /usr/local/bin,
and the binary file needs executable permission to work as a command.

POD:
It is the smallest unit of deployment in K8s.
It is a group of containers.
Pods are ephemeral (short-living objects).
Mostly we use a single container inside a pod, but if required we can create multiple containers inside the same pod.
When we create a pod, the containers inside the pod share the same network namespace and can share the same storage volumes.
While creating a pod, we must specify the image, along with any necessary configuration and resource limits.
K8s cannot communicate with containers; it communicates only with pods.
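Since a pod can hold more than one container, here is a hedged sketch of a two-container pod manifest; the pod name, images, and the sleep command are illustrative assumptions, not from the notes:

apiVersion: v1
kind: Pod
metadata:
  name: pod-multi
spec:
  containers:
    - name: app
      image: nginx:latest               # main application container
    - name: sidecar
      image: busybox:latest             # helper container sharing the same network namespace
      command: ["sh", "-c", "sleep 3600"]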
We can create a pod in two ways:
1. Imperative (command)
2. Declarative (manifest file)

IMPERATIVE:

kubectl run pod1 --image vinodvanama/paytmmovies:latest
kubectl get pods/pod/po
kubectl get pod -o wide
kubectl describe pod pod1
kubectl delete pod pod1

DECLARATIVE: by using a file called a manifest file.

MANDATORY FIELDS: without these fields we can't create a manifest:

apiVersion:
kind:
metadata:
spec:

vim pod.yml

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - image: vinodvanama/paytmtrain:latest
      name: cont1

execution:
kubectl create -f pod.yml
kubectl get pods/pod/po
kubectl get pod -o wide
kubectl describe pod pod1
kubectl delete -f pod.yml

DRAWBACK: once a pod is deleted we can't retrieve the pod.

============================================

REPLICASET:
rs --> pods
It creates multiple copies of the same pod.
If we delete one pod, it automatically creates a new pod.
IF A POD WAS CREATED LAST, IT IS DELETED FIRST WHEN WE SCALE DOWN.
All the pods have the same config; only the pod names are different.

LABELS: individual pods are difficult to manage because they have different names, so we give a common label to group them and work with them together.

SELECTOR: used to select pods with the same labels.

Use kubectl api-resources to check the objects' info.

vim replicaset.yml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: movies
  labels:
    app: paytm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: paytm
  template:
    metadata:
      labels:
        app: paytm
    spec:
      containers:
        - name: cont1
          image: yashuyadav6339/movies:latest

To list rs              : kubectl get rs/replicaset
To show additional info : kubectl get rs -o wide
To show complete info   : kubectl describe rs name-of-rs
To delete the rs        : kubectl delete rs name-of-rs
To get labels of pods   : kubectl get pods -l app=paytm
To delete pods by label : kubectl delete po -l app=paytm
To scale the rs         : kubectl scale rs/movies --replicas=10 (LIFO)

LIFO: LAST IN FIRST OUT.

To show all pod labels  : kubectl get pods --show-labels
To delete all pods      : kubectl delete pod --all
ADV:
self-healing
scaling

DRAWBACKS:
1. We can't roll in and roll out, i.e. we can't update the application with a ReplicaSet.

DEPLOYMENT:
deploy --> rs --> pods
With a Deployment we can update the application.
It is a high-level k8s object.

vim deploy.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: paytm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: paytm
  template:
    metadata:
      labels:
        app: paytm
    spec:
      containers:
        - name: cont1
          image: yashuyadav6339/movies:latest
To list deployments     : kubectl get deploy
To show additional info : kubectl get deploy -o wide
To show complete info   : kubectl describe deploy name-of-deployment
To delete the deploy    : kubectl delete deploy name-of-deploy
To get labels of pods   : kubectl get pods -l app=paytm
To scale the deploy     : kubectl scale deploy/name-of-deploy --replicas=10 (LIFO)
To edit the deploy      : kubectl edit deploy/name-of-deploy

kubectl rollout history deploy/movies
kubectl rollout undo deploy/movies
kubectl rollout status deploy/movies
kubectl rollout pause deploy/movies
kubectl rollout resume deploy/movies
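The rollout history above only has revisions once the Deployment has been updated; a hedged sketch of triggering an image update (cont1 is the container name from the manifest, the :v2 tag is an assumed new tag):

kubectl set image deploy/movies cont1=yashuyadav6339/movies:v2   # update the container image, creates a new ReplicaSet
kubectl rollout status deploy/movies                             # watch the rolling update
kubectl rollout undo deploy/movies                               # roll back if the new version misbehaves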
COMMANDS FOR SHORTCUTS:

vim .bashrc

alias kgp="kubectl get pods"
alias kgr="kubectl get rs"
alias kgd="kubectl get deploy"

source .bashrc

kgr
kgp

============================================

KOPS:

INFRASTRUCTURE: the resources used to run our application on the cloud.
EX: EC2, VPC, ALB, ASG, ...

Minikube --> single-node cluster.
All the pods are on a single node; if that node gets deleted then all the pods are gone.

KOPS:
kOps, also known as Kubernetes Operations.
It is a free and open-source tool.
Used to create, destroy, upgrade, and maintain a highly available, production-grade Kubernetes cluster.
Depending on the requirement, kOps can also provision the cloud infrastructure.
kOps is mostly used for deploying AWS and GCE Kubernetes clusters.
But officially the tool only supports AWS; support for other cloud providers (such as DigitalOcean, GCP, and OpenStack) is in the beta stage.

ADVANTAGES:
• Automates the provisioning of AWS and GCE Kubernetes clusters
• Deploys highly available Kubernetes masters
• Supports rolling cluster updates
• Autocompletion of commands in the command line
• Generates Terraform and CloudFormation configurations
• Manages cluster add-ons
• Supports a state-sync model for dry-runs and automatic idempotency
• Creates instance groups to support heterogeneous clusters

ALTERNATIVES:
Amazon EKS, MINIKUBE, KUBEADM, RANCHER, TERRAFORM.

STEP-1: GIVING PERMISSIONS

kOps is a third-party tool; if it wants to create infrastructure on AWS,
AWS needs to grant it permission, so we use an IAM user to allocate permissions to the kops tool.

IAM --> USER --> CREATE USER --> NAME: KOPS --> Attach Policies Directly --> AdministratorAccess --> NEXT --> CREATE USER
USER --> SECURITY CREDENTIALS --> CREATE ACCESS KEYS --> CLI --> CHECKBOX --> CREATE ACCESS KEYS --> DOWNLOAD

aws configure   (run this command on the server)

STEP-2: INSTALL KUBECTL AND KOPS

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
wget https://github.com/kubernetes/kops/releases/download/v1.25.0/kops-linux-amd64
chmod +x kops-linux-amd64 kubectl
mv kubectl /usr/local/bin/kubectl
mv kops-linux-amd64 /usr/local/bin/kops

vim .bashrc
export PATH=$PATH:/usr/local/bin/   --> save and exit
source .bashrc

STEP-3: CREATING THE BUCKET

aws s3api create-bucket --bucket rahamsdevopsbatchmay292024pm.k8s.local --region us-east-1
aws s3api put-bucket-versioning --bucket rahamsdevopsbatchmay292024pm.k8s.local --region us-east-1 --versioning-configuration Status=Enabled
export KOPS_STATE_STORE=s3://rahamsdevopsbatchmay292024pm.k8s.local

STEP-4: CREATING THE CLUSTER

kops create cluster --name rahams.k8s.local --zones us-east-1a --master-count=1 --master-size t2.medium --node-count=2 --node-size t2.micro
kops update cluster --name rahams.k8s.local --yes --admin

Suggestions:
* list clusters with: kops get cluster
* edit this cluster with: kops edit cluster rahamdevops.k8s.local
* edit your node instance group: kops edit ig --name=rahamdevops.k8s.local nodes-ap-south-1a
* edit your master instance group: kops edit ig --name=rahamdevops.k8s.local master-ap-south-1a

ADMIN ACTIVITIES:
To scale the worker nodes:
kops edit ig --name=rahamdevops.k8s.local nodes-us-east-1a
kops update cluster --name rahamdevops.k8s.local --yes --admin
kops rolling-update cluster --yes

ADMIN ACTIVITIES:
kops update cluster --name rahamdevops.k8s.local --yes
kops rolling-update cluster

NOTE: in real time we use a five-node cluster: two master nodes and three worker nodes.

NOTE: it's my humble request to all of you not to delete the cluster manually and not to delete any server; use the below command to delete the cluster.

TO DELETE: kops delete cluster --name rahamdevops.k8s.local --yes

============================================
NAMESPACES:

NAMESPACE: it is used to divide the cluster between multiple teams in real time.
It is used to isolate the environments.

CLUSTER: HOUSE
NAMESPACES: ROOMS
TEAMMATES: FAMILY MEMBERS

Each namespace is isolated.
If you are in room-1, you cannot see into room-2.
If the dev team creates a pod in the dev namespace, the testing team can't access it.
We can't access the objects of one namespace from another namespace.

TYPES:

default         : the default namespace; all objects are created here unless specified.
kube-node-lease : holds the Lease objects used for node heartbeats (one per node).
kube-public     : all the public objects are stored here.
kube-system     : by default k8s creates some objects; those are stored in this namespace.

NOTE: every component of the Kubernetes cluster is created in the form of a pod, and all these pods are stored in the KUBE-SYSTEM namespace.

kubectl get pod -n kube-system : to list all pods in the kube-system namespace
kubectl get pod -n default     : to list all pods in the default namespace
kubectl get pod -n kube-public : to list all pods in the kube-public namespace
kubectl get po -A              : to list all pods in all namespaces
kubectl get po --all-namespaces

kubectl create ns dev                                    : to create a namespace
kubectl config set-context --current --namespace=dev    : to switch to the namespace
kubectl config view --minify | grep namespace           : to see the current namespace
kubectl run dev1 --image nginx
kubectl run dev2 --image nginx
kubectl run dev3 --image nginx
kubectl create ns test                                   : to create a namespace
kubectl config set-context --current --namespace=test   : to switch to the namespace
kubectl config view --minify | grep namespace           : to see the current namespace
kubectl get po -n dev
kubectl delete pod dev1 -n dev
kubectl delete ns dev                                    : to delete a namespace
kubectl delete pod --all                                 : to delete all pods

NOTE: BY DEFAULT, A K8S NAMESPACE PROVIDES ISOLATION BUT NOT RESTRICTION.
TO RESTRICT A USER'S ACCESS TO A NAMESPACE IN REAL TIME WE USE RBAC:
WE CREATE A USER, WE CREATE ROLES, AND WE ATTACH THE ROLE TO THE USER.
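As a rough sketch of the RBAC idea in the note above, a Role plus RoleBinding could look like this; the user name dev-user, the role name, and the verbs are illustrative assumptions, not from the notes:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-read
  namespace: dev
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-read-binding
  namespace: dev
subjects:
  - kind: User
    name: dev-user                     # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-read
  apiGroup: rbac.authorization.k8s.io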
SERVICE: it is used to expose the application in k8s.

TYPES:

1. CLUSTERIP: it works only inside the cluster.
It does not expose the application to the outside world.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: movies
  name: movies-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
        - name: cont1
          image: rahamshaik/moviespaytm:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: ClusterIP
  selector:
    app: movies
  ports:
    - port: 80

DRAWBACK:
We cannot use the app from outside the cluster.

2. NODEPORT: it exposes our application on a particular port.
Range: 30000 - 32767 (in the SG we need to allow this traffic).
If we don't specify a port, the k8s service takes a random port number.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: movies
  name: movies-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
        - name: cont1
          image: rahamshaik/moviespaytm:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  selector:
    app: movies
  ports:
    - port: 80
      nodePort: 31111

NOTE: UPDATE THE SG (REMOVE THE OLD RULE AND ALLOW ALL TRAFFIC).

DRAWBACK:
EXPOSES THE PUBLIC IP & PORT.
PORT RESTRICTION (30000-32767).

3. LOADBALANCER: it exposes our app and distributes the load between the pods.
It exposes the application with a DNS name (Domain Name System --> port 53).
To create the DNS records we use Route 53.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: swiggy
  name: swiggy-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: swiggy
  template:
    metadata:
      labels:
        app: swiggy
    spec:
      containers:
        - name: cont1
          image: rahamshaik/trainservice:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: abc
spec:
  type: LoadBalancer
  selector:
    app: swiggy
  ports:
    - port: 80
      targetPort: 80

============================================

SCALING: increasing the count.
Why scale: to handle the increasing load.

METRIC SERVER:
It collects metrics like CPU and RAM from all the pods and nodes in the cluster.
We can use kubectl top po / kubectl top no to see the metrics.
Previously it was called Heapster.
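The metric server does not come with the cluster by default; a hedged install sketch (the manifest URL is the upstream kubernetes-sigs/metrics-server release; on minikube the bundled addon can be used instead):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# on minikube: minikube addons enable metrics-server
kubectl top nodes
kubectl top pods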
Metrics Server offers:
A single deployment that works on most clusters (see Requirements)
Fast autoscaling, collecting metrics every 15 seconds
Resource efficiency, using 1 millicore of CPU and 2 MB of memory for each node in a cluster
Scalable support up to 5,000-node clusters

You can use Metrics Server for:
CPU/Memory-based horizontal autoscaling (Horizontal Pod Autoscaling)
Automatically adjusting/suggesting the resources needed by containers (Vertical Autoscaling)

Horizontal: new pods are added.
Vertical: existing pods are resized.

In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or ReplicaSet), with the aim of automatically scaling the workload to match demand.

Example: if pod-1 has 50% load and pod-2 has 50% load, the average is (50+50)/2 = 50.
But if pod-1 goes to 60% and pod-2 stays at 50%, the average is 55%; then we need to create a pod-3 because the average is being exceeded.

Here we need the metric server, whose job is to collect the metrics (CPU & memory info).
The metric server is connected to the HPA and gives the information to the HPA.
The HPA then analyses the metrics (every 30 seconds) and creates a new pod if needed.

COOLING PERIOD: the amount of time taken to terminate a pod after the load has decreased.

Scaling can be done only for scalable objects (ex: RS, Deployment, RC).
HPA is implemented as a K8s API resource and a controller.
The controller periodically adjusts the number of replicas in the RS, RC, or Deployment depending on the average.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
        - name: cont1
          image: yashuyadav6339/movies:latest

kubectl apply -f hpa.yml
kubectl get all
kubectl get deploy
kubectl autoscale deployment movies --cpu-percent=20 --min=1 --max=10
kubectl get hpa
kubectl describe hpa movies
kubectl get all

Open a second terminal and run:
kubectl get po --watch

Come back to the first terminal and go inside a pod:
kubectl exec mydeploy-6bd88977d5-7s6t8 -it -- /bin/bash

apt update -y
apt install stress -y
stress
stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 100s

Check terminal two to see the live pods.
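The notes apply hpa.yml but never show its contents; the autoscaler is actually created imperatively with kubectl autoscale above, so the manifest below is only an assumed declarative equivalent of that command (CPU target 20%, min 1, max 10):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: movies
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: movies
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 20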
DAEMONSET: used to create one pod on each worker node.
It is managed like a Deployment, but it always runs exactly one pod per node.
If we create a new node, a pod is automatically created on it.
If we delete an old node, its pod is automatically removed.
In real time, DaemonSets are not removed in any case.
Use cases: we create DaemonSet pods for logging and monitoring of nodes.
Even the metric server is created as a DaemonSet.
NOTE: in a DaemonSet we don't specify the replicas.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: swiggy
  name: swiggy-deploy
spec:
  selector:
    matchLabels:
      app: swiggy
  template:
    metadata:
      labels:
        app: swiggy
    spec:
      containers:
        - name: cont1
          image: rahamshaik/moviespaytm:latest
          ports:
            - containerPort: 80

============================================

QUOTAS:
A k8s cluster can be divided into namespaces.
By default a pod in K8s runs with no limitations on memory and CPU.
But we need to give limits for the pod.
A quota can limit the objects that can be created in a namespace and the total amount of resources.
When we create a pod, the scheduler checks the limits of the node before deploying the pod on it.
Here we can set limits for CPU, memory, and storage.
CPU is measured in cores and memory in bytes.
1 CPU = 1000 millicpus (half a CPU = 500 millicpus, or 0.5 CPU).

Here Request means how much we want;
Limit means the maximum we can consume.

Limits can be given to pods as well as nodes; the default limit is 0.

If you mention both request and limit, then everything is fine.
If you don't mention the request but mention the limit, then Request = Limit.
If you mention the request and don't mention the limit, then Request != Limit.

IMPORTANT:
Every pod in the namespace must have CPU limits.
The amount of CPU used by all pods inside the namespace must not exceed the specified limit.

DEFAULT RANGE:
CPU:
MIN = REQUEST = 0.5
MAX = LIMIT = 1

MEMORY:
MIN = REQUEST = 500M
MAX = LIMIT = 1G

kubectl create ns dev
kubectl config set-context $(kubectl config current-context) --namespace=dev

vim dev-quota.yml

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: "5"
    limits.cpu: "1"
    limits.memory: 1Gi

kubectl create -f dev-quota.yml
kubectl get quota
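The DEFAULT RANGE above is normally enforced with a LimitRange object in the namespace; a hedged sketch that mirrors those numbers (the object name dev-limits is an assumption, not from the notes):

apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev
spec:
  limits:
    - type: Container
      defaultRequest:        # applied when a container sets no request
        cpu: "0.5"
        memory: 500Mi
      default:               # applied when a container sets no limit
        cpu: "1"
        memory: 1Gi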
EX-1: MENTIONING LIMITS = SAFE WAY

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
        - name: cont1
          image: yashuyadav6339/movies:latest
          resources:
            limits:
              cpu: "1"
              memory: 512Mi

kubectl create -f dep.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
        - name: cont1
          image: yashuyadav6339/movies:latest
          resources:
            limits:
              cpu: "0.2"
              memory: 100Mi

kubectl create -f dep.yml

EX-2: MENTION LIMITS & REQUESTS = SAFE WAY

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
        - name: cont1
          image: yashuyadav6339/movies:latest
          resources:
            limits:
              cpu: "1"
              memory: 1Gi
            requests:
              cpu: "0.2"
              memory: 100Mi

EX-3: MENTION ONLY REQUESTS = NOT A SAFE WAY

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
        - name: cont1
          image: yashuyadav6339/movies:latest
          resources:
            requests:
              cpu: "0.2"
              memory: 100Mi
