Kubernetes


Kubernetes

• For cluster setup: https://devopscube.com/setup-kubernetes-cluster-kubeadm/
• What is Kubernetes?

• Kubernetes is an orchestration engine and open-source platform for managing containerized applications.
• Its responsibilities include container deployment, scaling and descaling of containers, and container load balancing.
• Kubernetes is not a replacement for Docker, but it can be considered a replacement for Docker Swarm. Kubernetes is significantly more complex than Swarm and requires more work to deploy.
• Born at Google, written in Go (Golang). Donated to the CNCF (Cloud Native Computing Foundation) in 2014.
• Kubernetes v1.0 was released on July 21, 2015.
The features of Kubernetes:
• AUTOMATED SCHEDULING: Kubernetes provides an advanced scheduler to launch containers on cluster nodes based on their resource requirements and other constraints, without sacrificing availability.

• SELF-HEALING CAPABILITIES: Kubernetes replaces and reschedules containers when nodes die.

• AUTOMATED ROLLOUTS & ROLLBACKS: If something goes wrong, Kubernetes lets you roll back the change.

• SERVICE DISCOVERY & LOAD BALANCING: Kubernetes automatically assigns IP addresses to containers and a single DNS name to a set of containers, and can load-balance traffic inside the cluster.

• HORIZONTAL SCALING & LOAD BALANCING: Kubernetes can scale the application up and down as per requirements.

• STORAGE ORCHESTRATION: With Kubernetes, you can mount the storage system of your choice. You can opt for local storage, choose a public cloud provider such as GCP or AWS, or use a shared network storage system such as NFS or iSCSI.
Kubernetes Architecture
• Kubernetes follows a client-server architecture. It’s possible to have a multi-master setup (for high availability), but by default there is a single master server which acts as the controlling node and point of contact.
• The master server consists of various components including the kube-apiserver, etcd storage, kube-controller-manager, cloud-controller-manager, kube-scheduler, and a DNS server for Kubernetes services.
• Worker node components include kubelet and kube-proxy on top of Docker (the container runtime).
• Web UI (Dashboard): Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster itself along with its available resources.
• kubectl: kubectl is a command-line configuration tool (CLI) for Kubernetes, used to interact with the master node of Kubernetes.
• kubectl has a config file called kubeconfig; this file holds the server and authentication information needed to access the API server.

• It is a command-line tool that interacts with the kube-apiserver and sends commands to the master node. Each command is converted into an API call.

• For configuration, kubectl looks for a file named config in the $HOME/.kube directory. A minimal sketch of this file follows.
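• As an illustration, a kubeconfig file roughly follows the shape below; the cluster name, server address, user entries, and file paths are hypothetical placeholders, not values from these notes:

apiVersion: v1
kind: Config
clusters:
- name: demo-cluster                  # hypothetical cluster name
  cluster:
    server: https://10.0.0.1:6443    # example API server endpoint
    certificate-authority: /etc/kubernetes/pki/ca.crt
contexts:
- name: demo-context
  context:
    cluster: demo-cluster
    user: demo-admin
    namespace: default
current-context: demo-context
users:
- name: demo-admin
  user:
    client-certificate: /home/user/.kube/admin.crt   # example paths
    client-key: /home/user/.kube/admin.key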
• Master Components
• The master has the below components that take care of communication, scheduling, and controllers.

• API Server:

• The Kubernetes API server is the central management entity that receives all REST requests for modifications (to pods, services, replication sets/controllers, and others), serving as the frontend to the cluster.
• It is also the only component that communicates with the etcd cluster, making sure data is stored in etcd and is in agreement with the service details of the deployed pods.
• Scheduler
• The scheduler watches the pods (containers) and assigns them to run on specific hosts.

• It reads the service’s operational requirements and schedules it on the best-fit node.

• For example, if the application needs 1GB of memory and 2 CPU cores, then the pods for that application will be scheduled on a node with at least those resources (see the sketch below).

• The scheduler runs each time there is a need to schedule pods. It must know the total resources available as well as the resources allocated to existing workloads on each node.
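• A sketch of how those requirements are expressed in a pod spec; the pod name and image below are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: big-app                # hypothetical name
spec:
  containers:
  - name: app
    image: myorg/big-app:1.0   # hypothetical image
    resources:
      requests:
        memory: "1Gi"          # scheduler places the pod only on a node with 1Gi free
        cpu: "2"               # and 2 CPU cores available
      limits:
        memory: "2Gi"
        cpu: "2"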
• Controller Manager:
• kube-controller-manager – runs a number of distinct controller processes in the background (for example, the replication controller controls the number of replicas of a pod, the endpoints controller populates endpoint objects that join services and pods, and so on) to regulate the shared state of the cluster and perform routine tasks.

• When a change in a service configuration occurs (for example, replacing the image from which the pods are running, or changing parameters in the configuration YAML file), the controller spots the change and starts working towards the new desired state.

• cloud-controller-manager – is responsible for managing controller processes with dependencies on the underlying cloud provider (if applicable).

• For example, when a controller needs to check if a node was terminated, or to set up routes, load balancers, or volumes in the cloud infrastructure, all of that is handled by the cloud-controller-manager.
• Some of the controllers are:

• 1. Node controller – responsible for noticing and responding when nodes go down.

• 2. Replication controller – maintains the number of pods; it controls how many identical copies of a pod should be running somewhere on the cluster.

• 3. Endpoints controller – joins services and pods together.

• 4. ReplicaSet controller – ensures the specified number of pod replicas is running at all times.

• 5. Deployment controller – provides declarative updates for pods and ReplicaSets.

• 6. DaemonSet controller – ensures all nodes run a copy of specific pods.

• 7. Jobs controller – the supervisor process for pods carrying out batch jobs.
• etcd:

• A simple, distributed key-value store which is used to store Kubernetes cluster data (such as the number of pods, their state, namespaces, etc.), API objects, and service discovery details.

• It is only accessible from the API server, for security reasons.

• etcd enables notifications to the cluster about configuration changes with the help of watchers. Notifications are API requests on each etcd cluster node that trigger the update of information in the node’s storage (a CLI sketch follows).

• etcd acts as the single source of truth (SSOT) for all Kubernetes cluster components, responding to queries from the control plane and retrieving various parameters of the state of the containers, nodes, and pods.

• etcd is also used to store configuration details such as ConfigMaps, subnets, and Secrets, along with cluster state data.
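• To illustrate the watcher mechanism, a hedged sketch using the etcd v3 CLI; it assumes direct access to an etcd member and the kubeadm-default certificate paths, which may differ in your cluster (the stored values are protobuf-encoded, so the output is not plain text):

# Watch for changes to any pod object stored under the /registry prefix
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  watch --prefix /registry/pods/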
• Worker Nodes

• Worker nodes are the nodes where the application actually runs in the Kubernetes cluster; a worker node is also known as a minion.

• Each of these worker nodes is controlled by the master node using the kubelet process.

• A container platform must be running on each worker node, and it works together with kubelet to run the containers.

• This is why we use the Docker engine, which takes care of managing images and containers.
• We can also use other container platforms, such as rkt (Rocket) from CoreOS.
• Node Components

• Kubelet

• Kubelet is the primary node agent; it runs on each worker node and reads the container manifests, ensuring that containers are running and healthy.

• It makes sure that containers are running in a pod. The kubelet doesn’t manage containers that were not created by Kubernetes.

• This is the main service on a node, regularly taking in new or modified pod specifications (primarily through the kube-apiserver) and ensuring that pods and their containers are healthy and running in the desired state.

• This component also reports to the master on the health of the host where it is running.
• Kube-proxy
• A proxy service that runs on each worker node to deal with individual host subnetting and to expose services to the external world.
• It performs request forwarding to the correct pods/containers across the various isolated networks in a cluster.
• kube-proxy enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
• These network rules allow network communication to your pods from inside or outside of your cluster.
• It gives us a network proxy and load balancer for the services on a single worker node.
• A Service is just a logical concept; the real work is being done by the kube-proxy pod that is running on each node.
• Kubeadm bootstraps a cluster. It’s designed to be a simple way for new users to build clusters (more detail on this is in a later chapter).

• Kubectl is a tool for interacting with your existing cluster.

• Minikube is a tool that makes it easy to run Kubernetes locally. For Mac users, Homebrew makes using Minikube even simpler.
• Container Runtime

• Each node must have a container runtime, such as Docker, rkt, or another container runtime, to process instructions from the master server to run containers.
• Installation: Different ways to install Kubernetes

• Local/learning setups:
• Play with K8s (https://labs.play-with-k8s.com)
• Minikube
• kubeadm

• Managed cloud services:
• Google Kubernetes Engine (GKE)
• Amazon EKS
• Azure Kubernetes Service (AKS)

• Namespaces:

• The default namespaces are: the default namespace, the kube-system namespace, and the kube-public namespace.
• Whenever a K8s cluster is created, the resources we create (pods, rs, rc, deployments) go into the default namespace, which we interact with directly; Kubernetes system components are kept in the kube-system namespace, separate from the default namespace, to prevent accidental deletion.
• We can create our own namespaces.
• We can define resource quotas for namespaces. Namespaces are completely isolated.
• Whenever any service is created, a DNS entry is created automatically.
• Resources within a single namespace can communicate with each other simply by name.
• To communicate with resources in another namespace, we have to append the namespace name (e.g., <service>.<namespace>.svc.cluster.local).
• kubectl create namespace <namespaceName> – create a namespace
• kubectl get pods --namespace <namespaceName>
• To create a pod/rs/etc. in a specific namespace: kubectl create -f <pod.yml> --namespace <namespaceName>
• To move permanently into a namespace: kubectl config set-context $(kubectl config current-context) --namespace=<namespaceName>
• kubectl get pods --all-namespaces – get pods from all namespaces
• To limit resources in a namespace we have to create a resource quota (kind: ResourceQuota, and under spec we define the required memory/CPU resources; see the sketch after this list).
• To find a pod across all namespaces: kubectl get pods --all-namespaces | grep <podName>
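• A minimal ResourceQuota sketch, assuming a namespace named mksns (the quota name and the amounts are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mksns-quota        # hypothetical name
  namespace: mksns
spec:
  hard:
    requests.cpu: "2"      # total CPU all pods may request
    requests.memory: 2Gi   # total memory all pods may request
    limits.cpu: "4"
    limits.memory: 4Gi
    pods: "10"             # max number of pods in the namespace

• Apply it with kubectl apply -f quota.yml and verify with kubectl get resourcequota -n mksns.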
• Kubernetes Objects

• Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster.

• A Kubernetes object is a “record of intent”: once you create the object, the Kubernetes system will constantly work to ensure that the object exists.

• To work with Kubernetes objects, whether to create, modify, or delete them, you’ll need to use the Kubernetes API. When you use the kubectl command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you.
• POD

• A pod always runs on a node.

• A pod is the smallest building block, or basic unit of scheduling, in Kubernetes.

• In a Kubernetes cluster, a pod represents a running process.

• Inside a pod, you can have one or more containers. Those containers all share a network IP, storage, and any other specification applied to the pod.

• A pod is a group of one or more containers which will be running on some node.

• Pods abstract network and storage away from the underlying containers.

• This lets you move containers around the cluster more easily.

• Each pod has its own unique IP address within the cluster.

• Any data saved inside the pod will disappear with the pod.
• Pod Lifecycle

• Make a pod request to the API server using a local pod definition file.

• The API server saves the info for the pod in etcd.

• The scheduler finds the unscheduled pod and schedules it to a node.

• Kubelet, running on the node, sees the pod scheduled and fires up the container runtime (CRT).

• The container runtime runs the containers.

• The entire lifecycle state of the pod is stored in etcd.

• Pod Concepts

• A pod is ephemeral (lasting for a very short time) and won’t be rescheduled to a new node once it dies.

• You should not directly create/use a pod for deployment; Kubernetes has controllers like ReplicaSets, Deployments, and DaemonSets to keep pods alive.
• Pod model types

• Most often, when you deploy a pod to a Kubernetes cluster, it'll contain a single container. But there are instances when you might need to deploy a pod with multiple containers.
• There are two model types of pod you can create:

• One-container-per-pod. This model is the most popular. The pod is the “wrapper” for a single container. Since the pod is the smallest object that K8s recognizes, it manages the pods instead of directly managing the containers.
• Multi-container pod (sidecar containers). In this model, a pod holds multiple co-located containers: a primary container and utility containers that help or enhance how the application functions (examples of sidecar containers are log shippers/watchers and monitoring agents). A sketch follows.
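• A minimal multi-container (sidecar) pod sketch; the image names and the shared log path are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar       # hypothetical name
spec:
  volumes:
  - name: logs                 # shared scratch volume for both containers
    emptyDir: {}
  containers:
  - name: app                  # primary container writing logs
    image: myorg/webapp:1.0    # hypothetical image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper          # sidecar tailing the same logs
    image: busybox
    command: ["sh", "-c", "touch /var/log/app/app.log && tail -f /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app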
• Service:

• A service is responsible for making our pods discoverable inside the network or exposing them to the internet. A service identifies pods by its label selector.
• Types of services available:

• ClusterIP – Exposes the service on a cluster-internal IP. The service is only reachable from within the cluster. This is the default type.
• When we create a service we get one virtual IP (the cluster IP), and it gets registered in DNS (kube-dns). Using this, other pods can find and talk to the pods of this service using the service name.
• A service is just a logical concept; the real work is being done by the kube-proxy pod that is running on each node.
• It redirects requests from the cluster IP (virtual IP address) to pod IPs.
• If you can’t access a ClusterIP service from the internet, why talk about it? It turns out you can access it using the Kubernetes proxy!
• When would you use this?

• There are a few scenarios where you would use the Kubernetes proxy to access your services:

• 1. Debugging your services, or connecting to them directly from your laptop for some reason.

• 2. Allowing internal traffic, displaying internal dashboards, etc.

• Because this method requires you to run kubectl as an authenticated user, you should NOT use it to expose your service to the internet or use it for production services. A minimal ClusterIP manifest follows.
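• A minimal ClusterIP service sketch; the service name, selector label, and ports are illustrative assumptions:

apiVersion: v1
kind: Service
metadata:
  name: webapp-svc          # hypothetical name
spec:
  type: ClusterIP           # the default; could be omitted
  selector:
    app: webapp             # routes to pods carrying this label
  ports:
  - port: 80                # port exposed on the cluster IP
    targetPort: 8080        # port the container listens on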
• NodePort – Exposes the service on each node’s IP at a static port. A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service from outside the cluster by using
• “<NodeIP>:<NodePort>”.

• A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.
• Note: if we don’t define a nodePort value for a NodePort service, K8s will randomly allocate a nodePort within 30000-32767.

• There are many downsides to this method:

• You can only have one service per port.

• You can only use ports 30000-32767.

• If your node/VM IP addresses change, you need to deal with that.

• For these reasons, I don’t recommend using this method in production to directly expose your service. If you are running a service that doesn’t have to be always available, or you are very cost sensitive, this method will work for you. A good example of such an application is a demo app or something temporary. A sketch follows.
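• A minimal NodePort service sketch (the same hypothetical app label as above; the nodePort value is an arbitrary pick from the allowed range):

apiVersion: v1
kind: Service
metadata:
  name: webapp-nodeport     # hypothetical name
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - port: 80                # cluster-internal port
    targetPort: 8080        # container port
    nodePort: 30080         # static port opened on every node (30000-32767)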
• LoadBalancer – Exposes the service externally using a cloud provider’s load balancer.
• The NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
• If you are using a custom Kubernetes cluster (using Minikube, kubeadm, or the like), there is no integrated LoadBalancer (unlike AWS EKS, Google Cloud, kops, or AKS). With this default setup, you can only use NodePort.
• A LoadBalancer service is the standard way to expose a service to the internet. On GKE, this will spin up a Network Load Balancer that will give you a single IP address that will forward all traffic to your service.
• When would you use this?

• If you want to directly expose a service, this is the default method. All traffic on the port you specify will be forwarded to the service.

• There is no filtering, no routing, etc. This means you can send almost any kind of traffic to it: HTTP, TCP, UDP, WebSockets, gRPC, or whatever.

• The big downside is that each service you expose with a LoadBalancer gets its own IP address, and you have to pay for a LoadBalancer per exposed service, which can get expensive! A sketch follows.
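• A minimal LoadBalancer service sketch (only the type changes relative to the earlier hypothetical examples; the cloud provider provisions the external IP):

apiVersion: v1
kind: Service
metadata:
  name: webapp-lb           # hypothetical name
spec:
  type: LoadBalancer        # cloud provider allocates an external IP
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 8080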
Kubernetes Objects
• POD – no replicas possible
• Replication Controller – only equality-based selectors
• key=value – equality-based
• ReplicaSet – equality-based and set-based selectors
• key in (value1,value2,value3,…) – set-based
• DaemonSet – we can’t scale it up; one replica per node; used for monitoring purposes
• Deployment – rollbacks are possible; two strategy types (Recreate and Rolling update); version information is stored in etcd
• Service – ClusterIP / NodePort (30000 to 32767) / LoadBalancer
• Volume
• Namespace – provides isolation between our services
• Example: v3 is running with no issues; rolling out v4 with Recreate deletes all the old pods and only then starts creating new pods – application downtime
• Rolling update – it deletes one old pod and recreates one new pod at a time

• With 3 replicas: one old pod deleted – 2 old pods remain
• one new pod created
• second old pod deleted – 1 old pod active
• one new pod created

• Rolling update – no downtime

• docker run --name <containerName> <imageName>

• Create PODs – Manifest
• A manifest has 4 parts:
• apiVersion – e.g. v1
• kind – type of object
• metadata – name of the object, namespace, labels (used by selectors)
• (namespace examples: namespace 1 – login; namespace 2 – product info; namespace 3 – customer info, cost info)
• spec – which image we have to use, how many replicas

• Recreate – all old pods die; only after that do the new pods come up, so we get downtime.

• Rolling update – as one old pod dies, one new pod comes up – no downtime.
• Recreate (downtime for the application):
• v1 – all 3 old pods go down
• v2 – new pods start

• Rolling update (no downtime):

• v1 – one pod goes down
• v2 – one pod comes up
• # POD Manifest YML/YAML

apiVersion: v1
kind: Pod                    # or ReplicationController, ReplicaSet, etc.
metadata:
  name: <podName>
  labels:
    <key>: <value>           # used by selectors
  namespace: <namespaceName>
spec:
  containers:
  - name: <nameOfTheContainer>
    image: <imageName>
    ports:
    - containerPort: <port>
• POD commands:
• kubectl apply -f <fileName.yml>
• kubectl get all
• kubectl get pods
• kubectl get pods --show-labels
• kubectl get pods -o wide
• kubectl get pods -o wide --show-labels
• kubectl describe pod <podName>
• kubectl describe pod <podName> -n <namespace>
• kubectl get pods -n <namespace>
• kubectl get pods -n <namespace> -o wide
• Service Commands:
• kubectl get svc
• kubectl get all
• kubectl describe service <serviceName>
• kubectl describe service <serviceName> -n <namespace>
• kubectl get svc <serviceName> -o wide
• kubectl get all --all-namespaces
• kubectl get all -n <namespace>
• kubectl get svc -n <namespace>
• Note: if we don't mention -n <namespace>, it will refer to the default namespace.
• If required, we can change the namespace context:
• kubectl config set-context --current --namespace=<namespace>
• ex: kubectl config set-context --current --namespace=mksns
• After setting the context, kubectl will point to that namespace by default.
• RC commands
• kubectl get rc
• kubectl get rc -n <namespace>
• kubectl get all
• kubectl scale rc <rcName> --replicas <noOfReplicas>
• kubectl describe rc <rcName>
• kubectl delete rc <rcName>
• RS commands
• kubectl get rs
• kubectl get rs -n <namespace>
• kubectl get all
• kubectl scale rs <rsName> --replicas <noOfReplicas>
• kubectl describe rs <rsName>
• kubectl delete rs <rsName>
• DaemonSet Commands:
• kubectl get ds
• kubectl get ds -n <namespace>
• kubectl get all
• kubectl describe ds <dsName>
• kubectl delete ds <dsName>
• Deployment Commands:
• kubectl get deployment
• kubectl get rs
• kubectl get pods
• kubectl rollout status deployment <deploymentName>
• kubectl rollout history deployment <deploymentName>
• kubectl rollout history deployment <deploymentName> --revision 1
• kubectl rollout undo deployment <deploymentName> --to-revision=1
• kubectl scale deployment <deploymentName> --replicas=<noOfReplicas>
• kubectl apply -f <file.yml> --record=true – records the command in the rollout history, showing what happened in the background
• # Update a Deployment image from the command line
• kubectl set image deployment <deploymentName> <containerName>=<imageNameWithVersion> --record
• maxSurge: 1 means it will create 1 extra new pod at a time; maxUnavailable: 1 means it can delete one pod at a time. A sketch follows.
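• A minimal Deployment sketch showing where maxSurge/maxUnavailable live; the names and image are illustrative assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp               # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 extra pod during the rollout
      maxUnavailable: 1      # at most 1 pod down during the rollout
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: myorg/webapp:1.0   # hypothetical image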
• kubectl create -f replicaset-file.yml
• kubectl get rs
• kubectl delete replicaset <replicasetName>
• kubectl replace -f <replicaset-file.yml> – to update a replica set (number of replicas etc., to scale up/down)
• kubectl scale rs <rsName> --replicas=<n> (n can scale up or down) – to scale from the command line
• kubectl edit replicaset <replicasetName> – to change the image; to create pods with the new image we have to delete the old pods, then new pods will be created
• kubectl create deployment <deploymentName> --image=<imageName> – creates a deployment with a single command
• https://kubernetes.io/docs/reference/kubectl/cheatsheet/ -- get
cheatsheet for k8s.
• https://kubernetes.io/docs/reference/kubectl/overview/
--dry-run: By default, as soon as the command is run, the resource will be created. If you simply want to test your command, use the --dry-run=client option. This will not create the resource; instead, it tells you whether the resource can be created and whether your command is right.

-o yaml: This will output the resource definition in YAML format on screen.

kubectl run nginx --image=nginx – creates an nginx pod

kubectl run nginx --image=nginx --dry-run=client -o yaml – generates a POD manifest YAML file (-o yaml) without creating it (--dry-run=client)

kubectl create deployment --image=nginx nginx – creates a deployment

kubectl create deployment --image=nginx nginx --dry-run=client -o yaml – generates a Deployment YAML file (-o yaml) without creating it (--dry-run=client)
• In K8s, the container image's ENTRYPOINT is overridden by command, and its CMD is overridden by args. A sketch follows.
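• A small sketch of overriding ENTRYPOINT/CMD from a pod spec; the pod name, image, and values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: sleeper              # hypothetical name
spec:
  containers:
  - name: sleeper
    image: ubuntu            # assumed base image
    command: ["sleep"]       # overrides the image's ENTRYPOINT
    args: ["3600"]           # overrides the image's CMD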
• Blue – the live, current version.
• Green – the new code; test it, and once it passes, green goes live and blue becomes the standby.
• Volumes:
• hostPath – a volume within the host. The drawback is that if a pod dies it may be recreated on another node, so it will lose its data. It is better to use volumes from the cloud.
• NFS – an NFS server (e.g. /d/data/) whose file system we can share with N number of pods.
• AWS EBS
• AzureDisk
• Persistent Volumes (PV) – exist independently of the pod lifecycle. (Without a PV, if we create any volume like hostPath or NFS and the K8s object (rc, rs, ds) is deleted, the volume information is lost.)
• A PV keeps the volume (hostPath, NFS, etc.) information.
• PVs can’t be accessed directly; we have to use a PVC to claim a PV.
• We have to use the PVC with the pod. Even if the pod is deleted, the PVC retains the PV information.

• PVC – PersistentVolumeClaim: a PVC binds a PV to a pod.

• 1. Manual provisioning, or static volumes – the admin creates the volume manually:
• Admin → PV; then PVC → PV → Pod
• 2. Dynamic provisioning, or dynamic volumes – a StorageClass allows for dynamic provisioning of PersistentVolumes:
• Admin → StorageClass; then PVC → PV → Pod
• K8s will provision the PV if one is not available.
Access modes – set while creating the PV
Reclaim policies – set on the PV; they control what happens when the claim is deleted
• Access modes: a PV can be:
• RWO – ReadWriteOnce: only pods from one node can read and write (a single node).
• RWM – ReadWriteMany: many pods from many nodes can read and write.
• ROM – ReadOnlyMany: pods on multiple nodes can read; no pod can write.

• Reclaim policies: a PV can be:

• Retain: when the claim is deleted, the volume remains.
• Delete: the persistent volume is deleted when the claim is deleted.

• Recycle: when the claim is deleted, the volume remains, but in a state where the data can be manually recovered.
Volume Plugin          ReadWriteOnce  ReadOnlyMany  ReadWriteMany  ReadWriteOncePod
AWSElasticBlockStore        ✓              -              -              -
AzureFile                   ✓              ✓              ✓              -
AzureDisk                   ✓              -              -              -
CephFS                      ✓              ✓              ✓              -
Cinder                      ✓              -              -              -
Quobyte                     ✓              ✓              ✓              -
NFS                         ✓              ✓              ✓              -
• kubectl get pv
• kubectl get pvc
• Static Volumes
• 1) Create a PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data          # hypothetical example path; the original leaves this blank
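• The next steps are the claim and its use from a pod; a minimal sketch under the same assumptions (storageClassName manual, 1Gi, and hypothetical names):

2) Create a PVC that binds to the PV:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-hostpath        # hypothetical name
spec:
  storageClassName: manual  # must match the PV
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

3) Mount the claim in a pod:

apiVersion: v1
kind: Pod
metadata:
  name: pv-demo             # hypothetical name
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-hostpath
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html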
• HPA – Horizontal Pod Autoscaling
• Resources: CPU and memory – set while creating pods, as request and limit.
• Request – the initial/minimum CPU/memory.
• Limit – the maximum CPU/memory.
• Example: 3 pods with average CPU >= 80% – increase the number of pods.
• Vertical autoscaling – increasing the limits.
• Horizontal autoscaling – increasing the number of replicas; preferable (HPA). A sketch follows.
• The metrics server must be installed to collect metric information.
• kubectl top pods – pod metrics
• kubectl top nodes – node metrics
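• A minimal HPA sketch, assuming the metrics server is running and a Deployment named webapp exists (an assumption carried over from the earlier examples):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa           # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out when average CPU exceeds 80%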
Ingress Controller
• An ingress controller is typically a proxy service deployed in the cluster.
• It is nothing but a Kubernetes deployment exposed through a service.

• The following ingress controllers are available for Kubernetes:

1. Nginx Ingress Controller (Community & from Nginx Inc)
2. Traefik
3. HAProxy
4. Contour
5. GKE Ingress Controller
• Ingress controller:
• The external LB talks to the ingress LB; ingress talks to the services.
• Ingress resource – traffic rules. It consists of routes for external traffic to the services which run inside the cluster. The ingress resource maintains info about services and pods.
• Ingress controller – a pod running in the K8s cluster. It is a layer-7 LB (application LB), which reads routing rules from the ingress resource.
• Layer-4 LB – routes traffic based on port, i.e. the TCP protocol.
• Layer-7 LB – routes traffic based on the host or path of the application.
• Traffic from outside the cluster flows through the load balancer into the cluster. A sketch of an ingress resource follows.
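• A minimal ingress resource sketch, assuming the nginx ingress controller is installed and a backend service named webapp-svc on port 80 (hypothetical names from the earlier service example):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress       # hypothetical name
spec:
  ingressClassName: nginx    # assumes the nginx ingress controller
  rules:
  - host: app.example.com    # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-svc
            port:
              number: 80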
• ConfigMap: if we want to pass configuration (environment) variables at run time, we can use ConfigMaps.
• A ConfigMap is an API object used to store non-confidential data in key-value pairs.
• Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
• A ConfigMap allows us to decouple environment-specific configuration from our container images, so that our applications are easily portable. A sketch follows.
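• A minimal ConfigMap sketch plus a pod consuming it as an environment variable; the names, keys, and values are illustrative assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config           # hypothetical name
data:
  APP_MODE: production       # hypothetical key/value
  DB_HOST: db.default.svc
---
apiVersion: v1
kind: Pod
metadata:
  name: config-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: myorg/webapp:1.0  # hypothetical image
    env:
    - name: APP_MODE
      valueFrom:
        configMapKeyRef:     # pulls the value from the ConfigMap above
          name: app-config
          key: APP_MODE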
• Secrets:
• A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
• Such information might otherwise be put in a pod specification or in a container image.
• Using a Secret means that you don't need to include confidential data in your application code.
• Unlike ConfigMap YAML, Secret YAML cannot be maintained in GitHub, as it consists of confidential data.
• To create a secret from the command line: kubectl create secret generic mysql-pass --from-literal=password=devdb@123 (see the sketch that follows for consuming it).
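• A sketch of consuming that secret as an environment variable in a pod; only the secret name and key come from the command above, the rest are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: mysql-demo           # hypothetical name
spec:
  containers:
  - name: mysql
    image: mysql:5.7         # hypothetical image
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-pass   # secret created by the command above
          key: password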
• Probes:
• The Kubernetes controller checks the health of the process, but it does not check the health of the application.
• Probes check the health of the application:
• Readiness probe
• Liveness probe
• The service should route traffic only if the application is running; this is done by the readiness probe.
• If the readiness probe fails, the application's endpoint is removed, so no traffic is routed to the application.
• If the liveness probe fails, Kubernetes tries to restart the container so that it comes back up. A sketch of both probes follows.
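• A minimal sketch of both probes on one container; the paths, port, and timings are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo           # hypothetical name
spec:
  containers:
  - name: app
    image: myorg/webapp:1.0  # hypothetical image
    ports:
    - containerPort: 8080
    readinessProbe:          # gates traffic from the service
      httpGet:
        path: /ready         # hypothetical endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:           # restarts the container on failure
      httpGet:
        path: /health        # hypothetical endpoint
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20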
• What is a pod?
• Explain the architecture.
• What is a service? – identifies pods; load balancer
• Deployment strategies – Recreate (downtime) and Rolling update (no downtime)
• Default strategy – Rolling update
• What is the difference between these two?
• What is a namespace? – provides isolation between services
• How to troubleshoot pods – describe (events) and using logs
• What are resources? – requests and limits
• Requests are the minimum values (CPU and memory)
• Limits are the maximum values (CPU and memory)
• What is HPA? – how it works (min/max replicas)
• When can we use a DaemonSet?
• What are labels and selectors?
• If access is needed from outside the cluster – NodePort – 30000 to 32767
• Access modes – RWO, RWM, ROM – on the PV
• Reclaim policies – Retain, Delete, Recycle
• PV and PVC
• Storage class
• What is an ingress controller? – internal load balancer for the K8s cluster
• Which type of ingress controller are you using? – nginx
• What is the necessity of probes? – health checks for the application
• What are readiness and liveness probes?
• What are ConfigMaps and Secrets?
• Number of clusters – depends on env – dev/test/prod – 3 to 6
• Managers – 3; worker nodes – 10
• How many services? – 60 services
• How many pods in your env? – 600
• Cluster is on premises or cloud?
• If pods keep restarting, how to troubleshoot? – CrashLoopBackOff, OOM
• If mounting is not proper – if we use the wrong PV/PVC, or if the volume doesn't exist
• How to find which image is used in a ReplicaSet/RC/DS etc.: by using the describe command
• Why are pods not ready? – if the image doesn't exist, labels mismatch, etc.; we can find the reason from the events
• If we delete one pod out of a ReplicaSet's 4 pods, how many pods will we get? – still 4 pods (the ReplicaSet recreates it)

• OpenShift – an enhancement of K8s (built on top of Kubernetes)

• kubectl → oc
• kubectl get pods → oc get pods

You might also like