Mithun Technologies
Kubernetes
Agenda
• What is Kubernetes?
• Advantages of Kubernetes
• Kubernetes Architecture
• Kubernetes Components
• Kubernetes Cluster Setup
• Kubernetes Objects
• Demo
What is Kubernetes?
• Kubernetes is an orchestration engine and open-source platform for managing
containerized applications.
• Its responsibilities include container deployment, scaling and descaling of containers,
and container load balancing.
• Kubernetes is not a replacement for Docker, but it can be considered a replacement
for Docker Swarm. Kubernetes is significantly more complex than Swarm and
requires more work to deploy.
• Born at Google, written in Go (Golang). Open-sourced in 2014 and later donated to
the CNCF (Cloud Native Computing Foundation).
• Kubernetes v1.0 was released on July 21, 2015.
• Current stable release (at the time of writing): v1.18.0.
Kubernetes Features
Service Discovery and Load Balancing
With Kubernetes, there is no need to worry about networking and communication, because Kubernetes automatically assigns IP addresses to
containers and a single DNS name for a set of containers, and can load-balance traffic inside the cluster.
Containers get their own IP, so you can put a set of containers behind a single DNS name for load balancing.
Storage Orchestration
With Kubernetes, you can mount the storage system of your choice. You can either opt for local storage, choose a public cloud provider such as
GCP or AWS, or use a shared network storage system such as NFS or iSCSI.
Kubernetes Architecture
Kubernetes implements cluster computing: everything runs inside a Kubernetes cluster. The cluster is
hosted by one node acting as the ‘master’ of the cluster, with the other nodes acting as worker
‘nodes’, which do the actual ‘containerization’. The diagram below illustrates this.
Kubernetes Components
Web UI (Dashboard)
Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a
Kubernetes cluster, troubleshoot your containerized application, and manage the cluster itself along with its available resources.
Kubectl
Kubectl is a command-line configuration tool (CLI) for Kubernetes, used to interact with the master node of a Kubernetes cluster. Kubectl has a
config file called kubeconfig; this file contains the server information and the authentication information needed to access the API server.
Master Node
The master node is responsible for the management of the Kubernetes cluster. It is mainly the entry point for all administrative tasks
and handles the orchestration of the worker nodes. There can be more than one master node in the cluster for fault
tolerance.
Master Components
It has the below components, which take care of communication, scheduling, and controllers.
API Server:
The kube-apiserver exposes the Kubernetes API and is the front end of the Kubernetes control plane. It is the communication
hub for developers, sysadmins, and other Kubernetes components.
Scheduler
The scheduler watches for newly created pods and assigns them to run on specific nodes.
Kubernetes Components
Controller Manager:
The controller manager runs the controllers in the background, which carry out different tasks in the Kubernetes cluster. It performs
cluster-level functions (replication, keeping track of worker nodes, handling node failures, …).
Some of the controllers are:
1. Node controller - responsible for noticing and responding when nodes go down.
2. Replication controller - maintains the number of pods. It controls how many identical copies of a pod should be running
somewhere on the cluster.
3. Endpoints controller - joins services and pods together.
4. ReplicaSet controller - ensures the specified number of pod replicas are running at all times.
5. Deployment controller - provides declarative updates for pods and ReplicaSets.
6. DaemonSet controller - ensures all (or some) nodes run a copy of specific pods.
7. Job controller - the supervisor process for pods carrying out batch jobs.
etcd
etcd is a simple distributed key-value store. Kubernetes uses etcd as its database to store all cluster data.
Some of the data stored in etcd includes job scheduling information, pod details, and state information.
Kubernetes Components
Worker Nodes
• Worker nodes are the nodes where the applications actually run in a Kubernetes cluster; a worker node is also known as a minion.
Each worker node is controlled by the master node through the kubelet process.
• A container platform must be running on each worker node, and it works together
with the kubelet to run the containers. This is why a container engine such as Docker is used:
it takes care of managing images and containers.
• We can also use other container runtimes, such as rkt (Rocket) from CoreOS.
Node Components
Kubelet
• The kubelet is the primary node agent. It runs on each node and reads the container manifests, ensuring that the described containers are running and healthy.
• It makes sure that containers are running in a pod. The kubelet doesn’t manage containers that were not created by Kubernetes.
Kube-proxy
• kube-proxy enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
• kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from inside or outside of your cluster.
• It provides a network proxy and load balancer for the services on a single worker node.
• A Service is just a logical concept; the real work is done by the kube-proxy process running on each node.
• It redirects requests from the Cluster IP (virtual IP address) to Pod IPs.
Container Runtime
• Each node must have a container runtime, such as Docker or rkt, to process instructions from the master server and run
containers.
Installation
Kubernetes Objects
• Pod
• Replication Controller
• ReplicaSet
• DaemonSet
• Deployment
• Service
• ClusterIP
• NodePort
• Volume
• Job
Kubernetes Objects
What is a Namespace?
You can think of a Namespace as a virtual cluster inside your Kubernetes cluster. You can have multiple namespaces inside a single
Kubernetes cluster, and they are all logically isolated from each other. They can help you and your teams with organization, security, and
even performance.
The first three namespaces created in a cluster are always default, kube-system, and kube-public. While you can technically deploy
within these namespaces, I recommend leaving them for system configuration and not using them for your projects.
• default is for deployments that are not given a namespace, which is a quick way to create a mess that will be hard to clean up if you
do too many deployments without the proper information.
• kube-system is for all things relating to the Kubernetes system. Any deployments to this namespace are playing a dangerous game and
can accidentally cause irreparable damage to the system itself.
• kube-public is readable by everyone, but the namespace is reserved for system usage.
https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-organizing-with-namespaces
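As a sketch of creating your own namespace (the name "dev" here is an illustrative assumption), a minimal manifest looks like this:

```yaml
# namespace.yml - illustrative example
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```

Apply it with kubectl apply -f namespace.yml, then target it when deploying, e.g. kubectl apply -f pod.yml -n dev.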
Kubernetes Objects
POD
• A Pod always runs on a Node.
• A Pod is the smallest building block, or basic unit of scheduling, in Kubernetes.
• In a Kubernetes cluster, a Pod represents a running process.
• Inside a Pod, you can have one or more containers. Those containers all share a network IP, storage, and any
other specification applied to the Pod.
• A Pod is a group of one or more containers which run together on some node.
• Pods abstract network and storage away from the underlying container.
• This lets you move containers around the cluster more easily.
• Each Pod has its own unique IP address within the cluster.
• Any data saved inside the Pod will disappear without persistent storage.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: first-container
    image: nginx
    ports:
    - containerPort: 80

Command
kubectl apply -f pod.yml
Kubernetes Objects
Pod Lifecycle
• Make a Pod request to the API server using a local pod definition file.
• The API server saves the info for the pod in etcd.
• The scheduler finds the unscheduled pod and schedules it to a node.
• The kubelet running on the node sees the pod scheduled and fires up Docker.
• Docker runs the container.
• The entire lifecycle state of the pod is stored in etcd.
Pod Concepts
• A Pod is ephemeral (lasting for a very short time) and won’t be rescheduled to a
new node once it dies.
• You should not directly create/use a Pod for deployment; Kubernetes has
controllers like ReplicaSets, Deployments, and DaemonSets to keep pods alive.
Pod Model Types
Most often, when you deploy a pod to a Kubernetes cluster, it'll contain a single container. But there are
instances when you might need to deploy a pod with multiple containers.
Shared Network Namespace
All the containers in the pod share the same network namespace, which means all containers can
communicate with each other on localhost.
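As an illustrative sketch of the multi-container case (the container names, images, and sidecar command are assumptions, not from the original deck), two containers in one pod can reach each other over localhost:

```yaml
# multi-container-pod.yml - hypothetical example
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web                # main application container
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar            # helper container; shares the pod's network namespace,
    image: busybox           # so it can reach the web container on localhost:80
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]
```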
Static Pods
• The main use for static Pods is to run a self-hosted control plane: in other words, using the kubelet to supervise the individual control plane components.
• Static Pods are always bound to one kubelet on a specific node.
ClusterIP – Exposes the service on a cluster-internal IP. The service is only reachable from within the cluster. This is the default service type.

# Service Cluster IP
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
spec:
  selector:
    app: nginx
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
Kubernetes Objects
NodePort – Exposes the service on each Node’s IP at a static port. A ClusterIP service, to which the NodePort service will route,
is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by using
“<NodeIP>:<NodePort>”.
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
spec:
  selector:
    app: javawebapp
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30032

Note: If we don’t define a nodePort value for a NodePort service, K8s will randomly
allocate a nodePort within the range 30000–32767.
Kubernetes Objects
LoadBalancer – Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to
which the external load balancer will route, are automatically created.
If you are using a custom Kubernetes cluster (set up with minikube, kubeadm, or the like),
there is no integrated LoadBalancer (unlike AWS EKS, Google Cloud, kops, or AKS).
With this default setup, you can only use NodePort, or configure a load balancer externally.

apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
spec:
  selector:
    app: javawebapp
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
Replication Controller
• The Replication Controller is one of the key features of Kubernetes and is responsible for managing the pod lifecycle. It is
responsible for making sure that the specified number of pod replicas are running at any point of time.
• A Replication Controller is a structure that enables you to easily create multiple pods and then make sure that that number of
pods always exists. If a pod does crash, the Replication Controller replaces it.
• Replication Controllers and Pods are associated through labels.
• Creating an RC with a count of 1 ensures that a Pod is always available.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Command
kubectl apply -f rc.yml
ReplicaSet
• ReplicaSet is the next-generation Replication Controller.
• The only difference between a ReplicaSet and a Replication Controller right now is the selector support.
• ReplicaSet supports the new set-based selector requirements as described in the labels user guide,
whereas a Replication Controller only supports equality-based selector requirements.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-rs-pod
    matchExpressions:
    - key: env
      operator: In
      values:
      - dev
  template:
    metadata:
      labels:
        app: nginx-rs-pod
        env: dev
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Commands
kubectl apply -f rs.yml
kubectl scale rs nginx-replicaset --replicas 3
DaemonSet
A DaemonSet makes sure that all (or some) Kubernetes nodes run a copy of a Pod.
When a new node is added to the cluster, a Pod is added to it to match the rest of the nodes, and when a node is removed from
the cluster, the Pod is garbage collected.
Deleting a DaemonSet will clean up the Pods it created.
# Daemon Set
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx-pod
    spec:
      nodeSelector:
        name: node1
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80

Command
kubectl apply -f daemonset.yml
Kubernetes Objects
Deployment
In Kubernetes, Deployment is the recommended way to deploy a Pod or RS, simply because of
the advanced features it comes with.
Below are some of the key benefits.
• Deploy a RS.
• Update pods (PodTemplateSpec).
• Roll back to older Deployment versions.
• Scale the Deployment up or down.
• Pause and resume the Deployment.
• Use the status of the Deployment to determine the state of replicas.
• Clean up older RSs that you don’t need anymore.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Commands
kubectl scale deployment [DEPLOYMENT_NAME] --replicas [NUMBER_OF_REPLICAS]
kubectl rollout status deployment nginx-deployment
kubectl get deployment
kubectl set image deployment nginx-deployment nginx-container=nginx:latest --record
kubectl get replicaset
kubectl rollout history deployment nginx-deployment
kubectl rollout history deployment nginx-deployment --revision=1
kubectl rollout undo deployment nginx-deployment --to-revision=1
Kubernetes Objects
Deployment Strategies
There are several different types of deployment strategies you can take advantage of depending on your goal.

Rolling Deployment
The rolling deployment is the standard default deployment to Kubernetes. It works by slowly, one by one,
replacing pods of the previous version of your application with pods of the new version without any
cluster downtime.
A rolling update waits for new pods to become ready before it starts scaling down the old ones. If there is
a problem, the rolling update or deployment can be aborted without bringing the whole cluster down. In
the YAML definition file for this type of deployment, a new image replaces the old image.

# Deployments Rolling Update
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 30
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Kubernetes Objects
Deployment Strategies
Recreate
• In this type of very simple deployment, all of the old pods are killed all at once and
get replaced all at once with the new ones.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Kubernetes Objects
Deployment Strategies
Blue/Green (or Red/Black) Deployments
In a blue/green deployment strategy (sometimes referred to as red/black), the old version of the application (green) and
the new version (blue) get deployed at the same time. When both of these are deployed, users only have access to the
green, whereas the blue is available to your QA team for test automation on a separate service or via direct port-forwarding.
After the new version has been tested and is signed off for release, the service is switched to the blue version, with the
old green version being scaled down:
apiVersion: v1
kind: Service
metadata:
  name: awesomeapp
spec:
  selector:
    app: awesomeapp
    version: "02"
Kubernetes Objects
Ingress – Kubernetes Ingress is a resource to add rules for routing traffic from external sources to the services in the
Kubernetes cluster.
Kubernetes Ingress:
Kubernetes Ingress is a native Kubernetes resource where you
can have rules to route traffic from an external source to service
endpoints residing inside the cluster. It requires an ingress
controller to route traffic according to the rules specified in the Ingress object.
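A minimal Ingress sketch (the hostname and backing service name are illustrative assumptions, and an ingress controller such as ingress-nginx must already be installed; the v1beta1 API shown was current around Kubernetes v1.18):

```yaml
# ingress.yml - hypothetical example
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: javawebapp-ingress
spec:
  rules:
  - host: javawebapp.example.com      # assumed external hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: javawebappsvc  # routes to a service inside the cluster
          servicePort: 80
```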
Thus, you should store the config (like DB connection details, passwords, API keys, etc.) outside of the application itself.
We externalize things that can change, and this in turn helps keep our applications portable. This is critical in the Kubernetes
world, where our applications are packaged as Docker images.
With Docker and containers, this means we should try to keep configuration out of the container image. Developers should not
hard-code configuration details; instead they can refer to and use environment variables.
Docker makes it easy to build containers with environment variables baked in. In your Dockerfile, you can specify them with
the ENV directive. You can also override the environment variables when running on the command line.
ConfigMaps and Secrets
An app’s config is everything that is likely to vary between deploys (staging, production, developer environments, etc.). This includes database connection details and
credentials, cloud credentials, etc.
Kubernetes allows you to provide configuration information to your applications through ConfigMap or Secret resources. The main differentiator between the two is
the way a pod receives the information and how the data is stored in the etcd data store.
Both ConfigMaps and Secrets store data as key-value pairs. The major difference is that Secrets store data in base64 format, while ConfigMaps store data in plain
text.
If you have some critical data like keys, passwords, service account credentials, DB connection strings, etc., then you should always go for Secrets rather than ConfigMaps.
If you want to configure an application through environment variables which you don't need to keep secret/hidden, like an app base platform URL, then
you can go for ConfigMaps.
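As an illustrative sketch (the keys and values are assumptions), a ConfigMap and a Secret holding similar data look like this; note the Secret values must be base64-encoded:

```yaml
# configmap-and-secret.yml - hypothetical example
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: mysql-service          # plain text
  APP_MODE: production
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=      # base64 of "password"
```

A pod can then consume these as environment variables via envFrom (for the whole ConfigMap) or valueFrom references to individual keys.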
Sample Manifest Links: https://github.com/MithunTechnologiesDevOps/Kubernates-Manifests/blob/master/mysql-
deployment-configmap.yml
https://github.com/MithunTechnologiesDevOps/Kubernates-Manifests/blob/master/mysql-deployment-configSecret.yml
Secret:
docker-registry
This is used by the kubelet, when referenced in a pod template via imagePullSecrets, to provide the credentials needed to
authenticate to a private Docker registry:
kubectl create secret docker-registry registryKey --docker-server <PrivateRepoURL> --docker-username <userName> --docker-password <password> --docker-email <email>
https://opensource.com/article/19/6/introduction-kubernetes-secrets-and-configmaps
https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b
Liveness And Readiness Probes
Liveness and readiness probes are used to control the health of an application running inside a Pod’s container.
Liveness Probe
• Suppose that a Pod is running our application inside a container, but due to some reason (say a memory leak, CPU usage, an application deadlock, etc.) the
application is not responding to our requests and is stuck in an error state.
• The liveness probe checks the container health as we tell it to, and if for some reason the liveness probe fails, it restarts the container.
Readiness Probe
• This type of probe is used to detect if a container is ready to accept traffic. You can use this probe to manage which pods are used as
backends for load-balancing services. If a pod is not ready, it is removed from the load balancers' lists.
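A sketch of both probes on an nginx container (the paths, ports, and timings here are illustrative assumptions, not tuned values):

```yaml
# pod-with-probes.yml - hypothetical example
apiVersion: v1
kind: Pod
metadata:
  name: nginx-probed
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:              # container is restarted if this check fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20
    readinessProbe:             # pod is removed from service endpoints until this passes
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```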
https://docs.okd.io/latest/dev_guide/application_health.html
https://medium.com/spire-labs/utilizing-kubernetes-liveness-and-readiness-probes-to-automatically-recover-from-failure-2fe0314f2b2e
https://www.weave.works/blog/resilient-apps-with-liveness-and-readiness-probes-in-kubernetes
Stateful Sets
Deployments are usually used for stateless applications.
However, you can save the state of a deployment by
attaching a Persistent Volume to it and making it stateful,
but all the pods of a deployment will share the
same Volume, and the data across all of them will be the same.
• StatefulSet is a Kubernetes resource used to manage stateful applications.
It manages the deployment and scaling of a set of Pods, and provides
guarantees about the ordering and uniqueness of these Pods.
• A StatefulSet is also a controller, but unlike Deployments it doesn’t create a
ReplicaSet; rather, it creates the Pods itself with a unique naming convention.
E.g., if you create a StatefulSet with the name mongo, it will create a pod with
the name mongo-0, and for multiple replicas of a StatefulSet, their names will
increment like mongo-0, mongo-1, mongo-2, etc.
• Every replica of a StatefulSet will have its own state, and each of the pods
will create its own PVC (Persistent Volume Claim). So a StatefulSet
with 3 replicas will create 3 pods, each having its own Volume, so 3
PVCs in total.
• By far the most common way to run a database, StatefulSets is a feature
fully supported as of the Kubernetes 1.9 release. Using it, each of your
pods is guaranteed the same network identity and disk across restarts,
even if it's rescheduled to a different physical machine.
Stateful Sets
• StatefulSet deployments provide:
• Stable, unique network identifiers: each pod in a StatefulSet is given a
hostname that is based on the application name and an increment. For example,
mongo-0, mongo-1, and mongo-2 for a StatefulSet named “mongo” that has 3
instances running.
• Stable, persistent storage: each and every pod in the cluster is given its own
persistent volume based on the storage class defined, or the default if none is
defined. Deleting or scaling down pods will not automatically delete the volumes
associated with them, so the data persists. You could scale the StatefulSet
down to 0 first, prior to deletion of the unused pods.
• Ordered, graceful deployment and scaling: pods for the StatefulSet are created
and brought online in order, from 0 to n-1, and they are shut down in reverse
order to ensure a reliable and repeatable deployment and runtime. The
StatefulSet will not even scale until all the required pods are running, so if one
dies, it recreates the pod before attempting to add additional instances to meet
the scaling criteria.
• Ordered, automated rolling updates: StatefulSets have the ability to handle
upgrades in a rolling manner, where each pod is shut down and rebuilt in reverse
ordinal order, continuing until all the old versions have
been shut down and cleaned up. Persistent volumes are reused, and data is
automatically migrated to the upgraded version.
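The mongo example above can be sketched as a StatefulSet manifest; the headless Service, image, and storage size are illustrative assumptions:

```yaml
# mongo-statefulset.yml - hypothetical example
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None              # headless service gives each pod a stable DNS name
  selector:
    app: mongo
  ports:
  - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo           # must match the headless service above
  replicas: 3                  # creates mongo-0, mongo-1, mongo-2 in order
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:        # each replica gets its own PVC: data-mongo-0, data-mongo-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```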
Questions ?