kubernetes
==========
Definitions :
- kubernetes is commonly stylized as k8s
- k8s is an open source orchestration tool developed by google for managing
microservices or containerized applications across a distributed cluster of nodes.
or
- k8s is an open source container orchestration system for automating application
deployment, scaling and management.
- k8s grew out of the Borg & Omega projects at google, which google has used to
orchestrate its datacenters since 2003.
- Google open sourced kubernetes in 2014.

Why do we need K8s and what can it do?


- k8s has a number of features
    - a container platform.
    - a micro-services platform.
    - a portable cloud platform and a lot more.
- k8s provides a container-centric management environment.
- it orchestrates computing, networking and storage infrastructure on behalf of
user workloads.

- K8s is loosely coupled and extensible to meet different workloads.

Famous Container Orchestrators :

- Docker Swarm - Mesos - Nomad - Cloud Foundry
- cloud managed services (AWS, Azure, GCP, Alibaba, IBM)

K8S architecture :
------------------
- k8s follows a client-server architecture.
- A kubernetes cluster consists of at least one master and multiple worker nodes
(or minions).
- It's also possible to set up a multi-master configuration for high availability. By default,
there is a single master node which acts as the controlling node and point of contact.
- Master Node :
    - the server that controls the cluster.
    - it has all the components and services that manage, plan, schedule and
monitor all the worker nodes.
    - The master node consists of components such as api-server,
controller-manager, etcd and scheduler.
- Worker Node :
    - the server that hosts the applications as pods and containers.
    - The worker node consists of components such as Docker, kubelet and
kube-proxy.

Refer the diagram

Master Components :
-------------------
- Api-server :
    - The api-server is the primary component of k8s and is responsible for
orchestrating all operations in the cluster.
    - it serves as the front end to the cluster.
    - This is the only component that communicates with the etcd cluster, making
sure data is stored in etcd and
is in agreement with the service details of the deployed pods.
    - kubeconfig is a package along with the server side tools that can be used
for communication. it exposes the kubernetes-api.

- Controller-manager :
    - it is responsible for most of the controllers that regulate the state of the
cluster and perform tasks.
    - when a change in a service configuration occurs, the controller spots the
change and starts working towards the new desired state.

- Kube-scheduler :
    - it is responsible for distributing (or scheduling) workloads on various
worker nodes based on resource utilization.

- etcd cluster :
    - it is an open source, highly available, distributed key-value store
which is used to store the k8s cluster data, API objects and service
discovery details.
    - it is accessible only by the kubernetes api-server for security reasons. etcd
enables notifications to the cluster about configuration changes
with the help of watchers.
    - Notifications are API requests on each etcd cluster node to trigger the
update of information in the node's storage.

Worker Node components :
------------------------
- Docker engine :
    - The first requirement of a worker node is Docker.
    - Docker is responsible for pulling down and running containers from Docker
images.

- Kubelet :
    - it is the main service on a node, connecting the master and the node.
    - it ensures that pods and their containers are healthy and running in the desired
state.
    - it also reports to the master on the health of the host where it's
running.

- kube-proxy :
    - it is a proxy service that runs on each worker node and helps in making
services available to external hosts.
    - it performs request forwarding to the correct pods/containers across the
various isolated networks in a cluster.
    - it manages pods on the node, volumes, secrets, creating new containers, health
checkups, etc.

Kubectl :
- kubectl is a command line interface that interacts with the api-server and
sends commands to the master node.
- Each command is converted into an API call.
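
For example, every kubectl command is translated into a REST call against the
api-server; raising kubectl's verbosity shows the underlying requests (a quick
illustration, not an exhaustive list):

--> list pods : kubectl get pods
--> same command, also printing the REST calls sent to the api-server : kubectl get pods -v=8
--> list nodes with extra details : kubectl get nodes -o wide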

K8s concepts or Workloads :


---------------------------
A containerized application can be deployed on kubernetes using either "pods or
workloads".

- pod :
    - one or more containers that should be controlled as a single application.
    - A pod is the smallest and simplest unit that you create or deploy in k8s.
    - A pod represents a single instance of an application in k8s.

Manifest for a pod
------------------
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80

--> create a pod : kubectl create -f nginx.yaml


--> verify the pod is running : kubectl get pod nginx
--> Edit pod configuration : kubectl edit pod nginx
--> Delete the pod : kubectl delete -f nginx.yaml

Pod Lifecycle :
---------------
Pending :
    - The pod has been accepted by the kubernetes system, but one or more of the
container images has not yet been created.
    - This includes time before being scheduled as well as time spent
downloading images over the network, which could take a while.
Running :
    - The pod has been bound to a node, and all of the containers have been
created.
    - At least one container is still running, or is in the process of starting
or restarting.
Succeeded :
    - All the containers in the pod have terminated in success, and will not be
restarted.
Failed :
    - All the containers in the pod have terminated, and at least one container
has terminated in failure.
    - That is, the container either exited with a non-zero status or was
terminated by the system.
Unknown :
    - For some reason the state of the pod could not be obtained, typically due to
an error in communicating with the host of the pod.
Completed :
    - The pod has run to completion as there is nothing to keep it running,
eg: completed jobs.
CrashLoopBackOff :
    - This means that one of the containers in the pod has exited unexpectedly,
perhaps with a non-zero error code, even after restarting due to the restartPolicy.
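
To check which phase a pod is currently in, you can query its status directly
(shown against the nginx pod created above):

--> check the pod phase : kubectl get pod nginx -o jsonpath='{.status.phase}'
--> see detailed state and recent events : kubectl describe pod nginx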

Multi-container pod :
---------------------
    - pods are designed to support multiple containers. The containers in a pod are
automatically co-located and co-scheduled on the same node in the cluster.
    - The containers can share resources and dependencies, communicate with
one another, and coordinate when and how they are terminated.
    - The 'one container per pod' model is the most common use case, and k8s
manages the pod rather than the containers directly.
Multi-container pod :
---------------------
apiVersion: v1
kind: Pod
metadata:
  name: nginx-web
spec:
  volumes:
  - name: shared-files
    emptyDir: {}
  - name: nginx-config-volume
    configMap:
      name: nginx-config
  containers:
  - name: app
    image: php-app:1.0
    volumeMounts:
    - name: shared-files
      mountPath: /var/www/html
    - name: nginx-config-volume
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf

init containers :
-----------------
    - a pod can also have one or more init containers, which are run before the
application containers are started.
    - init containers always run to completion.
    - Each init container must complete successfully before the next one starts.
init containers :
-----------------
apiVersion: v1
kind: Pod
metadata:
  name: wordpress
spec:
  containers:
  - image: wordpress:latest
    name: wordpress
    ports:
    - containerPort: 80
      name: wordpress
  initContainers:
  - name: change-permission
    image: busybox
    command: ['sh', '-c', 'chown www-data:www-data /var/www/html && chmod -R 755 /var/www/html']

- Workloads
- workloads are controller objects that set deployment rules for pods.

- Types of workloads
- The most popular types supported by kubernetes are :
- Deployments
- Daemonsets
- Statefulsets
- Replica sets
- jobs
- Cronjobs

- Deployments:
    - The Deployment controller provides declarative updates for pods and
manages stateless applications running on your cluster.
    - A Deployment represents a set of multiple, identical pods and upgrades them
in a controlled way, performing a rolling update by default.
    - A Deployment runs multiple replicas of your application and automatically
replaces any instances that fail or become unresponsive.
    - In this way, Deployments ensure that one or more instances of your
application are available to serve user requests.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

--> Create a deployment : kubectl create -f nginx-deployment.yaml


--> Display your deployments: kubectl get deployments
--> get details of a deployment: kubectl describe deployment nginx-deployment
--> check the pods: kubectl get pods
--> check the status of the deployment: kubectl rollout status
deployment/nginx-deployment
--> update the deployment: kubectl set image deployment/nginx-deployment
nginx=nginx:latest
--> Rollback to previous revision: kubectl rollout undo deployment/nginx-deployment
kubectl rollout status
deployment/nginx-deployment
--> check Rollback history: kubectl rollout history deployment/nginx-deployment
--> scale a deployment: kubectl scale deployment/nginx-deployment --replicas=5 ;
kubectl get pods
--> Edit the deployment: kubectl edit deployment nginx-deployment
--> Delete the deployment: kubectl delete deployment nginx-deployment

Writing a Deployment Spec :
---------------------------
A deployment manifest needs
- apiVersion
- kind
- metadata
- spec
The metadata field has name, labels, annotations and other information.
The spec field has replicas, deployment strategy, pod template, selector and other
details.

1.Pod Template :
----------------
    - The .spec.template is the only required field of the .spec.
    - The .spec.template is a pod template.
Pod Template :
--------------
spec:
  template:
    metadata:
      labels:
        app: frontend

2.Restart Policy :
------------------
    - only a .spec.template.spec.restartPolicy equal to Always is allowed, which
is the default if not specified.
Restart Policy :
----------------
spec:
  template:
    metadata:
      labels:
        app: frontend
    spec:
      restartPolicy: Always
      containers:

3.Replicas :
------------
    - .spec.replicas is an optional field that specifies the number of desired
pods. it defaults to 1.
Replicas :
----------
spec:
  replicas: 3

4.selector :
------------
    - .spec.selector is an optional field that specifies a label selector for
the pods targeted by the deployment.
selector :
----------
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend

    - .spec.selector must match .spec.template.metadata.labels or it will be
rejected by the API.
selector :
----------
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend

5.Deployment strategy:
----------------------
    - .spec.strategy specifies the strategy used to replace old pods by new ones.
    - .spec.strategy.type can be "Recreate" or "RollingUpdate". "RollingUpdate"
is the default value.
strategy:
---------
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
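
When the type is RollingUpdate you can also tune how many pods may be replaced at
a time. maxSurge and maxUnavailable are standard fields of the rolling update
strategy; the values below are only an illustration:

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above the desired count during the update
      maxUnavailable: 1    # at most 1 pod may be unavailable during the update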

Note : Deployment failure:
==========================
    - your deployment may get stuck trying to deploy its ReplicaSet without
ever completing.
    - This can occur due to some of the following factors :
        - insufficient quota
        - Readiness probe failure
        - image pull errors
        - insufficient permissions
        - Limit ranges
        - Application run-time misconfiguration.
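
To investigate a stuck rollout, the usual starting points are the rollout status
and the deployment's conditions and events (shown against the nginx-deployment
used above):

--> watch the rollout : kubectl rollout status deployment/nginx-deployment
--> inspect conditions and events : kubectl describe deployment nginx-deployment
--> check the pods created by the deployment : kubectl get pods ; kubectl describe pod <pod-name>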

- DaemonSets
------------
    - Like other controllers, DaemonSets manage groups of replicated pods.
    - However, a DaemonSet ensures that all or selected worker nodes run a copy
of a pod (one-pod-per-node).
    - As you add nodes, DaemonSets automatically add pods to the new nodes. As
nodes are removed from the cluster, those pods are garbage collected.

Manifest of DaemonSet:
----------------------
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluentd:latest

--> Create a daemonset: kubectl create -f daemonset.yaml


--> Check the pod running: kubectl get pod -n kube-system
--> Check no. of nodes: kubectl get nodes
--> Display your daemonsets: kubectl get daemonsets
--> Get details of a daemonset: kubectl describe daemonset fluentd
--> Edit a daemonset: kubectl edit daemonset fluentd
--> Delete a daemonset: kubectl delete daemonset fluentd

DaemonSets uses:
----------------
- To run a daemon for "cluster storage" on each node, such as 'glusterd'
- To run a daemon for "log collection" on each node, such as 'logstash'
- To run a daemon for "node monitoring" on each node, such as 'collectd'

- StatefulSets :
----------------
    - StatefulSets represent a set of pods with unique, persistent identities
and stable hostnames.
    - They provide guarantees about the ordering of deployment and scaling.
    - StatefulSets are valuable for applications that require one or more of
the following:
        - stable, unique network identifiers
        - stable, persistent storage
        - Ordered, graceful deployment and scaling
        - Ordered, graceful deletion and termination

statefulset components
- A Headless service
- A StatefulSet
- A PersistentVolume

Below are manifests of the Service, StatefulSet and persistent volume claim:

StatefulSets :
--------------
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: myclaim

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
Create and manage StatefulSets
------------------------------

--> Create a statefulset : kubectl create -f statefulset.yaml ; kubectl get pods ;


kubectl get svc nginx
--> list your stateful sets: kubectl get statefulsets
--> get details of a statefulsets : kubectl describe statefulset web
--> Edit a statefulset: kubectl edit statefulset web
--> Scaling a statefulset:
- Scaling a statefulset refers to increasing or decreasing the number of
replicas.
--> scale up a statefulset: kubectl scale statefulset web --replicas=5 ; kubectl
get pods -l app=nginx
--> scale down a statefulset: kubectl scale statefulset web --replicas=2 ; kubectl
get pods -w -l app=nginx ; kubectl get pods -l app=nginx
--> Delete the statefulset : kubectl delete statefulset web
--> Delete the service manually : kubectl delete service nginx

- ReplicaSets :
---------------
    - A ReplicaSet's purpose is to run a specified number of pods at any given
time.
    - While ReplicaSets can be used independently,
today they are mainly used by Deployments as a mechanism to orchestrate pod
creation, deletion and updates.

Manifests of ReplicaSets:
-------------------------
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp
  labels:
    app: webapp
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: webapp
        image: webapp:2.0

--> Create a replicaset : kubectl create -f replicaSet.yaml ; kubectl get rs ;


kubectl get pods
--> Display your replicasets: kubectl get replicasets
--> Get details of a replicaset: kubectl describe replicaset webapp
--> Edit a replicaset : kubectl edit replicaset webapp
--> scale a replicaset : kubectl scale --replicas=5 rs webapp ; kubectl get pods
--> Delete a replicaset : kubectl delete replicaset/webapp
--> you can Delete a replicaset without affecting any of its pods using kubectl
delete with the --cascade=false option
kubectl delete replicaset/webapp --cascade=false

- jobs :
--------
    - you might also need to run large computation or batch processing
workloads in your cluster. For this, the job controller is useful.
    - A job creates one or more pods running in parallel. you can specify how
many pods need to complete successfully for the job to be considered done.
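
The number of completions and the degree of parallelism are controlled by two
optional fields of the job spec (a small sketch; the manifest below simply uses
the defaults of one completion and one pod at a time):

spec:
  completions: 5    # the job is done after 5 pods terminate successfully
  parallelism: 2    # run at most 2 pods at the same time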

Manifests for a job :
---------------------
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    metadata:
      name: example-job
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl"]
        args: ["-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

--> Create a job: kubectl create -f example-job.yaml


--> Display your jobs : kubectl get jobs ; kubectl get pods --watch
--> Get details of a job: kubectl describe job example-job
--> Edit a job : kubectl edit job example-job
--> Delete a job : kubectl delete job example-job

- Cronjobs :
------------
    - A cron job creates jobs on a time-based schedule.
    - A CronJob object is just like an entry in crontab on unix/linux.
    - it runs a job periodically on a given schedule.
    - you need a working k8s cluster at version >= 1.8 (for CronJob).
    - For previous versions of the cluster (< 1.8) you need to explicitly enable the
batch/v2alpha1 API
by passing --runtime-config=batch/v2alpha1=true to the api server.

Manifests for a Cronjob :
-------------------------
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the kubernetes cluster
          restartPolicy: OnFailure

--> Create a Cron Job: kubectl create -f cronjob.yaml ; kubectl get cronjobs hello
; kubectl get jobs --watch
--> Get details of a cronjob: kubectl describe cronjob hello
--> Edit a cronjob : kubectl edit cronjob hello
--> Delete a cronjob : kubectl delete cronjob hello

writing a Cron Job Spec

    - As with all other kubernetes configs, a cron job needs apiVersion, kind
and metadata fields.
    - schedule
        The .spec.schedule is a required field of the .spec.
        it takes a cron format string, such as "0 * * * *" or "@hourly", as the schedule
time of its jobs to be created and executed.
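
A few illustrative schedule strings (standard cron field order: minute, hour,
day of month, month, day of week):

schedule: "*/5 * * * *"    # every 5 minutes
schedule: "0 2 * * *"      # every day at 02:00
schedule: "0 0 * * 0"      # every Sunday at midnight
schedule: "@hourly"        # once an hour, at the start of the hour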

===================================================================================
==================================================

- Metadata :
------------
    - Metadata contains important information about kubernetes objects.
    - There are many attributes that can be specified as metadata.
    - But the following are the most commonly used attributes:
        - name
        - namespace
        - labels
        - annotations

1.metadata.name :
-----------------
    - metadata.name is the only required string when creating or modifying k8s
objects
such as pods, Deployments, services, configs, volumes etc.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
......
........

apiVersion: v1
kind: Service
metadata:
  name: frontend-service
......
........

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
......
........
    - Kubectl queries for the objects by using their names.
        --> kubectl get deployments frontend-app ; kubectl describe
deployment frontend-app
        --> kubectl get service frontend-service ; kubectl describe service
frontend-service
        --> kubectl get configmap nginx-config ; kubectl describe configmap
nginx-config

2.metadata.namespace :
----------------------
    - Each k8s object is scoped to a namespace.
    - The metadata.namespace attribute specifies which namespace the object belongs
to.
    - k8s objects are uniquely identified within a namespace by their name.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
  namespace: development
......
........

apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: development
......
........

note : if the namespace attribute is omitted from your specification, the namespace
"default" is used.

3.metadata.labels :
-------------------
    - labels are key/value pairs that are attached to kubernetes objects.
    - labels are typically used to specify identifying attributes of an object,
or to select it as a member of some logical grouping of objects.
    - Labels can be attached to objects at creation time and subsequently added
and modified at any time.
    - Each object can have a set of key/value labels defined.
    - Each key must be unique for a given object.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
  namespace: development
  labels:
    tier: frontend
    env: development
    release: release-2
    version: v1.8
......
........

4.metadata.annotations :
------------------------
    - Annotations are used to attach arbitrary non-identifying metadata to
objects.
    - Annotations are also key/value pairs that can be used by external tools
and libraries.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: 'letsencrypt-prod'
    ingress.kubernetes.io/force-ssl-redirect: 'true'

- Labels and Selectors:
-----------------------
    - labels are key/value pairs that are attached to identify objects.
    - But labels do not provide uniqueness. In general, we expect many objects
to carry the same label(s).

- Selectors:
------------
    - Via a selector, the client/user can identify a set of objects.
    - The selector is the core grouping primitive in k8s.
    - The API currently supports two types of selectors
        - Equality-based
        - Set-based
    - Equality-based : Equality-based selectors allow filtering by key and value.
The supported operators are =, ==, !=.
    - Set-based : set-based selectors allow filtering keys according to a set of
values.
The supported operators are in, notin and exists.

Create a Deployment:
--------------------

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-frontend
  labels:
    app: website
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: website
      tier: frontend
  template:
    metadata:
      labels:
        app: website
        tier: frontend
    spec:
      containers:
      - name: frontend-website
        image: learninghub/website:1.0
        ports:
        - containerPort: 80

--> Create a Deployment: kubectl create -f app-deployment.yaml

Create a pod:
-------------

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: webserver
    tier: frontend
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html

--> Create a pod : kubectl create -f nginx-demo.yaml


--> to show the labels : kubectl get pods --show-labels

Equality-based:
--> get all the web server pods: kubectl get pods -l app=webserver
--> get all the frontend pods but not web server : kubectl get pods -l
tier=frontend,app!=webserver

set-based :
--> Get all frontend pods but not website : kubectl get pods -l 'tier in
(frontend),app notin (website)'

selection via Fields(Field selector)


--> kubectl get pod --field-selector metadata.name=nginx
--> kubectl get pod --field-selector metadata.namespace=default

- Specifying selector in service:
---------------------------------

Deployment metadata:

metadata:
  labels:
    app: nginx
    tier: frontend
    stage: production

service Manifest:

kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
    stage: production
  ports:
  - port: 80
    protocol: TCP

- Node Selector:
----------------
    - nodeSelector is the simplest recommended form of node selection
constraint.
    - nodeSelector is a field of the pod spec. it specifies a map of key-value
pairs.

Pod Manifest:
-------------
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    env: prod

if you are using a workload controller for your application, you have to specify the
nodeSelector in the pod template (spec.template.spec.nodeSelector)

spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
      tier: frontend
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
    spec:
      containers:
      - name: webapp
        image: webapp:1.0
        ports:
        - containerPort: 80
      nodeSelector:
        env: prod

===================================================================================
===========================

- Services :
-------------
    - A kubernetes service is an abstract way to access an application running
on a set of pods.
    - The set of pods targeted by a service is determined by a Label Selector.
    - Services provide features that are standardized across the cluster
        - loadbalancing
        - service discovery between applications
        - features to support zero-downtime application deployments.
    - Types in the service spec
        - ClusterIP
        - NodePort
        - LoadBalancer

create a service :
    - we can create a service in two ways: using the kubectl expose command, or
declaratively using yaml/json files.

--> create a deployment : kubectl create -f app-deployment.yaml


--> create a service for app-frontend
1.using kubectl expose : kubectl expose deployment app-frontend \
--port=80 \
--target-port=9000
kubectl get svc/app-frontend
2.using Yaml files:

kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: webapp
    tier: frontend
  ports:
  - port: 80
    name: web
    protocol: TCP
    targetPort: 9000

list service : kubectl get svc/app-service


--> Multi-port service : if you have an application that exposes multiple ports, you can
create a service with multiple ports.

kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: backend
    tier: backend
  ports:
  - port: 80
    name: web
    protocol: TCP
    targetPort: 9000
  - port: 9090
    name: api
    protocol: TCP
    targetPort: 9090

--> Delete the service : kubectl delete service app-frontend ; kubectl delete -f
app-service.yaml
1. ClusterIP :
--------------
- ClusterIP is the default service type.
- ClusterIP exposes the service on a cluster internal IP
- choosing this value makes the service only reachable from within the
cluster.

--> create ClusterIP service : kubectl expose deployment app-frontend --port=80
--target-port=80
--> list a service : kubectl get service app-frontend
--> view a service : kubectl describe service app-frontend

port forward:
-------------
    - The kubectl port-forward command allows you to access the application
from your local computer.
    - it forwards connections from a local port to a port on a pod.
    - it is very useful for testing/debugging purposes, so you can access your
service locally without exposing it externally.

--> Forward a local port to a port on the pod :
kubectl port-forward pods/nginx-7b9899ff5f-nhgtm 8080:80
or
kubectl port-forward deployment/nginx 8080:80
or
kubectl port-forward svc/nginx 8080:80

2.NodePort :
------------
    - A NodePort service is the most basic way to get external traffic directly to
your service.
    - NodePort opens a specific port on your node/vm and when that port gets
traffic, that traffic is forwarded directly to the service.
    - For a NodePort service, k8s allocates a port from a configured range
(default 30000-32767) and each node forwards that port, which is the same on each node,
to the service.
    - it is possible to define a specific port number, but you should take care
to avoid potential port conflicts.

NodePort service.yaml:
----------------------
kind: Service
apiVersion: v1
metadata:
  name: backend-service
spec:
  selector:
    app: backend
    tier: backend
  type: NodePort
  ports:
  - port: 80
    name: backend
    protocol: TCP
    targetPort: 8080

--> Create NodePort service : kubectl apply -f backend-service.yaml
or
kubectl expose deployment backend \
--port=80 --target-port=8080 \
--type=NodePort
--> list a service : kubectl get service backend-service
--> view a service : kubectl describe svc/backend-service

Note: There are a few limitations and hence it's not advised to use a NodePort service.
- only one service per port.
- you can only use ports 30000-32767.
- Dealing with changing node/vm IPs is difficult.
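
Once created, the allocated node port appears in the service listing and the
application can be reached on that port of any node (the node IP below is a
placeholder):

--> find the allocated port : kubectl get service backend-service
--> reach the service from outside the cluster : curl http://<node-ip>:<node-port>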

3.LoadBalancer :
----------------
    - A LoadBalancer service is the standard way to expose a service to the
internet (the outside world).
    - Setting the type field to LoadBalancer will provision a load balancer for
your service.
    - On AWS, this will create an ELB with a DNS name that will forward all
traffic to your service.

--> create LoadBalancer service : kubectl expose deployment app-frontend \
--port=80 \
--target-port=9000 \
--type=LoadBalancer
--> List service : kubectl get svc

LoadBalancer Yaml configuration:
--------------------------------
kind: Service
apiVersion: v1
metadata:
  name: app-service-1b
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9000
  type: LoadBalancer
--> commands : kubectl create -f app-service.yaml ; kubectl get svc/app-service-1b

4.ExternalName :
----------------
    - An ExternalName service is a special service that doesn't have selectors
and uses a DNS name instead.
    - This requires version 1.7 or higher of kube-dns.
    - The easier and right way to access external services from your pods is to
create an ExternalName service.
    - Say you have an external database like an AWS RDS instance hosted by amazon and you
want your application to use the hostname 'database',
which will redirect it to the AWS RDS instance.

Create an ExternalName service :

apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  type: ExternalName
  externalName: mysql-instance.123456789012.us-east-1.rds.amazonaws.com

--> commands: kubectl apply -f database-service.yaml

list ExternalName service: kubectl get service database

    - if you don't have a domain name or need to do port remapping, simply add
the IP address to an Endpoints object and use that instead.

apiVersion: v1
kind: Endpoints
metadata:
  name: database
subsets:
- addresses:
  - ip: 33.134.23.105
  ports:
  - port: 3306
    name: mysql

--> create an Endpoint : kubectl apply -f database-endpoint.yaml


--> list an endpoint : kubectl get endpoints

===================================================================================
======================================================

- kubernetes Ingress :
----------------------

    - k8s has a built-in configuration object for HTTP load balancing called
Ingress.
    - it defines rules for external connectivity to the pods represented by one
or more k8s services.
    - Ingress provides SSL termination and name-based virtual hosting.
    - The traffic routing is controlled by rules defined on the ingress
resource.

Ingress resource example:
-------------------------
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80

    - ingress resources only support rules for directing HTTP traffic.
    - The ingress spec has all the information needed to configure a load
balancer or proxy server.
    - Most importantly, it contains a list of rules matched against all
incoming requests.

Ingress rules :
---------------
    - Each http rule contains an optional host and a list of paths, each of which
has an associated backend defined with a service name and port.
    - If a traffic path does not match any rule, the traffic is sent to the default
backend.

Default backend:
----------------
    - The default backend is typically a configuration option of the ingress
controller and is not specified in your ingress resources.
    - if none of the hosts or paths match the HTTP request in the ingress
object, the traffic is routed to your default backend.

- Types of Ingress:
-------------------
1. single Service Ingress :
===========================
    - it doesn't have any rules and sends all traffic to a single service. you can
use this to create a default backend with no rules.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  defaultBackend:
    service:
      name: frontend-service
      port:
        number: 80

2.simple fanout :
=================
    - A fanout configuration routes traffic to more than one service, based on
the HTTP URL being requested.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
spec:
  ingressClassName: nginx
  rules:
  - host: shopping.example.com
    http:
      paths:
      - path: /clothes
        pathType: Prefix
        backend:
          service:
            name: clothes-service
            port:
              number: 8080
      - path: /kitchen
        pathType: Prefix
        backend:
          service:
            name: kitchen-service
            port:
              number: 8081

3.Name based virtual Hosting :
==============================
    - Name-based virtual hosts support routing HTTP traffic to multiple host
names.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: shopping.example.com
    http:
      paths:
      - path: /clothes
        pathType: Prefix
        backend:
          service:
            name: clothes-service
            port:
              number: 8080
      - path: /kitchen
        pathType: Prefix
        backend:
          service:
            name: kitchen-service
            port:
              number: 8081
  - host: music.example.com
    http:
      paths:
      - path: /fr
        pathType: Prefix
        backend:
          service:
            name: french-service
            port:
              number: 9090
      - path: /en
        pathType: Prefix
        backend:
          service:
            name: english-service
            port:
              number: 9091

- Ingress Controller :
======================
    - In order for the Ingress resource to work, the kubernetes cluster must have
an ingress controller running.
    - Unlike other controllers, ingress controllers are not part of the
kube-controller-manager and are not started automatically with a cluster; you must deploy one.
    - There are many ingress controller implementations; choose the one that best
fits your cluster.
Ex. Ingress with Nginx ingress controller:

metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx

    - if you do not define an ingress class, your cloud provider may use a default
ingress controller.

- Nginx Ingress Controller :
============================
    - The Nginx ingress controller for k8s provides enterprise-grade delivery
services for k8s applications, with benefits for users of both open source nginx and
nginx plus.
    - with the Nginx ingress controller, you get basic load balancing, SSL/TLS
termination, support for URL rewrites and upstream SSL/TLS encryption.

--> clone nginx controller repository : git clone
https://github.com/srinibook/kubernetes-nginx-controller.git
--> install nginx controller : cd kubernetes-nginx-controller ; kubectl apply -f
nginx-ingress.yaml
--> check the pods : kubectl get pods -n ingress-nginx
--> check the services : kubectl get services -n ingress-nginx
--> using Helm Chart : helm install --name nginx-ingress-controller \
stable/nginx-ingress

- SSL/TLS Certificates:
=======================
    - you can secure an application running on k8s by creating a secret that
contains a TLS (transport layer security) private key and certificate.
    - Currently, Ingress supports a single TLS port, 443, and assumes TLS
termination.
    - The TLS secret must contain keys named tls.crt and tls.key that contain
the certificate and private key to use for TLS.

Create TLS secret:

using kubectl:
$ kubectl create secret tls my-tls-secret \ --key < private key filename > \ --cert
< certificate filename >

using yaml file

apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: my-tls-secret
  namespace: default
type: kubernetes.io/tls

Ingress with TLS:
-----------------

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - mydomain.com
    secretName: my-tls-secret
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80

Self signed Certificate:
------------------------
    - A self-signed SSL certificate is an SSL certificate that is issued by the
person creating it rather than by a trusted certificate authority.
    - This can be good for a testing environment.

--> Generate a CA private key : openssl genrsa -out ca.key 2048


--> create a self-signed certificate, valid for 365 days:
$ openssl req -x509 \ -new -nodes \ -days 365 \ -key ca.key \ -out ca.crt \ -subj
"/CN=yourdomain.com"
--> Now, create a tls secret using the kubectl command or a yaml definition.
$ kubectl create secret tls my-tls-secret \ --key ca.key \ --cert ca.crt
--> check the secret : kubectl get secrets/my-tls-secret
--> Describe the secret : kubectl describe secrets/my-tls-secret

SSL - Let's Encrypt :
=====================
    - Let's Encrypt is a free, automated and non-profit certificate authority.
    - The certificates provided by Let's Encrypt are valid for 90 days at no
charge, and you can renew them at any time.
    - Certificate generation and renewal can be automated using certbot
and cert-manager (for k8s).
Cert-manager:
-------------
    - cert-manager is a kubernetes tool that issues certificates from various
certificate authorities, including Let's Encrypt.

--> Install the CustomResourceDefinition resources: kubectl apply --validate=false
\ -f
https://github.com/jetstack/cert-manager/releases/download/v0.15.1/cert-manager.yaml
--> create namespace for cert-manager : kubectl create ns cert-manager
--> add the jetstack Helm repository and update your local helm chart repo cache
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
--> Install the cert-manager Helm chart : helm install --name cert-manager \
--namespace cert-manager \ --version v0.15.1 jetstack/cert-manager
--> Now verify the installation : kubectl get pods --namespace cert-manager

issuers:
--------
    - Issuers (and ClusterIssuers) represent a certificate authority from which
signed x509 certificates can be obtained, such as Let's Encrypt.
    - you will need at least one Issuer or ClusterIssuer in order to begin
issuing certificates within your cluster.
    - An Issuer is a namespaced resource; you will need to create an Issuer in
each namespace you wish to obtain certificates in.
    - if you want to create a single issuer that can be consumed in multiple
namespaces, you should consider creating a ClusterIssuer resource.

- Create a ClusterIssuer resource for Let's Encrypt certificates:

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: < your-name@domain.com >
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}

--> kubectl apply -f cluster-issuer-prod.yaml

Note : Provide a valid email address. You will receive email notifications about
certificate renewals.

--> list cluster issuers : kubectl get clusterissuers


Ingress with cert-manager:
--------------------------
    - you must add an annotation to the ingress configuration with the issuer or
clusterissuer name.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - app.mydomain.com
    secretName: app-mydomain-com
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80

    - once the ingress is created there should be a tls secret and a certificate
created.

--> $ kubectl get secrets


--> $ kubectl get certificates

    - if all goes well, you will be able to see the site over a secure TLS
connection, and you don't have to worry about renewals either.

===================================================================================
================================================================================

Configurations:
---------------

1. secrets :
============
    - Kubernetes secrets are objects which store sensitive data, such as
passwords, OAuth tokens and SSH keys, in your cluster.
    - Using secrets gives you more flexibility in a Pod lifecycle definition, and
control over how sensitive data is used.
    - it reduces the risk of exposing the data to unauthorized users.
    - secrets are namespaced objects.
    - secrets can be mounted as data volumes or environment variables
to be used by a container in a pod.
    - The API server stores secrets base64-encoded (not encrypted) in etcd by default.
    - There is a per-secret size limit of 1 MB.

Create a secret :
-----------------
Create username.txt and password.txt files.

echo -n 'root' > ./username.txt


echo -n 'Mq2D#(8gf09' > ./password.txt

kubectl create secret generic db-creds \


--from-file=./username.txt \
--from-file=./password.txt
secret "db-creds" created

List secret:
------------
kubectl get secret/db-creds
NAME       TYPE     DATA   AGE
db-creds   Opaque   2      26s

View secret:
------------
kubectl describe secret/db-creds
Name:         db-creds
Namespace:    default
Labels:
Annotations:

Type: Opaque

Data
====
password.txt: 11 bytes
username.txt: 4 bytes

Using YAML file:


----------------
The Secret contains two maps: data and stringData. The data field is used to store
arbitrary data, encoded using base64.

echo -n 'root' | base64


cm9vdA==

echo -n 'Mq2D#(8gf09' | base64


TXEyRCMoOGdmMDk=

Write a Secret yaml file


------------------------
apiVersion: v1
kind: Secret
metadata:
  name: database-creds
type: Opaque
data:
  username: cm9vdA==
  password: TXEyRCMoOGdmMDk=

Create the Secret using kubectl create


--------------------------------------
kubectl create -f creds.yaml
secret "database-creds" created

kubectl get secret/database-creds


NAME TYPE DATA AGE
database-creds Opaque 2 1m

View secret:
------------
kubectl get secret/database-creds -o yaml
apiVersion: v1
data:
  password: TXEyRCMoOGdmMDk=
  username: cm9vdA==
kind: Secret
metadata:
  creationTimestamp: 2019-02-25T06:22:37Z
  name: database-creds
  namespace: default
  resourceVersion: "2657"
  selfLink: /api/v1/namespaces/default/secrets/database-creds
  uid: bf0cef90-38c5-11e9-8c95-42010a800068
type: Opaque

Decoding secret values:


-----------------------
echo -n "cm9vdA==" | base64 --decode
root

echo -n "TXEyRCMoOGdmMDk=" | base64 --decode


Mq2D#(8gf09

Usage of Secrets
----------------
- A Secret can be used with your workloads in two ways:
- specify environment variables that reference the Secret's values
- mount a volume containing the Secret.

Environment variables:
----------------------
apiVersion: v1
kind: Pod
metadata:
  name: php-mysql-app
spec:
  containers:
  - name: php-app
    image: php:latest
    env:
    - name: MYSQL_USER
      valueFrom:
        secretKeyRef:
          name: database-creds
          key: username
    - name: MYSQL_PASSWORD
      valueFrom:
        secretKeyRef:
          name: database-creds
          key: password

Secret as Volume:
-----------------
apiVersion: v1
kind: Pod
metadata:
  name: redis-pod
spec:
  containers:
  - name: redis-pod
    image: redis
    volumeMounts:
    - name: dbcreds
      mountPath: "/etc/dbcreds"
      readOnly: true
  volumes:
  - name: dbcreds
    secret:
      secretName: database-creds
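
Once the pod is running, each key of the secret appears as a file under the mount
path (a quick check against the redis-pod above):

--> list the mounted secret keys : kubectl exec redis-pod -- ls /etc/dbcreds
--> read one of the values : kubectl exec redis-pod -- cat /etc/dbcreds/username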

Additional Info :
-----------------
Secret creation syntax

kubectl create secret [TYPE] [NAME] [DATA]


TYPE can be one of the following:
- generic: Create a Secret from a local file, directory, or literal value.
- docker-registry: Creates a dockercfg Secret for use with a Docker
registry. Used to authenticate against Docker registries.
- tls: Create a TLS secret from the given public/private key pair. The
public/private key pair must exist beforehand. The public key certificate must be
.PEM encoded and match the given private key.
- DATA can be one of the following:
--from-file

kubectl create secret generic credentials \


--from-file=username=./username.txt \
--from-file=password=./password.txt
--from-env-file

cat credentials.txt
username=admin
password=Ex67Hn*9#(jw

kubectl create secret generic credentials \


--from-env-file ./credentials.txt

--from-literal flags
kubectl create secret generic literal-token \
--from-literal user=admin \
--from-literal password="Ex67Hn*9#(jw"

===================================================================================
===============================================================

2. ConfigMaps :
===============
    - ConfigMaps are Kubernetes objects that allow you to separate
configuration data/files from image content to keep containerized applications
portable.
    - ConfigMaps bind configuration files, command-line arguments, environment
variables, port numbers, and other configuration artifacts to your Pods' containers
and system components at run-time.
    - ConfigMaps are very useful for storing and sharing non-sensitive,
unencrypted configuration information.
    - Like Secrets, you can create configmaps from files and with yaml
declarations. We can use configmaps by referring to them by name or as a volume.

Create a configmap:
-------------------
You can create configmaps from directories, files, or literal values using kubectl
create configmap.

$ cat app.properties
environment=production
logging=INFO
logs_path=$APP_HOME/logs/
parllel_jobs=3
wait_time=30sec
kubectl create configmap app-config \
--from-file configs/app.properties
configmap "app-config" created

which is same as

kubectl create configmap app-config \


--from-file configs/

kubectl create configmap app-config \


--from-literal environment=production \
--from-literal logging=INFO
.......

List configmap:
---------------
kubectl get configmap/app-config
NAME DATA AGE
app-config 1 1m

View configmap:
---------------
kubectl describe configmap/app-config
Name: app-config
Namespace: default
Labels: < none >
Annotations: < none >

Data
====
app.properties:
----
environment=production
logging=INFO
logs_path=$APP_HOME/logs/
parllel_jobs=3
wait_time=30sec

Events: < none >

Using YAML declaration:


-----------------------
The configmap YAML file will look like below

kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config
  namespace: default
data:
  app.properties: |
    environment=production
    logging=INFO
    logs_path=$APP_HOME/logs/
    parllel_jobs=3
    wait_time=30sec

Here is the basic nginx configmap for a PHP application

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  nginx.conf: |
    events {
    }
    http {
      server {
        listen 80 default_server;
        listen [::]:80 default_server;

        index index.html index.htm index.php;

        root /var/www/html;
        server_name _;
        location / {
          try_files $uri $uri/ =404;
        }
        location ~ \.php$ {
          include fastcgi_params;
          fastcgi_param REQUEST_METHOD $request_method;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_pass 127.0.0.1:9000;
        }
      }
    }

Usage of ConfigMaps :
---------------------
    - ConfigMaps can be used to populate individual environment variables as
shown below :

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    env:
    - name: ENVIRONMENT
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: environment
    - name: LOG_PATH
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: logs_path
    - name: THREADS_COUNT
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: parllel_jobs

    - ConfigMaps can also be consumed in volumes.
    - The most basic way is to populate the volume with files, where the key is
the filename and the content of the file is the value of the key:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-web
spec:
  volumes:
  - name: nginx-config
    configMap:
      name: nginx-config
  containers:
  - image: nginx:1.7.9
    name: nginx
    ports:
    - containerPort: 443
      name: nginx-https
    - containerPort: 80
      name: nginx-http
    volumeMounts:
    - name: nginx-config
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
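
To confirm that the file from the ConfigMap is mounted where nginx expects it
(a simple check against the nginx-web pod above):

--> inspect the mounted file : kubectl exec nginx-web -- cat /etc/nginx/nginx.conf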

===================================================================================
=================================================================

3.Command and Arguments :
-------------------------
    - When you create a Pod, it runs the container image's default Entrypoint
and passes the default Cmd as arguments.
    - To override the image's default Entrypoint and Cmd, include the command
and args fields in the configuration file.
    - The field names used by Docker and Kubernetes:
        Docker          Kubernetes
        ------          ----------
        Entrypoint      command
        Cmd             args

    - The configuration file for the Pod defines a command and two arguments:

apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  containers:
  - name: debian
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
  restartPolicy: OnFailure

    - You can also use environment variables in the arguments as below:

env:
- name: FILE_PATH
  value: "/data/backup/"
command: ["rm", "-rf"]
args: ["$(FILE_PATH)"]

    - In some cases, you need to run commands in a shell. To run the commands in a
shell, wrap them like this:

command: ["/bin/sh"]
args: ["-c", "while true; do echo hello; sleep 10; done"]
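
To see the effect of the overridden command, create the command-demo pod and read
its output from the container logs (the pod prints the two environment variables
and then exits):

--> create the pod : kubectl apply -f command-demo.yaml
--> read the output : kubectl logs command-demo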

===================================================================================
===============================================================

Kubernetes Volumes :
====================

Volume overview :
-----------------
- Data stored in Docker containers is ephemeral i.e. it only exists until
the container is alive.
- When Kubernetes restarts a failed or crashed container, you will lose any
data stored in the container filesystem. Kubernetes solves this problem with the
help of Volumes.
- In Kubernetes, a volume is essentially a directory accessible to all
containers running in a pod and the data in volumes is preserved across container
restarts.
- The medium backing a volume and its contents are determined by the volume
type.
- To use a volume, a Pod specifies what volumes to provide for the Pod and
where to mount those into Containers.

kind: Pod
apiVersion: v1
metadata:
  name: nginx-webserver
  labels:
    name: webserver
spec:
  containers:
  - name: webserver
    image: nginx
    ports:
    - containerPort: 80
      name: http
    volumeMounts:
    - mountPath: "/usr/local/nginx/html"
      name: app-data
  volumes:
  - name: app-data
    emptyDir: {}

- Kubernetes supports different kinds of Volumes including external cloud


storage wherein the pod can use multiple of them at the same time.

Kubernetes supported Volumes:


-----------------------------
- emptyDir - It is a type of volume that is created as empty when a Pod is
first assigned to a Node.It remains active as long as the Pod is running on that
node.
Once the Pod is removed from the node, the data in the emptyDir is erased.
- hostPath - This type of volume mounts a file or directory from the host
node's filesystem into your pod.
- nfs - An nfs volume allows an existing NFS (Network File System) to be
mounted into your pod.
- The data in an nfs volume is not erased when the Pod is removed from the
node. The volume is only unmounted.
- secret - A secret volume is used to pass sensitive information, such as
passwords, to pods.
- configMap - A configMap resource provides a way to inject configuration
data into Pods.The data stored in a ConfigMap object can be referenced in a volume
of type configMap and then consumed by applications running in a Pod.
- gcePersistentDisk - This type of volume mounts a Google Compute Engine
(GCE) Persistent Disk into your Pod.The data in a gcePersistentDisk remains intact
when the Pod is removed from the node.
- awsElasticBlockStore - This type of volume mounts an Amazon Web Services
(AWS) Elastic Block Store into your Pod.The data in an awsElasticBlockStore remains
intact when the Pod is removed from the node.
- azureDiskVolume - An AzureDiskVolume is used to mount a Microsoft Azure
Data Disk into a Pod.
- azureFile - A azureFile is used to mount a Microsoft Azure File Volume
(SMB 2.1 and 3.0) into a Pod.
- local - A local volume represents a mounted local storage device such as
a disk, partition or directory.
- portworxVolume - A portworxVolume is an elastic block storage layer that
runs hyperconverged with Kubernetes.
- iscsi - An iscsi volume allows an existing iSCSI (SCSI over IP) volume to
be mounted into your pod.
- flocker - It is an open-source clustered container data volume manager.
It is used for managing data volumes. A flocker volume allows a Flocker dataset to
be mounted into a pod.
- glusterfs - Glusterfs is an open-source networked filesystem. A glusterfs
volume allows a glusterfs volume to be mounted into your pod.
- rbd - RBD stands for Rados Block Device. An rbd volume allows a Rados
Block Device volume to be mounted into your pod.
- cephfs - A cephfs volume allows an existing CephFS volume to be mounted
into your pod. Data remains intact after the Pod is removed from the node.
- downwardAPI - A downwardAPI volume is used to make downward API data
available to applications.It mounts a directory and writes the requested data in
plain text files.
- persistentVolumeClaim - A persistent volume claim volume is used to mount
a PersistentVolume into a pod.Persistent Volumes are a way for users to "claim"
durable storage without knowing the details of the particular cloud environment.

===================================================================================
=============================================================

1.Persistent Volumes :
----------------------
- A PersistentVolume (PV) is a storage resource in the cluster that has
been provisioned by an administrator or dynamically provisioned using Storage
Classes.

a.Static Provisioning :
-----------------------
- A cluster administrator creates a number of PVs. They carry the details
of the real storage, which is available for use by cluster users.

awsElasticBlockStore: Before you can use an EBS volume with a Pod, you need to
create it.

aws ec2 create-volume \


--availability-zone=eu-west-1a \
--size=100 \
--volume-type=gp2

PersistentVolume spec:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: ebs-disk
  awsElasticBlockStore:
    volumeID:
    fsType: ext4

gcePersistentDisk: Before creating a PersistentVolume, you must create the PD.

gcloud beta compute disks create --size=200GB my-data-disk \


--region us-central1 \
--replica-zones us-central1-a,us-central1-b

PersistentVolume spec:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: gcp-disk
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4

kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-volume   200Gi      RWO            Delete           Available           gcp-disk                6s

azureDisk: Before creating a PersistentVolume, you must create a virtual disk in
Azure.

PersistentVolume spec:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: azure-disk
  azureDisk:
    diskName: test.vhd
    diskURI: https://someaccount.blob.microsoft.net/vhds/test.vhd

azureFile: You will need to create a Kubernetes secret that holds both the account
name and key.

kubectl create secret generic azure-secret \


--from-literal=azurestorageaccountname=< ... > \
--from-literal=azurestorageaccountkey=< ... >

Before creating a PersistentVolume, create Azure Files share.

PersistentVolume spec:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: sample-storage
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azure-file-share
  azureFile:
    secretName: azure-secret
    shareName: k8stest # File share name
    readOnly: false

NFS: Before creating a PersistentVolume, you will need the NFS server details.

PersistentVolume spec:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    server: nfs-server.mydomain.com
    path: "/"

b.Dynamic Provisioning:
-----------------------
- When none of the static PVs match a user's PersistentVolumeClaim, the
cluster may try to dynamically provision a volume, especially for the PVC.
- This provisioning is based on StorageClasses, the PVC must request a
storage class and the administrator must have created and configured that class for
dynamic provisioning to occur.

StorageClasses:
---------------
- Volume implementations are configured through StorageClass resources.
    - If you set up a Kubernetes cluster on GCP, AWS, Azure or any other cloud
platform, a default StorageClass is created for you which uses the standard
persistent disk type.

List storage class:


-------------------
AWS:
kubectl get storageclass
NAME PROVISIONER AGE
default (default) kubernetes.io/aws-ebs 3d

GCP:
kubectl get storageclass
NAME PROVISIONER AGE
standard (default) kubernetes.io/gce-pd 3d

StorageClass Configuration:
---------------------------
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
volumeBindingMode: Immediate
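
Creating the class and, optionally, marking it as the cluster default could look
like this (the is-default-class annotation is the standard way to mark a default
class; the file name is illustrative):

--> create the storage class : kubectl apply -f storageclass.yaml ; kubectl get storageclass
--> mark it as the default class : kubectl patch storageclass standard \
-p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'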

- Capacity:Generally, a PV will have specific storage capacity. This is set


using the PV's capacity attribute.Currently, storage size is the only resource that
can be set or requested.
- Provisioner:Storage classes have a provisioner that determines what
volume plugin is used for provisioning PVs.
- Reclaim Policy: It can be either Delete or Retain. Default is Delete.
- Volume Binding Mode:The volumeBindingMode field controls when volume
binding and dynamic provisioning should occur. Immediate is default and specifying
the WaitForFirstConsumer mode.

- The following plugins support WaitForFirstConsumer with dynamic
provisioning:
    - AWSElasticBlockStore
    - GCEPersistentDisk
    - AzureDisk
- Access Modes: PersistentVolumes support the following access modes:
    - ReadWriteOnce: The volume can be mounted as read-write by a single node.
    - ReadOnlyMany: The volume can be mounted read-only by many nodes.
    - ReadWriteMany: The volume can be mounted as read-write by many nodes.
      Note that PersistentVolumes backed by Compute Engine persistent disks
      do not support the ReadWriteMany access mode.
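
As a sketch of the Volume Binding Mode behaviour described above (the class name
"regional-slow" and the gce-pd parameters are illustrative, not part of this course),
the following StorageClass delays volume binding until a consuming Pod is scheduled:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-slow             # hypothetical name, for illustration only
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard               # assumed gce-pd disk type
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer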

===================================================================================
==========================================================

2.Persistent Volume Claims :
----------------------------
- A persistent volume claim (PVC) is a request for storage by a user from a
PV. Claims can request specific size and access modes (e.g: they can be mounted
once read/write or many times read-only).
- If none of the static persistent volumes match the user's PVC request,
the cluster may attempt to dynamically create a PV that matches the PVC request
based on storage class.

List PVs:
---------
kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-volume   200Gi      RWO            Delete           Available           gcp-disk                6s

Persistent volume claim manifest:
---------------------------------
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  # Request storage from test-volume PV
  # which has defined storage class as "gcp-disk"
  storageClassName: gcp-disk
  resources:
    requests:
      storage: 200Gi

Create PVC:
-----------
kubectl create -f test-pvc.yaml
persistentvolumeclaim/test-pvc created

List PVCs:
----------
kubectl get pvc
NAME       STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    test-volume   200Gi      RWO            gcp-disk       7s

Use a PVC in a Pod:
-------------------
kind: Pod
apiVersion: v1
metadata:
  name: nginx-webserver
  labels:
    name: webserver
spec:
  containers:
    - name: webserver
      image: nginx
      ports:
        - containerPort: 80
          name: http
      volumeMounts:
        - mountPath: "/usr/local/nginx/html"
          name: app-data
  volumes:
    - name: app-data
      persistentVolumeClaim:
        claimName: test-pvc

Create PVC without a static PV:
-------------------------------
- You can create a PVC based on storage class specification. If you omit
the storage class, it will use the default storage class.

kubectl get storageclass
NAME                 PROVISIONER            AGE
standard (default)   kubernetes.io/gce-pd   29m

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wordpress-pvc
spec:
  accessModes:
    - ReadWriteOnce
  # You can omit storageClassName to use the default storage class
  storageClassName: standard
  resources:
    requests:
      storage: 300Gi

kubectl apply -f wordpress-pvc.yaml
persistentvolumeclaim/wordpress-pvc created

List dynamically created PVC and PV:
------------------------------------
kubectl get pvc wordpress-pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
wordpress-pvc   Bound    pvc-325160ee-fb3a-11e9-903e-42010a800149   300Gi      RWO            standard       19s

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
pvc-325160ee-fb3a-11e9-903e-42010a800149   300Gi      RWO            Delete           Bound    default/wordpress-pvc   standard                43s

Wordpress deployment with PVC:
------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - image: wordpress:latest
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wordpress-pvc

Volume Claim Template :
-----------------------
- volumeClaimTemplates is a list of claims that Pods are allowed to
reference. The StatefulSet controller is responsible for mapping network identities
to claims in a way that maintains the identity of a Pod.
- Every claim in this list must have at least one matching (by name)
volumeMount in one container in the template.
- You can also define a storage class to leverage dynamic provisioning of
persistent volumes, so you won't have to create them manually.

volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi

Wordpress StatefulSet with volume claim templates:
--------------------------------------------------
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
  serviceName: wordpress
  replicas: 3
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - image: wordpress:latest
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-data
              mountPath: /var/www/html
  volumeClaimTemplates:
    - metadata:
        name: wordpress-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi

===================================================================================

Advanced Topics:
================

1.Health checks :
-----------------
- Kubernetes provides a health-checking mechanism to verify whether a container
in a pod is working or not.
- Kubernetes gives you two types of health checks, performed by the
kubelet. They are:
- Liveness Probe
- Readiness Probe
a.Liveness Probe :
------------------
- Liveness probe checks the status of the container (whether it is running
or not).
- If livenessProbe fails, then the container is subjected to its restart
policy.
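
When a liveness probe fails and the container is restarted, you can see it on the pod
itself; a quick check (the pod name is a placeholder):

kubectl get pod <pod-name>        # the RESTARTS column increments after each liveness-triggered restart
kubectl describe pod <pod-name>   # the Events section typically shows "Liveness probe failed" messages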

Define a liveness command
-------------------------
livenessProbe:
  exec:
    command:
      - sh
      - /tmp/status_check.sh
  initialDelaySeconds: 10
  periodSeconds: 5

Define a liveness HTTP request
------------------------------
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 3

Define a TCP liveness probe
---------------------------
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20

b.Readiness Probe :
-------------------
- Readiness probe checks whether your application is ready to serve the
requests.
- When the readiness probe fails, the pod's IP is removed from the endpoint
list of the service.
- There are three types of probe actions the kubelet performs, which are:
    - Executes a command inside the container
    - Checks the state of a particular port on the container
    - Performs a GET request on the container's IP

- Readiness probes are configured similarly to liveness probes.
- The only difference is that you use the readinessProbe field instead of
the livenessProbe field.

Define readiness probe
----------------------
readinessProbe:
  exec:
    command:
      - sh
      - /tmp/status_check.sh
  initialDelaySeconds: 5
  periodSeconds: 5

Configure Probes
----------------
Probes have a number of fields that you can use to more precisely control the
behavior of liveness and readiness checks:

initialDelaySeconds: Number of seconds after the container has started before
liveness or readiness probes are initiated.
Defaults to 0 seconds. Minimum value is 0.

periodSeconds: How often (in seconds) to perform the probe.
Defaults to 10 seconds. Minimum value is 1.

timeoutSeconds: Number of seconds after which the probe times out.
Defaults to 1 second. Minimum value is 1.

successThreshold: Minimum consecutive successes for the probe to be considered
successful after having failed.
Defaults to 1. Must be 1 for liveness. Minimum value is 1.

failureThreshold: Number of consecutive failures after which the probe is considered
failed. For a liveness probe this results in restarting the container; for a readiness
probe the Pod is marked Unready.
Defaults to 3. Minimum value is 1.
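
A sketch combining these fields (all values, the /health path and port 8080 are
illustrative, not prescribed here): the probe below waits 10 seconds after start,
checks every 5 seconds, times out after 2 seconds, and restarts the container after
3 consecutive failures.

livenessProbe:
  httpGet:
    path: /health               # assumed health endpoint
    port: 8080                  # assumed application port
  initialDelaySeconds: 10
  periodSeconds: 5
  timeoutSeconds: 2
  failureThreshold: 3
  successThreshold: 1           # must be 1 for liveness probes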

Nginx deployment
================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-webserver
  labels:
    app: webserver
spec:
  replicas: 1
  selector:            # required for apps/v1 Deployments
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 3
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 3

httpGet has additional fields that can be set:

- path: Path to access on the HTTP server.
- port: Name or number of the port to access the container. Number must be
in the range 1 to 65535.
- host: Hostname to connect to, defaults to the pod IP. You probably want
to set "Host" in httpHeaders instead.
- httpHeaders: Custom headers to set in the request. HTTP allows repeated
headers.
- scheme: Scheme to use for connecting to the host (HTTP or HTTPS).
Defaults to HTTP.
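
A sketch using some of these httpGet fields (the endpoint, port and header are
illustrative):

livenessProbe:
  httpGet:
    path: /health                 # assumed endpoint
    port: 8080                    # assumed application port
    scheme: HTTP
    httpHeaders:
      - name: X-Probe-Source      # hypothetical custom header
        value: kubelet
  initialDelaySeconds: 5
  periodSeconds: 3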

===================================================================================
===========================

2.Resource Limits :
-------------------
- When Kubernetes schedules a Pod, the containers must have enough
resources to run.
- If a pod is scheduled on a node with limited resources, it is possible for
the node to run out of memory or CPU and for things to stop working.
- It's also possible for applications to take up more resources than they
should due to a bad configuration, going out of control and using 100% of the
available CPU.
- You can solve these problems by specifying resource requests and limits.

Requests
--------
- When you specify a Pod, you can optionally specify how much CPU and
memory each container needs.
- Requests are what the container is guaranteed to get. When containers
have resource requests specified, the scheduler can make better decisions about
which nodes to place Pods on.
- Memory requests: Used for finding nodes with enough memory and
making better scheduling decisions.
- CPU requests: Maps to the docker flag --cpu-shares, which defines
a relative weight of that container for CPU time.

Limits
------
- Limits define the upper bound of resources a container can use. The
container is only allowed to go up to the limit, and then it is restricted.
- Limits must always be greater or equal to requests. The behavior differs
between CPU and memory.
- Memory limits: Maps to the docker flag --memory, which means
processes in the container get killed by the kernel if they hit that memory usage
(OOMKilled).
- CPU limits: Maps to the docker flag --cpu-quota, which limits the
CPU time of that container's processes.
- A typical Pod spec for resources might look something like this.

containers:
  - name: database
    image: mysql
    env:
      - name: MYSQL_ROOT_PASSWORD
        value: "Ss&*@UES"
    resources:
      requests:
        memory: 64Mi
        cpu: 250m
      limits:
        memory: 128Mi
        cpu: 500m
  - name: frontend
    image: wordpress
    resources:
      requests:
        memory: 64Mi
        cpu: 250m
      limits:
        memory: 128Mi
        cpu: 500m

Notes:
------
CPU resources are defined in millicores. If your container needs one full core to
run, you would put the value "1000m".
If your container only needs 1/4 of a core, you would put a value of "250m".

Memory resources are defined in bytes. Normally, you give a mebibyte value for
memory, but you can give anything from bytes to petabytes.
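
A small sketch of these units (values are illustrative); the two memory notations
below describe the same quantity:

resources:
  requests:
    cpu: 250m             # a quarter of a core
    memory: 128Mi         # mebibytes; 128Mi equals 134217728 bytes
  limits:
    cpu: "1"              # one full core; equivalent to writing 1000m
    memory: "134217728"   # the same quantity as 128Mi, written in plain bytes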

Default Requests and Limits
---------------------------
- Kubernetes allows configuring default requests and limits for a
namespace.
- If a container is created in a namespace and the container does not
specify its own requests and limits, then the container is assigned the default
requests and limits.
- To establish default limits you create the LimitRange object in the
namespace.

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limit
spec:
  limits:
    - default:
        memory: 100Mi
        cpu: 100m
      defaultRequest:
        memory: 50Mi
        cpu: 50m
      type: Container

kubectl apply -f default-limit.yaml
limitrange/default-limit created
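
To see the defaults being applied, you can create a pod that declares no resources in
the same namespace as the LimitRange and inspect it (the pod name is illustrative);
the requests and limits should be filled in from the LimitRange:

kubectl run test-pod --image=nginx
kubectl get pod test-pod -o yaml | grep -A 6 "resources:"
# expected: default requests (50Mi / 50m) and default limits (100Mi / 100m)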

===================================================================================
=========================================

3.Resource Quotas :
-------------------
- When several teams of users share a Kubernetes cluster, it is typically a
requirement to divide the computing resources; otherwise, one team could use more
than its fair share of resources.
- Kubernetes namespaces help with this by creating logically isolated work
environments. But namespaces on their own do not enforce limitations / quotas.
- Resource quotas are a tool for administrators to address this concern.

a.Resource Quotas:
------------------
- A resource quota, defined by a ResourceQuota object, provides constraints
that limit aggregate resource consumption per namespace.
- It can limit the number of objects that can be created in a namespace by
type, as well as the total amount of computing resources and storage that may be
consumed by resources in that namespace.

Compute Resource Quota:
-----------------------
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: project-a
spec:
  hard:
    pods: 10
    requests.cpu: 1
    requests.memory: 1Gi
    limits.cpu: 2
    limits.memory: 2Gi

Object Resource Quota
---------------------
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: project-a
spec:
  hard:
    configmaps: 10
    replicationcontrollers: 20
    secrets: 10
    services: 10
    services.loadbalancers: 2

Storage Resource Quota
----------------------
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-consumption
  namespace: project-a
spec:
  hard:
    persistentvolumeclaims: 10
    requests.storage: 50Gi

Note:
-----
Resource Quota objects are independent of the Cluster Capacity. They are expressed
in absolute units.
So, if you add nodes to your cluster, this does not automatically give each
namespace the ability to consume more resources.
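
Once a quota is applied, you can check how much of it a namespace has already
consumed (assuming the compute quota above was saved as compute-resources.yaml):

kubectl apply -f compute-resources.yaml
kubectl describe resourcequota compute-resources -n project-a
# shows each hard limit alongside the amount currently used in the namespace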

===================================================================================
========================================
4.Kubernetes Autoscaling :
--------------------------
- Autoscaling is one of the key features of a Kubernetes cluster: it
automatically scales the number of pods based on load.
- This is achieved via a Kubernetes resource called Horizontal Pod
Autoscaler (HPA).
- The Horizontal Pod Autoscaler automatically scales the number of pods in
a deployment, statefulset or replica set based on observed metrics such as average
CPU utilization, average memory utilization, or any other custom metric you
specify.

Note: Horizontal Pod Autoscaling does not apply to objects that can't be scaled,
for example, DaemonSets.
--> Create HPA using kubectl autoscale command:

kubectl autoscale deployment webapp \
  --cpu-percent=70 \
  --min=1 \
  --max=5

horizontalpodautoscaler.autoscaling/webapp autoscaled

--> Check the status of autoscaler

kubectl get hpa
NAME     REFERENCE           TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
webapp   Deployment/webapp   0%/70%    1         5         1          110s
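
The HPA needs resource metrics to act on; these come from the metrics API, commonly
provided by metrics-server. A quick way to confirm metrics are being collected before
relying on the autoscaler:

kubectl top nodes
kubectl top pods
# if these report CPU/memory usage, the metrics API is available to the HPA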

Declarative HPA:
----------------
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70

kubectl apply -f webapp-hpa.yaml

- Next, see how the autoscaler reacts to increased load. To do this, create
a different Pod to run in an infinite loop, sending queries to the webapp service.

kubectl run -i --tty load-generator --rm \
  --image=busybox:1.28 \
  --restart=Never \
  -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://webapp; done"

- Open a new shell or command window and watch the HPA during the load and after
the load is stopped (Ctrl + C).

kubectl get hpa webapp --watch
NAME     REFERENCE           TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
webapp   Deployment/webapp   0%/70%     1         5         1          55m
webapp   Deployment/webapp   249%/70%   1         5         1          56m
webapp   Deployment/webapp   85%/70%    1         5         4          57m
webapp   Deployment/webapp   62%/70%    1         5         5          58m
webapp   Deployment/webapp   19%/70%    1         5         5          59m
webapp   Deployment/webapp   0%/70%     1         5         5          60m
webapp   Deployment/webapp   0%/70%     1         5         5          64m
webapp   Deployment/webapp   0%/70%     1         5         2          65m
webapp   Deployment/webapp   0%/70%     1         5         1          65m

Autoscaling on multiple metrics:
--------------------------------
- By using autoscaling/v2 API version, we can add multiple metrics for
autoscaling.
- Declarative HPA with v2:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: AverageValue   # required when averageValue is used instead of averageUtilization
          averageValue: 10Mi

- You can specify resource metrics in terms of direct values, using the
target.averageValue field instead of target.averageUtilization.
- There are two other types of metrics, both of which are considered custom
metrics:
- pod metrics
- object metrics
- These metrics may have names which are cluster specific, and require a
more advanced cluster monitoring setup.

Pod metrics look like below:
----------------------------
type: Pods
pods:
  metric:
    name: packets-per-second
  target:
    type: AverageValue
    averageValue: 2k

Object metrics look like below:
-------------------------------
type: Object
object:
  metric:
    name: requests-per-second
  describedObject:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    name: main-route
  target:
    type: Value
    value: 2k

===================================================================================
================================

5. Kubernetes RBAC :
--------------------
- Kubernetes includes a built-in role-based access control (RBAC) mechanism
that allows you to regulate access to Kubernetes objects or resources based on the
roles of individual users.
- RBAC uses the rbac.authorization.k8s.io API Group to drive authorization
decisions, allowing admins to dynamically configure policies through the Kubernetes
API.
- RBAC is a stable feature from Kubernetes 1.8 and it is enabled by
default.
- The RBAC model in Kubernetes is based on three elements:
- Roles or ClusterRole: definition of the permissions for each
Kubernetes resource type
- Subjects: users (human or machine users) or groups of users
- RoleBindings or ClusterRoleBindings: definition of what Subjects
have which Roles

Default Roles and Role Bindings:
--------------------------------
- API servers create a set of default ClusterRole and ClusterRoleBinding
objects. Modifications to these resources can result in non-functional clusters.
- Many of these are system: prefixed, which indicates that the resource is
"owned" by the infrastructure.
- All of the default cluster roles and rolebindings are labeled with
kubernetes.io/bootstrapping=rbac-defaults

List roles and cluster roles:
-----------------------------
kubectl get roles
kubectl get clusterroles

List rolebindings and clusterrolebindings
-----------------------------------------
kubectl get rolebindings
kubectl get clusterrolebindings
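
Since the defaults carry the kubernetes.io/bootstrapping=rbac-defaults label mentioned
above, you can list only the bootstrapped objects with a label selector:

kubectl get clusterroles -l kubernetes.io/bootstrapping=rbac-defaults
kubectl get clusterrolebindings -l kubernetes.io/bootstrapping=rbac-defaults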

a.Roles or ClusterRole:
-----------------------
- A Role can only be used to grant access to resources within a single
namespace, while a ClusterRole defines access to resources in the entire cluster.
- Define a role to be used to grant read access to pods in the default
namespace:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]

- Define a cluster role to be used to grant read access to pods cluster-wide:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # namespace omitted since ClusterRoles are not namespaced
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]

apiGroups: List of API groups to allow for users or groups. Example: apps, extensions,
batch, etc.
If you specify "" (an empty string) as the API group, it indicates the core API group.
resources: List of resources to allow users or groups to access.
Example: pods, nodes, services, configmaps, deployments, and PVCs, etc.

verbs: List of operations allowed over these resources:
    create
    get
    delete
    list
    update
    edit
    watch
    exec

b.RoleBinding and ClusterRoleBinding:
-------------------------------------
- A RoleBinding or ClusterRoleBinding grants the permissions defined in a
role to a list of subjects (users, groups, or service accounts).
- Permissions can be granted within a namespace with a RoleBinding, or
cluster-wide with a ClusterRoleBinding.
- Bind the user "mark" to the Role created above named "pod-reader":

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: mark
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
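
After the binding is applied, an administrator can verify what the user is allowed to
do with kubectl auth can-i, impersonating the user:

kubectl auth can-i list pods --namespace default --as mark     # yes
kubectl auth can-i delete pods --namespace default --as mark   # no - the role only grants get/watch/list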

c.ClusterRoleBinding:
---------------------
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods
subjects:
  - kind: User
    name: mark
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Bind the frontend-admins group:
-------------------------------
subjects:
  - kind: Group
    name: frontend-admins
    apiGroup: rbac.authorization.k8s.io

Bind the ServiceAccount "example-sa" to the Role
------------------------------------------------
subjects:
  - kind: ServiceAccount
    name: example-sa
    namespace: mynamespace # ServiceAccount subjects always need a namespace
    apiGroup: ""           # ServiceAccounts belong to the core API group

ServiceAccount :
----------------
- Kubernetes enables access control for pods by providing service accounts.
A service account provides an identity for processes that run in a Pod.
- When a process is authenticated through a service account, it can contact
the API server and access cluster resources.
- Every namespace has a default service account resource called default.

kubectl get serviceaccounts
NAME      SECRETS   AGE
default   1         1d

- You can create additional ServiceAccount objects like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa

kubectl create -f example-sa.yaml
serviceaccount/example-sa created

Add ServiceAccount to Pod spec:
-------------------------------
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: example-sa
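
To actually grant this service account some permissions, you can bind it to the
pod-reader Role imperatively (assuming example-sa lives in the default namespace,
where the pod-reader Role from earlier was created; the binding name is illustrative):

kubectl create rolebinding example-sa-read-pods \
  --role=pod-reader \
  --serviceaccount=default:example-sa \
  --namespace=default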

Bind all the service accounts in a namespace:
---------------------------------------------
subjects:
  - kind: Group
    name: system:serviceaccounts:my-namespace
    apiGroup: rbac.authorization.k8s.io

Bind all the service accounts (all namespaces):
-----------------------------------------------
subjects:
  - kind: Group
    name: system:serviceaccounts
    apiGroup: rbac.authorization.k8s.io

===================================================================================
===================================
