k8s1 PDF
==========
Definitions :
- Kubernetes is commonly stylized as k8s.
- k8s is an open source orchestration tool developed by Google for managing
microservices or containerized applications across a distributed cluster of nodes.
or
- k8s is an open source container orchestration system for automating application
deployment, scaling and management.
- k8s grew out of the Borg and Omega projects at Google, which Google has used to
orchestrate its datacenters since 2003.
- Google open sourced Kubernetes in 2014.
K8S architecture :
------------------
- k8s follows a client-server architecture.
- A Kubernetes cluster consists of at least one master and multiple worker nodes
(or minions).
- It's also possible to set up a multi-master configuration for high availability.
By default, there is a single master node which acts as the controlling node and
point of contact.
- Master Node :
- the server that controls the cluster.
- it has all the components and services that manage, plan, schedule and
monitor all the worker nodes.
- The master node consists of components such as the api-server,
controller-manager, etcd and scheduler.
- Worker Node :
- the server that hosts the applications as pods and containers.
- A worker node consists of components such as Docker, kubelet and kube-proxy.
Master Components :
-------------------
- Api-server :
- The api-server is the primary component of k8s and is responsible for
orchestrating all operations in the cluster.
- it serves as the front end to the cluster.
- This is the only component that communicates with the etcd cluster, making
sure data is stored in etcd and is in agreement with the service details of the
deployed pods.
- kubeconfig is a package along with the server side tools that can be used
for communication. it exposes the kubernetes-api.
- Controller-manager :
- it is responsible for most of the controllers that regulate the state of the
cluster and perform routine tasks.
- when a change in a service configuration occurs, the controller spots the
change and starts working towards the new desired state.
- Kube-scheduler :
- it is responsible for distributing (scheduling) workloads across the various
worker nodes based on resource utilization.
- etcd cluster :
- it is an open source, highly available, distributed key-value store which is
used to store the k8s cluster data, API objects and service discovery details.
- it is accessible only by the kubernetes api-server for security reasons. etcd
enables notifications to the cluster about configuration changes with the help
of watchers.
- Notifications are API requests on each etcd cluster node that trigger the
update of information in the node's storage.
- Kubelet :
- it is the main service on a node, providing the connection between the master
and the node.
- it ensures that pods and their containers are healthy and running in the
desired state.
- it also reports to the master on the health of the host where it is running.
- kube-proxy :
- it is a proxy service that runs on each worker node and helps in making
services available to external hosts.
- it forwards requests to the correct pods/containers across the various
isolated networks in a cluster.
- it manages pods on the node, volumes, secrets, creation of new containers,
health checkups, etc.
Kubectl :
- kubectl is a command line interface that interacts with the api-server and
sends commands to the master node.
- Each command is converted into an API call.
- pod :
- one or more containers that should be controlled as a single application.
- A pod is the smallest and simplest unit that you create or deploy in k8s.
- A pod represents a single instance of an application in k8s.
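As a sketch of the points above, a minimal single-container pod manifest might look like this (the name and image are illustrative, not from the original notes):

```yaml
# Illustrative minimal Pod: one container running nginx.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod          # hypothetical name
spec:
  containers:
  - name: web
    image: nginx        # any container image works here
    ports:
    - containerPort: 80
```

Applying it with kubectl apply -f pod.yaml creates the pod; kubectl get pods then shows which lifecycle phase it is in.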
Pod Lifecycle :
---------------
Pending :
- The pod has been accepted by the kubernetes system, but one or more of the
container images has not yet been created.
- This includes time before being scheduled as well as time spent downloading
images over the network, which could take a while.
Running :
- The pod has been bound to a node, and all of the containers have been
created.
- At least one container is still running, or is in the process of starting or
restarting.
Succeeded :
- All the containers in the pod have terminated in success, and will not be
restarted.
Failed :
- All the containers in the pod have terminated, and at least one container has
terminated in failure.
- That is, the container either exited with a non-zero status or was terminated
by the system.
Unknown :
- For some reason the state of the pod could not be obtained, typically due to
an error in communicating with the host of the pod.
Completed :
- The pod has run to completion as there is nothing to keep it running,
eg: completed jobs.
CrashLoopBackOff :
- This means that one of the containers in the pod has exited unexpectedly,
perhaps with a non-zero error code, even after restarting due to the
restartPolicy.
Multi-container pod :
---------------------
- pods are designed to support multiple containers. The containers in a pod are
automatically co-located and co-scheduled on the same node in the cluster.
- The containers can share resources and dependencies, communicate with one
another, and coordinate when and how they are terminated.
- The 'one container per pod' model is the most common use case, and k8s
manages the pod rather than the containers directly.
Multi-container pod :
---------------------
apiVersion: v1
kind: Pod
metadata:
  name: nginx-web
spec:
  volumes:
  - name: shared-files
    emptyDir: {}
  - name: nginx-config-volume
    configMap:
      name: nginx-config
  containers:
  - name: app
    image: php-app:1.0
    volumeMounts:
    - name: shared-files
      mountPath: /var/www/html
    - name: nginx-config-volume
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
init containers :
-----------------
- A pod can also have one or more init containers, which are run before the
application containers are started.
- init containers always run to completion.
- Each init container must complete successfully before the next one starts.
init containers :
-----------------
apiVersion: v1
kind: Pod
metadata:
  name: wordpress
spec:
  containers:
  - image: wordpress:latest
    name: wordpress
    ports:
    - containerPort: 80
      name: wordpress
  initContainers:
  - name: change-permission
    image: busybox
    command: ['sh', '-c', 'chown www-data:www-data /var/www/html && chmod -R 755 /var/www/html']
- Workloads
- workloads are controller objects that set deployment rules for pods.
- Types of workloads
- The most popular types supported by kubernetes are :
- Deployments
- Daemonsets
- Statefulsets
- Replica sets
- jobs
- Cronjobs
- Deployments:
- The Deployment controller provides declarative updates for pods and manages
stateless applications running on your cluster.
- A Deployment represents a set of multiple, identical pods and upgrades them
in a controlled way, performing a rolling update by default.
- A Deployment runs multiple replicas of your application and automatically
replaces any instances that fail or become unresponsive.
- In this way, Deployments ensure that one or more instances of your
application are available to serve user requests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
1.Pod Template :
----------------
- The .spec.template is the only required field of the .spec.
- The .spec.template is a pod template.
Pod Template :
--------------
spec:
  template:
    metadata:
      labels:
        app: frontend
2.Restart Policy :
------------------
- only a .spec.template.spec.restartPolicy equal to Always is allowed, which is
the default if not specified.
Restart Policy :
----------------
spec:
  template:
    metadata:
      labels:
        app: frontend
    spec:
      restartPolicy: Always
      containers:
3.Replicas :
------------
- .spec.replicas is an optional field that specifies the number of desired
pods. it defaults to 1.
Replicas :
----------
spec:
  replicas: 3
4.selector :
------------
- .spec.selector is an optional field that specifies a label selector for the
pods targeted by the deployment.
selector :
----------
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
5.Deployment strategy:
----------------------
- .spec.strategy specifies the strategy used to replace old pods with new ones.
- .spec.strategy.type can be "Recreate" or "RollingUpdate". "RollingUpdate" is
the default value.
strategy:
---------
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
- DaemonSets
------------
- Like other controllers, DaemonSets manage groups of replicated pods.
- However, DaemonSet ensures that all or selected worker Nodes run a copy
of a pod (one-pod-per-node).
- As you add nodes, DaemonSets automatically add pods to the new nodes. As
nodes are removed from the cluster, those pods are garbage collected.
Manifest of DaemonSet:
----------------------
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluentd:latest
DaemonSets uses:
----------------
- To run a daemon for "cluster storage" on each node, such as 'glusterd'
- To run a daemon for "log collection" on each node, such as 'logstash'
- To run a daemon for "node monitoring" on each node, such as 'collectd'
- StatefulSets :
----------------
- StatefulSets represent the set of pods with unique, persistent identities
and stable hostnames.
- It provides guarantees about the ordering of deployment and scaling.
- StatefulSets are valuable for applications that require one or more of the
following:
- stable,unique network identifiers
- stable, persistent storage
- Ordered, graceful deployment and scaling
- Ordered, graceful deletion and termination
statefulset components
- A Headless service
- A StatefulSet
- A PersistentVolume
StatefulSets :
--------------
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: myclaim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
Create and manage StatefulSets
------------------------------
- ReplicaSets :
---------------
- A ReplicaSet's purpose is to run a specified number of pods at any given
time.
- While ReplicaSets can be used independently, today they are mainly used by
Deployments as a mechanism to orchestrate pod creation, deletion and updates.
Manifests of ReplicaSets:
-------------------------
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp
  labels:
    app: webapp
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: webapp
        image: webapp:2.0
- jobs :
--------
- you might also need to run large computation or batch processing workloads in
your cluster. For this, the Job controller is useful.
- A Job creates one or more pods running in parallel. you can specify how many
pods need to run to completion for the Job to be considered complete.
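As an illustrative sketch of the points above (the name is hypothetical; the pi computation is a common example workload):

```yaml
# Illustrative Job: completions/parallelism control how many pods must
# finish successfully and how many run at once.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job             # hypothetical name
spec:
  completions: 1
  parallelism: 1
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never  # a Job's pod template requires Never or OnFailure
```

kubectl get jobs shows the Job's completion status once the pod exits successfully.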
- Cronjobs :
------------
- A CronJob creates Jobs on a time-based schedule.
- A CronJob object is just like an entry in crontab on unix/linux.
- it runs a Job periodically on a given schedule.
- you need a working k8s cluster at version >=1.8 (for CronJob).
- For previous versions of the cluster (<1.8) you need to explicitly enable the
batch/v2alpha1 API
by passing --runtime-config=batch/v2alpha1=true to the api-server.
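A sketch of the cronjob.yaml that the commands below assume; the name matches the 'hello' CronJob they reference. On current clusters the apiVersion is batch/v1 (on the older 1.8-era clusters described above it would be batch/v2alpha1 or batch/v1beta1):

```yaml
# Illustrative CronJob named "hello": every minute, run a Job whose pod
# prints a greeting and exits.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"      # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["/bin/sh", "-c", "date; echo Hello from Kubernetes"]
          restartPolicy: OnFailure
```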
--> Create a Cron Job: kubectl create -f cronjob.yaml ; kubectl get cronjobs hello
; kubectl get jobs --watch
--> Get details of a cronjob: kubectl describe cronjob hello
--> Edit a cronjob : kubectl edit cronjob hello
--> Delete a cronjob : kubectl delete cronjob hello
===================================================================================
==================================================
- Metadata :
------------
- Metadata contains important information about kubernetes objects.
- There are many attributes that can be specified as metadata.
- But the following are the most commonly used attributes:
- name
- namespace
- labels
- annotations
1.metadata.name :
-----------------
- metadata.name is the only required string when creating or modifying k8s
objects
such as pods, Deployments, services, configs, volumes etc.
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-app
......
........
apiVersion: v1
kind: Service
metadata:
name: frontend-service
......
........
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
......
........
- Kubectl queries for objects by using their names.
--> kubectl get deployments frontend-app ; kubectl describe
deployments frontend-app
--> kubectl get service frontend-service ; kubectl describe service
frontend-service
--> kubectl get configmap nginx-config ; kubectl describe configmap
nginx-config
2.metadata.namespace :
----------------------
- Each k8s object is scoped to a namespace.
- The metadata.namespace attribute specifies which namespace the object belongs
to.
- k8s objects are uniquely identified within a namespace by their name.
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-app
namespace: development
......
........
apiVersion: v1
kind: Service
metadata:
name: frontend-service
namespace: development
......
........
3.metadata.labels :
-------------------
- labels are key/value pairs that are attached to kubernetes objects.
- labels are typically used to specify identifying attributes of an object,
or to select it as a member of some logical grouping of objects.
- Labels can be attached to objects at creation time and subsequently added and
modified at any time.
- Each object can have a set of key/value labels defined.
- Each key must be unique for a given object.
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-app
namespace: development
labels:
tier: frontend
env: development
release: release-2
version: v1.8
......
........
4.metadata.annotations :
------------------------
- Annotations are used to attach arbitrary non-identifying metadata to
objects.
- Annotations are also key/value pairs that can be used by external tools and
libraries.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-app-ingress
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: 'letsencrypt-prod'
ingress.kubernetes.io/force-ssl-redirect: 'true'
- Selectors:
------------
- Via a selector, the client/user can identify a set of objects.
- The selector is the core grouping primitive in k8s.
- The API currently supports two types of selectors:
- Equality-based
- Set-based
- Equality-based : Equality-based selectors allow filtering by key and value.
The supported operators are =, ==, !=.
- Set-based: set-based selectors allow filtering keys according to a set of
values.
The supported operators are in, notin and exists.
Create a Deployment:
--------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-frontend
  labels:
    app: website
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: website
      tier: frontend
  template:
    metadata:
      labels:
        app: website
        tier: frontend
    spec:
      containers:
      - name: frontend-website
        image: learninghub/website:1.0
        ports:
        - containerPort: 80
Create a pod:
-------------
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: webserver
    tier: frontend
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
Equality-based:
--> get all the web server pods: kubectl get pods -l app=webserver
--> get all the frontend pods but not web servers: kubectl get pods -l
tier=frontend,app!=webserver
set-based :
--> Get all frontend pods but not website: kubectl get pods -l 'tier in
(frontend),app notin (website)'
Deployment metadata:
metadata:
  labels:
    app: nginx
    tier: frontend
    stage: production
Service Manifest:
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
    stage: production
  ports:
  - port: 80
    protocol: TCP
- Node Selector:
----------------
- nodeSelector is the simplest recommended form of node selection constraint.
- nodeSelector is a field of the PodSpec. it specifies a map of key-value
pairs.
Pod Manifest:
-------------
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    env: prod
if you are using a workload controller for your application, you have to specify
the nodeSelector in the pod template (spec.template.spec.nodeSelector):
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
      tier: frontend
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
    spec:
      containers:
      - name: webapp
        image: webapp:1.0
        ports:
        - containerPort: 80
      nodeSelector:
        env: prod
===================================================================================
===========================
- Services :
-------------
- A kubernetes Service is an abstract way to access an application running on a
set of pods.
- The set of pods targeted by a Service is determined by a Label Selector.
- Services provide features that are standardized across the cluster:
- loadbalancing
- service discovery between applications
- features to support zero-downtime application deployments.
- Types in the service spec:
- ClusterIP
- NodePort
- LoadBalancer
create a service :
- we can create a service in two ways: using the kubectl expose command, or
declaratively using yaml/json files.
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: webapp
    tier: frontend
  ports:
  - port: 80
    name: web
    protocol: TCP
    targetPort: 9000
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: backend
    tier: backend
  ports:
  - port: 80
    name: web
    protocol: TCP
    targetPort: 9000
  - port: 9090
    name: api
    protocol: TCP
    targetPort: 9090
--> Delete the service : kubectl delete service app-frontend ; kubectl delete -f
app-service.yaml
1. ClusterIP :
--------------
- ClusterIP is the default service type.
- ClusterIP exposes the service on a cluster-internal IP.
- choosing this value makes the service only reachable from within the
cluster.
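A minimal ClusterIP service sketch (the name is illustrative; the type line is optional since ClusterIP is the default):

```yaml
# Illustrative ClusterIP service: reachable only inside the cluster.
kind: Service
apiVersion: v1
metadata:
  name: internal-service   # hypothetical name
spec:
  type: ClusterIP          # optional; ClusterIP is the default
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 9000
```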
port forward:
-------------
- The kubectl port-forward command allows you to access the application from
your local computer.
- it forwards connections from a local port to a port on a pod.
- it is very useful for testing/debugging purposes, so you can access your
service locally without exposing it externally.
2.NodePort :
------------
- A NodePort service is the most basic way to get external traffic directly to
your service.
- NodePort opens a specific port on your node/vm, and when that port gets
traffic, that traffic is forwarded directly to the service.
- For a NodePort service, k8s allocates a port from a configured range
(default 30000-32767) and each node forwards that port, which is the same on
each node, to the service.
- it is possible to define a specific port number, but you should take care to
avoid potential port conflicts.
NodePort service.yaml:
----------------------
kind: Service
apiVersion: v1
metadata:
  name: backend-service
spec:
  selector:
    app: backend
    tier: backend
  type: NodePort
  ports:
  - port: 80
    name: backend
    protocol: TCP
    targetPort: 8080
Note: There are a few limitations, and hence it's not advised to use the NodePort
service type:
- only one service per port.
- you can only use ports 30000-32767.
- Dealing with changing node/vm IPs is difficult.
3.LoadBalancer :
----------------
- A LoadBalancer service is the standard way to expose a service to the
internet (the outside world).
- setting the type field to LoadBalancer will provision a load balancer for
your service.
- On AWS, this will create an ELB with a DNS name that will forward all traffic
to your service.
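A sketch of a LoadBalancer service (the name and ports are illustrative):

```yaml
# Illustrative LoadBalancer service: the cloud provider provisions an
# external load balancer (e.g. an ELB on AWS) pointing at the service.
kind: Service
apiVersion: v1
metadata:
  name: frontend-lb        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: webapp
    tier: frontend
  ports:
  - port: 80
    targetPort: 8080
```

After creation, kubectl get service frontend-lb shows the external address assigned by the cloud provider.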
4.ExternalName :
----------------
- An ExternalName service is a special service that doesn't have selectors and
uses a DNS name instead.
- This requires version 1.7 or higher of kube-dns.
- The easier and right way to access external services from your pods is to
create an ExternalName service.
- say you have an external database, like an AWS RDS instance hosted by Amazon,
and you want your application to use the hostname 'database',
which will redirect it to the AWS RDS instance.
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  type: ExternalName
  externalName: mysql-instance.123456789012.us-east-1.rds.amazonaws.com
- if you don't have a domain name or need to do port remapping, simply add the
IP addresses to an Endpoints object and use that instead.
apiVersion: v1
kind: Endpoints
metadata:
  name: database
subsets:
- addresses:
  - ip: 33.134.23.105
  ports:
  - port: 3306
    name: mysql
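For that Endpoints object to be usable, a Service with the same name and no selector is needed; a sketch:

```yaml
# Illustrative companion Service: no selector, so kubernetes does not
# manage its Endpoints; traffic goes to the manually listed addresses.
apiVersion: v1
kind: Service
metadata:
  name: database   # must match the Endpoints object's name
spec:
  ports:
  - port: 3306
    name: mysql
```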
===================================================================================
======================================================
- kubernetes Ingress :
----------------------
- k8s has a built-in configuration object for HTTP load balancing called
Ingress.
- it defines rules for external connectivity to the pods represented by one or
more k8s services.
- Ingress provides SSL termination and name-based virtual hosting.
- The traffic routing is controlled by rules defined on the Ingress resource.
Ingress rules :
---------------
- Each http rule contains an optional host, and a list of paths, each of which
has an associated backend defined with a serviceName and servicePort.
- If the traffic path does not match any rule, then the traffic is sent to the
default backend.
Default backend:
----------------
- The default backend is typically a configuration option of the ingress
controller and is not specified in your Ingress resources.
- if none of the hosts or paths match the HTTP request in the Ingress objects,
the traffic is routed to your default backend.
- Types of Ingress:
-------------------
1. single Service Ingress :
===========================
- it doesn't have any rules and it sends traffic to a single service. you can
use this to create a default backend with no rules.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  defaultBackend:
    service:
      name: frontend-service
      port:
        number: 80
2.simple fanout :
=================
- A fanout configuration route traffic to more than one service, based on
the HTTP URL being requested.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
spec:
  ingressClassName: nginx
  rules:
  - host: shopping.example.com
    http:
      paths:
      - path: /clothes
        pathType: Prefix
        backend:
          service:
            name: clothes-service
            port:
              number: 8080
      - path: /kitchen
        pathType: Prefix
        backend:
          service:
            name: kitchen-service
            port:
              number: 8081
3.Name-based virtual hosting :
==============================
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: shopping.example.com
    http:
      paths:
      - path: /clothes
        pathType: Prefix
        backend:
          service:
            name: clothes-service
            port:
              number: 8080
      - path: /kitchen
        pathType: Prefix
        backend:
          service:
            name: kitchen-service
            port:
              number: 8081
  - host: music.example.com
    http:
      paths:
      - path: /fr
        pathType: Prefix
        backend:
          service:
            name: french-service
            port:
              number: 9090
      - path: /en
        pathType: Prefix
        backend:
          service:
            name: english-service
            port:
              number: 9091
- Ingress Controller :
======================
- In order for the Ingress resource to work, the kubernetes cluster must have
an ingress controller running.
- Unlike other controllers, which run as part of the kube-controller-manager
and are started automatically with a cluster, ingress controllers must be
deployed separately.
- There are many ingress controller implementations; choose the one that best
fits your cluster.
Ex. Ingress with the Nginx ingress controller:
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
- if you do not define an ingress class, your cloud provider will use a default
ingress provider.
- SSL/TLS Certificates:
=======================
- you can secure an application running on k8s by creating a secret that
contains a TLS (transport layer security) private key and certificate.
- Currently, Ingress supports a single TLS port, 443, and assumes TLS
termination.
- The TLS secret must contain keys named tls.crt and tls.key, which contain the
certificate and private key to use for TLS.
using kubectl:
$ kubectl create secret tls my-tls-secret \
    --key < private key filename > \
    --cert < certificate filename >
apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: my-tls-secret
  namespace: default
type: kubernetes.io/tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - mydomain.com
    secretName: my-tls-secret
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
SSL-Let's Encrypt :
===================
- Let's Encrypt is a free, automated and non-profit certificate authority.
- The certificates provided by Let's Encrypt are valid for 90 days at no
charge, and you can renew them at any time.
- The certificate generation and renewal can be automated using certbot and
cert-manager (for k8s).
Cert-manager:
-------------
- cert-manager is a kubernetes tool that issues certificates from various
certificate issuers, including Let's Encrypt.
issuers:
--------
- Issuers (and ClusterIssuers) represent a certificate authority from which
signed x509 certificates can be obtained, such as Let's Encrypt.
- you will need at least one Issuer or ClusterIssuer in order to begin issuing
certificates within your cluster.
- An Issuer is a namespaced resource; you will need to create an Issuer in each
namespace you wish to obtain certificates in.
- if you want to create a single issuer that can be consumed in multiple
namespaces, you should consider creating a ClusterIssuer resource.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: < your-name@domain.com >
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - app.mydomain.com
    secretName: app-mydomain-com
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
- once the ingress is created, there should be a TLS secret and certificate
created.
- if all goes well, you will be able to see the site over a secure TLS
connection, and you don't have to worry about renewal either.
===================================================================================
================================================================================
Configurations:
--------------
1. secrets :
============
- Kubernetes secrets are objects which store sensitive data, such as passwords,
OAuth tokens and SSH keys, in your clusters.
- Using secrets gives you more flexibility in a pod's lifecycle definition and
control over how sensitive data is used.
- it reduces the risk of exposing the data to unauthorized users.
- secrets are namespaced objects.
- secrets can be mounted as data volumes or environment variables to be used by
a container in a pod.
- The API server stores secrets as base64-encoded plain text in etcd; they are
not encrypted by default.
- There is a per-secret size limit of 1 MB.
Create a secret :
-----------------
Create username.txt and password.txt files, then create the secret from them:
kubectl create secret generic db-creds \
    --from-file=username.txt --from-file=password.txt
List secret:
------------
kubectl get secret/db-creds
NAME       TYPE    DATA  AGE
db-creds   Opaque  2     26s
View secret:
------------
kubectl describe secret/db-creds
Name:        db-creds
Namespace:   default
Labels:      < none >
Annotations: < none >
Type:        Opaque
Data
====
password.txt: 11 bytes
username.txt: 4 bytes
View secret:
------------
kubectl get secret/database-creds -o yaml
apiVersion: v1
data:
password: TXEyRCMoOGdmMDk=
username: cm9vdA==
kind: Secret
metadata:
creationTimestamp: 2019-02-25T06:22:37Z
name: database-creds
namespace: default
resourceVersion: "2657"
selfLink: /api/v1/namespaces/default/secrets/database-creds
uid: bf0cef90-38c5-11e9-8c95-42010a800068
type: Opaque
Usage of Secrets
----------------
- A Secret can be used with your workloads in two ways:
- specify environment variables that reference the Secret's values
- mount a volume containing the Secret.
Environment variables:
----------------------
apiVersion: v1
kind: Pod
metadata:
  name: php-mysql-app
spec:
  containers:
  - name: php-app
    image: php:latest
    env:
    - name: MYSQL_USER
      valueFrom:
        secretKeyRef:
          name: database-creds
          key: username
    - name: MYSQL_PASSWORD
      valueFrom:
        secretKeyRef:
          name: database-creds
          key: password
Secret as Volume:
-----------------
apiVersion: v1
kind: Pod
metadata:
  name: redis-pod
spec:
  containers:
  - name: redis-pod
    image: redis
    volumeMounts:
    - name: dbcreds
      mountPath: "/etc/dbcreds"
      readOnly: true
  volumes:
  - name: dbcreds
    secret:
      secretName: database-creds
Additional Info :
-----------------
Secret creation syntax:
cat credentials.txt
username=admin
password=Ex67Hn*9#(jw
Using --from-literal flags:
kubectl create secret generic literal-token \
    --from-literal user=admin \
    --from-literal password="Ex67Hn*9#(jw"
===================================================================================
===============================================================
2. ConfigMaps :
===============
- ConfigMaps are Kubernetes objects that allow you to separate configuration
data/files from image content to keep containerized applications portable.
- ConfigMaps bind configuration files, command-line arguments, environment
variables, port numbers, and other configuration artifacts to your Pods'
containers and system components at run-time.
- ConfigMaps are very useful for storing and sharing non-sensitive, unencrypted
configuration information.
- Like Secrets, you can create ConfigMaps from files and with a yaml
declaration. We can use ConfigMaps by referring to them by name and as a volume.
Create a configmap:
-------------------
You can create configmaps from directories, files, or literal values using kubectl
create configmap.
$ cat app.properties
environment=production
logging=INFO
logs_path=$APP_HOME/logs/
parllel_jobs=3
wait_time=30sec
kubectl create configmap app-config \
    --from-file configs/app.properties
configmap "app-config" created
List configmap:
---------------
kubectl get configmap/app-config
NAME DATA AGE
app-config 1 1m
View configmap:
---------------
kubectl describe configmap/app-config
Name: app-config
Namespace: default
Labels: < none >
Annotations: < none >
Data
====
app.properties:
----
environment=production
logging=INFO
logs_path=$APP_HOME/logs/
parllel_jobs=3
wait_time=30sec
kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config
  namespace: default
data:
  app.properties: |
    environment=production
    logging=INFO
    logs_path=$APP_HOME/logs/
    parllel_jobs=3
    wait_time=30sec
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  nginx.conf: |
    events {
    }
    http {
      server {
        listen 80 default_server;
        listen [::]:80 default_server;
        root /var/www/html;
        server_name _;
        location / {
          try_files $uri $uri/ =404;
        }
        location ~ \.php$ {
          include fastcgi_params;
          fastcgi_param REQUEST_METHOD $request_method;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_pass 127.0.0.1:9000;
        }
      }
    }
Usage of ConfigMaps :
---------------------
- ConfigMaps can be used to populate individual environment variables as
shown below:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    env:
    - name: ENVIRONMENT
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: environment
    - name: LOG_PATH
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: logs_path
    - name: THREADS_COUNT
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: parllel_jobs
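Instead of referencing each key individually, all keys in a ConfigMap can be injected at once with envFrom; a sketch using the app-config ConfigMap above (the Pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-envfrom
spec:
  containers:
  - name: my-app
    image: my-app:latest
    # every key in app-config becomes an environment variable
    envFrom:
    - configMapRef:
        name: app-config
```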
- ConfigMaps can also be mounted into a Pod as a volume, as in this nginx example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-web
spec:
  volumes:
  - name: nginx-config
    configMap:
      name: nginx-config
  containers:
  - image: nginx:1.7.9
    name: nginx
    ports:
    - containerPort: 443
      name: nginx-https
    - containerPort: 80
      name: nginx-http
    volumeMounts:
    - name: nginx-config
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
===================================================================================
=================================================================
- The configuration file for the Pod defines a command and two arguments:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  containers:
  - name: debian
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
  restartPolicy: OnFailure
- Environment variables defined for the container can be referenced in
command and args using the $(VAR_NAME) syntax:
    env:
    - name: FILE_PATH
      value: "/data/backup/"
    command: ["rm", "-rf"]
    args: ["$(FILE_PATH)"]
- In some cases, you need to run commands in a shell. To run the commands in a
shell, wrap them like this:
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10; done"]
===================================================================================
===============================================================
Kubernetes Volumes :
====================
Volume overview :
-----------------
- Data stored in Docker containers is ephemeral, i.e. it exists only as
long as the container is alive.
- When Kubernetes restarts a failed or crashed container, you lose any
data stored in the container filesystem. Kubernetes solves this problem with the
help of Volumes.
- In Kubernetes, a volume is essentially a directory accessible to all
containers running in a pod and the data in volumes is preserved across container
restarts.
- The medium backing a volume and its contents are determined by the volume
type.
- To use a volume, a Pod specifies what volumes to provide for the Pod and
where to mount those into Containers.
kind: Pod
apiVersion: v1
metadata:
  name: nginx-webserver
  labels:
    name: webserver
spec:
  containers:
  - name: webserver
    image: nginx
    ports:
    - containerPort: 80
      name: http
    volumeMounts:
    - mountPath: "/usr/local/nginx/html"
      name: app-data
  volumes:
  - name: app-data
    emptyDir: {}
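An emptyDir volume is deleted when the Pod is removed from the node. For data that should survive on the node itself, a hostPath volume can be used instead; a sketch (the /data/nginx host directory is illustrative):

```yaml
volumes:
- name: app-data
  hostPath:
    # directory on the node's filesystem, created if it does not exist
    path: /data/nginx
    type: DirectoryOrCreate
```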
===================================================================================
=============================================================
1.Persistent Volumes :
----------------------
- A PersistentVolume (PV) is a storage resource in the cluster that has
been provisioned by an administrator or dynamically provisioned using Storage
Classes.
a.Static Provisioning :
-----------------------
- A cluster administrator creates a number of PVs. They carry the details
of the real storage, which is available for use by cluster users.
awsElasticBlockStore: Before you can use an EBS volume with a Pod, you need to
create it.
PersistentVolume spec:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: ebs-disk
  awsElasticBlockStore:
    volumeID:
    fsType: ext4
PersistentVolume spec:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: gcp-disk
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4
kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-volume   200Gi      RWO            Delete           Available           gcp-disk                6s
PersistentVolume spec:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: azure-disk
  azureDisk:
    diskName: test.vhd
    diskURI: https://someaccount.blob.microsoft.net/vhds/test.vhd
azureFile: You will need to create a Kubernetes secret that holds both the account
name and key.
PersistentVolume spec:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sample-storage
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azure-file-share
  azureFile:
    secretName: azure-secret
    shareName: k8stest # File share name
    readOnly: false
NFS: Before creating a PersistentVolume, you will need the NFS server details.
PersistentVolume spec:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    server: nfs-server.mydomain.com
    path: "/"
b.Dynamic Provisioning:
-----------------------
- When none of the static PVs matches a user's PersistentVolumeClaim, the
cluster may try to dynamically provision a volume specifically for the PVC.
- This provisioning is based on StorageClasses: the PVC must request a
storage class, and the administrator must have created and configured that class for
dynamic provisioning to occur.
StorageClasses:
---------------
- Volume implementations are configured through StorageClass resources.
- If you set up a Kubernetes cluster on GCP, AWS, Azure or any other cloud
platform, a default StorageClass is created for you, which uses the standard
persistent disk type.
GCP:
kubectl get storageclass
NAME PROVISIONER AGE
standard (default) kubernetes.io/gce-pd 3d
StorageClass Configuration:
---------------------------
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
volumeBindingMode: Immediate
- The following plugins support WaitForFirstConsumer with dynamic
provisioning:
    - AWSElasticBlockStore
    - GCEPersistentDisk
    - AzureDisk
- Access Modes: PersistentVolumes support the following access modes:
    ReadWriteOnce (RWO): the volume can be mounted read-write by a
single node.
    ReadOnlyMany (ROX): the volume can be mounted read-only by many nodes.
    ReadWriteMany (RWX): the volume can be mounted read-write by many
nodes.
===================================================================================
==========================================================
List PVs:
---------
kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-volume   200Gi      RWO            Delete           Available           gcp-disk                6s
Create PVC:
-----------
kubectl create -f test-pvc.yaml
persistentvolumeclaim/test-pvc created
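The contents of test-pvc.yaml are not shown in these notes; based on the kubectl get pvc output below (test-pvc bound to test-volume, 200Gi, RWO, gcp-disk), it might look like this sketch:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  # matches the storageClassName on the test-volume PV
  storageClassName: gcp-disk
  resources:
    requests:
      storage: 200Gi
```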
List PVCs:
----------
kubectl get pvc
NAME       STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    test-volume   200Gi      RWO            gcp-disk       7s
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
pvc-325160ee-fb3a-11e9-903e-42010a800149   300Gi      RWO            Delete           Bound    default/wordpress-pvc   standard                43s
- In a StatefulSet, per-replica PVCs are declared with volumeClaimTemplates:
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: standard
    resources:
      requests:
        storage: 1Gi
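A minimal StatefulSet carrying such a template might look like this sketch (the web/nginx names and mount path are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  # each replica gets its own PVC: data-web-0, data-web-1, ...
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
```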
===================================================================================
Advanced Topics:
================
1.Health checks :
-----------------
- Kubernetes provides a health-checking mechanism to verify whether a
container in a pod is working.
- Kubernetes gives you two types of health checks, performed by the
kubelet. They are:
- Liveness Probe
- Readiness Probe
a.Liveness Probe :
------------------
- Liveness probe checks the status of the container (whether it is running
or not).
- If livenessProbe fails, then the container is subjected to its restart
policy.
b.Readiness Probe :
-------------------
- Readiness probe checks whether your application is ready to serve the
requests.
- When the readiness probe fails, the pod's IP is removed from the endpoint
list of the service.
- There are three types of probe actions the kubelet can perform:
    - exec: executes a command inside the container
    - tcpSocket: checks the state of a particular port on the container
    - httpGet: performs an HTTP GET request against the container's IP
Configure Probes
----------------
Probes have a number of fields that you can use to more precisely control the
behavior of liveness and readiness checks, such as initialDelaySeconds,
periodSeconds, timeoutSeconds, successThreshold, and failureThreshold.
Nginx deployment
================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-webserver
  labels:
    app: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
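Besides httpGet, probes can use the exec and tcpSocket actions described above; a sketch (the file path and port are illustrative):

```yaml
livenessProbe:
  # container is considered live while this command exits 0
  exec:
    command: ["cat", "/tmp/healthy"]
  initialDelaySeconds: 5
  periodSeconds: 3
readinessProbe:
  # container is considered ready when this port accepts connections
  tcpSocket:
    port: 3306
  initialDelaySeconds: 5
  periodSeconds: 3
```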
===================================================================================
===========================
2.Resource Limits :
-------------------
- When Kubernetes schedules a Pod, the containers must have enough
resources to run.
- If a pod is scheduled on a node with limited resources, the node can
run out of memory or CPU resources and things stop working!
- It's also possible for an application to take up more resources than it
should due to a bad configuration, going out of control and using 100% of the
available CPU.
- You can solve these problems by specifying resource requests and limits.
Requests
--------
- When you specify a Pod, you can optionally specify how much CPU and
memory each container needs.
- Requests are what the container is guaranteed to get. When containers
have resource requests specified, the scheduler can make better decisions about
which nodes to place Pods on.
- Memory requests: Used for finding nodes with enough memory and
making better scheduling decisions.
- CPU requests: Maps to the docker flag --cpu-shares, which defines
a relative weight of that container for CPU time.
Limits
------
- Limits define the upper bound of resources a container can use. The
container is only allowed to go up to the limit, and then it is restricted.
- Limits must always be greater or equal to requests. The behavior differs
between CPU and memory.
- Memory limits: Maps to the docker flag --memory, which means
processes in the container get killed by the kernel if they hit that memory usage
(OOMKilled).
- CPU limits: Maps to the docker flag --cpu-quota, which limits the
CPU time of that container's processes.
- A typical Pod spec for resources might look something like this.
containers:
- name: database
  image: mysql
  env:
  - name: MYSQL_ROOT_PASSWORD
    value: "Ss&*@UES"
  resources:
    requests:
      memory: 64Mi
      cpu: 250m
    limits:
      memory: 128Mi
      cpu: 500m
- name: frontend
  image: wordpress
  resources:
    requests:
      memory: 64Mi
      cpu: 250m
    limits:
      memory: 128Mi
      cpu: 500m
Notes:
------
CPU resources are defined in millicores. If your container needs one full core to
run, you would put the value "1000m".
If your container only needs 1/4 of a core, you would put a value of "250m".
Memory resources are defined in bytes. Normally, you give a mebibyte value for
memory, but you can give anything from bytes to petabytes.
A LimitRange object can set default requests and limits for every container
created in a namespace:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limit
spec:
  limits:
  - default:
      memory: 100Mi
      cpu: 100m
    defaultRequest:
      memory: 50Mi
      cpu: 50m
    type: Container
===================================================================================
=========================================
3.Resource Quotas :
-------------------
- When several teams of users share a Kubernetes cluster, it is typically
necessary to divide the computing resources; otherwise, one team could use more
than its fair share.
- Kubernetes namespaces help with this by creating logically isolated work
environments, but namespaces on their own do not enforce limitations / quotas.
- Resource quotas are a tool for administrators to address this concern.
a.Resource Quotas:
------------------
- A resource quota, defined by a ResourceQuota object, provides constraints
that limit aggregate resource consumption per namespace.
- It can limit the number of objects that can be created in a namespace by
type, as well as the total amount of computing resources and storage that may be
consumed by resources in that namespace.
Note:
-----
Resource Quota objects are independent of the Cluster Capacity. They are expressed
in absolute units.
So, if you add nodes to your cluster, this does not automatically give each
namespace the ability to consume more resources.
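A ResourceQuota along these lines might look like this sketch (the team-quota name, team-a namespace, and limit values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a
spec:
  hard:
    # caps both object counts and aggregate compute in the namespace
    pods: "10"
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
```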
===================================================================================
========================================
4.Kubernetes Autoscaling :
--------------------------
- Autoscaling is one of the key features in the Kubernetes cluster that
auto-scales the pods.
- This is achieved via a Kubernetes resource called Horizontal Pod
Autoscaler (HPA).
- The Horizontal Pod Autoscaler automatically scales the number of pods in
a deployment, statefulset or replica set based on observed metrics such as average
CPU utilization, average memory utilization, or any other custom metric you
specify.
Note: Horizontal Pod Autoscaling does not apply to objects that can't be scaled,
for example, DaemonSets.
--> Create HPA using the kubectl autoscale command (the values here match the
declarative spec below):
kubectl autoscale deployment webapp --cpu-percent=70 --min=1 --max=5
horizontalpodautoscaler.autoscaling/webapp autoscaled
Declarative HPA:
----------------
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
- Next, see how the autoscaler reacts to increased load. To do this, create
a different Pod that runs in an infinite loop, sending queries to the webapp service.
- Open a new shell or command window and watch the HPA during the load and
after stopping it (Ctrl+C).
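A load generator along the lines described above, plus a watch on the HPA; a sketch (the load-gen name is illustrative, and the webapp service name matches the HPA target):

```shell
kubectl run load-gen --image=busybox --restart=Never -- \
  /bin/sh -c "while true; do wget -q -O- http://webapp; done"
kubectl get hpa webapp-hpa --watch
```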
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 10Mi
===================================================================================
================================
5. Kubernetes RBAC :
--------------------
- Kubernetes includes a built-in role-based access control (RBAC) mechanism
that allows you to regulate access to Kubernetes objects or resources based on the
roles of individual users.
- RBAC uses the rbac.authorization.k8s.io API Group to drive authorization
decisions, allowing admins to dynamically configure policies through the Kubernetes
API.
- RBAC is a stable feature from Kubernetes 1.8 and it is enabled by
default.
- The RBAC model in Kubernetes is based on three elements:
- Roles or ClusterRole: definition of the permissions for each
Kubernetes resource type
- Subjects: users (human or machine users) or groups of users
- RoleBindings or ClusterRoleBindings: definition of what Subjects
have which Roles
a.Roles or ClusterRole:
-----------------------
- A Role can only be used to grant access to resources within a single
namespace, while a ClusterRole defines access to resources in the entire cluster.
- Define a role to be used to grant read access to pods in the default
namespace:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # namespace omitted since ClusterRoles are not namespaced
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
apiGroups: List of API groups to allow. Examples: apps, extensions, batch, etc.
Specifying "" (an empty string) indicates the core API group.
resources: List of resources to allow.
Examples: pods, nodes, services, configmaps, deployments, PVCs, etc.
b.RoleBinding:
--------------
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: mark
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
c.ClusterRoleBinding:
---------------------
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods
subjects:
- kind: User
  name: mark
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Bind the frontend-admins group:
-------------------------------
subjects:
- kind: Group
  name: frontend-admins
  apiGroup: rbac.authorization.k8s.io
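Whether a binding grants what you intended can be checked with kubectl auth can-i, impersonating the subject (a sketch using the mark user bound above):

```shell
kubectl auth can-i list pods --as mark     # expected: yes (pod-reader allows list)
kubectl auth can-i delete pods --as mark   # expected: no (delete is not in the verbs)
```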
ServiceAccount :
----------------
- Kubernetes enables access control for pods by providing service accounts.
A service account provides an identity for processes that run in a Pod.
- When a process is authenticated through a service account, it can contact
the API server and access cluster resources.
- Every namespace has a default service account resource called default.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa
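A Pod runs under a specific service account by setting serviceAccountName; a sketch using the example-sa account above (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo
spec:
  # processes in this Pod authenticate to the API server as example-sa
  serviceAccountName: example-sa
  containers:
  - name: app
    image: nginx
```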
===================================================================================
===================================